{“payload”:{“allShortcutsEnabled”:false,”fileTree”:{“”:{“items”:[{“name”:”data”,”path”:”data”,”contentType”:”directory”},{“name”:”lib”,”path”:”lib”,”contentType”:”directory”},{“name”:”BERT_for_laptops.ipynb”,”path”:”BERT_for_laptops.ipynb”,”contentType”:”file”}],”totalCount”:3}},”fileTreeProcessingTime”:1.589477,”foldersToFetch”:[],”reducedMotionEnabled”:null,”repo”:{“id”:673536158,”defaultBranch”:”main”,”name”:”bert-for-laptops”,”ownerLogin”:”samvher”,”currentUserCanPush”:false,”isFork”:false,”isEmpty”:false,”createdAt”:”2023-08-01T21:19:30.000Z”,”ownerAvatar”:”https://avatars.githubusercontent.com/u/4366473?v=4″,”public”:true,”private”:false,”isOrgOwned”:false},”symbolsExpanded”:false,”treeExpanded”:true,”refInfo”:{“name”:”main”,”listCacheKey”:”v0:1691268014.0″,”canEdit”:false,”refType”:”branch”,”currentOid”:”1f904b870c455e909d2858428779b657e69445aa”},”path”:”BERT_for_laptops.ipynb”,”currentUser”:null,”blob”:{“rawLines”:[“{“,” “cells”: [“,” {“,” “cell_type”: “markdown”,”,” “id”: “8826cabb-5de1-494b-b814-0f17646c81dd”,”,” “metadata”: {},”,” “source”: [“,” “# A BERT for laptops, from scratch\n”,”,” “\n”,”,” “This is a simple BERT lookalike that was developed for training on a laptop (with an Nvidia 3070 RTX GPU). The notebook is developed for educational purposes more than performance, but in a bit more than half a day of training you can get a model that (after further finetuning) obtains ~94% of the performance of the original BERT-base on the GLUE benchmark. The code here builds on work by Geiping & Goldstein [0], Izsak et al [1] and Karpathy [2] who have all made large language models (LLMs) more accessible for modest budgets.\n”,”,” “\n”,”,” “You can execute this notebook from start to end to see the full process of setting up and training a tokenizer, pretraining a BERT model, and finetuning a BERT model on downstream NLP tasks. Most of the code from the notebook can also be found in this repository in regular Python files if you prefer, together with a few extra bits (e.g. [SpanBERT](https://arxiv.org/abs/1907.10529) style sample generation).\n”,”,” “\n”,”,” “The document is split into three sections:\n”,”,” “* _Data:_ This is where we obtain and preprocess the data for pretraining, and build and train the BPE tokenizer.\n”,”,” “* _Architecture:_ This is where we define our BERT.\n”,”,” “* _Training:_ First we pretrain the model on a \”masked language modeling\” (MLM) objective with a lot of data, then we finetune on a few smaller tasks from the GLUE benchmark.\n”,”,” “\n”,”,” “If you want to run the full notebook on a full size model, expect training the tokenizer to take ~15 hours, pretraining with the MLM objective to take ~17 hours (on a 3070 RTX, adjust expectations for your own system), and finetuning to take about an hour. The notebook was tested with 32GB of regular RAM and 8GB of GPU memory, if you have less you might need to make some changes.\n”,”,” “\n”,”,” “This BERT variant uses:\n”,”,” “* BPE (Byte Pair Encoding) tokenization.\n”,”,” “* Relative position embeddings.\n”,”,” “* Pre-layernorm.\n”,”,” “* No dropout.\n”,”,” “* Automatic mixed precision.\n”,”,” “\n”,”,” “[0] Geiping, Jonas, and Tom Goldstein. \”Cramming: Training a Language Model on a single GPU in one day.\” _International Conference on Machine Learning_. PMLR, 2023.\n”,”,” “\n”,”,” “[1] Izsak, Peter, Moshe Berchansky, and Omer Levy. \”How to train BERT with an academic budget.\” _arXiv preprint arXiv:2104.07705_ (2021).\n”,”,” “\n”,”,” “[2] Karpathy, Andrej. [\”MinGPT\”](https://github.com/karpathy/minGPT) (2020).””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 1,”,” “id”: “6af7245f-be5f-4a0a-82d7-b05d898727e9″,”,” “metadata”: {},”,” “outputs”: [],”,” “source”: [“,” “# Torch, cuda, numpy, scipy, matplotlib, etc are all assumed to be present\n”,”,” “\n”,”,” “# Other dependencies, uncomment this line if you don’t have these installed:\n”,”,” “# ! pip install datasets tqdm unidecode\n”,”,” “\n”,”,” “import math\n”,”,” “import os\n”,”,” “import pickle\n”,”,” “import random\n”,”,” “import re\n”,”,” “import string\n”,”,” “import time\n”,”,” “\n”,”,” “from collections import Counter\n”,”,” “from multiprocessing import Pool\n”,”,” “\n”,”,” “from matplotlib import pyplot as plt\n”,”,” “\n”,”,” “import numpy as np\n”,”,” “import scipy\n”,”,” “\n”,”,” “import torch\n”,”,” “import torch.nn as nn\n”,”,” “from torch import optim\n”,”,” “from torch.amp import autocast\n”,”,” “from torch.cuda.amp import GradScaler\n”,”,” “from torch.nn import functional as F\n”,”,” “\n”,”,” “from datasets import load_dataset\n”,”,” “from tqdm import tqdm\n”,”,” “from unidecode import unidecode””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “f0e20a1f-62ff-41df-abac-faa52644569b”,”,” “metadata”: {},”,” “source”: [“,” “## Data\n”,”,” “\n”,”,” “We will be working with two datasets: [BookCorpusOpen](https://huggingface.co/datasets/bookcorpusopen) and [Wikipedia English](https://huggingface.co/datasets/wikipedia), both of which are available from [Hugging Face](https://huggingface.co/). We are going to do a few things with the data:\n”,”,” “\n”,”,” “1. Fetch the data.\n”,”,” “2. Clean the data and collect unique words with their statistics.\n”,”,” “3. Generate a byte-pair encoding for the words in the data.\n”,”,” “4. Chunk training data to sequences of 128 tokens.\n”,”,” “5. Generate samples for training BERT.””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “5d101c75-2961-4aa2-a0d9-ae9dcae89796″,”,” “metadata”: {},”,” “source”: [“,” “### Fetching\n”,”,” “\n”,”,” “Fetch the datasets from Hugging Face. The first time you run this it will download ~26GB of data, plan accordingly. Later calls will use a local cache. The Wikipedia data is split into chunks of roughly similar size to support parallel processing later on.””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 2,”,” “id”: “b7cb0485-7aa8-473a-a107-d4e9dd2e1d1a”,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [“,” {“,” “name”: “stderr”,”,” “output_type”: “stream”,”,” “text”: [“,” “Found cached dataset bookcorpus (/home/sam/.cache/huggingface/datasets/bookcorpus/plain_text/1.0.0/eddee3cae1cc263a431aa98207d4d27fd8a73b0a9742f692af0e6c65afa4d75f)\n”,”,” “Found cached dataset wikipedia (/home/sam/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559)\n”,”,” “Found cached dataset wikipedia (/home/sam/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559)\n”,”,” “Found cached dataset wikipedia (/home/sam/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559)\n”,”,” “Found cached dataset wikipedia (/home/sam/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559)\n”,”,” “Found cached dataset wikipedia (/home/sam/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559)\n””,” ]”,” }”,” ],”,” “source”: [“,” “bc = load_dataset(\”bookcorpus\”, split = \”train\”)\n”,”,” “wp_a = load_dataset(\”wikipedia\”, \”20220301.en\”, split = \”train[0:750000]\”)\n”,”,” “wp_b = load_dataset(\”wikipedia\”, \”20220301.en\”, split = \”train[750000:1500000]\”)\n”,”,” “wp_c = load_dataset(\”wikipedia\”, \”20220301.en\”, split = \”train[1500000:3250000]\”)\n”,”,” “wp_d = load_dataset(\”wikipedia\”, \”20220301.en\”, split = \”train[3250000:5000000]\”)\n”,”,” “wp_e = load_dataset(\”wikipedia\”, \”20220301.en\”, split = \”train[5000000:]\”)””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “21adc2db-723a-49a9-be06-4fe40d652d73″,”,” “metadata”: {},”,” “source”: [“,” “### Cleaning and collecting word frequency\n”,”,” “\n”,”,” “Define the cleaning logic:””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 3,”,” “id”: “0178fcd3-c522-4483-9f1c-298fb2dd867a”,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “def clean_string(s):\n”,”,” ” s = unidecode(s) # Make sure we have only ASCII characters\n”,”,” ” s = s.lower() # Lowercase\n”,”,” ” s = re.sub(‘[ \\t]+’, ‘ ‘, s) # Replace tabs and sequences of spaces with a single space\n”,”,” ” s = s.replace(‘\\n’, ‘\\\\n’) # Escape newlines\n”,”,” ” return s.strip() # Remove leading and trailing whitespace””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “b9d8a9b4-c65e-49e3-89eb-a56c939c3cc8″,”,” “metadata”: {},”,” “source”: [“,” “Preprocess the data. The code in this section will generate some large files in the `data/` directory.””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 4,”,” “id”: “1cc037b7-c2d0-4fa2-809b-cf6787442a03″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “def preprocess_dataset(d, tag):\n”,”,” ” c = Counter() # Will keep track of word counts\n”,”,” ” \n”,”,” ” # Save clean data to a local text file\n”,”,” ” with open(f\”data/{tag}.txt\”, \”w\”) as f:\n”,”,” ” for sample in d:\n”,”,” ” s_clean = clean_string(sample[‘text’])\n”,”,” ” f.write(s_clean + ‘\\n’)\n”,”,” ” words = re.findall(r'[a-zA-Z]+’, s_clean.replace(‘\\\\n’, ‘ ‘)) # avoid capturing the ‘n’s of newlines\n”,”,” ” c.update(words)\n”,”,” ” \n”,”,” ” # Pickle counts\n”,”,” ” with open(f\”data/{tag}_counts.pkl\”, \”wb\”) as f:\n”,”,” ” pickle.dump(c, f)””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “44c935a2-7e0f-4cf6-8507-ec95a12765fa”,”,” “metadata”: {},”,” “source”: [“,” “The function above cleans the input data, stores the clean data, and counts word frequencies in the clean version of the data. These counts will be used to feed the training of the Byte Pair Encoding scheme later on.\n”,”,” “\n”,”,” “We’ll process the data in parallel to save quite a bit of time. The below cell might still take half an hour or so to complete.””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 5,”,” “id”: “a7bfc86b-a80a-4519-915d-6c778948832b”,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “with Pool(6) as p:\n”,”,” ” p.starmap(preprocess_dataset, [(bc, \”bookcorpus\”),\n”,”,” ” (wp_a, \”wikipedia_a\”),\n”,”,” ” (wp_b, \”wikipedia_b\”),\n”,”,” ” (wp_c, \”wikipedia_c\”),\n”,”,” ” (wp_d, \”wikipedia_d\”),\n”,”,” ” (wp_e, \”wikipedia_e\”)])””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “871a61a7-bc9a-4f73-a5cc-344258075d96″,”,” “metadata”: {},”,” “source”: [“,” “The above cell generated counts per subset of the data, let’s merge this into a single data structure:””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 6,”,” “id”: “c3ebe174-a042-4c72-bcc8-3c4c8f74a340″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “word_counts = Counter()\n”,”,” “for tag in [\”bookcorpus\”, \”wikipedia_a\”, \”wikipedia_b\”, \”wikipedia_c\”, \”wikipedia_d\”, \”wikipedia_e\”]:\n”,”,” ” with open(f\”data/{tag}_counts.pkl\”, \”rb\”) as f:\n”,”,” ” word_counts.update(pickle.load(f))””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “1a45c3bf-8b87-4bef-8957-9f2ca104fc4e”,”,” “metadata”: {},”,” “source”: [“,” “And then let’s have a quick look at common and rare words, as a sanity check:””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 7,”,” “id”: “b2868a89-6478-4a36-927d-9bfdf73d5216″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [“,” {“,” “name”: “stdout”,”,” “output_type”: “stream”,”,” “text”: [“,” “Most common:\n”,”,” “[(‘the’, 238227230), (‘of’, 117781415), (‘and’, 104334764), (‘in’, 97520033), (‘to’, 78612948), (‘a’, 76470732), (‘was’, 44506401), (‘he’, 30383513), (‘s’, 29998804), (‘for’, 29685590)]\n”,”,” “Least common:\n”,”,” “[(‘mattinglly’, 1), (‘tennley’, 1), (‘thatnormally’, 1), (‘forsince’, 1), (‘convincingthe’, 1), (‘darkchocolate’, 1), (‘towardyour’, 1), (‘nfidence’, 1), (‘thepenthouse’, 1), (‘andunyielding’, 1)]\n””,” ]”,” }”,” ],”,” “source”: [“,” “print(\”Most common:\”)\n”,”,” “print(sorted(word_counts.items(), key = lambda x: -x[1])[:10])\n”,”,” “print(\”Least common:\”)\n”,”,” “print(sorted(word_counts.items(), key = lambda x: x[1])[:10])””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “140a5869-5953-4b52-9426-baf0f000fa39″,”,” “metadata”: {},”,” “source”: [“,” “The most common sequences generally seem to make sense. It also makes sense that the least common sequences are mainly misspellings.””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “74771ef9-81ff-4e38-9f94-aedae4cb0885″,”,” “metadata”: {},”,” “source”: [“,” “### Byte Pair Encoding (BPE)\n”,”,” “\n”,”,” “For a general description of BPE check out [this Hugging Face tutorial](https://huggingface.co/learn/nlp-course/chapter6/5?fw=pt). At a high level, BPE is used to shorten the sequences we will feed into the language model in a meaningful way. Training the model on raw characters would be expensive – by using BPE we can represent common words by single unique tokens, and less common words by sequences of a few tokens, where each token hopefully carries some meaning of its own. We will briefly look at what the tokenizer does later on.\n”,”,” “\n”,”,” “We start by defining the alphabet for our data, which is basically all printable ASCII characters and a few special tokens. The `[CLS]` token is fed as the first token of every sequence and may aid fine tuning. The `[MASK]` token is used during pre-training to indicate hidden tokens. The `[SEP]` token can be used to separate sequences in the input (this one becomes important during finetuning). The `[PAD]` token is used to make sure all input sequences consist of a fixed number of tokens.\n”,”,” “\n”,”,” “We also differentiate between characters that are at the beginning of a word and characters that are in the middle. For example the token `\”_a\”` corresponds to an \”a\” starting a word, while `\”a\”` corresponds to an \”a\” inside the word.””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 8,”,” “id”: “e48bfe6a-d4dc-40e6-bb68-908ce3c379c9″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “alphabet = ([\”[CLS]\”, \”[MASK]\”, \”[SEP]\”, \”[PAD]\”] +\n”,”,” ” [c for c in string.ascii_lowercase] +\n”,”,” ” [f\”_{c}\” for c in string.ascii_lowercase] +\n”,”,” ” [symbol for symbol in ‘0123456789!\”#$%&\\'()*+,-./:;<=>?@[\\\\]^_`{|}~’] +\n”,”,” ” [\”\\\\n\”])””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “d795423f-fc24-46be-a890-90116032477d”,”,” “metadata”: {},”,” “source”: [“,” “First we will define a class to take care of encoding sequences. When byte pair encoding a sequence, the alphabet is combined with an ordered set of merge rules which are applied in sequence.””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 9,”,” “id”: “adc548d6-8f8a-4566-8cca-b77fb2c9a78b”,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “class BPEncoder:\n”,”,” ” \n”,”,” ” def __init__(self, alphabet, merge_rules, bpe_cache = dict()):\n”,”,” ” \”\”\”\n”,”,” ” alphabet: a list of strings\n”,”,” ” merge_rules: a list of string pairs to be merged\n”,”,” ” \”\”\”\n”,”,” ” self.alphabet = alphabet\n”,”,” ” self.merge_rules = merge_rules\n”,”,” ” self.bpe_cache = bpe_cache\n”,”,” ” \n”,”,” ” def total_tokens(self):\n”,”,” ” return len(self.alphabet) + len(self.merge_rules)\n”,”,” ” \n”,”,” ” def all_tokens(self):\n”,”,” ” return self.alphabet + [a + b for a, b in self.merge_rules]\n”,”,” “\n”,”,” ” def token_mapping(self):\n”,”,” ” tokens = self.all_tokens()\n”,”,” ” return {tok: i for i, tok in enumerate(tokens)}\n”,”,” ” \n”,”,” ” def add_merge_rule(self, merge_rule):\n”,”,” ” self.merge_rules.append(merge_rule)\n”,”,” ” \n”,”,” ” def split_seq(self, s):\n”,”,” ” \”\”\”Split s into units from the alphabet.\”\”\”\n”,”,” ” t = sorted([a for a in alphabet if s.startswith(a)], key = lambda x: -len(x))[0]\n”,”,” ” if len(t) < len(s):\n","," " return [t] + self.split_seq(s[len(t):])\n","," " else:\n","," " return [t]\n","," " \n","," " def apply_merge_rule(self, merge_rule, bpe_seq):\n","," " ret = []\n","," " delta_dict = Counter()\n","," " i = 0\n","," " while i < len(bpe_seq) - 1:\n","," " if merge_rule == (bpe_seq[i], bpe_seq[i+1]):\n","," " ret.append(bpe_seq[i] + bpe_seq[i+1])\n","," " \n","," " # This part is a bit hairy and only really necessary for training the encoder (done further down).\n","," " # It's essentially accounting logic to keep track of the occurrence of\n","," " # sequential pairs: when we apply a merge rule, the merged pair disappears\n","," " # everywhere in the sequence, but new pairs also appear. Keeping track of\n","," " # that change this way is a bit more efficient than just re-counting all pairs.\n","," " \n","," " # Example:\n","," " # We have the sequence [t1, t2, t3, t4] and the merge rule (t2, t3).\n","," " # The pair (t2, t3) disappears, and pairs (t1, t2+t3) and (t2+t3, t4) appear.\n","," " \n","," " delta_dict.update({merge_rule: -1})\n","," " if i > 0:\n”,”,” ” delta_dict.update({(ret[-2], bpe_seq[i]): -1})\n”,”,” ” delta_dict.update({(ret[-2], bpe_seq[i] + bpe_seq[i+1]): 1})\n”,”,” ” if i < len(bpe_seq) - 2:\n","," " delta_dict.update({(bpe_seq[i+1], bpe_seq[i+2]): -1})\n","," " delta_dict.update({(bpe_seq[i] + bpe_seq[i+1], bpe_seq[i+2]): 1})\n","," " \n","," " i += 2\n","," " else:\n","," " ret.append(bpe_seq[i])\n","," " i += 1\n","," " if i == len(bpe_seq) - 1:\n","," " ret.append(bpe_seq[i])\n","," " return ret, delta_dict\n","," " \n","," " def encode(self, s):\n","," " \"\"\"\n","," " Apply BPE to s.\n","," " This implementation is very slow for encodings with many merge rules.\n","," " In our case that doesn't matter much, we will cache encodings.\n","," " \"\"\"\n","," " if s in self.bpe_cache:\n","," " return self.bpe_cache[s]\n","," " else:\n","," " ret = self.split_seq(s)\n","," " for mr in self.merge_rules:\n","," " ret, _ = self.apply_merge_rule(mr, ret)\n","," " self.bpe_cache[s] = ret\n","," " return ret""," ]"," },"," {"," "cell_type": "markdown","," "id": "ffa7b703-1d78-4ca5-86de-3a4949f173a8","," "metadata": {},"," "source": ["," "Let's go through an example to get a feel for this:""," ]"," },"," {"," "cell_type": "code","," "execution_count": 10,"," "id": "f67a05e2-1c1e-40b5-8f20-177b18fdcf04","," "metadata": {"," "tags": []"," },"," "outputs": ["," {"," "name": "stdout","," "output_type": "stream","," "text": ["," "Split to alphabet elements:\n","," "['_t', 'h', 'e', 'r', 'e', 'f', 'o', 'r', 'e']\n","," "\n","," "Apply BPE:\n","," "['_the', 'r', 'e', 'for', 'e']\n","," "\n","," "Convert to numeric representation:\n","," "[100, 21, 8, 102, 8]\n""," ]"," }"," ],"," "source": ["," "demo_bpe = BPEncoder(alphabet, [(\"h\", \"e\"), (\"_t\", \"he\"), (\"o\", \"r\"), (\"f\", \"or\")])\n","," "\n","," "print(\"Split to alphabet elements:\")\n","," "print(demo_bpe.split_seq(\"_therefore\"))\n","," "print()\n","," "print(\"Apply BPE:\")\n","," "print(demo_bpe.encode(\"_therefore\"))\n","," "print()\n","," "tok2idx = demo_bpe.token_mapping()\n","," "print(\"Convert to numeric representation:\")\n","," "print([tok2idx[tok] for tok in demo_bpe.encode(\"_therefore\")])""," ]"," },"," {"," "cell_type": "markdown","," "id": "1eb63a4c-404b-4d02-b4e3-d92dd7012621","," "metadata": {},"," "source": ["," "Now that we have a BPE encoder, we need to learn the encoding from our data. This is what we collected the `word_counts` for earlier. We are only going to generate merge rules for alphabetic sequences: any tokens, numbers or symbols will be kept at the alphabet level.\n","," "\n","," "The algorithm is roughly:\n","," "1. Split the sequences for all words into alphabet elements.\n","," "2. Count the number of occurrences of all pairs.\n","," "3. Add a merge rule for the pair that occurs most often.\n","," "4. Apply the merge rule for all words.\n","," "5. If we have not reached our desired number of tokens, repeat starting at step 2.\n","," "\n","," "The code has been set up to be a bit more efficient than a naive algorithm (e.g. pair occurrences are not re-calculated every loop, during the loop we keep track of the changes resulting from a new merge rule, see the `apply_merge_rule` method in `BPEncoder`). However, this algorithm could still be sped up significantly. That would add complexity and is beyond the scope of this project.""," ]"," },"," {"," "cell_type": "code","," "execution_count": 11,"," "id": "9d63a82e-5fc4-452b-8e6b-cb57e3bb085f","," "metadata": {"," "tags": []"," },"," "outputs": [],"," "source": ["," "class BPETrainer:\n","," " \n","," " def __init__(self, word_counts, alphabet):\n","," " self.bpe = BPEncoder(alphabet, [])\n","," " \n","," " self.data = [] # Will hold the encoded version of the words from our data with its count\n","," " self.pair_counts = Counter() # Will hold occurrence frequencies of token pairs\n","," " self.token_word_index = {token: [] for token in self.bpe.all_tokens()} # Maps tokens to the words in which they occur\n","," " for i in range(len(word_counts)):\n","," " word, count = word_counts[i]\n","," " word_enc = self.bpe.split_seq('_' + word) # Prepend underscore to differentiate leading tokens\n","," " self.data.append((word_enc, count))\n","," " for j in range(0, len(word_enc) - 1):\n","," " self.pair_counts.update({(word_enc[j], word_enc[j+1]): count})\n","," " for tok in set(word_enc):\n","," " self.token_word_index[tok].append(i)\n","," " \n","," " def add_merge_rule(self, t1, t2):\n","," " \"\"\"Adds the rule to merge t1 and t2 to the BPE and updates internal statistics.\"\"\"\n","," " \n","," " # Add the new (merged) token to the word mapping\n","," " self.token_word_index[t1 + t2] = []\n","," " \n","," " # The below code finds words that contain *both* t1 and t2 in a somewhat efficient way.\n","," " # It relies on the fact that the list values in self.token_word_index are in sorted order.\n","," " i = 0\n","," " j = 0\n","," " while i < len(self.token_word_index[t1]) and j < len(self.token_word_index[t2]):\n","," "\n","," " if self.token_word_index[t1][i] < self.token_word_index[t2][j]:\n","," " i += 1\n","," "\n","," " elif self.token_word_index[t2][j] < self.token_word_index[t1][i]:\n","," " j += 1\n","," "\n","," " else:\n","," " # This word contains both t1 and t2: we might need to merge pairs here.\n","," " \n","," " word_idx = self.token_word_index[t1][i]\n","," " word_enc, count = self.data[word_idx]\n","," "\n","," " # Get the encoded word after applying the merge rule, and the changes\n","," " # that we need to make to our `pair_counts`.\n","," " word_enc_post, delta = self.bpe.apply_merge_rule((t1, t2), word_enc)\n","," " self.data[word_idx] = (word_enc_post, count)\n","," " self.pair_counts.update({pair: d*count for pair, d in delta.items()})\n","," " \n","," " # Update the word index\n","," " if t1 not in word_enc_post:\n","," " del self.token_word_index[t1][i]\n","," " else:\n","," " i += 1\n","," " if t2 not in word_enc_post:\n","," " if t2 != t1:\n","," " del self.token_word_index[t2][j]\n","," " else:\n","," " j += 1\n","," " if t1 + t2 in word_enc_post:\n","," " self.token_word_index[t1 + t2].append(word_idx)\n","," "\n","," " # Update the BPE to include the new merge rule\n","," " self.bpe.add_merge_rule((t1, t2))\n","," " \n","," " def find_merge_rules(self, token_limit, verbose = False):\n","," " \"\"\"Add merge rules to the BPE until token_limit is reached.\"\"\"\n","," " \n","," " while self.bpe.total_tokens() < token_limit:\n","," " \n","," " # Find the most frequent pair.\n","," " # This call could be sped up with a better data structure.\n","," " t1, t2 = max(self.pair_counts, key = self.pair_counts.get)\n","," " count = self.pair_counts.get((t1, t2))\n","," " \n","," " if count == 0:\n","," " print(f\"No more tokens to add, every word has its own token already.\")\n","," " break\n","," " \n","," " if verbose:\n","," " print(f\"{self.bpe.total_tokens()}: {t1} + {t2} -> {t1}{t2} (count = {self.pair_counts.get((t1, t2))})\”)\n”,”,” ” \n”,”,” ” # Add the most frequent pair as a merge rule.\n”,”,” ” self.add_merge_rule(t1, t2)””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “ef91f887-b4a5-4bda-aaba-79035e4d0f11″,”,” “metadata”: {},”,” “source”: [“,” “For the purpose of this notebook, let’s run this process on a random subset of the data only, and up to a relatively small number of merge rules. Running the process on the full `word_counts` dictionary to the full token count (2^15 tokens) takes about 15 hours. We will also print some of the words on which the small tokenizer is trained and show how they are broken up into tokens:””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 12,”,” “id”: “e0614787-a255-47b9-9b0a-800daf010e72″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [“,” {“,” “name”: “stdout”,”,” “output_type”: “stream”,”,” “text”: [“,” “gladding (282 occ): [‘_gl’, ‘ad’, ‘ding’]\n”,”,” “memoire (3863 occ): [‘_memoire’]\n”,”,” “camisa (380 occ): [‘_cam’, ‘isa’]\n”,”,” “democratico (771 occ): [‘_dem’, ‘oc’, ‘rat’, ‘ico’]\n”,”,” “templars (4578 occ): [‘_templars’]\n”,”,” “overconfident (1118 occ): [‘_overconfid’, ‘ent’]\n”,”,” “agenesis (598 occ): [‘_ag’, ‘enes’, ‘is’]\n”,”,” “bernese (2982 occ): [‘_bernese’]\n”,”,” “bonnies (655 occ): [‘_bon’, ‘nies’]\n”,”,” “shax (405 occ): [‘_sh’, ‘ax’]\n””,” ]”,” }”,” ],”,” “source”: [“,” “word_counts_small = random.sample([wc for wc in word_counts.items() if wc[1] >= 256], 2**13)\n”,”,” “\n”,”,” “bpet_small = BPETrainer(word_counts_small, alphabet)\n”,”,” “bpet_small.find_merge_rules(2**13)\n”,”,” “\n”,”,” “for word, count in word_counts_small[:10]:\n”,”,” ” print(f\”{word} ({count} occ): {bpet_small.bpe.encode(‘_’ + word)}\”)””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “3e1ed7d6-1993-4668-bd42-3963d0a0049b”,”,” “metadata”: {},”,” “source”: [“,” “As you can see, common words are likely to be represented by a single token. Less common words are broken into chunks of common sequences, where these sequences often (though definitely not always) have some semantic meaning.””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “2599b4e3-a7ec-43a5-810d-fc4b81dcbfcd”,”,” “metadata”: {},”,” “source”: [“,” “To run the process on the full training data, up to the full token count, execute this cell:””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: null,”,” “id”: “16bf52bf-fbb1-4445-9535-90bd54cc84c8″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “# This code generates the data loaded below\n”,”,” “\n”,”,” “bpet = BPETrainer(word_counts, alphabet)\n”,”,” “bpet.find_merge_rules(2**15, verbose = True)\n”,”,” ” \n”,”,” “with open(\”data/bert_mr.pkl\”, \”wb\”) as f:\n”,”,” ” pickle.dump(bpet.bpe.merge_rules, f)\n”,”,” “\n”,”,” “with open(\”data/bert_bpet_data.pkl\”, \”wb\”) as f:\n”,”,” ” pickle.dump(bpet.data, f)””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “b445e90c-d609-4dba-a38d-abbfec2938f8″,”,” “metadata”: {},”,” “source”: [“,” “For reruns it’s easier to just load saved results from earlier:””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 13,”,” “id”: “0c7a10ca-45e7-40f8-ac62-55626f2c8778″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “with open(\”data/bert_mr.pkl\”, \”rb\”) as f:\n”,”,” ” bert_mr = pickle.load(f)\n”,”,” “\n”,”,” “with open(\”data/bert_bpet_data.pkl\”, \”rb\”) as f:\n”,”,” ” bert_bpet_data = pickle.load(f)””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “44b0afc8-a893-4725-bca0-ac6151bd3b93″,”,” “metadata”: {},”,” “source”: [“,” “A nice side effect of structuring the training process the way it is in this notebook is that for every single word in our pretraining data, we get the encoded form as part of the BPE training process. That means that rather than re-encoding a word every time we encounter it, we can create a lookup table to find the encoding, which is a lot faster (at least for this poorly optimized encoder):””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 14,”,” “id”: “f4d54e8e-e458-4d01-8074-c7266c839b1d”,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “# Mapping from each word to its encoding\n”,”,” “bpe_cache = {”.join(w_enc): w_enc for w_enc, _ in bert_bpet_data}\n”,”,” “\n”,”,” “bert_bpe = BPEncoder(alphabet, bert_mr, bpe_cache)””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “a115bc61-15b6-4b19-bd61-7a16b40abc85″,”,” “metadata”: {},”,” “source”: [“,” “### Chunking\n”,”,” “\n”,”,” “The cleaned data that we generated earlier did not take into account the fact that we want to train on fixed sample lengths – during pretraining, all our samples will be 128 tokens long. For this purpose we will apply one more preprocessing step, generating a dataset where each line of data has a suitable length, given our trained BPE encoding.\n”,”,” “\n”,”,” “We will need to do this slightly differently between the BookCorpus data and the Wikipedia data – with the BookCorpus data, each sample represents a sentence from a book, and the next sample is the following sentence. For our training process, we will merge such samples to try and fill up the 128 token frames. The Wikipedia data on the other hand has one sample per article, and we want to avoid combining unrelated sentences on different topics. For the Wikipedia data, once we reach the end of an article, we will fill the rest of the current training sample with `[PAD]` tokens.””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 15,”,” “id”: “8b473261-67f9-49c1-81d3-9e3ba60e53c7″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “def atomize(s):\n”,”,” ” \”\”\”Break down a sample into symbols and words.\”\”\”\n”,”,” ” atom_re = r'(\\[(CLS|SEP|PAD|MASK)\\]|[a-z]+|\\\\n|[0-9]|\\\\|[!#$%&\\'()*+,-./:;<=>?@[\\]^_`{|}~])’\n”,”,” ” return [m[0] for m in re.findall(atom_re, s)]\n”,”,” ” \n”,”,” “def chunks(fname, bpe, max_length, merge_lines = False):\n”,”,” ” ret_list = []\n”,”,” ” ret_tok_len = 0\n”,”,” ” \n”,”,” ” with open(fname, \”r\”) as f:\n”,”,” ” for line in f:\n”,”,” ” atoms = atomize(line)\n”,”,” ” for atom in atoms:\n”,”,” ” if atom.isalpha():\n”,”,” ” # Deal with some weird sequences in the training data\n”,”,” ” if len(bpe.encode(‘_’ + atom)) > max_length:\n”,”,” ” continue\n”,”,” ” if ret_tok_len + len(bpe.encode(‘_’ + atom)) > max_length:\n”,”,” ” yield ‘ ‘.join(ret_list)\n”,”,” ” ret_list = []\n”,”,” ” ret_tok_len = 0\n”,”,” ” ret_list.append(atom)\n”,”,” ” ret_tok_len += len(bpe.encode(‘_’ + atom))\n”,”,” ” else:\n”,”,” ” if ret_tok_len == max_length:\n”,”,” ” yield ‘ ‘.join(ret_list)\n”,”,” ” ret_list = []\n”,”,” ” ret_tok_len = 0\n”,”,” ” ret_list.append(atom)\n”,”,” ” ret_tok_len += 1\n”,”,” ” if not merge_lines:\n”,”,” ” yield ‘ ‘.join(ret_list)\n”,”,” ” ret_list = []\n”,”,” ” ret_tok_len = 0\n”,”,” ” yield ‘ ‘.join(ret_list)””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “12a00622-07a7-4219-88d8-afa7add916f2″,”,” “metadata”: {},”,” “source”: [“,” “Let’s see what a sample looks like. We will take `max_length` to be 126, because during training we will still prepend a `[CLS]` token and append a `[SEP]` token.””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 16,”,” “id”: “1ffa1ce4-f91d-4c98-834d-bb627292e81c”,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [“,” {“,” “name”: “stdout”,”,” “output_type”: “stream”,”,” “text”: [“,” “the camelen iv 4 4 0 is a four wheel drive modular mission system vehicle designed by jez hermer mbe , ceo of ovik special vehicles . designed and developed in 2 0 1 0 , it is based upon the iveco daily 4 x 4 chassis but incorporates a number of modifications designed by ovik plus a range of specialist mission modules which can be interchanged rapidly , giving the vehicle a multi – functional utility . \\n \\n concept of use \\n the general concept behind the cameleom system is to provide military forces , civil and emergency services and commercial users with a modular vehicle which can be reconfigured , rapidly , into\n””,” ]”,” }”,” ],”,” “source”: [“,” “test_chunk = chunks(\”data/wikipedia_d.txt\”, bert_bpe, 126, merge_lines = True)\n”,”,” “print(next(test_chunk))””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “f2868da2-8ff9-44c3-97ee-a595c13f05db”,”,” “metadata”: {},”,” “source”: [“,” “The below code generates a chunked version of all the training data – again we will parallellize the process, it will take up quite a bit of disk space and take approximately half an hour.””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 17,”,” “id”: “316e643a-c66a-410a-86d7-957258659d4a”,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “def chunk_samples(fname, merge_lines):\n”,”,” ” cs = chunks(\”data/\” + fname, bert_bpe, 126, merge_lines)\n”,”,” ” with open(f\”data/chunked_{fname}\”, \”w\”) as f:\n”,”,” ” for c in cs:\n”,”,” ” f.write(c + ‘\\n’)\n”,”,” “\n”,”,” “with Pool(6) as p:\n”,”,” ” p.starmap(chunk_samples, [(\”bookcorpus.txt\”, True),\n”,”,” ” (\”wikipedia_a.txt\”, False),\n”,”,” ” (\”wikipedia_b.txt\”, False),\n”,”,” ” (\”wikipedia_c.txt\”, False),\n”,”,” ” (\”wikipedia_d.txt\”, False),\n”,”,” ” (\”wikipedia_e.txt\”, False)])””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “ccfe47e2-129d-4ee1-ab76-a08f19723a91″,”,” “metadata”: {},”,” “source”: [“,” “We will now merge all the data into one file and shuffle it. First we shuffle the individual files, then we randomly merge the resulting files. The reason for the roundabout procedure is that the combined file is too large to fit in memory (at least on my machine). The reason that we apply weights during file merge is that we don’t want shorter files to be over-represented in earlier parts of the training data – the files we generated vary a bit in size, and also in content, so that could potentially cause problems.””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 18,”,” “id”: “7fa9051e-a81b-4877-a692-ad6950715118″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “tags = [\”bookcorpus\”, \”wikipedia_a\”, \”wikipedia_b\”, \”wikipedia_c\”, \”wikipedia_d\”, \”wikipedia_e\”]””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 19,”,” “id”: “eef7b9b4-80a6-4d46-a223-c8ef1042d6b4″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “for tag in tags:\n”,”,” ” lines = open(f\”data/chunked_{tag}.txt\”, \”r\”).readlines()\n”,”,” ” random.shuffle(lines)\n”,”,” ” open(f\”data/shuffled_chunked_{tag}.txt\”, \”w\”).writelines(lines)””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 20,”,” “id”: “0b97ae38-ecb8-43f4-88e5-a97ad8168730″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [“,” {“,” “name”: “stdout”,”,” “output_type”: “stream”,”,” “text”: [“,” “Done with wikipedia_a\n”,”,” “Done with bookcorpus\n”,”,” “Done with wikipedia_b\n”,”,” “Done with wikipedia_c\n”,”,” “Done with wikipedia_d\n”,”,” “Done with wikipedia_e\n””,” ]”,” }”,” ],”,” “source”: [“,” “read_handles = [open(f\”data/shuffled_chunked_{tag}.txt\”, \”r\”) for tag in tags]\n”,”,” “sizes = [os.stat(f\”data/shuffled_chunked_{tag}.txt\”).st_size for tag in tags]\n”,”,” “names = [t for t in tags]\n”,”,” “\n”,”,” “with open(\”data/pretrain.txt\”, \”w\”) as f:\n”,”,” ” while len(read_handles) > 0:\n”,”,” ” i = random.choices(range(len(read_handles)), weights = sizes)[0]\n”,”,” ” try:\n”,”,” ” line = next(read_handles[i])\n”,”,” ” if line != ‘\\n’: # A few cases of empty lines show up\n”,”,” ” f.write(line)\n”,”,” ” except StopIteration:\n”,”,” ” read_handles[i].close()\n”,”,” ” print(f\”Done with {names[i]}\”)\n”,”,” ” del read_handles[i], sizes[i], names[i]””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “492c5a0d-9e3d-4d30-8247-66fb77275a24″,”,” “metadata”: {},”,” “source”: [“,” “### Generating training samples\n”,”,” “\n”,”,” “We now have input data of the right size in a workable format. The next step is to generate the actual training samples for pre-training BERT. Different approaches have been used for this by different people, we will go with a relatively simple approach here. We are going to train a BERT on 128 tokens at a time, and during pre-training we will mask 15% of the input tokens and score the model on how well it manages to predict what the missing tokens are.\n”,”,” “\n”,”,” “To give some intuition for what we are trying to do, an input sample might look something like\n”,”,” “\n”,”,” “`[‘i’, ‘went’, ‘to’, ‘the’, ‘[MASK]’, ‘for’, ‘lunch’]`\n”,”,” “\n”,”,” “for which the model would need to predict\n”,”,” “\n”,”,” “`[ – , – , – , – , ‘cafeteria’, – , – ]`\n”,”,” “\n”,”,” “The common case will be to leave out a token and replace it with `[MASK]` (80% of the time). However, in 10% of cases instead of `[MASK]` we will use a random token, and in 10% of cases we won’t perform a replacement. This is similar to how training was done in the original BERT paper.””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 21,”,” “id”: “185a4460-dc29-4e2d-be9b-d095d3481dd0″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “def encode_sample(sample, bpe):\n”,”,” ” encoded = []\n”,”,” ” for item in sample.strip().split(‘ ‘):\n”,”,” ” if item.isalpha():\n”,”,” ” encoded += [tok for tok in bpe.encode(‘_’ + item)]\n”,”,” ” else:\n”,”,” ” encoded.append(item)\n”,”,” ” return encoded\n”,”,” “\n”,”,” “def samples_and_masks(fname, length, bpe):\n”,”,” ” \”\”\”Assumes that all samples in fname have been sized not to exceed `length`.\”\”\”\n”,”,” ” tok2idx = bpe.token_mapping()\n”,”,” ” \n”,”,” ” with open(fname, \”r\”) as f:\n”,”,” ” for sample in f:\n”,”,” ” \n”,”,” ” # Apply BPE to the sample\n”,”,” ” encoded = [tok2idx[e] for e in encode_sample(sample, bpe)]\n”,”,” ” total_tokens = len(encoded)\n”,”,” ” \n”,”,” ” # Generate mask in the shape of the sample\n”,”,” ” mask_count = math.ceil(0.15 * total_tokens)\n”,”,” ” mask = [1] * mask_count + [0] * (total_tokens – mask_count)\n”,”,” ” random.shuffle(mask)\n”,”,” ” \n”,”,” ” # Generate ground truth and mask in matching shape\n”,”,” ” training_output = [tok2idx[\”[CLS]\”]] + encoded + [tok2idx[\”[SEP]\”]] + [tok2idx[\”[PAD]\”]] * (length – total_tokens – 2)\n”,”,” ” training_mask = [0] + mask + [0] + [0] * (length – total_tokens – 2)\n”,”,” ” \n”,”,” ” # Generate input data\n”,”,” ” training_input = [t for t in training_output]\n”,”,” ” for i in range(length):\n”,”,” ” if training_mask[i] == 1: # Mask this token\n”,”,” ” r = random.random()\n”,”,” ” if r < 0.8: # Regular masking\n","," " training_input[i] = tok2idx[\"[MASK]\"]\n","," " elif r < 0.9: # Random other token instead of [MASK]\n","," " training_input[i] = random.randrange(bpe.total_tokens())\n","," " else: # Feed the original token as input, untouched\n","," " pass\n","," " \n","," " yield [training_input, training_output, training_mask]""," ]"," },"," {"," "cell_type": "markdown","," "id": "b7b77786-d270-49b7-aed2-bd228ff2ce31","," "metadata": {},"," "source": ["," "This time we will not write training samples to disk - we will generate them on the fly during training. This makes it easier to apply different transformation (e.g. a SpanBERT objective) or to re-randomize for subsequent epochs.""," ]"," },"," {"," "cell_type": "markdown","," "id": "70fae2c2-8e22-43b5-8602-ea4db87665ed","," "metadata": {},"," "source": ["," "## Architecture\n","," "\n","," "We are now at a point where we can define the BERT model. We will pretty closely follow the architecture described in the [Cramming paper](https://arxiv.org/abs/2212.14034). A lot of the structure of the below code is borrowed from Andrej Karpathy's [minGPT](https://github.com/karpathy/minGPT).\n","," "\n","," "The main differences from the original BERT implementation are the following:\n","," "* Pre-LayerNorm (marked in the code below).\n","," "* No dropout.\n","," "* No biases in transformer attention, in transformer MLPs, or in the decoder of the model.\n","," "* An additional LayerNorm right after the embedding layer.\n","," "\n","," "The main difference from the Cramming paper is that the architecture here uses relative position embeddings as introduced by [Shaw et al](https://arxiv.org/abs/1803.02155). The implementation here only adds position representations to keys, not to values. Compared to the original absolute position embeddings, this approach slows down training by ~4% but it makes up for it in improved model performance.\n","," "\n","," "This is not the time and place to explain transformers in detail. For that, if you have a lot of time, go through [Stanford's CS 224N](https://web.stanford.edu/class/cs224n/). If you don't have a lot of time, Andrej Karpathy gives [a good high-level overview](https://www.youtube.com/watch?v=kCc8FmEb1nY).\n","," "\n","," "The first two components define the transformer block:""," ]"," },"," {"," "cell_type": "code","," "execution_count": 22,"," "id": "f687cbb2-8c7b-48c2-91db-ca7773ec0569","," "metadata": {"," "tags": []"," },"," "outputs": [],"," "source": ["," "class SelfAttention(nn.Module):\n","," " \"\"\"\n","," " Bi-directional transformer self-attention.\n","," " Uses relative position embeddings, shared across tokens and attention heads, but unique for each layer.\n","," " \"\"\"\n","," "\n","," " def __init__(self, config):\n","," " super().__init__()\n","," " \n","," " self.config = config\n","," " embed_size = config[\"embed_size\"]\n","," " n_head = config[\"n_head\"]\n","," " assert embed_size % n_head == 0\n","," " \n","," " # This is clipping distance (k) in Shaw et al\n","," " pos_emb_radius = config[\"pos_emb_radius\"]\n","," " pos_emb_units = config[\"embed_size\"] // config[\"n_head\"]\n","," " \n","," " # Position embedding vectors for use on keys\n","," " # This is w^K in Shaw et al\n","," " self.pos_emb_k = nn.Parameter(torch.zeros(2 * pos_emb_radius, pos_emb_units))\n","," " torch.nn.init.normal_(self.pos_emb_k, mean=0.0, std=0.02)\n","," " \n","," " # key, query, value projections for all heads\n","," " self.key = nn.Linear(embed_size, embed_size, bias = False)\n","," " self.query = nn.Linear(embed_size, embed_size, bias = False)\n","," " self.value = nn.Linear(embed_size, embed_size, bias = False)\n","," " \n","," " # output projection\n","," " self.proj = nn.Linear(embed_size, embed_size, bias = False)\n","," "\n","," " def forward(self, x):\n","," " batch_size, context_size, embed_size = x.size()\n","," " assert embed_size == self.config[\"embed_size\"]\n","," " \n","," " n_head = self.config[\"n_head\"]\n","," " head_size = embed_size // n_head\n","," " \n","," " pos_emb_size, head_size = self.pos_emb_k.size()\n","," " pos_emb_radius = self.config[\"pos_emb_radius\"]\n","," " assert pos_emb_size == 2 * pos_emb_radius\n","," "\n","," " # calculate query, key, values for all heads in batch and move head forward to be the batch dim\n","," " k = self.key(x).view(batch_size, context_size, n_head, head_size).transpose(1, 2) # (batch_size, n_head, context_size, head_size)\n","," " q = self.query(x).view(batch_size, context_size, n_head, head_size).transpose(1, 2) # (batch_size, n_head, context_size, head_size)\n","," " v = self.value(x).view(batch_size, context_size, n_head, head_size).transpose(1, 2) # (batch_size, n_head, context_size, head_size)\n","," " \n","," " # Below section implements x_i W^Q (a_{ij}^K)^T from Shaw et al\n","," " # position attention: (batch_size, n_head, context_size, head_size) x (1, 1, pos_emb_size, head_size) -> (batch_size, n_head, context_size, pos_emb_size)\n”,”,” ” att_rel_pos = q @ self.pos_emb_k.view(1, 1, pos_emb_size, head_size).transpose(-2, -1)\n”,”,” ” att_idxs = (torch.clamp(torch.arange(context_size)[None, :] – torch.arange(context_size)[:, None], -pos_emb_radius, pos_emb_radius-1) % pos_emb_size).to(\”cuda\”)\n”,”,” ” att_pos = torch.gather(att_rel_pos, 3, att_idxs.expand((batch_size, n_head, context_size, context_size)))\n”,”,” ” assert att_pos.shape == (batch_size, n_head, context_size, context_size)\n”,”,” ” \n”,”,” ” # value attention: (batch_size, n_head, context_size, head_size) x (batch_size, n_head, context_size, head_size) -> (batch_size, n_head, context_size, context_size)\n”,”,” ” att_val = q @ k.transpose(-2, -1)\n”,”,” ” \n”,”,” ” # combined attention\n”,”,” ” att_scale = 1.0 / math.sqrt(k.size(-1))\n”,”,” ” att = F.softmax((att_val + att_pos) * att_scale, dim=-1) # Equation (5) from Shaw et al\n”,”,” ” \n”,”,” ” y = att @ v # (batch_size, n_head, context_size, context_size) x (batch_size, n_head, context_size, head_size) -> (batch_size, n_head, context_size, head_size)\n”,”,” ” y = y.transpose(1, 2).contiguous().view(batch_size, context_size, embed_size) # re-assemble all head outputs side by side\n”,”,” “\n”,”,” ” # output projection\n”,”,” ” y = self.proj(y)\n”,”,” ” return y””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “f2a02333-c9ac-406e-85c9-ac52d018196e”,”,” “metadata”: {},”,” “source”: [“,” “The relative position embeddings make the attention code a bit more complex than it could be, e.g. MinGPT has simpler attent logic. It also slows down training by about 4%. It seems to make up for this in improved training, however, and provides a lot of flexibility for training on larger samples (more than 128 tokens) after pretraining. With absolute position embeddings it’s not straightforward to change the context size.””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 23,”,” “id”: “41a4d5a1-ec0f-45a3-84c7-5919e554125e”,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “class Block(nn.Module):\n”,”,” ” \”\”\”Pre-LayerNorm transformer block.\”\”\”\n”,”,” “\n”,”,” ” def __init__(self, config):\n”,”,” ” super().__init__()\n”,”,” ” \n”,”,” ” embed_size = config[\”embed_size\”]\n”,”,” ” \n”,”,” ” self.norm1 = nn.LayerNorm(embed_size, eps = 1e-6)\n”,”,” ” self.attn = SelfAttention(config)\n”,”,” ” \n”,”,” ” self.norm2 = nn.LayerNorm(embed_size, eps = 1e-6)\n”,”,” ” self.mlp = nn.Sequential(\n”,”,” ” nn.Linear(embed_size, 4 * embed_size, bias = False),\n”,”,” ” nn.GELU(),\n”,”,” ” nn.Linear(4 * embed_size, embed_size, bias = False),\n”,”,” ” )\n”,”,” “\n”,”,” ” def forward(self, x):\n”,”,” ” # This is Pre-LayerNorm\n”,”,” ” x = x + self.attn(self.norm1(x))\n”,”,” ” x = x + self.mlp(self.norm2(x))\n”,”,” ” \n”,”,” ” # Post-LayerNorm would look more like\n”,”,” ” # x = self.norm1(x + self.attn)\n”,”,” ” # x = self.norm2(x + self.mlp)\n”,”,” ” \n”,”,” ” return x””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “c35d2a7a-8914-4739-8c55-96c2e0705478″,”,” “metadata”: {},”,” “source”: [“,” “We then build BERT out of an embedding layer and a sequence of transformer blocks, with some carefully placed layernorms.””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 24,”,” “id”: “8b3eed86-67eb-49db-b76f-0945a469e1ea”,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “class BERT(nn.Module):\n”,”,” ” \”\”\”Headless BERT.\”\”\”\n”,”,” “\n”,”,” ” def __init__(self, config):\n”,”,” ” super().__init__()\n”,”,” ” \n”,”,” ” self.config = config\n”,”,” ” vocab_size = config[\”vocab_size\”]\n”,”,” ” embed_size = config[\”embed_size\”]\n”,”,” ” n_layer = config[\”n_layer\”]\n”,”,” “\n”,”,” ” # token embedding\n”,”,” ” self.tok_emb = nn.Embedding(vocab_size, embed_size)\n”,”,” ” self.norm_emb = nn.LayerNorm(embed_size, eps = 1e-6)\n”,”,” ” \n”,”,” ” # transformer\n”,”,” ” self.transformer = nn.Sequential(*[Block(config) for _ in range(n_layer)])\n”,”,” ” \n”,”,” ” # final layernorm\n”,”,” ” self.norm_final = nn.LayerNorm(embed_size, eps = 1e-6)\n”,”,” “\n”,”,” ” print(\”number of parameters: {}\”.format(sum(p.numel() for p in self.parameters())))\n”,”,” ” \n”,”,” ” self.apply(self._init_weights)\n”,”,” ” for pn, p in self.named_parameters():\n”,”,” ” if pn.endswith(‘proj.weight’):\n”,”,” ” torch.nn.init.normal_(p, mean=0.0, std=0.02/math.sqrt(2 * n_layer))\n”,”,” “\n”,”,” ” def _init_weights(self, module):\n”,”,” ” if isinstance(module, nn.Linear):\n”,”,” ” torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)\n”,”,” ” if module.bias is not None:\n”,”,” ” torch.nn.init.zeros_(module.bias)\n”,”,” ” elif isinstance(module, nn.Embedding):\n”,”,” ” torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)\n”,”,” ” elif isinstance(module, nn.LayerNorm):\n”,”,” ” torch.nn.init.zeros_(module.bias)\n”,”,” ” torch.nn.init.ones_(module.weight)\n”,”,” “\n”,”,” ” def forward(self, x):\n”,”,” ” batch_size, context_size = x.size()\n”,”,” ” \n”,”,” ” x = self.tok_emb(x)\n”,”,” ” x = self.norm_emb(x)\n”,”,” ” x = self.transformer(x)\n”,”,” ” x = self.norm_final(x)\n”,”,” ” \n”,”,” ” return x””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “17ed9e50-3a49-4f5c-8bf5-85f0e7984141″,”,” “metadata”: {},”,” “source”: [“,” “We have BERT, but we can’t do anything with it yet: the top layer generates only hidden unit activations/embeddings. Depending on the training scenario, we will put a different \”head\” on BERT. During pre-training, BERT is learning masked language modeling (MLM), and it will take an MLM head that makes a token prediction for every position in the input. For our fine-tuning, BERT learns a single output for each input sentence (e.g. a true/false classification or some score), and we will use two different heads, one for classification and one for regression. These heads ignore most of the activations at BERT’s top, and use only the activations for the very first token as input features during training.””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 25,”,” “id”: “4d5f8222-3422-43ea-9292-675da6ab427d”,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “class MLMHead(nn.Module):\n”,”,” ” \”\”\”\n”,”,” ” BERT head for masked language modeling.\n”,”,” ” Note that this does *not* implement sparse prediction as mentioned in the Cramming paper. Predictions are calculated for all tokens.\n”,”,” ” \”\”\”\n”,”,” ” \n”,”,” ” def __init__(self, config):\n”,”,” ” super().__init__()\n”,”,” ” \n”,”,” ” self.config = config\n”,”,” ” vocab_size = config[\”vocab_size\”]\n”,”,” ” embed_size = config[\”embed_size\”]\n”,”,” ” \n”,”,” ” self.tok_unemb = nn.Linear(embed_size, vocab_size, bias = False)\n”,”,” ” \n”,”,” ” def forward(self, x, y):\n”,”,” ” logits = self.tok_unemb(x)\n”,”,” ” loss = F.cross_entropy(logits.view(-1, logits.size(-1)), y.view(-1), ignore_index = 0)\n”,”,” ” return logits, loss””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 26,”,” “id”: “7698a79c-764b-4f56-8cad-55bb20ac7f77″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “class CLSHead(nn.Module):\n”,”,” ” \”\”\”\n”,”,” ” BERT head for classification.\n”,”,” ” A prediction is only calculated for the first ([CLS]) token.\n”,”,” ” \”\”\”\n”,”,” ” \n”,”,” ” def __init__(self, config, n_classes):\n”,”,” ” super().__init__()\n”,”,” ” \n”,”,” ” self.config = config\n”,”,” ” embed_size = config[\”embed_size\”]\n”,”,” ” \n”,”,” ” self.classifier = nn.Linear(embed_size, n_classes)\n”,”,” ” \n”,”,” ” def forward(self, x, y = None):\n”,”,” ” logits = self.classifier(x[:, 0, :])\n”,”,” ” loss = None\n”,”,” ” if y is not None:\n”,”,” ” loss = F.cross_entropy(logits.view(-1, logits.size(-1)), y.view(-1))\n”,”,” ” return logits, loss””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 27,”,” “id”: “c35e1242-197b-42bf-8ef8-c421823c92b7″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “class RegHead(nn.Module):\n”,”,” ” \”\”\”\n”,”,” ” BERT head for regression.\n”,”,” ” A prediction is only calculated for the first ([CLS]) token.\n”,”,” ” \”\”\”\n”,”,” ” \n”,”,” ” def __init__(self, config):\n”,”,” ” super().__init__()\n”,”,” ” \n”,”,” ” self.config = config\n”,”,” ” embed_size = config[\”embed_size\”]\n”,”,” ” \n”,”,” ” self.regressor = nn.Linear(embed_size, 1)\n”,”,” ” self.loss_fn = nn.MSELoss()\n”,”,” ” \n”,”,” ” def forward(self, x, y = None):\n”,”,” ” y_hat = self.regressor(x[:, 0, :])\n”,”,” ” loss = None\n”,”,” ” if y is not None:\n”,”,” ” loss = self.loss_fn(y_hat.view(-1), y.view(-1))\n”,”,” ” return y_hat, loss””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “977de2bb-ec76-4f87-9064-079c34267cb3″,”,” “metadata”: {“,” “tags”: []”,” },”,” “source”: [“,” “## Training\n”,”,” “\n”,”,” “We have our data and we have our model. Time to set up training loops.\n”,”,” “\n”,”,” “We will train in two phases:\n”,”,” “1. Unsupervised pre-training on the data we’re generating with the `samples_and_masks` function we defined earlier.\n”,”,” “2. Supervised fine-tuning on specific tasks where we take a model from (1) and specialize it further.\n”,”,” “\n”,”,” “### Pre-training””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “a6d4a4f3-45a0-4906-805e-a3b287660213″,”,” “metadata”: {},”,” “source”: [“,” “Configure the model:””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 28,”,” “id”: “30a264fd-19ef-4968-ac38-46c5de7d7a4a”,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [“,” {“,” “name”: “stdout”,”,” “output_type”: “stream”,”,” “text”: [“,” “number of parameters: 110164992\n””,” ]”,” }”,” ],”,” “source”: [“,” “# BERT base, but with relative position embeddings\n”,”,” “config = {\”vocab_size\”: 2**15, \”embed_size\”: 768, \”context_size\”: 128, \”n_layer\”: 12, \”n_head\”: 12, \”pos_emb_radius\”: 16}\n”,”,” “device = \”cuda\”\n”,”,” “bert = BERT(config).to(device)\n”,”,” “mlm_head = MLMHead(config).to(device)””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “57526ca9-a12f-4629-80ff-1367d44394be”,”,” “metadata”: {},”,” “source”: [“,” “Configure the different levels of batching. In this notebook, a minibatch is a set of training samples for which gradients are calculated simultaneously (they are processed by the GPU at the same time). A (regular) batch is a set of training samples for which gradients are accumulated before a training step is taken, batch sizes changed throughout training and are defined in the `get_batch_size` function below. A macrobatch is a set of training samples that gets transferred to the GPU simultaneously. It is the granularity at which learning rate and batch size are varied, and the full duration of the training procedure is expressed in number of macrobatches (`macrobatch_count`). On a 3070 RTX GPU one macrobatch takes a bit over 4 minutes to process.””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 29,”,” “id”: “0353b3f4-bbdd-428e-a68d-1c0f73e5cb24″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “minibatch_size = 2**5 # Number of samples on which we compute gradients simultaneously\n”,”,” “macrobatch_size = 2**15 # Number of samples we transfer to the GPU at the same time\n”,”,” “macrobatch_count = 2**8 # Total number of macrobatches we transfer in the whole training process””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “b30f9be6-fb3e-4e5b-84b1-badc672149fc”,”,” “metadata”: {},”,” “source”: [“,” “The below function yields the training data as Torch tensors, one macrobatch at a time. It returns two values, the first being inputs (the \”x\” data), the second being training targets (the \”y\” data).””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 30,”,” “id”: “929ee0e3-7bfc-4645-aeba-776383f46f88″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “def macrobatches(macrobatch_size):\n”,”,” ” \”\”\”Convert `samples_and_masks` to Torch matrices of size macrobatch_size * 128.\”\”\”\n”,”,” ” ss = samples_and_masks(\”data/pretrain.txt\”, 128, bert_bpe)\n”,”,” ” training_data = []\n”,”,” ” for s in ss:\n”,”,” ” training_data.append(s)\n”,”,” ” if len(training_data) == macrobatch_size:\n”,”,” ” training_data = np.array(training_data, dtype = ‘int16’)\n”,”,” ” yield training_data[:, 0, :], training_data[:, 1, :] * training_data[:, 2, :]\n”,”,” ” training_data = []””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “bbbc13a8-cc4e-46dd-85b8-7cb5bec47712″,”,” “metadata”: {},”,” “source”: [“,” “We vary the learning rate throughout training. The learning rate is first gradually increased (\”warmup\”) and at the end is gradually decreased (\”annealed\”).””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 31,”,” “id”: “ae62d551-e161-4238-94d1-7e61b78229c2″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “def get_lr(macrobatch, max_lr = 1e-3):\n”,”,” ” \”\”\”\n”,”,” ” One-cycle LR schedule, scaled by fraction of training time remaining, as described in Cramming.\n”,”,” ” See plot below.\n”,”,” ” \”\”\”\n”,”,” ” c = macrobatch + 0.5 # Midpoint of chunk\n”,”,” ” lr = max_lr\n”,”,” ” if c / macrobatch_count < 0.5:\n","," " lr = lr * 2 * c / macrobatch_count\n","," " else:\n","," " lr = lr * 2 * (macrobatch_count - c) / macrobatch_count\n","," " lr = lr * (macrobatch_count - c) / macrobatch_count\n","," " return lr""," ]"," },"," {"," "cell_type": "markdown","," "id": "e7ab5a95-8d82-4946-ad80-1c5bfd1e65ae","," "metadata": {},"," "source": ["," "Here is a visualization of how learning rate changes throughout pretraining:""," ]"," },"," {"," "cell_type": "code","," "execution_count": 32,"," "id": "8b424de5-0774-41ee-979a-73cc51e13445","," "metadata": {"," "tags": []"," },"," "outputs": ["," {"," "data": {"," "image/png": "iVBORw0KGgoAAAANSUhEUgAAAlEAAAHHCAYAAACfqw0dAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAAB3ZElEQVR4nO3deVxU5f4H8M/MwAzrgIissqm4oAgKSriWkrikYZZLlktebdEybbWbWv2617TsmmWZLdrtZu6ZexFuqQjK5oYGirKDgOz7zPP7A5maBAUEDsvn/XrNSznnOTOfOSLz5TnPeR6ZEEKAiIiIiOpFLnUAIiIiotaIRRQRERFRA7CIIiIiImoAFlFEREREDcAiioiIiKgBWEQRERERNQCLKCIiIqIGYBFFRERE1AAsooiIiIgagEUUEUnG1dUVs2bNkjpGm3T06FHIZDLs2LGjyV/rnXfegUwma9CxmzZtgkwmw/Xr1xs3FFEzYBFF1MpVfwidPXtW6ijtSnFxMd555x0cPXpU6ihEJBEDqQMQUft15coVyOWt83e54uJivPvuuwCABx98UNowRCQJFlFE1CgqKyuh1WqhVCrrfIxKpWrCRPXTkPxE1L61zl8BiajeUlJS8Mwzz8DW1hYqlQq9e/fGt99+q9emvLwcy5Ytg4+PDywsLGBqaoqhQ4fiyJEjeu2uX78OmUyGjz76CGvWrEHXrl2hUqlw6dIl3fiY+Ph4zJo1C5aWlrCwsMDs2bNRXFys9zx/HxNVfWny5MmTWLx4MTp16gRTU1NMnDgRN2/e1DtWq9XinXfegYODA0xMTPDQQw/h0qVLdRpndbf8dTkH169fR6dOnQAA7777LmQyGWQyGd555x1dm8uXL+Pxxx+HlZUVjIyM4Ovriz179tzrnwkAsGXLFvj4+MDc3BxqtRqenp745JNP9Nrk5uZi0aJFcHV1hUqlQufOnTFjxgxkZWXdcZ7+9a9/oXPnzjAyMsLIkSMRHx9/x2uGhYVh9OjRsLCwgImJCYYPH46TJ0/e0e7EiRMYMGAAjIyM0LVrV3z55Ze1nt9Nmzbdse/v56k2Bw8exNChQ2Fqagpzc3OMGzcOFy9evOdxRM2JPVFE7UBGRgYeeOAByGQyLFiwAJ06dcLBgwcxZ84c5Ofn4+WXXwYA5Ofn4+uvv8a0adMwd+5cFBQU4JtvvkFgYCDCw8Ph7e2t97wbN25EaWkp5s2bB5VKBSsrK92+yZMnw83NDStWrEBkZCS+/vpr2NjYYOXKlffM++KLL6JDhw5Yvnw5rl+/jjVr1mDBggXYunWrrs2SJUuwatUqjB8/HoGBgYiJiUFgYCBKS0vrfF5qyl+Xc9CpUyd88cUXeP755zFx4kQ89thjAIC+ffsCAC5evIjBgwfD0dERb775JkxNTbFt2zYEBQVh586dmDhxYq2ZgoODMW3aNIwcOVJ3rmJjY3Hy5EksXLgQAFBYWIihQ4ciNjYWzzzzDPr374+srCzs2bMHycnJsLa21j3fBx98ALlcjldffRV5eXlYtWoVpk+fjrCwMF2bw4cPY8yYMfDx8cHy5cshl8uxceNGjBgxAr///jsGDhwIADh//jxGjRqFTp064Z133kFlZSWWL18OW1vbOp/zuvj+++8xc+ZMBAYGYuXKlSguLsYXX3yBIUOGICoqCq6uro36ekQNJoioVdu4caMAIM6cOVNrmzlz5gh7e3uRlZWlt33q1KnCwsJCFBcXCyGEqKysFGVlZXptbt26JWxtbcUzzzyj25aQkCAACLVaLTIzM/XaL1++XADQay+EEBMnThQdO3bU2+bi4iJmzpx5x3sJCAgQWq1Wt33RokVCoVCI3NxcIYQQ6enpwsDAQAQFBek93zvvvCMA6D1nTe6Wv67n4ObNmwKAWL58+R3PP3LkSOHp6SlKS0t127RarRg0aJBwd3e/a7aFCxcKtVotKisra22zbNkyAUDs2rXrjn3V5+3IkSMCgOjVq5fe+/nkk08EAHH+/Hlde3d3dxEYGKh3zouLi4Wbm5t4+OGHdduCgoKEkZGRuHHjhm7bpUuXhEKhEH/9OKk+vxs3brwj39/PWfW/eUJCghBCiIKCAmFpaSnmzp2rd1x6erqwsLC4YzuRlHg5j6iNE0Jg586dGD9+PIQQyMrK0j0CAwORl5eHyMhIAIBCodCNCdJqtcjJyUFlZSV8fX11bf5q0qRJustaf/fcc8/pfT106FBkZ2cjPz//npnnzZund8v80KFDodFocOPGDQBASEgIKisr8cILL+gd9+KLL97zue+Vv77n4O9ycnJw+PBhTJ48GQUFBbpznZ2djcDAQMTFxSElJaXW4y0tLVFUVITg4OBa2+zcuRNeXl419mj9faqB2bNn643zGjp0KADg2rVrAIDo6GjExcXhySefRHZ2ti5vUVERRo4ciePHj0Or1UKj0eCXX35BUFAQnJ2ddc/Xq1cvBAYG3vO81FVwcDByc3Mxbdo0ve9VhUIBPz+/Oy4tE0mJl/OI2ribN28iNzcXGzZswIYNG2psk5mZqfv7d999h9WrV+Py5cuoqKjQbXdzc7vjuJq2VfvrBy0AdOjQAQBw69YtqNXqu2a+27EAdMVUt27d9NpZWVnp2tZFbfnrcw7+Lj4+HkIILF26FEuXLq2xTWZmJhwdHWvc98ILL2Dbtm0YM2YMHB0dMWrUKEyePBmjR4/Wtbl69SomTZp0zyzAvc9lXFwcAGDmzJm1PkdeXh7KyspQUlICd3f3O/b36NEDBw4cqFOee6nOM2LEiBr33+t7h6g5sYgiauO0Wi0A4Kmnnqr1g7J6LM///vc/zJo1C0FBQXjttddgY2MDhUKBFStW4OrVq3ccZ2xsXOvrKhSKGrcLIe6Z+X6OrY+a8tf3HPxd9fl+9dVXa+2h+Xvx91c2NjaIjo7GL7/8goMHD+LgwYPYuHEjZsyYge+++66O7+xP9zqX1Xk//PDDO8a8VTMzM0NZWVmdX7O2iTc1Gs09j63O8/3338POzu6O/QYG/NiiloPfjURtXKdOnWBubg6NRoOAgIC7tt2xYwe6dOmCXbt26X0QLl++vKlj1ouLiwuAql6fv/YOZWdn63pYGqqu56C2QqFLly4AAENDw3ue79oolUqMHz8e48ePh1arxQsvvIAvv/wSS5cuRbdu3dC1a1dcuHChQc/9d127dgVQ1cNzt7ydOnWCsbGxrqfor65cuaL3dXVvV25urt726h7EuuSxsbFp8Pkjai4cE0XUxikUCkyaNAk7d+6s8YP3r1MHVPda/LXHJywsDKGhoU0ftB5GjhwJAwMDfPHFF3rbP/vss/t+7rqeAxMTEwB3Fgo2NjZ48MEH8eWXXyItLe2O5//7VA1/l52drfe1XC7X9RRW9wZNmjQJMTEx+Omnn+44vr69dT4+PujatSs++ugjFBYW1ppXoVAgMDAQu3fvRmJiom5/bGwsfvnlF71j1Go1rK2tcfz4cb3tn3/++T3zBAYGQq1W49///rfepdS/5yFqCdgTRdRGfPvttzh06NAd2xcuXIgPPvgAR44cgZ+fH+bOnQsPDw/k5OQgMjISv/32G3JycgAAjzzyCHbt2oWJEydi3LhxSEhIwPr16+Hh4VHjB6xUbG1tsXDhQqxevRoTJkzA6NGjERMTg4MHD8La2rrB67gBdT8HxsbG8PDwwNatW9G9e3dYWVmhT58+6NOnD9atW4chQ4bA09MTc+fORZcuXZCRkYHQ0FAkJycjJiam1tf/xz/+gZycHIwYMQKdO3fGjRs38Omnn8Lb2xu9evUCALz22mvYsWMHnnjiCTzzzDPw8fFBTk4O9uzZg/Xr18PLy6vO71cul+Prr7/GmDFj0Lt3b8yePRuOjo5ISUnBkSNHoFarsXfvXgBVc2IdOnQIQ4cOxQsvvIDKykp8+umn6N27N86dO3fH+/jggw/wj3/8A76+vjh+/Dj++OOPe+ZRq9X44osv8PTTT6N///6YOnUqOnXqhMTEROzfvx+DBw9ulGKZqFFIdl8gETWK6lvEa3skJSUJIYTIyMgQ8+fPF05OTsLQ0FDY2dmJkSNHig0bNuieS6vVin//+9/CxcVFqFQq0a9fP7Fv3z4xc+ZM4eLiomtXfQv7hx9+eEee6ikObt68WWPO6lvZhah9ioO/T9dQfbv+kSNHdNsqKyvF0qVLhZ2dnTA2NhYjRowQsbGxomPHjuK555676zm7W/66ngMhhDh16pTw8fERSqXyjlv3r169KmbMmCHs7OyEoaGhcHR0FI888ojYsWPHXbPt2LFDjBo1StjY2AilUimcnZ3Fs88+K9LS0vTaZWdniwULFghHR0ehVCpF586dxcyZM3XTWFSfs+3bt9f43v8+/UBUVJR47LHHRMeOHYVKpRIuLi5i8uTJIiQkRK/dsWPHdO+5S5cuYv369bp/878qLi4Wc+bMERYWFsLc3FxMnjxZZGZm3nOKg2pHjhwRgYGBwsLCQhgZGYmuXbuKWbNmibNnz971/BE1J5kQjTxSk4hIIrm5uejQoQPef/99/POf/5Q6DhG1cRwTRUStUklJyR3b1qxZA4ALAhNR8+CYKCJqlbZu3YpNmzZh7NixMDMzw4kTJ/Djjz9i1KhRGDx4sNTxiKgdYBFFRK1S3759YWBggFWrViE/P1832Pz999+XOhoRtRMcE0VERETUABwTRURERNQALKKIiIiIGoBjopqQVqtFamoqzM3N72vyPyIiImo+QggUFBTAwcEBcnnt/U0soppQamoqnJycpI5BREREDZCUlITOnTvXup9FVBMyNzcHUPWPoFarJU5DREREdZGfnw8nJyfd53htWEQ1oepLeGq1mkUUERFRK3OvoTgcWE5ERETUACyiiIiIiBqARRQRERFRA7CIIiIiImoAFlFEREREDcAiioiIiKgBWEQRERERNQCLKCIiIqIGYBFFRERE1AAsooiIiIgaoEUUUevWrYOrqyuMjIzg5+eH8PDwu7bfvn07evbsCSMjI3h6euLAgQN6+4UQWLZsGezt7WFsbIyAgADExcXptcnJycH06dOhVqthaWmJOXPmoLCwULf/+vXrkMlkdzxOnz7deG+ciIiIWi3Ji6itW7di8eLFWL58OSIjI+Hl5YXAwEBkZmbW2P7UqVOYNm0a5syZg6ioKAQFBSEoKAgXLlzQtVm1ahXWrl2L9evXIywsDKampggMDERpaamuzfTp03Hx4kUEBwdj3759OH78OObNm3fH6/32229IS0vTPXx8fBr/JBAREVHrIyQ2cOBAMX/+fN3XGo1GODg4iBUrVtTYfvLkyWLcuHF62/z8/MSzzz4rhBBCq9UKOzs78eGHH+r25+bmCpVKJX788UchhBCXLl0SAMSZM2d0bQ4ePChkMplISUkRQgiRkJAgAIioqKgGv7e8vDwBQOTl5TX4OYioZaio1IiC0gqRVVAqMvJKhFarlToSETWRun5+G0hZwJWXlyMiIgJLlizRbZPL5QgICEBoaGiNx4SGhmLx4sV62wIDA7F7924AQEJCAtLT0xEQEKDbb2FhAT8/P4SGhmLq1KkIDQ2FpaUlfH19dW0CAgIgl8sRFhaGiRMn6rZPmDABpaWl6N69O15//XVMmDCh1vdTVlaGsrIy3df5+fl1OxFEJKmySg2uZhbhenYRknKKkXyrBCm5JcgqLEN2YTlyispRUqHRO2a6nzP+NdFTosRE1BJIWkRlZWVBo9HA1tZWb7utrS0uX75c4zHp6ek1tk9PT9ftr952tzY2NjZ6+w0MDGBlZaVrY2ZmhtWrV2Pw4MGQy+XYuXMngoKCsHv37loLqRUrVuDdd9+ty1snIomUVmhwISUPkYm3EJOchyvpBUjIKoJGK+r1PD+EJWJETxuM7GV778ZE1CZJWkS1ZNbW1no9XgMGDEBqaio+/PDDWouoJUuW6B2Tn58PJyenJs9KRLUrq9QgKjEXJ+KycCI+CxdS8lBZQ8GkNjJANxszOFmZwKmDCRw7GKOTmQpWZkpYmShhYWwII0MFVAZy/PtALL4+kYA3d51H8KIOsDRRSvDOiEhqkhZR1tbWUCgUyMjI0NuekZEBOzu7Go+xs7O7a/vqPzMyMmBvb6/XxtvbW9fm7wPXKysrkZOTU+vrAoCfnx+Cg4Nr3a9SqaBSqWrdT0TN41ZROUIuZ+LXi+n4PS7rjktx1mYq9He2hLezJTzs1ehpp4atWgWZTFan5381sAeOXMnE1ZtF2HjyOhY93L0p3gYRtXCSFlFKpRI+Pj4ICQlBUFAQAECr1SIkJAQLFiyo8Rh/f3+EhITg5Zdf1m0LDg6Gv78/AMDNzQ12dnYICQnRFU35+fkICwvD888/r3uO3NxcRERE6O62O3z4MLRaLfz8/GrNGx0drVeYEVHLkV9agUPn0/FzTApOX8vRuzxnbabE4G7WGNzNGv5dOqJzB+M6F0w1MTJUYMGIbli0NQZ7z6Xi5QD3+3o+ImqdJL+ct3jxYsycORO+vr4YOHAg1qxZg6KiIsyePRsAMGPGDDg6OmLFihUAgIULF2L48OFYvXo1xo0bhy1btuDs2bPYsGEDAEAmk+Hll1/G+++/D3d3d7i5uWHp0qVwcHDQFWq9evXC6NGjMXfuXKxfvx4VFRVYsGABpk6dCgcHBwDAd999B6VSiX79+gEAdu3ahW+//RZff/11M58hIqqNVitwPO4mtkck47dLGSir1Or29bJXY5SHLR72sEVvB3WjFzkBvWyhMpDj2s0iXErLR28Hi0Z9fiJq+SQvoqZMmYKbN29i2bJlSE9Ph7e3Nw4dOqQbGJ6YmAi5/M/prAYNGoTNmzfj7bffxltvvQV3d3fs3r0bffr00bV5/fXXUVRUhHnz5iE3NxdDhgzBoUOHYGRkpGvzww8/YMGCBRg5ciTkcjkmTZqEtWvX6mX7v//7P9y4cQMGBgbo2bMntm7discff7yJzwgR3UtWYRm2nU3Cj+GJSMop0W3vZmOGif0cMb6vA5w7mjRpBnMjQ4zoaYODF9KxNyaNRRRROyQTQtTvlhSqs/z8fFhYWCAvLw9qtVrqOESt3h8ZBfj692vYHZWKck1Vr5PayACP9e+Mx306N0mP093sP5eG+Zsj4WhpjBNvPMRLekRtRF0/vyXviSIiupfwhBx8cTQeR67c1G3zcrLEU37OeKSvA4yVCklyjehpAxOlAim5JYhKykV/5w6S5CAiabCIIqIWKzLxFv4T/Ad+j8sCAMhkwOjedvjH0C7wcZG+YDFWKvCwhy1+jk7F3phUFlFE7QyLKCJqcc4l5+I/wX/oep4M5DI84euEZ4d1gau1qcTp9I3v64Cfo1Ox/1wa3h7nAYWcl/SI2gsWUUTUYtzILsK/D8Til4tVc8Ep5DI83r8zFozoBierph0o3lBDu1tDbWSAzIIynLmegwe6dJQ6EhE1ExZRRCS5wrJKfHY4Ht+eSEC5Rgu5DAjq54iXRri3uJ6nv1MZKDC6jx22nU3G3phUFlFE7QiLKCKSjFYrsCsqBSsPXcbNgqrFu4e6W2PZIx5wtzWXOF3djfdywLazyTh4IR3vTOgNQ4X83gcRUavHIoqIJHExNQ9v/XQBMUm5AADXjiZY+ogHRvS0aXVTBfh36YiOpkpkF5Xj1NVsDO/eSepIRNQMWEQRUbMqrdDg08NxWH/sGjRaAVOlAi+OdMfswa5QGUgzVcH9MlDIMdbTHt+fvoG9MaksoojaCfY5E1GzibiRg3Frf8e6I1eh0QqM9bTDkVcfxHPDu7baAqraeK+qJaN+uZCOskrNPVoTUVvAnigianJFZZX48Jcr+C70OoQArM1UeD+oN0b3aTsLevu6dICd2gjp+aU4duUmRvW2kzoSETUx9kQRUZM6l5yLRz49gU2nqgqoJ3w6I2Tx8DZVQAGAXC7DuL5V72nvuTSJ0xBRc2BPFBE1Ca1WYMPv1/DRL1dQqRWwtzDCykl9MawNjxca7+WAb04k4LdLGSgur4SJkj9iidoy/g8nokaXnleKV7ZH42R8NgBgrKcdVkzsCwsTQ4mTNS2vzhZwsjJGUk4JQmIzdeOkiKht4uU8ImpUwZcyMOaT4zgZnw1jQwVWTvLEuif7t/kCCgBkMhnG960qnPbGpEqchoiaGosoImoUGq3AqkOXMfe/Z3GruAK9HdTY99IQTBng3Ormfbof1b1PR6/cRH5phcRpiKgpsYgiovt2q6gcszaG4/OjVwEAswe7YtcLg9C1k5nEyZpfTztzdLMxQ7lGi19vrwFIRG0Tiygiui8XUvIw/rMT+D0uC8aGCnwy1RvLx/du9fM+NdRfL+n9HJ0icRoiakosooiowXZGJGPSF6eQfKsELh1NsOuFQXjU21HqWJJ71LuqiDoZn4XMglKJ0xBRU2ERRUT1ptUKrDgQi1e2x6CsUouHenTCnvlD0MteLXW0FsHV2hReTpbQCmBfDOeMImqrWEQRUb2UlGswf3Mkvjx+DQDw0ohu+GbmgHZx9119BN3ujfqZd+kRtVksooiozm4WlGHqV6dx8EI6lAo5/jPFC4tH9YBc3n7uvqurR/o6QCGXISYpFwlZRVLHIaImwCKKiOrkj4wCBK07iZikXFiaGOL7OQMxsV9nqWO1WJ3MVRjczRoAB5gTtVUsoojonk7FZ2HS56eQklsC144m2PX8IPh16Sh1rBZPd0kvOhVCCInTEFFjYxFFRHd16EI6Zm08g4KySgxw7YBdLwxGl3Y4/1NDjOptByNDORKyinAuOU/qOETUyFhEEVGttp1Nwgs/RKBco0Vgb1t8P8cPVqZKqWO1GmYqAwT0sgUA7OYlPaI2h0UUEdXo69+v4fUd56AVwGTfzlj3ZH8YGbbPCTTvR9DtebP2xqShUqOVOA0RNSYWUUSkRwiBj365gvf3xwIA5g3rgpWT+sJAwR8XDTGseydYmhgiq7AModeypY5DRI2IPxWJSEerFVi+5yI+OxIPAHh9dA8sGdOzXS0g3NiUBnKM87QHAOyO4pxRRG0JiygiAlBVQL398wX8N/QGZDLgXxP74IUHu7GAagRB/aou6f1yMR2lFRqJ0xBRY2ERRUS6AmpzWCJkMmD1E16Y7ucidaw2w8e5AxwtjVFYVonfYjOkjkNEjYRFFFE7p9UK/HO3fgH1WH9OotmY5HKZblFiXtIjajtYRBG1Y1UF1Hn8GJ4IuQz4eDILqKZSfUnv2B+ZyC0ulzgNETUGFlFE7dSfBVTS7QLKm8u4NKHutuboaWeOCo3A/vNpUschokbAIoqoHRJC4L19l/QKqOqeEmo61ef452he0iNqC1hEEbVDa36Lw6ZT1wEAHz7uxQKqmUzwcoBMBoQn5CAlt0TqOER0n1hEEbUzX/9+DZ+ExAEA3p3QG5N8eAmvuThYGmOgqxUAYA97o4haPRZRRO3ItjNJupnIXx3VHTMHuUobqB2aeLvXb1dkMoQQEqchovvBIoqonThwPg1v7joHoGopl/kPdZM4Ufs0xtMeSgM54jILcTE1X+o4RHQfWEQRtQO/x93Ewi1R0Apg6gAnLuUiIQtjQzzsYQsA2BmZLHEaIrofLKKI2riLqXl47vsIVGgExvW1x78merKAktik/lWX9PZEp6JCo5U4DRE1FIsoojYsJbcEszeeQVG5Bg90scLHk72gkLOAktpQ906wNlMiu6gcx/+4KXUcImogFlFEbVReSQVmbwxHZkEZutua4cunfaEyUEgdiwAYKuSY4HV7gHlUisRpiKihWEQRtUFllRo8+/1Z/JFRCFu1CptmD4SFsaHUsegvHrt9SS/4UgbySiokTkNEDcEiiqiN0WoFXt9xDqev5cBMZYCNswbCwdJY6lj0N70d1Ohha47ySi0OcBkYolaJRRRRG/PRr1fwc3QqDOQyfPFUf3g4qKWORDWQyWS63qhdvEuPqFViEUXUhuyKTMbnR68CAFY85omh7p0kTkR3E9TPEXIZcOb6LdzILpI6DhHVE4soojYiMvEW3tx5HgAw/6GueMLXSeJEdC+2aiMM7mYNAPiJA8yJWh0WUURtQGpuCeb9NwLlGi1GedjilYd7SB2J6mhS/6q1C3dFpnAZGKJWhkUUUStXXF6Juf89i6zCMvS0M8d/pnhDzrmgWo1RvW1hqlQgMacYETduSR2HiOqBRRRRK6bVCryyLQYXU/PR0VSJr2f6wlRlIHUsqgcTpQHGeNoD4JxRRK0NiyiiVmxNSBwOXkiHUiHHl0/7oHMHE6kjUQNU36W3LyYVpRUaidMQUV2xiCJqpQ5dSMfakDgAwL8m9oGvq5XEiaihHnDrCAcLI+SXVuLw5Uyp4xBRHbGIImqFrt4sxKvbYwAAc4a48U68Vk4ul2Ei54wianVYRBG1MkVllXju+wgUllVioJsV3hzTU+pI1Agm9qu6S+/olZvIKiyTOA0R1QWLKKJWRAiB13eeQ1xm1Zp4657sD0MF/xu3Bd1szODV2QKVWoE90alSxyGiOuBPX6JW5JsTCdh/Lg2GChk+n94fncxVUkeiRvTY7TmjdvKSHlGr0CKKqHXr1sHV1RVGRkbw8/NDeHj4Xdtv374dPXv2hJGRETw9PXHgwAG9/UIILFu2DPb29jA2NkZAQADi4uL02uTk5GD69OlQq9WwtLTEnDlzUFhYWOPrxcfHw9zcHJaWlvf1Ponux+lr2Vhx8DIAYOkjHvBx4UDytmaClwOUCjkupubjUmq+1HGI6B4kL6K2bt2KxYsXY/ny5YiMjISXlxcCAwORmVnzHSqnTp3CtGnTMGfOHERFRSEoKAhBQUG4cOGCrs2qVauwdu1arF+/HmFhYTA1NUVgYCBKS0t1baZPn46LFy8iODgY+/btw/HjxzFv3rw7Xq+iogLTpk3D0KFDG//NE9VRRn4pFmyOhEYrMLGfI55+wEXqSNQEOpgq8bCHLQBge0SSxGmI6F5kQuJ1Bvz8/DBgwAB89tlnAACtVgsnJye8+OKLePPNN+9oP2XKFBQVFWHfvn26bQ888AC8vb2xfv16CCHg4OCAV155Ba+++ioAIC8vD7a2tti0aROmTp2K2NhYeHh44MyZM/D19QUAHDp0CGPHjkVycjIcHBx0z/3GG28gNTUVI0eOxMsvv4zc3Nw6v7f8/HxYWFggLy8ParW6IaeHCJUaLZ78OgzhCTnoaWeOn14YDGOlQupY1ESOXMnE7I1n0MHEEGFvBUBpIPnvukTtTl0/vyX931leXo6IiAgEBATotsnlcgQEBCA0NLTGY0JDQ/XaA0BgYKCufUJCAtLT0/XaWFhYwM/PT9cmNDQUlpaWugIKAAICAiCXyxEWFqbbdvjwYWzfvh3r1q2r0/spKytDfn6+3oPofq0NiUN4Qg5MlQp88ZQPC6g2bph7J9iqVbhVXIGQ2Ayp4xDRXUhaRGVlZUGj0cDW1lZvu62tLdLT02s8Jj09/a7tq/+8VxsbGxu9/QYGBrCystK1yc7OxqxZs7Bp06Y69yKtWLECFhYWuoeTE+fuoftzMj4Lnx6JBwCsmNQXbtamEieipqaQy3SLEm+P4ABzopaM/cS1mDt3Lp588kkMGzaszscsWbIEeXl5ukdSEsc0UMNlFpRi4ZZoCAFMG+iECV4O9z6I2oTHfarnjMpERn7pPVoTkVQkLaKsra2hUCiQkaHfZZ2RkQE7O7saj7Gzs7tr++o/79Xm7wPXKysrkZOTo2tz+PBhfPTRRzAwMICBgQHmzJmDvLw8GBgY4Ntvv60xm0qlglqt1nsQNYRGK7BoazSyCsvQw9Ycyx7pLXUkakZdOpnB16UDtALYFclFiYlaKkmLKKVSCR8fH4SEhOi2abVahISEwN/fv8Zj/P399doDQHBwsK69m5sb7Ozs9Nrk5+cjLCxM18bf3x+5ubmIiIjQtTl8+DC0Wi38/PwAVI2bio6O1j3ee+89mJubIzo6GhMnTmycE0BUiy+OxuNkfDaMDRVYN70fx0G1Q0/4Vl/SS4LE9/8QUS0MpA6wePFizJw5E76+vhg4cCDWrFmDoqIizJ49GwAwY8YMODo6YsWKFQCAhQsXYvjw4Vi9ejXGjRuHLVu24OzZs9iwYQMAQCaT4eWXX8b7778Pd3d3uLm5YenSpXBwcEBQUBAAoFevXhg9ejTmzp2L9evXo6KiAgsWLMDUqVN1d+b16tVLL+fZs2chl8vRp0+fZjoz1F6FJ+Tg4+A/AAD/F9QH3WzMJU5EUhjX1wHv7LmEazeLEJmYCx+XDlJHIqK/kbyImjJlCm7evIlly5YhPT0d3t7eOHTokG5geGJiIuTyPzvMBg0ahM2bN+Ptt9/GW2+9BXd3d+zevVuvuHn99ddRVFSEefPmITc3F0OGDMGhQ4dgZGSka/PDDz9gwYIFGDlyJORyOSZNmoS1a9c23xsnqkFecQUWbomCVgCP9XfUjY2h9sdMZYCxnvbYGZmMHRFJLKKIWiDJ54lqyzhPFNWHEAIv/hiFfefS4NrRBPtfGgpTleS/55CETl/LxtQNp2GmMkD4P0fCRMnvB6Lm0CrmiSKiP/0UlYJ959KgkMvwnyneLKAIfm5WcLYyQWFZJQ5dqHnaFyKSDosoohYgKacYy36+CABYONId/Zx56Yaqxng+cfuS7vaznDOKqKVhEUUkserpDArLKuHj0gEvPNhV6kjUgkzy6QyZDAi9lo2knGKp4xDRX7CIIpLYF0fjcfbGLZipDLBmijcMFPxvSX9ysDTGkG7WADiDOVFLw5/WRBKKScrFmt/iAADvTugNJysTiRNRS1R9l+bOiGRotbwXiKilYBFFJJGScg1e3hqNSq3AuL72eKy/o9SRqIUK7G0HtZEBUnJLEHotW+o4RHQbiygiiaw8dBkJWUWwUxvh30GekMlkUkeiFsrIUIEJ3lUTAW87yzU5iVoKFlFEEjh9LRubTl0HAKx8vC8sTAylDUQt3mRfJwDAwQvpyC0ulzgNEQEsooiaXVFZJV7bEQMAmDbQCcO7d5I4EbUGno4W6GWvRnmlFj9FcVFiopaARRRRM1txMBZJOSVwtDTGP8d5SB2HWgmZTIZpA6t6o7aEc1FiopaARRRRMzoRl4X/nU4EAHz4eF+YcVZyqodHvR1hZCjHlYwCRCflSh2HqN1jEUXUTApKK/DGznMAgBn+Lhh0e+4forqyMDbEWE97AFW9UUQkLRZRRM3kX/tjkZJbAmcrE7wxuqfUcaiVmjrAGQCw91wqCssqJU5D1L6xiCJqBsf+uIktZ5IgkwEfPeHFxYWpwQa4dkDXTqYoLtdgT3Sq1HGI2jUWUURNrKisEm/tOg8AmDXIFQPdrCRORK2ZTCbT9UZtPZMocRqi9o1FFFET++jXK0jJLUHnDsZ4LbCH1HGoDXisvyMMFTLEJOfhUmq+1HGI2i0WUURNKDLxlm5SzX9P9ISJkpfx6P51NFNhlIcdAGALe6OIJMMiiqiJlFdq8ebOcxCiqudgGCfVpEY09facUT9FpaCkXCNxGqL2iUUUURP54uhV/JFRiI6mSizlpJrUyAZ3tUbnDsYoKK3EwQtpUschapdYRBE1gbiMAnx2JA4A8M6E3uhgqpQ4EbU1crkMU3z/nMGciJofiyiiRqbRCryx8xwqNAIje9rgkb72UkeiNuoJXyfIZUD49RzEZxZKHYeo3WERRdTI/nf6BiITc2GmMsD7E/tAJpNJHYnaKDsLI4zoaQOA0x0QSYFFFFEjysgvxYe/XAEAvDG6B+wtjCVORG3dlNtzRu2MTEF5pVbiNETtC4sookb0f/suobCsEt5Olpju5yJ1HGoHHurRCbZqFXKKyhF8KUPqOETtCosookZy7I+b2HcuDXIZ8H5QH8jlvIxHTc9AIccTPlUDzH8M5yU9oubEIoqoEZRWaLDs5wsAgFmD3NDH0ULiRNSeTBngBJkMOBGfhetZRVLHIWo3WEQRNYLPj17Fjexi2KmNsHhUd6njUDvjZGWC4bcnc93M3iiiZsMiiug+Xb1ZiPVHrwIAlo33gJmKS7tQ83vq9hi87WeTUFrBGcyJmgOLKKL7IITA0t0XUK7R4sEenTCmj53UkaideqinDRwsjHCruIIzmBM1ExZRRPfh5+hUnLqaDZWBHO9N4JxQJB2FXIZpA6umO/jfaV7SI2oOLKKIGii/tALv748FALw4ohucO5pInIjauykDnGAglyHixi3EpuVLHYeozWMRRdRAn/wWh6zCMnSxNsXcYV2kjkMEG7URRvW2BQD8EHZD4jREbR+LKKIGiMsowHenrgMAlk/oDZWBQtpARLdVDzD/KTIFhWWVEqchattYRBHVkxAC7+y9iEqtwMMetrpby4laAv+uHdHF2hRF5RrsjkqROg5Rm8YiiqieDl1Ix8n4bCgN5Fg6zkPqOER6ZDIZnvSrGmD+Q1gihBASJyJqu1hEEdVDSblGN5j8uWFdOJicWqTHfTpDZSBHbFo+IhNzpY5D1GaxiCKqhy+OxiMltwSOlsZ4/sFuUschqpGliRLjvRwAcIA5UVNiEUVUR4nZxVh//BoA4O1xvWCs5GByarmm376kt+9cGm4VlUuchqhtYhFFVEf/t/8Syiu1GNytI0ZzZnJq4bydLNHbQY3ySi12RCRLHYeoTWIRRVQHv8fdRPClDBjIZXhnfG/OTE4tnkwmw1MPVE13sDk8EVotB5gTNTYWUUT3UKnR4v19VYPJn/Z3gbutucSJiOpmgpcDzFUGSMgqwqmr2VLHIWpzWEQR3cOWM0m4klEASxNDvDyyu9RxiOrMVGWAif0dAQD/O80B5kSNjUUU0V3kl1bgP8F/AABeHukOCxNDiRMR1U/1Jb3g2Ayk5pZInIaobWERRXQX6w7HI7uoHF07mWL67Q8jotaku605HuhiBY1WcLoDokbGIoqoFonZxdh48joA4O1xHjBU8L8LtU6zBrkCAH4MT0JphUbaMERtCD8ViGqx4mAsyjVaDHW3xoM9uD4etV4BvWzhYGGEnKJy7D+XJnUcojaDRRRRDcKuZePghXTIZVW9UJzSgFozA4Vcdzn6u9DrXE+PqJGwiCL6G61W4P/2XwIATBvojB52nNKAWr9pA52hNJDjXHIeopJypY5D1CawiCL6m11RKbiQkg9zlQEWP8wpDahtsDJVYsLt9fT+e+q6tGGI2ogGFVGVlZX47bff8OWXX6KgoAAAkJqaisLCwkYNR9TcSis0+OiXKwCA+SO6oaOZSuJERI1npr8rAGD/+TRkFpRKG4aoDah3EXXjxg14enri0Ucfxfz583Hz5k0AwMqVK/Hqq682ekCi5vTtyQSk55fC0dJYd0cTUVvh2dkC/Z0tUaER+DEsSeo4RK1evYuohQsXwtfXF7du3YKxsbFu+8SJExESEtKo4YiaU05ROb44chUA8GpgdxgZKiRORNT4Zt7+5eCHsBsor9RKG4aolat3EfX777/j7bffhlKp1Nvu6uqKlJSURgtG1Nw+OxyPgrJKeNir8aiXo9RxiJrEmD726GSuQmZBGX65mC51HKJWrd5FlFarhUZz52RtycnJMDfnXUzUOiVmF+P709cBAEvG9oRczikNqG1SGsjx5EBnAMB3HGBOdF/qXUSNGjUKa9as0X0tk8lQWFiI5cuXY+zYsY2ZjajZfPTrFVRoBIa6W2OoOyfWpLZtup8zDOQynL1xCxdS8qSOQ9Rq1buIWr16NU6ePAkPDw+UlpbiySef1F3KW7lyZVNkJGpS55JzsScmFTIZ8OaYnlLHIWpyNmojjPW0B8DeKKL7Ue8iqnPnzoiJicE///lPLFq0CP369cMHH3yAqKgo2NjYNCjEunXr4OrqCiMjI/j5+SE8PPyu7bdv346ePXvCyMgInp6eOHDggN5+IQSWLVsGe3t7GBsbIyAgAHFxcXptcnJyMH36dKjValhaWmLOnDl6UzRcuXIFDz30EGxtbWFkZIQuXbrg7bffRkVFRYPeI7VMQgisOHAZADDR2xG9HSwkTkTUPGYOqprB/OeYVNwqKpc4DVHrVO8i6vjx4wCA6dOnY9WqVfj888/xj3/8A4aGhrp99bF161YsXrwYy5cvR2RkJLy8vBAYGIjMzMwa2586dQrTpk3DnDlzEBUVhaCgIAQFBeHChQu6NqtWrcLatWuxfv16hIWFwdTUFIGBgSgt/XNelOnTp+PixYsIDg7Gvn37cPz4ccybN0+339DQEDNmzMCvv/6KK1euYM2aNfjqq6+wfPnyer9HarmO/nETodeyoTSQY/EoTqxJ7Ud/5w7o46hGeaUWW85wugOiBhH1JJfLRUZGxh3bs7KyhFwur+/TiYEDB4r58+frvtZoNMLBwUGsWLGixvaTJ08W48aN09vm5+cnnn32WSGEEFqtVtjZ2YkPP/xQtz83N1eoVCrx448/CiGEuHTpkgAgzpw5o2tz8OBBIZPJREpKSq1ZFy1aJIYMGVLn95aXlycAiLy8vDofQ82nUqMVgf85Jlze2Cf+tf+S1HGImt22M4nC5Y19YtCKEFFRqZE6DlGLUdfP73r3RAkhalyMNTs7G6ampvV6rvLyckRERCAgIEC3TS6XIyAgAKGhoTUeExoaqtceAAIDA3XtExISkJ6ertfGwsICfn5+ujahoaGwtLSEr6+vrk1AQADkcjnCwsJqfN34+HgcOnQIw4cPr/X9lJWVIT8/X+9BLdfemFRcTi+A2sgALzzYVeo4RM1uvJcDrEyVSMktwS8XM6SOQ9TqGNS14WOPPQag6m68WbNmQaX6czkMjUaDc+fOYdCgQfV68aysLGg0Gtja2uptt7W1xeXLl2s8Jj09vcb26enpuv3V2+7W5u/jtwwMDGBlZaVrU23QoEGIjIxEWVkZ5s2bh/fee6/W97NixQq8++67te6nlqO8UouPg/8AADw7vCssTZT3OIKo7TEyVOApP2esPRyPb05cw7i+9lJHImpV6twTZWFhAQsLCwghYG5urvvawsICdnZ2mDdvHv73v/81ZVZJbN26FZGRkdi8eTP279+Pjz76qNa2S5YsQV5enu6RlMRxBi3V1rNJSMwphrWZCrMHu0odh0gyT/m7QKmQIzIxF5GJt6SOQ9Sq1LknauPGjQCqZiZ/9dVX633pribW1tZQKBTIyNDvRs7IyICdnV2Nx9jZ2d21ffWfGRkZsLe312vj7e2ta/P3geuVlZXIycm543WdnJwAAB4eHtBoNJg3bx5eeeUVKBR3LgmiUqn0euioZSop1+DTkKq7NV8c0Q0myjr/NyBqc2zMjTDB2wE7IpLxzYkE9H+yg9SRiFqNeo+JWr58eaMUUACgVCrh4+Ojt+aeVqtFSEgI/P39azzG39//jjX6goODde3d3NxgZ2en1yY/Px9hYWG6Nv7+/sjNzUVERISuzeHDh6HVauHn51drXq1Wi4qKCmi1XG+qNfsu9DoyC8rgaGmMqQOdpI5DJLlnBrsBAA5dSEdKbonEaYhajwb9Cr5jxw5s27YNiYmJKC/Xn18kMjKyXs+1ePFizJw5E76+vhg4cCDWrFmDoqIizJ49GwAwY8YMODo6YsWKFQCqFkAePnw4Vq9ejXHjxmHLli04e/YsNmzYAKBqzNbLL7+M999/H+7u7nBzc8PSpUvh4OCAoKAgAECvXr0wevRozJ07F+vXr0dFRQUWLFiAqVOnwsHBAQDwww8/wNDQEJ6enlCpVDh79iyWLFmCKVOmwNDQsCGnjVqAvJIKfHG0apHhRQ93h8qAiwwTeTioMahrR5y6mo3vTl3HW2N7SR2JqFWod0/U2rVrMXv2bNja2iIqKgoDBw5Ex44dce3aNYwZM6beAaZMmYKPPvoIy5Ytg7e3N6Kjo3Ho0CHdwPDExESkpaXp2g8aNAibN2/Ghg0b4OXlhR07dmD37t3o06ePrs3rr7+OF198EfPmzcOAAQNQWFiIQ4cOwcjISNfmhx9+QM+ePTFy5EiMHTsWQ4YM0RViQNVA85UrV2LgwIHo27cv3n33XSxYsABff/11vd8jtRxf/34NeSUV6GZjhon9uMgwUbU5Q6p6o34MT0RRWaXEaYhaB5kQQtTngJ49e2L58uWYNm0azM3NERMTgy5dumDZsmXIycnBZ5991lRZW538/HxYWFggLy8ParVa6jjt3s2CMgz/8AiKyzVY/1R/jO7DO5GIqmm1AgEfH8O1rCK8M94Ds25f4iNqj+r6+V3vnqjExETdVAbGxsYoKCgAADz99NP48ccfGxiXqOmtOxKP4nIN+na2QGDvmm9cIGqv5HIZZt/ujfr25HVotPX6/ZqoXap3EWVnZ4ecnBwAgLOzM06fPg2gapLLenZqETWblNwSbA5LBAC8Ftijxgljidq7Sf0dYWFsiMScYvwWy8k3ie6l3kXUiBEjsGfPHgDA7NmzsWjRIjz88MOYMmUKJk6c2OgBiRrDuiPxKNdo8UAXKwzpZi11HKIWyURpgCf9nAEA35xIkDgNUctX77vzNmzYoLvFf/78+ejYsSNOnTqFCRMm4Nlnn230gET3K/lWMbafrZr4dFFAd/ZCEd3FTH9XfHX8GsITcnAhJQ99HC2kjkTUYtWrJ6qyshLvv/++3tIoU6dOxdq1a/Hiiy9CqeTSGdTyrDsSjwqNwKCuHeHXpaPUcYhaNDsLIzxye/kX9kYR3V29iigDAwOsWrUKlZW8/ZVah6ScYmw/mwygal4oIrq3OUO6AKhapDs9r1TiNEQtV73HRI0cORLHjh1riixEje6zw/Go1AoMdbfGAFcrqeMQtQqenS0w0NUKlVqB/4ZelzoOUYtV7zFRY8aMwZtvvonz58/Dx8fnjiVgJkyY0GjhiO5HYnYxdkRW9UK9HMBeKKL6eGaIG8Kv52BzeCIWcI1JohrV+3/FCy+8AAD4+OOP79gnk8mg0WjuPxVRI/j0cBw0WoFh3TvBx4WLqhLVx8MetnDpaIIb2cXYdiaJk28S1aDel/O0Wm2tDxZQ1FJczyrCrqgUAMCiAHeJ0xC1Pgq5DHOHVo2N+ur3BFRquPA60d/Vu4giag0+PRwPjVbgwR6d0M+ZvVBEDfG4T2d0NFUiJbcE+8+n3fsAonaGRRS1OQlZRfgpimOhiO6XkaECswa5AgC+PHaNq1IQ/Q2LKGpzPg2Jg1YAI3rawNvJUuo4RK3a0/4uMDZU4FJaPk7EZ0kdh6hFYRFFbcrVm4XYHV01FupljoUium+WJkpMHegEoKo3ioj+xCKK2pTqXqiAXjbo29lS6jhEbcKcIW5QyGU4EZ+FCyl5UschajHqXUTl5+fX+CgoKEB5eXlTZCSqk/jMQvwckwqAY6GIGlPnDiYYf3spmC+PszeKqFq9iyhLS0t06NDhjoelpSWMjY3h4uKC5cuX6xYpJmounx+NhxBAQC9bLppK1MjmDesKANh/LhVJOcUSpyFqGepdRG3atAkODg546623sHv3buzevRtvvfUWHB0d8cUXX2DevHlYu3YtPvjgg6bIS1SjpJxi/Bxd1Qv14ohuEqchans8HNQY1r0TtAL4+nf2RhEBDZix/LvvvsPq1asxefJk3bbx48fD09MTX375JUJCQuDs7Ix//etfeOuttxo1LFFt1h+7Cs3tNfK8eEceUZN4blgXHP/jJraeTcLCgO6wMlVKHYlIUvXuiTp16hT69et3x/Z+/fohNDQUADBkyBAkJibefzqiOsjIL8X2s1XzQi14iL1QRE3Fv2tHeDpaoLRCy4WJidCAIsrJyQnffPPNHdu/+eYbODlV3QabnZ2NDh04SzQ1jw3Hr6Fco8UA1w7w69JR6jhEbZZMJsOzw6uWgvnu1HWUlHOpL2rf6n0576OPPsITTzyBgwcPYsCAAQCAs2fP4vLly9ixYwcA4MyZM5gyZUrjJiWqQU5ROTaHVfV6LhjBeaGImtro3nZwtjJBYk4xtkckYYa/q9SRiCRT756oCRMm4PLlyxgzZgxycnKQk5ODMWPG4PLly3jkkUcAAM8//zw+/vjjRg9L9HffnkhASYUGno4WGOZuLXUcojbPQCHH3GFVvVEbjl/jwsTUrtW7JwoA3NzcePcdSS6vpALfnboOAJj/UDfIZDJpAxG1E0/4dMaa4D+QfKsE+86lIaifo9SRiCTRoCIqNzcX4eHhyMzMvGM+qBkzZjRKMKJ7+T70OgrKKtHd1gyjPGyljkPUbhgZKvDMEDd8+MsVrDsSjwleDpDL+UsMtT/1LqL27t2L6dOno7CwEGq1Wu+3f5lMxiKKmkVxeSW+OZEAoKoXij/AiZrX0/4uWH/sKuIyC/HrpXSM7mMvdSSiZlfvMVGvvPIKnnnmGRQWFiI3Nxe3bt3SPXJycpoiI9EdNocl4lZxBVw6mmCcJ394EzU3tZEhZg1yBQB8diQeQghpAxFJoN5FVEpKCl566SWYmJg0RR6ieyqt0GDD7fW7XniwKwwUXEebSAqzB7vB2FCBCyn5OPrHTanjEDW7en/6BAYG4uzZs02RhahOdkQkI7OgDA4WRpjYr7PUcYjaLStTJZ56wBkAsO4we6Oo/an3mKhx48bhtddew6VLl+Dp6QlDQ0O9/RMmTGi0cER/V6HR4oujVwEA84Z1gdKAvVBEUpo7tAu+C72BszduISwhBw9wwltqR+pdRM2dOxcA8N57792xTyaTQaPhDLbUdPbGpCIltwTWZkpMHegsdRyids9GbYTJvp3xv9OJ+OxwPIsoalfq/Wu8Vqut9cECipqSEAJfHqsaCzV7sBuMDBUSJyIiAHh2WFco5DKciM9CdFKu1HGImg2vhVCrcfSPm7iSUQBTpQJPPeAidRwius3JygQTb0+4+dnheInTEDWfOl3OW7t2LebNmwcjIyOsXbv2rm1feumlRglG9HdfHqsaC/WknzMsjA3v0ZqImtPzD3bFzshk/Babgdi0fPSyV0sdiajJyUQdbqdwc3PD2bNn0bFjR7i5udX+ZDIZrl271qgBW7P8/HxYWFggLy8PajV/oNyP6KRcBK07CQO5DL+/8RDsLYyljkREfzN/cyT2n0vDI33t8dmT/aWOQ9Rgdf38rlNPVEJCQo1/J2ouG45X9UI96u3IAoqohZr/YDfsP5eG/efTsOhmIbp2MpM6ElGT4pgoavGuZxXh4IV0AFXTGhBRy+ThoEZALxsIAd1UJERtWb2nONBoNNi0aRNCQkJqXID48OHDjRaOCAC++v0ahABG9LRBDztzqeMQ0V3Mf6gbfovNxO6oFCwc6Q4nK65uQW1XvYuohQsXYtOmTRg3bhz69OmjtwAxUWO7WVCG7RHJAIBn2QtF1OL1c+6AId2scSI+C58djsfKx/tKHYmoydS7iNqyZQu2bduGsWPHNkUeIj3/Db2O8kot+jlbYqCbldRxiKgOFj3sjhPxWdgRmYz5D3WDc0f2RlHbVO8xUUqlEt26dWuKLER6isoq8d/QGwCqJvNjrydR6+DjYoWh7tbQaAU+PRwndRyiJlPvIuqVV17BJ598woUmqcltOZOEvJIKdLE2xcMetlLHIaJ6WPRwdwDArqgUXM8qkjgNUdOo9+W8EydO4MiRIzh48CB69+59xwLEu3btarRw1H5VaLT45veqOcfmDusChZy9UEStSX/nDhjevROO/XETnx6Ox+rJXlJHImp09S6iLC0tMXHixKbIQqSz71wqUvNKYW2m0i0nQUSty6KHu+PYHzfxU1QyFozoBjdrU6kjETWqehVRlZWVeOihhzBq1CjY2dk1VSZq5/QXGnblQsNErZS3kyUe6tEJR67cxKchcfh4irfUkYgaVb3GRBkYGOC5555DWVlZU+UhwrE/buJy+u2Fhv240DBRa/ZyQNXYqN3RKbh6s1DiNESNq94DywcOHIioqKimyEIEAPjmRNXSQlMGOMPChAsNE7VmXk6WCOhlA60APg3hnXrUttR7TNQLL7yAV155BcnJyfDx8YGpqf417r59ObEaNdyV9AL8HpcFuazqUh4RtX4vB3THb7GZ2BOTigUj3NHNhmvqUdtQ7yJq6tSpAICXXnpJt00mk0EIAZlMBo1G03jpqN359nYvVGBvOy4XQdRG9HG0wMMetgi+lIG1IXFYO62f1JGIGkW9i6iEhISmyEGErMIy/BSdAgCYM8RN4jRE1JheDnBH8KUM7D2XihdHdIO7LdfBpNav3kWUiwsH+lLT+OF0IsortfBysoSPSwep4xBRI+rtYIHA3rb45WIGPgmJw2dP9pc6EtF9q3cRVe3SpUtITExEeXm53vYJEybcdyhqf0orNPj+9HUAVb1QXOKFqO15OaA7frmYgf3n0/BiegF62LE3ilq3ehdR165dw8SJE3H+/HndWCgAug89jomihtgTk4qswnLYWxhhTB/OQUbUFvWyV2Ospx0OnE/HR79ewVczfKWORHRf6j3FwcKFC+Hm5obMzEyYmJjg4sWLOH78OHx9fXH06NEmiEhtnRBCN6B85iBXGCrq/W1JRK3E4oe7Qy4Dgi9lICrxltRxiO5LvT+tQkND8d5778Ha2hpyuRxyuRxDhgzBihUr9O7YI6qrU1ezcTm9AMaGCkwb4Cx1HCJqQt1szDGpf2cAwIe/XJE4DdH9qXcRpdFoYG5edR3b2toaqampAKoGnF+5wv8QVH/Vk2s+4duZk2sStQMLA9yhVMhx6mo2TsZnSR2HqMHqXUT16dMHMTExAAA/Pz+sWrUKJ0+exHvvvYcuXbo0KMS6devg6uoKIyMj+Pn5ITw8/K7tt2/fjp49e8LIyAienp44cOCA3n4hBJYtWwZ7e3sYGxsjICAAcXH6M+Xm5ORg+vTpUKvVsLS0xJw5c1BY+OeSBEePHsWjjz4Ke3t7mJqawtvbGz/88EOD3h/V7urNQhy+nAmZDJg9mNMaELUHnTuY4Em/ql7nVb9c0Y2tJWpt6l1Evf3229BqtQCA9957DwkJCRg6dCgOHDiAtWvX1jvA1q1bsXjxYixfvhyRkZHw8vJCYGAgMjMza2x/6tQpTJs2DXPmzEFUVBSCgoIQFBSECxcu6NqsWrUKa9euxfr16xEWFgZTU1MEBgaitLRU12b69Om4ePEigoODsW/fPhw/fhzz5s3Te52+ffti586dOHfuHGbPno0ZM2Zg37599X6PVLuNJ6t6oUb2tOUK70TtyPyHusFEqUBMUi5+uZghdRyihhGNIDs7W2i12gYdO3DgQDF//nzd1xqNRjg4OIgVK1bU2H7y5Mli3Lhxetv8/PzEs88+K4QQQqvVCjs7O/Hhhx/q9ufm5gqVSiV+/PFHIYQQly5dEgDEmTNndG0OHjwoZDKZSElJqTXr2LFjxezZs+v83vLy8gQAkZeXV+dj2pOcwjLR4+0DwuWNfeJUfJbUcYiomX146LJweWOfCFh9VFRqGvYZQtQU6vr53eDboOLj4/HLL7+gpKQEVlZWDXqO8vJyREREICAgQLdNLpcjICAAoaGhNR4TGhqq1x4AAgMDde0TEhKQnp6u18bCwgJ+fn66NqGhobC0tISv75+31wYEBEAulyMsLKzWvHl5eQ1+r3SnzeGJKK3QwsNejQe68LwStTdzh3WBhbEh4jILsTsqReo4RPVW7yIqOzsbI0eORPfu3TF27FikpaUBAObMmYNXXnmlXs+VlZUFjUYDW1tbve22trZIT0+v8Zj09PS7tq/+815tbGxs9PYbGBjAysqq1tfdtm0bzpw5g9mzZ9f6fsrKypCfn6/3oJqVV2rx39DrADi5JlF7ZWFsiOeGdwUA/Oe3P1BeqZU4EVH91LuIWrRoEQwNDZGYmAgTkz8XiJ0yZQoOHTrUqOFaiiNHjmD27Nn46quv0Lt371rbrVixAhYWFrqHk5NTM6ZsXQ5eSENGfhlszFUY7+UgdRwiksisQa6wMVch+VYJfgxPlDoOUb3Uu4j69ddfsXLlSnTu3Flvu7u7O27cuFGv57K2toZCoUBGhv6gwoyMDNjZ1TxrtZ2d3V3bV/95rzZ/H7heWVmJnJycO1732LFjGD9+PP7zn/9gxowZd30/S5YsQV5enu6RlJR01/bt2caT1wEATz/gAqUBJ9ckaq+MlQq8ONIdALA2JA6FZZUSJyKqu3p/ehUVFen1QFXLycmBSqWq13MplUr4+PggJCREt02r1SIkJAT+/v41HuPv76/XHgCCg4N17d3c3GBnZ6fXJj8/H2FhYbo2/v7+yM3NRUREhK7N4cOHodVq4efnp9t29OhRjBs3DitXrtS7c682KpUKarVa70F3iknKRXRSLpQKOab5cXJNovZu6gAndLE2RXZROTYcuyp1HKI6q3cRNXToUPz3v//VfS2TyaDVarFq1So89NBD9Q6wePFifPXVV/juu+8QGxuL559/HkVFRbqxRzNmzMCSJUt07RcuXIhDhw5h9erVuHz5Mt555x2cPXsWCxYs0OV5+eWX8f7772PPnj04f/48ZsyYAQcHBwQFBQEAevXqhdGjR2Pu3LkIDw/HyZMnsWDBAkydOhUODlWXlo4cOYJx48bhpZdewqRJk5Ceno709HTk5OTU+z2Svu9uj4V6pK89rM3qV3gTUdtjqJDj9dE9AABf/Z6AjPzSexxB1ELU97a/8+fPCxsbGzF69GihVCrF448/Lnr16iVsbW1FfHx8g24l/PTTT4Wzs7NQKpVi4MCB4vTp07p9w4cPFzNnztRrv23bNtG9e3ehVCpF7969xf79+/X2a7VasXTpUmFraytUKpUYOXKkuHLlil6b7OxsMW3aNGFmZibUarWYPXu2KCgo0O2fOXOmAHDHY/jw4XV+X5zi4E43C0qF+1tV0xpEJd6SOg4RtRBarVZMXHdCuLyxT7y5M0bqONTO1fXzWyZE/aeKzcvLw2effYaYmBgUFhaif//+mD9/Puzt7Ru1wGvt8vPzYWFhgby8PF7au+2zw3H46Nc/4OVkiZ/nD5Y6DhG1IGev5+Dx9aGQy4BfFw1DNxtzqSNRO1XXz2+Dhjy5hYUF/vnPf+ptS05Oxrx587Bhw4aGPCW1AxUaLf53uurum1mDXCROQ0Qtja+rFUZ52OLXSxn44OAVfD3T994HEUmo0W6Lys7OxjfffNNYT0dtUPClDKTnl8LaTImxnuy1JKI7vT66JxRyGX6LzUB4AsegUsvGe8up2Ww6dR0AMG2gM1QGCmnDEFGL1M3GDFMHVM2x9+8DsVycmFo0FlHULGLT8hGekAMDuQzT/Xgpj4hqtzDAHSZKBaKTcnHwQs2rSBC1BCyiqFl8d7sXKrCPHewsjKQNQ0Qtmo25EeYO7QIAWHnoMsoqNRInIqpZnQeWP/bYY3fdn5ube79ZqI3KLS7H7uiqxUVnDXKVNgwRtQrzhnXB5vBE3Mguxn9P3cDcYV2kjkR0hzr3RP11TbiaHi4uLvdcFoXap21nk1BaoUUvezV8XTpIHYeIWgFTlQFeG1U1Aefaw3HILiyTOBHRnercE7Vx48amzEFtlEYr8N/QqjUVZw1ygUwmkzgREbUWk3w6Y9Op67iUlo81v8Xh/4L6SB2JSA/HRFGTOnw5E8m3SmBpYohHvR2ljkNErYhCLsPSRzwAAJvDE/FHRoHEiYj0sYiiJvXf2+vkTfF1gpEhpzUgovrx79oRgb1todEKvL8/Vuo4RHpYRFGTic8sxO9xWZDLgKce4LQGRNQwS8b0gqFChuN/3MSRK5lSxyHSYRFFTeZ/p6vGQo3sZQsnKxOJ0xBRa+VqbYrZg90AAP/aH4sKjVbiRERVWERRkygur8TOiGQAwNPshSKi+7RgRDdYmSoRn1mIH8MTpY5DBIBFFDWRPdGpKCirhEtHEwzpZi11HCJq5dRGhlj0cHcAwMfBfyCvuELiREQsoqiJ/BBW9ZvidD9nyOWc1oCI7t+0AU7oYWuO3OIKfBx8Reo4RCyiqPHFJOXifEoelAZyPO7jJHUcImojDBRyLJ9QNeXB96dvIDYtX+JE1N6xiKJG90NY1YDycZ72sDJVSpyGiNqSQV2tMc7THloBLP/5IoQQUkeidoxFFDWqvOIK7IlJBQA89YCzxGmIqC16a1wvGBnKEX49R/fzhkgKLKKoUe2MTEZphRY97czR35nr5BFR43O0NMb8B7sBAP59IBZFZZUSJ6L2ikUUNRohhO5S3vQHuE4eETWducO6wNnKBBn5ZfjsSLzUcaidYhFFjeb0tRxcvVkEE6UCQd4OUschojbMyFChW1fv69+v4drNQokTUXvEIooaTXUvVFA/R5gbGUqchojauoBeNniwRydUaATe23eJg8yp2bGIokZxs6AMv1xMBwA85ccZyomo6clkMix7xAOGChmOXrmJ32K5rh41LxZR1Ci2nU1ChUagn7MlPBzUUschonaiSyczzBnSBQDwzp6LKC7nIHNqPiyi6L5ptAKbb89Qzl4oImpuL43sBkdLY6TklmBtCAeZU/NhEUX37dgfmUjJLYGFsSHG9bWXOg4RtTMmSgO8O6E3gKpB5n9kFEiciNoLFlF03344XdUL9YRPZxgZKiROQ0TtUYCHLR72sEWlVuCfP52HVstB5tT0WETRfUm+VYzDV6oGcz7pxxnKiUg670zoDWNDBc5cv4UdkclSx6F2gEUU3Zct4UkQAhjcrSO6dDKTOg4RtWOOlsZY9LA7AGDFgVjcKiqXOBG1dSyiqMEqNVpsO5sEAHhyIAeUE5H0Zg92Q087c9wqrsAHBy9LHYfaOBZR1GBHrtxEZkEZOpoq8bCHrdRxiIhgqJDj/aA+AICtZ5Nw9nqOxImoLWMRRQ22JbxqQPkkn85QGvBbiYhaBl9XK0wd4AQAWLLrPMortRInoraKn3zUIGl5JThye0D5lNs/rIiIWoo3x/SEtZkScZmF+Pwo546ipsEiihpk+9lkaAUw0M0KXTmgnIhaGEsTJZaPr5o7at2ReMRx7ihqAiyiqN60WoGtZ6oGlE8byF4oImqZHulrj4BeNqjQCLyx8xw0nDuKGhmLKKq3E/FZSMktgdrIAGP6cIZyImqZZDIZ/i+oD8xUBohMzMX3odeljkRtDIsoqrctZ6oGlD/WnzOUE1HLZm9hjDfG9AQArPrlCpJvFUuciNoSFlFUL1mFZQi+lAGAA8qJqHWYPtAZA1w7oLhcg7d3X4AQvKxHjYNFFNXLzohkVGgEvJws0cteLXUcIqJ7kstlWPFYXygVchy9chM/R6dKHYnaCBZRVGdC/GVAOXuhiKgV6WZjhpdGdgMAvLv3Im4WlEmciNoCFlFUZ+EJObiWVQRTpQLjvRykjkNEVC/PDu+KXvZq3CquwNu7z/OyHt03FlFUZ1tu90JN8HaAqcpA4jRERPVjqJDjoyf6wkAuwy8XM7Anhpf16P6wiKI6ySuuwIHzaQCAKQOcJU5DRNQwvR0s8NJIdwDAsp8vIjO/VOJE1JqxiKI6+SkqGWWVWvS0M4dXZwup4xARNdjzD3ZFH0c18koq8NZPvKxHDcciiu5JCKG7lDdtoDNkMpnEiYiIGs5QIcfqJ7yhVMjxW2wmdkamSB2JWikWUXRPMcl5uJxeAJWBHEHejlLHISK6bz3szPHyw1WX9d7dexFpeSUSJ6LWiEUU3dPW2zOUj/O0h4WJocRpiIgax7yhXeDlZImC0kq8uZOX9aj+WETRXRWXV2JvTNWA8smcG4qI2hADhRyrn/CC0kCOY3/cxObwRKkjUSvDIoru6uD5dBSWVcKlown83KykjkNE1Ki62Zjh9cAeAID398Xi2s1CiRNRa8Iiiu5qe0TVgPLH+3fmgHIiapOeGeyGId2sUVKhwctbo1Gh0UodiVoJFlFUq8TsYpy+lgOZDJjk01nqOERETUIul+GjJ7xgaWKIc8l5WPPbH1JHolaCRRTVakdkMgBgSDdrOFgaS5yGiKjp2FkYYcVETwDA50evIjwhR+JE1BqwiKIaabUCOyOqiqgnfDmgnIjavjGe9njCpzOEABZtjUZ+aYXUkaiFYxFFNTp1NRspuSVQGxlglIet1HGIiJrF8gm94dLRBCm5JVi2+4LUcaiFYxFFNaoeUD7B2wFGhgqJ0xARNQ8zlQH+M8UbCrkMu6NT8XM0ZzOn2rGIojvklVTg0IV0AMATPryUR0TtS3/nDnhxRDcAwD9/uoDrWUUSJ6KWSvIiat26dXB1dYWRkRH8/PwQHh5+1/bbt29Hz549YWRkBE9PTxw4cEBvvxACy5Ytg729PYyNjREQEIC4uDi9Njk5OZg+fTrUajUsLS0xZ84cFBb+OTdIaWkpZs2aBU9PTxgYGCAoKKjR3m9rsDcmFWWVWvSwNUdfLjZMRO3Qgoe6YaCrFQrLKrHgx0iUVWqkjkQtkKRF1NatW7F48WIsX74ckZGR8PLyQmBgIDIzM2tsf+rUKUybNg1z5sxBVFQUgoKCEBQUhAsX/rxuvWrVKqxduxbr169HWFgYTE1NERgYiNLSUl2b6dOn4+LFiwgODsa+fftw/PhxzJs3T7dfo9HA2NgYL730EgICApruBLRQ23UDyjk3FBG1TwYKOT6Z5o0OJoa4kJKPFQcuSx2JWiCZkHCxID8/PwwYMACfffYZAECr1cLJyQkvvvgi3nzzzTvaT5kyBUVFRdi3b59u2wMPPABvb2+sX78eQgg4ODjglVdewauvvgoAyMvLg62tLTZt2oSpU6ciNjYWHh4eOHPmDHx9fQEAhw4dwtixY5GcnAwHBwe915w1axZyc3Oxe/fuer+//Px8WFhYIC8vD2q1ut7HSyEuowAP/+c4DOQynH5rJKzNVFJHIiKSzJHLmZi96QwAYP1TPhjdx07iRNQc6vr5LVlPVHl5OSIiIvR6euRyOQICAhAaGlrjMaGhoXf0DAUGBuraJyQkID09Xa+NhYUF/Pz8dG1CQ0NhaWmpK6AAICAgAHK5HGFhYY32/lqr6l6oh3rasIAionbvoZ42eHZYFwDA6ztikJRTLHEiakkkK6KysrKg0Whga6t/+7ytrS3S09NrPCY9Pf2u7av/vFcbGxsbvf0GBgawsrKq9XXrqqysDPn5+XqP1qRCo8Wu2xNsPsEZyomIAACvBvZAP2dL5JdW4sUfo1BeyWVhqIrkA8vbkhUrVsDCwkL3cHJqXXe2Hb1yE1mF5bA2U+Khnjb3PoCIqB0wVMjx6bR+UBsZIDopFx/+wvFRVEWyIsra2hoKhQIZGRl62zMyMmBnV/M1Zzs7u7u2r/7zXm3+PnC9srISOTk5tb5uXS1ZsgR5eXm6R1JS0n09X3PbfrYq78R+jjBUsL4mIqrWuYMJPnzCCwDw1e8JOHQhTeJE1BJI9kmpVCrh4+ODkJAQ3TatVouQkBD4+/vXeIy/v79eewAIDg7WtXdzc4OdnZ1em/z8fISFhena+Pv7Izc3FxEREbo2hw8fhlarhZ+f3329J5VKBbVarfdoLbIKy3D4clVxyWVeiIjuFNjbDnOHugEAXt1+DvGZhfc4gto6SbsbFi9ejK+++grfffcdYmNj8fzzz6OoqAizZ88GAMyYMQNLlizRtV+4cCEOHTqE1atX4/Lly3jnnXdw9uxZLFiwAAAgk8nw8ssv4/3338eePXtw/vx5zJgxAw4ODrq5nnr16oXRo0dj7ty5CA8Px8mTJ7FgwQJMnTpV7868S5cuITo6Gjk5OcjLy0N0dDSio6Ob7dw0t91RKajUCnh1tkB3W3Op4xARtUhvjO4JP7eq+aOe+18ECssqpY5EEjKQ8sWnTJmCmzdvYtmyZUhPT4e3tzcOHTqkGxiemJgIufzPOm/QoEHYvHkz3n77bbz11ltwd3fH7t270adPH12b119/HUVFRZg3bx5yc3MxZMgQHDp0CEZGRro2P/zwAxYsWICRI0dCLpdj0qRJWLt2rV62sWPH4saNG7qv+/XrB6BqMs+2aMftu/IeZy8UEVGtDBRyfPZkfzzy6e+IzyzEGzvO4bMn+3FOvXZK0nmi2rrWMk/UpdR8jF37O5QKOc78MwAWJoZSRyIiatEibtzC1A2hqNAI/HNsL8y9PQ0CtQ0tfp4oajmqpzUI8LBhAUVEVAc+Lh2w9BEPAMAHhy4j9Gq2xIlICiyi2rlKjRa7o1MBAI/149xQRER19fQDLnisnyM0WoEXf4xEam6J1JGombGIaud+j89CVmEZrEyVGN6jk9RxiIhaDZlMhn9N9EQvezWyCssx979nUVzOgebtCYuodm5XZAoAYIKXA+eGIiKqJ2OlAhue9kFHUyUupubj1e0x0Go51Li94KdmO5ZfWoFfL1YtdTOpPy/lERE1hJOVCdY/7QNDhQwHzqfjk5A4qSNRM2ER1Y4dPJ+GskotutmYoY9jy717kIiopRvgaoV/TfQEAHwSEod951IlTkTNgUVUO7bz9qW8x/o7co4TIqL7NNnX6S8zmsfgfHKexImoqbGIaqeScooRnpADmQwI8naUOg4RUZvw5pheeLBHJ5RWaDH3v2eRmV8qdSRqQiyi2qmfoqp6oQZ17QgHS2OJ0xARtQ0KuQxrp/VDNxszpOeX4pnvzqCIS8O0WSyi2iEhhG6CTc4NRUTUuNRGhvh25gB0NFXiQko+5m+ORKVGK3UsagIsotqhyMRcXM8uhrGhAqP72Ekdh4iozXHuaIJvZg2AkaEcR6/cxNu7L7TZtVfbMxZR7dBPUVW9UGP62MFUJeka1EREbZa3kyU+ndYfchmw5UwSPjscL3UkamQsotqZskoN9sakAQAe49xQRERN6mEPW7w7oTcAYHXwH9gRkSxxImpMLKLamSOXM5FXUgE7tRH8u3aUOg4RUZv3tL8rnh3eBQDw5s5z+D3upsSJqLGwiGpnqueGCurnCIWcc0MRETWHNwJ7YryXAyq1As99H4HopFypI1EjYBHVjuQUlePI5UwAVRNsEhFR85DLZfjoib4Y1LUjiso1mLUxHFfSC6SORfeJRVQ7sjcmFZVaAU9HC3S3NZc6DhFRu6IyUGDDDF94O1kit7gCT38ThsTsYqlj0X1gEdWO6OaGYi8UEZEkzFQG2DR7AHrYmiOzoAzTvzmNDM5q3mqxiGonErKKEJOcB4VchvFeDlLHISJqtyxNlPh+zkC4dDRBUk4Jnvo6DLeKyqWORQ3AIqqd2H17mZch3axhbaaSOA0RUftmozbC/+b4wU5thLjMQszcGI780gqpY1E9sYhqB4QQ+Dm6+q489kIREbUETlYm+N8/BqKDiSHOJedhxjcspFobFlHtwLnkPFzPLoaRoRyjPLjMCxFRS9HNxhz/+4cfLE0MEZ2Ui6dZSLUqLKLagd23e6Ee9uAyL0RELU1vBwv8cLuQirldSOWVsJBqDVhEtXGVGq1umZcgb17KIyJqiXo7WGDzPx7QFVIzvgljIdUKsIhq405dzUZWYRk6mBhiWPdOUschIqJaeDiosfkfD6CDiSFikvNYSLUCLKLauOpLeeP62sNQwX9uIqKWzMNBjR/+UkhN3XAaNwvKpI5FteCnahtWWqHBLxfSAQBB3pxgk4ioNfBwUGPz3AdgbaZCbFo+nlh/Ckk5nNm8JWIR1Yb9FpuBonINOncwho9LB6njEBFRHfWyV2PHc/7o3MEY17OL8fj6U4jL4Fp7LQ2LqDZsd1QqAOBRbwfIZDKJ0xARUX24Wptix3OD4G5jhoz8MjzxZSiik3KljkV/wSKqjcotLsexPzIBAI/yUh4RUatkZ2GEbc/6w+v2osVPfnUaJ+KypI5Ft7GIaqP2n09DhUagl70a3W3NpY5DREQN1MFUic3/8MPgbh1RXK7BrI3h2HY2SepYBBZRbdbP0VWX8jg3FBFR62eqMsC3swZgvJcDKrUCr+84h49+uQKtVkgdrV1jEdUGpeSWIDwhBzIZMIFFFBFRm6AyUOCTKd54cUQ3AMBnR+Lx0pYolFZoJE7WfrGIaoP23O6F8nOzgr2FscRpiIioscjlMrwyqgc+fLwvDOQy7DuXhulfhyG7kHNJSYFFVBv08+0JNjk3FBFR2/SErxP++8xAqI0MEHHjFiZ+fgqxaflSx2p3WES1MZfT83E5vQBKhRxj+thLHYeIiJrIoG7W2PXCYDhZGSMxpxgTPz+p+yWamgeLqDamekD5gz06wcLEUOI0RETUlLrZmGHP/CEY6m6N0gotFm6Jxnt7L6FCo5U6WrvAIqoNEULoxkMF9eOlPCKi9qCDqRKbZg/E/Ie6AgC+PZmA6V+Hcc29ZsAiqg2JTMxFSm4JzFQGGNHTRuo4RETUTBRyGV4L7In1T/nATGWA8IQcPPLp7whPyJE6WpvGIqoN2RtT1Qs1ysMWRoYKidMQEVFzG93HDrvnD0bXTqbIyC/D1A2h+Dj4D1Ty8l6TYBHVRmi0AvvPpwEAxntxbigiovaqm40Zfl4wBI/7dIZWAGtD4jD5y1Ak5RRLHa3NYRHVRoQlZONmQRksTQwxuJu11HGIiEhCZioDfPSEF9ZO6wdzlQEiE3Mx9pPfefdeI2MR1UbsjanqhRrTxw5KA/6zEhERMMHLAQcWDoWPSwcUlFVi4ZZovPRjFCfnbCT8tG0DKjRaHLxw+1JeX17KIyKiPzlZmWDrvAewcKQ75DJgT0wqHv7PceyJSYUQXHvvfrCIagNOxGcht7gC1mYq+HXpKHUcIiJqYQwUcix6uDt+emEwetqZI6eoHC/9GIW5/z2LtLwSqeO1Wiyi2oDqu/Ie6WsPhVwmcRoiImqpvJwssWfBECwK6A5DhQy/xWZi1MfH8b/TN6DRsleqvlhEtXKlFRr8ejEDADDei8u8EBHR3SkN5FgY4I79Lw2Ft5MlCsoq8fbuC5jw2QnOK1VPLKJauaNXbqKwrBKOlsbo59RB6jhERNRKdLc1x87nB2H5eA+YGxngYmo+Jn8Zihd/jEJqLi/x1QWLqFZu77k/L+XJeSmPiIjqQSGXYfZgNxx99UFMG+gMmaxqiMiI1Uex5rc/UFRWKXXEFo1FVCtWVFaJkNiqS3mP8K48IiJqoI5mKqx4zBP7XhyCgW5WKK3QYs1vcRi66gg2HL+KknKN1BFbJBZRrdhvsRkordDCtaMJ+jiqpY5DREStXG8HC2yd9wA+e7If3KxNkVNUjn8fuIxhHx7BxpMJKK1gMfVXLKJaseoJNsd7OUAm46U8IiK6fzKZDI/0dUDwomH48PG+cLIyxs2CMry79xIe/PAovjmRgEJe5gMAyARn2moy+fn5sLCwQF5eHtTqxu0pyiuugO+/glGhEfh10TB0tzVv1OcnIiICgPJKLXZEJOOzw3FIzSsFAJgbGeBJP2fMHuQGOwsjiRM2vrp+frMnqpX65VI6KjQCPWzNWUAREVGTURrI8aSfM4689iD+PdETXaxNUVBaiS+PXcOQlYfx/P8icCIuC9p2OM+UgdQBqGGqJ9jk3FBERNQcVAYKPOnnjKkDnBByORNfHb+G8Os5OHghHQcvpMO1owme8HXCo94O6NzBROq4zYKX85pQU13Oyyosg9+/Q6DRChx99UG4Wps22nMTERHV1eX0fGwOS8SuyBS9cVID3awQ5O2Ihz1s0clcJWHChqnr5zeLqCbUVEXU96dvYOnuC+jb2QJ7FgxptOclIiJqiKKySuw/n4bdUSkIvZaN6spCJgN8nDtgVG9bjOhpg66dzFrFjVCtakzUunXr4OrqCiMjI/j5+SE8PPyu7bdv346ePXvCyMgInp6eOHDggN5+IQSWLVsGe3t7GBsbIyAgAHFxcXptcnJyMH36dKjValhaWmLOnDkoLCzUa3Pu3DkMHToURkZGcHJywqpVqxrnDd8n3aU8zg1FREQtgKnKAJN9nbB57gM49eYIvDmmJ7w6W0AI4OyNW/j3gcsI+Pg4HlgRgsXborEzIhnXs4rQ2vtxJO+J2rp1K2bMmIH169fDz88Pa9aswfbt23HlyhXY2Njc0f7UqVMYNmwYVqxYgUceeQSbN2/GypUrERkZiT59+gAAVq5ciRUrVuC7776Dm5sbli5divPnz+PSpUswMqq6i2DMmDFIS0vDl19+iYqKCsyePRsDBgzA5s2bAVRVod27d0dAQACWLFmC8+fP45lnnsGaNWswb968Or23puiJysgvxQMrQiAEcOrNEXCwNG6U5yUiImpsaXklCL6UgeBLGQhLyEF5pVZvv5WpEv2cLNHH0QI97MzRw84crh1NoZB4BY5WcznPz88PAwYMwGeffQYA0Gq1cHJywosvvog333zzjvZTpkxBUVER9u3bp9v2wAMPwNvbG+vXr4cQAg4ODnjllVfw6quvAgDy8vJga2uLTZs2YerUqYiNjYWHhwfOnDkDX19fAMChQ4cwduxYJCcnw8HBAV988QX++c9/Ij09HUqlEgDw5ptvYvfu3bh8+XKd3ltTFFFCCMQk5yHyxi08M8StUZ6TiIioqZVWaBBx4xZ+j8tCWEI2Lqbko1yjvaOd0kCOzpbG6Gxlgs4djGGnNoKVqRJWpkp0MFHCWKmAkaEcRgYKGBkqYGOuavRlz+r6+S3p3Xnl5eWIiIjAkiVLdNvkcjkCAgIQGhpa4zGhoaFYvHix3rbAwEDs3r0bAJCQkID09HQEBATo9ltYWMDPzw+hoaGYOnUqQkNDYWlpqSugACAgIAByuRxhYWGYOHEiQkNDMWzYMF0BVf06K1euxK1bt9Chw52L/ZaVlaGsrEz3dX5+fv1OSB3IZDJ4O1nC28my0Z+biIioqRgZKjC4mzUGd7MGAJRVanAxNR9Ribm4nJaPPzIK8EdGIUoqNLiWVYRrWUV1et7Y90bDWKloyui1krSIysrKgkajga2trd52W1vbWnt70tPTa2yfnp6u21+97W5t/n6p0MDAAFZWVnpt3Nzc7niO6n01FVErVqzAu+++W/sbJiIiIgBVUyb0d+6A/s5/fp5qtQIpuSVIulWM5FslSM4pRmZBGXKKypFTVI5bxeUordCitEKD0goNyiq1UBlIN7yb80Q1oiVLluj1kuXn58PJyUnCRERERK2HXC6Dk5UJnKxaxzxTkt6dZ21tDYVCgYyMDL3tGRkZsLOzq/EYOzu7u7av/vNebTIzM/X2V1ZWIicnR69NTc/x19f4O5VKBbVarfcgIiKitknSIkqpVMLHxwchISG6bVqtFiEhIfD396/xGH9/f732ABAcHKxr7+bmBjs7O702+fn5CAsL07Xx9/dHbm4uIiIidG0OHz4MrVYLPz8/XZvjx4+joqJC73V69OhR46U8IiIiameExLZs2SJUKpXYtGmTuHTpkpg3b56wtLQU6enpQgghnn76afHmm2/q2p88eVIYGBiIjz76SMTGxorly5cLQ0NDcf78eV2bDz74QFhaWoqff/5ZnDt3Tjz66KPCzc1NlJSU6NqMHj1a9OvXT4SFhYkTJ04Id3d3MW3aNN3+3NxcYWtrK55++mlx4cIFsWXLFmFiYiK+/PLLOr+3vLw8AUDk5eXdzykiIiKiZlTXz2/JiyghhPj000+Fs7OzUCqVYuDAgeL06dO6fcOHDxczZ87Ua79t2zbRvXt3oVQqRe/evcX+/fv19mu1WrF06VJha2srVCqVGDlypLhy5Ypem+zsbDFt2jRhZmYm1Gq1mD17tigoKNBrExMTI4YMGSJUKpVwdHQUH3zwQb3eF4soIiKi1qeun9+SzxPVljXVsi9ERETUdFrVsi9ERERErQ2LKCIiIqIGYBFFRERE1AAsooiIiIgagEUUERERUQOwiCIiIiJqABZRRERERA3AIoqIiIioAVhEERERETWAgdQB2rLqyeDz8/MlTkJERER1Vf25fa9FXVhENaGCggIAgJOTk8RJiIiIqL4KCgpgYWFR636undeEtFotUlNTYW5uDplM1mjPm5+fDycnJyQlJXFNvibA89v0eI6bFs9v0+M5blpSn18hBAoKCuDg4AC5vPaRT+yJakJyuRydO3dusudXq9X8z9uEeH6bHs9x0+L5bXo8x01LyvN7tx6oahxYTkRERNQALKKIiIiIGoBFVCukUqmwfPlyqFQqqaO0STy/TY/nuGnx/DY9nuOm1VrOLweWExERETUAe6KIiIiIGoBFFBEREVEDsIgiIiIiagAWUUREREQNwCKqFVq3bh1cXV1hZGQEPz8/hIeHSx2pVXrnnXcgk8n0Hj179tTtLy0txfz589GxY0eYmZlh0qRJyMjIkDBxy3b8+HGMHz8eDg4OkMlk2L17t95+IQSWLVsGe3t7GBsbIyAgAHFxcXptcnJyMH36dKjValhaWmLOnDkoLCxsxnfRst3rHM+aNeuO7+nRo0frteE5rt2KFSswYMAAmJubw8bGBkFBQbhy5Ypem7r8XEhMTMS4ceNgYmICGxsbvPbaa6isrGzOt9Ii1eX8Pvjgg3d8Dz/33HN6bVrS+WUR1cps3boVixcvxvLlyxEZGQkvLy8EBgYiMzNT6mitUu/evZGWlqZ7nDhxQrdv0aJF2Lt3L7Zv345jx44hNTUVjz32mIRpW7aioiJ4eXlh3bp1Ne5ftWoV1q5di/Xr1yMsLAympqYIDAxEaWmprs306dNx8eJFBAcHY9++fTh+/DjmzZvXXG+hxbvXOQaA0aNH631P//jjj3r7eY5rd+zYMcyfPx+nT59GcHAwKioqMGrUKBQVFena3Ovngkajwbhx41BeXo5Tp07hu+++w6ZNm7Bs2TIp3lKLUpfzCwBz587V+x5etWqVbl+LO7+CWpWBAweK+fPn677WaDTCwcFBrFixQsJUrdPy5cuFl5dXjftyc3OFoaGh2L59u25bbGysACBCQ0ObKWHrBUD89NNPuq+1Wq2ws7MTH374oW5bbm6uUKlU4scffxRCCHHp0iUBQJw5c0bX5uDBg0Imk4mUlJRmy95a/P0cCyHEzJkzxaOPPlrrMTzH9ZOZmSkAiGPHjgkh6vZz4cCBA0Iul4v09HRdmy+++EKo1WpRVlbWvG+ghfv7+RVCiOHDh4uFCxfWekxLO7/siWpFysvLERERgYCAAN02uVyOgIAAhIaGSpis9YqLi4ODgwO6dOmC6dOnIzExEQAQERGBiooKvXPds2dPODs781w3QEJCAtLT0/XOp4WFBfz8/HTnMzQ0FJaWlvD19dW1CQgIgFwuR1hYWLNnbq2OHj0KGxsb9OjRA88//zyys7N1+3iO6ycvLw8AYGVlBaBuPxdCQ0Ph6ekJW1tbXZvAwEDk5+fj4sWLzZi+5fv7+a32ww8/wNraGn369MGSJUtQXFys29fSzi8XIG5FsrKyoNFo9L55AMDW1haXL1+WKFXr5efnh02bNqFHjx5IS0vDu+++i6FDh+LChQtIT0+HUqmEpaWl3jG2trZIT0+XJnArVn3Oavrerd6Xnp4OGxsbvf0GBgawsrLiOa+j0aNH47HHHoObmxuuXr2Kt956C2PGjEFoaCgUCgXPcT1otVq8/PLLGDx4MPr06QMAdfq5kJ6eXuP3efU+qlLT+QWAJ598Ei4uLnBwcMC5c+fwxhtv4MqVK9i1axeAlnd+WURRuzVmzBjd3/v27Qs/Pz+4uLhg27ZtMDY2ljAZUcNMnTpV93dPT0/07dsXXbt2xdGjRzFy5EgJk7U+8+fPx4ULF/TGSVLjqe38/nV8nqenJ+zt7TFy5EhcvXoVXbt2be6Y98TLea2ItbU1FArFHXeCZGRkwM7OTqJUbYelpSW6d++O+Ph42NnZoby8HLm5uXpteK4bpvqc3e17187O7o4bJCorK5GTk8Nz3kBdunSBtbU14uPjAfAc19WCBQuwb98+HDlyBJ07d9Ztr8vPBTs7uxq/z6v3Ue3ntyZ+fn4AoPc93JLOL4uoVkSpVMLHxwchISG6bVqtFiEhIfD395cwWdtQWFiIq1evwt7eHj4+PjA0NNQ711euXEFiYiLPdQO4ubnBzs5O73zm5+cjLCxMdz79/f2Rm5uLiIgIXZvDhw9Dq9XqfpBS/SQnJyM7Oxv29vYAeI7vRQiBBQsW4KeffsLhw4fh5uamt78uPxf8/f1x/vx5vWI1ODgYarUaHh4ezfNGWqh7nd+aREdHA4De93CLOr/NPpSd7suWLVuESqUSmzZtEpcuXRLz5s0TlpaWencqUN288sor4ujRoyIhIUGcPHlSBAQECGtra5GZmSmEEOK5554Tzs7O4vDhw+Ls2bPC399f+Pv7S5y65SooKBBRUVEiKipKABAff/yxiIqKEjdu3BBCCPHBBx8IS0tL8fPPP4tz586JRx99VLi5uYmSkhLdc4wePVr069dPhIWFiRMnTgh3d3cxbdo0qd5Si3O3c1xQUCBeffVVERoaKhISEsRvv/0m+vfvL9zd3UVpaanuOXiOa/f8888LCwsLcfToUZGWlqZ7FBcX69rc6+dCZWWl6NOnjxg1apSIjo4Whw4dEp06dRJLliyR4i21KPc6v/Hx8eK9994TZ8+eFQkJCeLnn38WXbp0EcOGDdM9R0s7vyyiWqFPP/1UODs7C6VSKQYOHChOnz4tdaRWacqUKcLe3l4olUrh6OgopkyZIuLj43X7S0pKxAsvvCA6dOggTExMxMSJE0VaWpqEiVu2I0eOCAB3PGbOnCmEqJrmYOnSpcLW1laoVCoxcuRIceXKFb3nyM7OFtOmTRNmZmZCrVaL2bNni4KCAgneTct0t3NcXFwsRo0aJTp16iQMDQ2Fi4uLmDt37h2/YPEc166mcwtAbNy4UdemLj8Xrl+/LsaMGSOMjY2FtbW1eOWVV0RFRUUzv5uW517nNzExUQwbNkxYWVkJlUolunXrJl577TWRl5en9zwt6fzKhBCi+fq9iIiIiNoGjokiIiIiagAWUUREREQNwCKKiIiIqAFYRBERERE1AIsoIiIiogZgEUVERETUACyiiIiIiBqARRQR0X1455134O3tLclru7q6Ys2aNZK8NhGxiCKiFmzWrFmQyWR47rnn7tg3f/58yGQyzJo1q/mDNQGZTIbdu3dLHYOI6oFFFBG1aE5OTtiyZQtKSkp020pLS7F582Y4Ozs32euWl5c32XMTUdvAIoqIWrT+/fvDyckJu3bt0m3btWsXnJ2d0a9fP922Q4cOYciQIbC0tETHjh3xyCOP4OrVq3rPlZycjGnTpsHKygqmpqbw9fVFWFgYgD8vy3399ddwc3ODkZERACAxMRGPPvoozMzMoFarMXnyZGRkZNyR88svv4STkxNMTEwwefJk5OXl6fadOXMGDz/8MKytrWFhYYHhw4cjMjJSt9/V1RUAMHHiRMhkMt3XALB3714MGDAARkZGsLa2xsSJE/Vet7i4GM888wzMzc3h7OyMDRs21PMME1FDsYgiohbvmWeewcaNG3Vff/vtt5g9e7Zem6KiIixevBhnz55FSEgI5HI5Jk6cCK1WCwAoLCzE8OHDkZKSgj179iAmJgavv/66bj8AxMfHY+fOndi1axeio6Oh1Wrx6KOPIicnB8eOHUNwcDCuXbuGKVOm6L12fHw8tm3bhr179+LQoUOIiorCCy+8oNtfUFCAmTNn4sSJEzh9+jTc3d0xduxYFBQUAKgqsgBg48aNSEtL0329f/9+TJw4EWPHjkVUVBRCQkIwcOBAvddevXo1fH19da/5/PPP48qVK/d7yomoLiRZ9piIqA5mzpwpHn30UZGZmSlUKpW4fv26uH79ujAyMhI3b94Ujz76qJg5c2aNx968eVMAEOfPnxdCCPHll18Kc3NzkZ2dXWP75cuXC0NDQ5GZmanb9uuvvwqFQiESExN12y5evCgAiPDwcN1xCoVCJCcn69ocPHhQyOVykZaWVuNraTQaYW5uLvbu3avbBkD89NNPeu38/f3F9OnTaz0/Li4u4qmnntJ9rdVqhY2Njfjiiy9qPYaIGg97ooioxevUqRPGjRuHTZs2YePGjRg3bhysra312sTFxWHatGno0qUL1Gq17pJYYmIiACA6Ohr9+vWDlZVVra/j4uKCTp066b6OjY2Fk5MTnJycdNs8PDxgaWmJ2NhY3TZnZ2c4Ojrqvvb394dWq9X1CGVkZGDu3Llwd3eHhYUF1Go1CgsLddlqEx0djZEjR961Td++fXV/l8lksLOzQ2Zm5l2PIaLGYSB1ACKiunjmmWewYMECAMC6devu2D9+/Hi4uLjgq6++goODA7RaLfr06aMbIG5sbHzP1zA1NW3c0LfNnDkT2dnZ+OSTT+Di4gKVSgV/f/97Dl6vS2ZDQ0O9r2Uymd4lSiJqOuyJIqJWYfTo0SgvL0dFRQUCAwP19mVnZ+PKlSt4++23MXLkSPTq1Qu3bt3Sa9O3b19ER0cjJyenzq/Zq1cvJCUlISkpSbft0qVLyM3NhYeHh25bYmIiUlNTdV+fPn0acrkcPXr0AACcPHkSL730EsaOHYvevXtDpVIhKytL77UMDQ2h0WjuyBwSElLnvETUvFhEEVGroFAoEBsbi0uXLkGhUOjt69ChAzp27IgNGzYgPj4ehw8fxuLFi/XaTJs2DXZ2dggKCsLJkydx7do17Ny5E6GhobW+ZkBAADw9PTF9+nRERkYiPDwcM2bMwPDhw+Hr66trZ2RkhJkzZyImJga///47XnrpJUyePBl2dnYAAHd3d3z//feIjY1FWFgYpk+ffkcvk6urK0JCQpCenq4rAJcvX44ff/wRy5cvR2xsLM6fP4+VK1fe13kkosbDIoqIWg21Wg21Wn3Hdrlcji1btiAiIgJ9+vTBokWL8OGHH+q1USqV+PXXX2FjY4OxY8fC09MTH3zwwR0F2V/JZDL8/PPP6NChA4YNG4aAgAB06dIFW7du1WvXrVs3PPbYYxg7dixGjRqFvn374vPPP9ft/+abb3Dr1i30798fTz/9NF566SXY2NjoPcfq1asRHBwMJycn3dQNDz74ILZv3449e/bA29sbI0aMQHh4eL3PGxE1DZkQQkgdgoiIiKi1YU8UERERUQOwiCIiIiJqABZRRERERA3AIoqIiIioAVhEERERETUAiygiIiKiBmARRURERNQALKKIiIiIGoBFFBEREVEDsIgiIiIiagAWUUREREQNwCKKiIiIqAH+HzsC4tDSq2/hAAAAAElFTkSuQmCC","," "text/plain": ["," "

“”,” ]”,” },”,” “metadata”: {},”,” “output_type”: “display_data””,” }”,” ],”,” “source”: [“,” “plt.plot(range(macrobatch_count), [get_lr(m) for m in range(macrobatch_count)])\n”,”,” “plt.title(\”Learning rate schedule\”)\n”,”,” “plt.xlabel(\”Macrobatch\”)\n”,”,” “plt.ylabel(\”Learning rate\”)\n”,”,” “plt.show()””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “b941d9d9-631b-4b05-b359-218f0404cbdd”,”,” “metadata”: {},”,” “source”: [“,” “Batch size is also varied throughout pretraining – in the beginning our network is still poorly fit and our priority is to make rapid progress, so we use small batch sizes (noisy gradients). As the network gets better, it becomes more important to take our optimization steps carefully, so we gradually increase batch sizes (for more precise gradients).””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 33,”,” “id”: “239b68eb-f9fd-446b-913e-7650f49b48a7″,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “def get_batch_size(macrobatch):\n”,”,” ” \”\”\”Gradually increasing batch size through the training process based on description in Cramming.\”\”\”\n”,”,” ” if macrobatch >= 2**7:\n”,”,” ” return 2**11\n”,”,” ” elif macrobatch >= 2**6:\n”,”,” ” return 2**10\n”,”,” ” elif macrobatch >= 2**5:\n”,”,” ” return 2**9\n”,”,” ” elif macrobatch >= 2**4:\n”,”,” ” return 2**8\n”,”,” ” elif macrobatch >= 2**3:\n”,”,” ” return 2**7\n”,”,” ” elif macrobatch >= 2**2:\n”,”,” ” return 2**6\n”,”,” ” else:\n”,”,” ” return 2**5 # == 32, is minibatch_size, which we don’t want to go below””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “9761539d-1371-46a7-a67e-c158c7d7afcf”,”,” “metadata”: {},”,” “source”: [“,” “Again, a visualization might help:””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 34,”,” “id”: “4a5e31f7-5898-4f9f-8458-10ad5a66517f”,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [“,” {“,” “data”: {“,” “image/png”: “iVBORw0KGgoAAAANSUhEUgAAAkQAAAHHCAYAAABeLEexAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAABPTElEQVR4nO3de1gU5eIH8O9y2QWRBbkuKAKiaSioaSKVtyAQ0Uzt5DXwkqahHcXKPMd7PWJapr/ydI6VWidNK81Sy0S8YIaXNLzHEVOxZEFBWEDltu/vD93RDVQglt1lvp/n2Sdm5t2ZdyYdvr7vO+8ohBACRERERDJmY+4KEBEREZkbAxERERHJHgMRERERyR4DEREREckeAxERERHJHgMRERERyR4DEREREckeAxERERHJHgMRERERyR4DERHVuzVr1kChUODnn3822TF69+6N3r17m2z/pnLhwgUoFAq8/fbbJj+W4f/DhQsXav3dPXv2QKFQYM+ePfVeLyJLxEBEZOUMv/Tu/nh5eaFPnz74/vvv67zfhQsXYvPmzfVXUSIiC2Zn7goQUf1YsGABAgMDIYRATk4O1qxZg379+mHLli3o379/rfe3cOFCPPvss3jmmWfqv7L1YMeOHeauAhE1IgxERI1ETEwMunbtKi2PGzcO3t7e+Pzzz+sUiCydUqk0dxWIqBFhlxlRI+Xq6gpHR0fY2Rn/u+ftt9/GY489Bnd3dzg6OqJLly746quvjMooFAqUlJTgk08+kbrhRo8eLW3/448/MG7cOPj6+kKlUiEwMBCTJk1CWVmZ0X5KS0uRmJgIT09PODk5YdCgQbhy5coD667VajFmzBi0aNECKpUKPj4+GDhwoNFYmD+PIQoICKjSdWj43D0O5o8//sDYsWPh7e0NlUqF9u3bY9WqVQ++oACSk5PxxBNPwNXVFU2bNkXbtm3xj3/8w6jMzZs3MW/ePDz00ENwcHCAj48PBg8ejHPnzlXZ38qVKxEUFASVSoVHH30Uhw8frlLm119/xbPPPgs3Nzc4ODiga9eu+Pbbb6uUO3XqFJ588kk4OjqiRYsWePPNN6HX66uUUygUmDdvXpX1AQEBRv+P7+XgwYPo27cvXFxc0KRJE/Tq1Qv79+9/4PeILB1biIgaicLCQly9ehVCCOTm5uK9995DcXExRo0aZVRu+fLlePrppzFy5EiUlZVh/fr1+Nvf/oatW7ciNjYWAPDf//4XL7zwArp164YJEyYAAIKCggAAly9fRrdu3VBQUIAJEyagXbt2+OOPP/DVV1/h+vXrRi03U6ZMQbNmzTB37lxcuHABy5Ytw+TJk7Fhw4b7nsuQIUNw6tQpTJkyBQEBAcjNzUVycjKysrIQEBBQ7XeWLVuG4uJio3Xvvvsu0tPT4e7uDgDIyclB9+7doVAoMHnyZHh6euL777/HuHHjoNPpMHXq1HvW6dSpU+jfvz9CQ0OxYMECqFQqZGZmGoWByspK9O/fHykpKRg2bBj+/ve/o6ioCMnJyTh58qR0DQFg3bp1KCoqwosvvgiFQoHFixdj8ODB+O2332Bvby8d8/HHH0fz5s3x+uuvw8nJCV988QWeeeYZbNy4EYMGDQJwK0D26dMHFRUVUrmVK1fC0dHxvte5tnbt2oWYmBh06dIFc+fOhY2NDVavXo0nn3wS+/btQ7du3er1eEQNShCRVVu9erUAUOWjUqnEmjVrqpS/fv260XJZWZno0KGDePLJJ43WOzk5ifj4+Crfj4uLEzY2NuLw4cNVtun1eqM6RUZGSuuEEGLatGnC1tZWFBQU3PN8rl27JgCIJUuW3Pe8e/XqJXr16nXP7V988YUAIBYsWCCtGzdunPDx8RFXr141Kjts2DDh4uJS5drc7d133xUAxJUrV+5ZZtWqVQKAWLp0aZVthutw/vx5AUC4u7uL/Px8afs333wjAIgtW7ZI6yIiIkRISIi4efOm0X4ee+wx0aZNG2nd1KlTBQBx8OBBaV1ubq5wcXERAMT58+el9QDE3Llzq9TP39/f6P/37t27BQCxe/du6bht2rQR0dHRRv9Pr1+/LgIDA8VTTz11z+tCZA3YZUbUSKxYsQLJyclITk7GZ599hj59+uCFF17Apk2bjMrd3Wpw7do1FBYWokePHjh69OgDj6HX67F582YMGDDAaLySgUKhMFqeMGGC0boePXqgsrISFy9evOcxHB0doVQqsWfPHly7du2BdarO6dOnMXbsWAwcOBCzZs0CAAghsHHjRgwYMABCCFy9elX6REdHo7Cw8L7XwNXVFQDwzTffVNsVBQAbN26Eh4cHpkyZUmXbn6/N0KFD0axZM2m5R48eAIDffvsNAJCfn49du3bhueeeQ1FRkVTXvLw8REdH4+zZs/jjjz8AAN999x26d+9u1ELj6emJkSNHPuhS1Vh6ejrOnj2LESNGIC8vT6pPSUkJIiIikJqaes/rQmQN2GVG1Eh069bNKKQMHz4cnTt3xuTJk9G/f3+pK2vr1q148803kZ6ejtLSUqn8n39hV+fKlSvQ6XTo0KFDjerUsmVLo2VDALhf0FGpVHjrrbcwffp0eHt7o3v37ujfvz/i4uKg0WgeeEydTofBgwejefPm+PTTT6XzunLlCgoKCrBy5UqsXLmy2u/m5ubec79Dhw7FRx99hBdeeAGvv/46IiIiMHjwYDz77LOwsbn1b8tz586hbdu2VcZtVedB1yYzMxNCCMyePRuzZ8++Z32bN2+OixcvIiwsrMr2tm3bPrAeNXX27FkAQHx8/D3LFBYWGoU8ImvCQETUSNnY2KBPnz5Yvnw5zp49i/bt22Pfvn14+umn0bNnT/zrX/+Cj48P7O3tsXr1aqxbt67e62Bra1vteiHEfb83depUDBgwAJs3b8YPP/yA2bNnIykpCbt27ULnzp3v+93Ro0fj8uXLOHToENRqtbTe0HoxatSoe/5SDw0Nved+HR0dkZqait27d2Pbtm3Yvn07NmzYgCeffBI7duy457ney4OujaG+r7zyCqKjo6st27p161od834qKyvvu91QnyVLlqBTp07VlmnatGm91YeooTEQETViFRUVACANNt64cSMcHBzwww8/QKVSSeVWr15d5bvVtRh5enpCrVbj5MmTJqrxHUFBQZg+fTqmT5+Os2fPolOnTnjnnXfw2Wef3fM7ixYtwubNm7Fp0ya0a9fOaJunpyecnZ1RWVmJyMjIOtXJxsYGERERiIiIwNKlS7Fw4UL885//xO7duxEZGYmgoCAcPHgQ5eXl0sDoumrVqhUAwN7e/oH19ff3l1pw7paRkVFlXbNmzVBQUGC0rqysDNnZ2fc9hmFAuFqtrvP1I7JkHENE1EiVl5djx44dUCqVePjhhwHcapVQKBRGrQEXLlyodkZqJyenKr84bWxs8Mwzz2DLli3VvpbjQS0/NXH9+nXcvHnTaF1QUBCcnZ2Nuvj+bOfOnZg1axb++c9/VjuZpK2tLYYMGYKNGzdWG+geNB1Afn5+lXWGlhJDvYYMGYKrV6/i/fffr1K2ttfGy8sLvXv3xn/+859qw8rd9e3Xrx8OHDiAQ4cOGW1fu3Ztle8FBQUhNTXVaN3KlSsf2ELUpUsXBAUF4e23367yNN+f60NkjdhCRNRIfP/99/j1118B3Bpbsm7dOpw9exavv/661HUUGxuLpUuXom/fvhgxYgRyc3OxYsUKtG7dGsePHzfaX5cuXbBz504sXboUvr6+CAwMRFhYGBYuXIgdO3agV69emDBhAh5++GFkZ2fjyy+/xI8//igNPq6r//3vf4iIiMBzzz2H4OBg2NnZ4euvv0ZOTg6GDRt2z+8NHz4cnp6eaNOmTZVWpKeeegre3t5YtGgRdu/ejbCwMIwfPx7BwcHIz8/H0aNHsXPnzmpDj8GCBQuQmpqK2NhY+Pv7Izc3F//617/QokULPPHEEwCAuLg4fPrpp0hMTMShQ4fQo0cPlJSUYOfOnXjppZcwcODAWl2LFStW4IknnkBISAjGjx+PVq1aIScnB2lpafj9999x7NgxAMBrr72G//73v+jbty/+/ve/S4/d+/v7V/n/+sILL2DixIkYMmQInnrqKRw7dgw//PADPDw87lsXGxsbfPTRR4iJiUH79u0xZswYNG/eHH/88Qd2794NtVqNLVu21Or8iCyKGZ9wI6J6UN1j9w4ODqJTp07igw8+MHpEWgghPv74Y9GmTRuhUqlEu3btxOrVq8XcuXPFn28Hv/76q+jZs6dwdHQUAIweyb548aKIi4sTnp6eQqVSiVatWomEhARRWlpqVKc/P5r/50e5q3P16lWRkJAg2rVrJ5ycnISLi4sICwsTX3zxhVG5Pz92/+drcPfn7uPl5OSIhIQE4efnJ+zt7YVGoxERERFi5cqV973OKSkpYuDAgcLX11colUrh6+srhg8fLv73v/8Zlbt+/br45z//KQIDA6X9P/vss+LcuXNCiDuP3Vc3rQCqeST+3LlzIi4uTmg0GmFvby+aN28u+vfvL7766iujcsePHxe9evUSDg4Oonnz5uKNN94QH3/8cZXH7isrK8WMGTOEh4eHaNKkiYiOjhaZmZkPfOze4JdffhGDBw8W7u7uQqVSCX9/f/Hcc8+JlJSU+14/IkunEKIe2riJiIiIrBjHEBEREZHsMRARERGR7DEQERERkewxEBEREZHsMRARERGR7DEQERERkexxYsYa0Ov1uHz5MpydnWv0AkwiIiIyPyEEioqK4OvrK72E+V4YiGrg8uXL8PPzM3c1iIiIqA4uXbqEFi1a3LcMA1ENODs7A7h1Qe9+ezYRERFZLp1OBz8/P+n3+P0wENWAoZtMrVYzEBEREVmZmgx34aBqIiIikj0GIiIiIpI9BiIiIiKSPQYiIiIikj0GIiIiIpI9BiIiIiKSPQYiIiIikj0GIiIiIpI9BiIiIiKSPQYiIiIikj0GIiIiIpI9BiIiIiKSPb7clYhkTwiBHF0pKvR6c1eFSLZsbRTwcXE02/EZiIhI9hZ+dwYf7jtv7moQyZqXswqH/hlptuMzEBGR7KVfKgAA2NsqYKNQmLcyRDKlsjfvKB4GIiKSvUq9AACsGPEIotprzFwbIjIHDqomItmrvJWH2DpEJGMMREQke0LcSkS2NgxERHJl1kCUlJSERx99FM7OzvDy8sIzzzyDjIwMozI3b95EQkIC3N3d0bRpUwwZMgQ5OTlGZbKyshAbG4smTZrAy8sLr776KioqKozK7NmzB4888ghUKhVat26NNWvWmPr0iMhKGLrMbBiIiGTLrIFo7969SEhIwIEDB5CcnIzy8nJERUWhpKREKjNt2jRs2bIFX375Jfbu3YvLly9j8ODB0vbKykrExsairKwMP/30Ez755BOsWbMGc+bMkcqcP38esbGx6NOnD9LT0zF16lS88MIL+OGHHxr0fInIMkmBiHmISLYUwtBWbAGuXLkCLy8v7N27Fz179kRhYSE8PT2xbt06PPvsswCAX3/9FQ8//DDS0tLQvXt3fP/99+jfvz8uX74Mb29vAMC///1vzJgxA1euXIFSqcSMGTOwbds2nDx5UjrWsGHDUFBQgO3btz+wXjqdDi4uLigsLIRarTbNyROR2US/m4qMnCKseyEMj7X2MHd1iKie1Ob3t0WNISosLAQAuLm5AQCOHDmC8vJyREbemZegXbt2aNmyJdLS0gAAaWlpCAkJkcIQAERHR0On0+HUqVNSmbv3YShj2MeflZaWQqfTGX2IqPGqFOwyI5I7iwlEer0eU6dOxeOPP44OHToAALRaLZRKJVxdXY3Kent7Q6vVSmXuDkOG7YZt9yuj0+lw48aNKnVJSkqCi4uL9PHz86uXcyQiy6SXuswYiIjkymICUUJCAk6ePIn169ebuyqYOXMmCgsLpc+lS5fMXSUiMiG99JSZmStCRGZjERMzTp48GVu3bkVqaipatGghrddoNCgrK0NBQYFRK1FOTg40Go1U5tChQ0b7MzyFdneZPz+ZlpOTA7VaDUfHqu9NUalUUKlU9XJuRGT5DF1mCrYQEcmWWf89JITA5MmT8fXXX2PXrl0IDAw02t6lSxfY29sjJSVFWpeRkYGsrCyEh4cDAMLDw3HixAnk5uZKZZKTk6FWqxEcHCyVuXsfhjKGfRCRvBne6WrLQEQkW2ZtIUpISMC6devwzTffwNnZWRrz4+LiAkdHR7i4uGDcuHFITEyEm5sb1Go1pkyZgvDwcHTv3h0AEBUVheDgYDz//PNYvHgxtFotZs2ahYSEBKmVZ+LEiXj//ffx2muvYezYsdi1axe++OILbNu2zWznTkSWQ8+JGYlkz6wtRB988AEKCwvRu3dv+Pj4SJ8NGzZIZd599130798fQ4YMQc+ePaHRaLBp0yZpu62tLbZu3QpbW1uEh4dj1KhRiIuLw4IFC6QygYGB2LZtG5KTk9GxY0e88847+OijjxAdHd2g50tElskwDxEbiIjky6LmIbJUnIeIqHHr+mYyrhaXYfvUHmin4d9xosbCauchIiIyh9sNRBxDRCRjDEREJHt3uswYiIjkioGIiGSPg6qJiIGIiGTPMFM1u8yI5IuBiIhk787EjGauCBGZDQMREcmeNKiaXWZEssVARESyJ3WZMRARyRYDERHJHrvMiIiBiIhkTQgBwXmIiGSPgYiIZE1/11z97DIjki8GIiKStcq7EhEnZiSSLwYiIpI1/V2vc2QLEZF8MRARkawZBSK2EBHJFgMREcmacZeZGStCRGbFQEREssZB1UQEMBARkczp9ewyIyIGIiKSuUrBLjMiYiAiIpkzDKq2UfCxeyI5YyAiIlnT62/914ZhiEjWGIiISNYMXWY2HFBNJGsMREQka9Kb7tlCRCRrDEREJGt3jyEiIvliICIiWTNMzMguMyJ5YyAiIlkzTEPESRmJ5I2BiIhk7U6XGQMRkZwxEBGRrEldZgxERLLGQEREsmZoIbLl3ZBI1ngLICJZ48SMRAQwEBGRzHEMEREBDEREJHOVUpcZAxGRnJk1EKWmpmLAgAHw9fWFQqHA5s2bjbYrFIpqP0uWLJHKBAQEVNm+aNEio/0cP34cPXr0gIODA/z8/LB48eKGOD0isgJ6PSdmJCIzB6KSkhJ07NgRK1asqHZ7dna20WfVqlVQKBQYMmSIUbkFCxYYlZsyZYq0TafTISoqCv7+/jhy5AiWLFmCefPmYeXKlSY9NyKyDoZ5iDgxI5G82Znz4DExMYiJibnndo1GY7T8zTffoE+fPmjVqpXRemdn5yplDdauXYuysjKsWrUKSqUS7du3R3p6OpYuXYoJEyb89ZMgIqtWyXeZERGsaAxRTk4Otm3bhnHjxlXZtmjRIri7u6Nz585YsmQJKioqpG1paWno2bMnlEqltC46OhoZGRm4du1ag9SdiCwXB1UTEWDmFqLa+OSTT+Ds7IzBgwcbrX/55ZfxyCOPwM3NDT/99BNmzpyJ7OxsLF26FACg1WoRGBho9B1vb29pW7Nmzaocq7S0FKWlpdKyTqer79MhIgshBSJ2mRHJmtUEolWrVmHkyJFwcHAwWp+YmCj9HBoaCqVSiRdffBFJSUlQqVR1OlZSUhLmz5//l+pLRNZB6jKzmvZyIjIFq7gF7Nu3DxkZGXjhhRceWDYsLAwVFRW4cOECgFvjkHJycozKGJbvNe5o5syZKCwslD6XLl36aydARBaLXWZEBFhJIPr444/RpUsXdOzY8YFl09PTYWNjAy8vLwBAeHg4UlNTUV5eLpVJTk5G27Ztq+0uAwCVSgW1Wm30IaLGiTNVExFg5kBUXFyM9PR0pKenAwDOnz+P9PR0ZGVlSWV0Oh2+/PLLaluH0tLSsGzZMhw7dgy//fYb1q5di2nTpmHUqFFS2BkxYgSUSiXGjRuHU6dOYcOGDVi+fLlRVxsRyRcnZiQiwMxjiH7++Wf06dNHWjaElPj4eKxZswYAsH79egghMHz48CrfV6lUWL9+PebNm4fS0lIEBgZi2rRpRmHHxcUFO3bsQEJCArp06QIPDw/MmTOHj9wTEQBOzEhEtyiEuP3PI7onnU4HFxcXFBYWsvuMqJHZdjwbCeuOIizQDRteDDd3dYioHtXm97dVjCEiIjKVSg6qJiIwEBGRzOn1HENERAxERCRznJiRiAAGIiKSuUoOqiYiMBARkcwZWoj4clcieWMgIiJZu91AxC4zIpljICIiWWOXGREBDEREJHN6zlRNRGAgIiKZuzNTNQMRkZwxEBGRrFUaxhAxEBHJGgMREcmaYJcZEYGBiIhkrpJdZkQEBiIikrk77zIzc0WIyKwYiIhI1m7nIXaZEckcAxERyZrUZcZARCRrDEREJGucmJGIAAYiIpI5wXeZEREYiIhI5qRB1WwiIpI1BiIikrVK/a3/8rF7InljICIiWePEjEQEMBARkcxxYkYiAhiIiEjmODEjEQEMREQkc5yYkYgABiIikjl2mRERwEBERDJ3p8uMgYhIzhiIiEjW7jxlZuaKEJFZ8RZARLJm6DJTsIWISNYYiIhI1gwTM3JQNZG8MRARkazxXWZEBDAQEZHMGQZVMw8RyRsDERHJmmEMEbvMiOTNrIEoNTUVAwYMgK+vLxQKBTZv3my0ffTo0VAoFEafvn37GpXJz8/HyJEjoVar4erqinHjxqG4uNiozPHjx9GjRw84ODjAz88PixcvNvWpEZGV4MSMRASYORCVlJSgY8eOWLFixT3L9O3bF9nZ2dLn888/N9o+cuRInDp1CsnJydi6dStSU1MxYcIEabtOp0NUVBT8/f1x5MgRLFmyBPPmzcPKlStNdl5EZD34lBkRAYCdOQ8eExODmJiY+5ZRqVTQaDTVbjtz5gy2b9+Ow4cPo2vXrgCA9957D/369cPbb78NX19frF27FmVlZVi1ahWUSiXat2+P9PR0LF261Cg4EZE86TmomohgBWOI9uzZAy8vL7Rt2xaTJk1CXl6etC0tLQ2urq5SGAKAyMhI2NjY4ODBg1KZnj17QqlUSmWio6ORkZGBa9euVXvM0tJS6HQ6ow8RNU56TsxIRLDwQNS3b198+umnSElJwVtvvYW9e/ciJiYGlZWVAACtVgsvLy+j79jZ2cHNzQ1arVYq4+3tbVTGsGwo82dJSUlwcXGRPn5+fvV9akRkIdhlRkSAmbvMHmTYsGHSzyEhIQgNDUVQUBD27NmDiIgIkx135syZSExMlJZ1Oh1DEVEjpTcMqmYgIpI1i24h+rNWrVrBw8MDmZmZAACNRoPc3FyjMhUVFcjPz5fGHWk0GuTk5BiVMSzfa2ySSqWCWq02+hBR43Sny4yBiEjOrCoQ/f7778jLy4OPjw8AIDw8HAUFBThy5IhUZteuXdDr9QgLC5PKpKamory8XCqTnJyMtm3bolmzZg17AkRkce50mZm5IkRkVmYNRMXFxUhPT0d6ejoA4Pz580hPT0dWVhaKi4vx6quv4sCBA7hw4QJSUlIwcOBAtG7dGtHR0QCAhx9+GH379sX48eNx6NAh7N+/H5MnT8awYcPg6+sLABgxYgSUSiXGjRuHU6dOYcOGDVi+fLlRlxgRyRdbiIgIMHMg+vnnn9G5c2d07twZAJCYmIjOnTtjzpw5sLW1xfHjx/H000/joYcewrhx49ClSxfs27cPKpVK2sfatWvRrl07REREoF+/fnjiiSeM5hhycXHBjh07cP78eXTp0gXTp0/HnDlz+Mg9EQEA9IaXu7KJiEjWzDqounfv3tKLFavzww8/PHAfbm5uWLdu3X3LhIaGYt++fbWuHxE1fnfeZcZARCRnVjWGiIiovrHLjIgABiIikjm9nhMzEhEDERHJHLvMiAhgICIimeOgaiICGIiISOY4hoiIAAYiIpI5TsxIRAADERHJnNRCxEREJGsMREQka4aXu9qwy4xI1hiIiEjWDF1mNmwhIpI1BiIikjUOqiYigIGIiGROL7UQmbkiRGRWDEREJGuGiRnZZUYkbwxERCRrhkHV7DIjkjcGIiKSNT0HVRMRGIiISObuDKo2c0WIyKx4CyAiWeNj90QEMBARkcxJEzMyEBHJGgMREcka5yEiIoCBiIhkTuoyYyAikjUGIiKSNb3gxIxExEBERDInzUPEMUREssZARESyxi4zIgIYiIhIxgyTMgJ8yoxI7hiIiEi2DOOHAHaZEckdAxERyVblXYHIhndDIlnjLYCIZEuvv/Mzu8yI5I2BiIhky6jLjIOqiWSNgYiIZMuoy4wtRESyxkBERLJl/JSZGStCRGbHQEREsnVXHmKXGZHMMRARkWxV3pWIFOwyI5I1swai1NRUDBgwAL6+vlAoFNi8ebO0rby8HDNmzEBISAicnJzg6+uLuLg4XL582WgfAQEBUCgURp9FixYZlTl+/Dh69OgBBwcH+Pn5YfHixQ1xekRk4fimeyIyMGsgKikpQceOHbFixYoq265fv46jR49i9uzZOHr0KDZt2oSMjAw8/fTTVcouWLAA2dnZ0mfKlCnSNp1Oh6ioKPj7++PIkSNYsmQJ5s2bh5UrV5r03IjI8kmBiK1DRLJnZ86Dx8TEICYmptptLi4uSE5ONlr3/vvvo1u3bsjKykLLli2l9c7OztBoNNXuZ+3atSgrK8OqVaugVCrRvn17pKenY+nSpZgwYUL9nQwRWR1DlxnzEBFZ1RiiwsJCKBQKuLq6Gq1ftGgR3N3d0blzZyxZsgQVFRXStrS0NPTs2RNKpVJaFx0djYyMDFy7dq3a45SWlkKn0xl9iKjxMUzMyC4zIjJrC1Ft3Lx5EzNmzMDw4cOhVqul9S+//DIeeeQRuLm54aeffsLMmTORnZ2NpUuXAgC0Wi0CAwON9uXt7S1ta9asWZVjJSUlYf78+SY8GyKyBOwyIyKDOgWic+fOYfXq1Th37hyWL18OLy8vfP/992jZsiXat29f33VEeXk5nnvuOQgh8MEHHxhtS0xMlH4ODQ2FUqnEiy++iKSkJKhUqjodb+bMmUb71el08PPzq1vlichiGSZmZB4iolp3me3duxchISE4ePAgNm3ahOLiYgDAsWPHMHfu3HqvoCEMXbx4EcnJyUatQ9UJCwtDRUUFLly4AADQaDTIyckxKmNYvte4I5VKBbVabfQhosbHMDEju8yIqNaB6PXXX8ebb76J5ORko3E5Tz75JA4cOFCvlTOEobNnz2Lnzp1wd3d/4HfS09NhY2MDLy8vAEB4eDhSU1NRXl4ulUlOTkbbtm2r7S4jIvkwTEPEQEREte4yO3HiBNatW1dlvZeXF65evVqrfRUXFyMzM1NaPn/+PNLT0+Hm5gYfHx88++yzOHr0KLZu3YrKykpotVoAgJubG5RKJdLS0nDw4EH06dMHzs7OSEtLw7Rp0zBq1Cgp7IwYMQLz58/HuHHjMGPGDJw8eRLLly/Hu+++W9tTJ6JG5s5TZgxERHJX60Dk6uqK7OzsKgOVf/nlFzRv3rxW+/r555/Rp08fadkwbic+Ph7z5s3Dt99+CwDo1KmT0fd2796N3r17Q6VSYf369Zg3bx5KS0sRGBiIadOmGY3/cXFxwY4dO5CQkIAuXbrAw8MDc+bM4SP3RMRB1UQkqXUgGjZsGGbMmIEvv/wSCoUCer0e+/fvxyuvvIK4uLha7at3794Qd71t+s/utw0AHnnkkRp104WGhmLfvn21qhsRNX6cqZqIDGo9hmjhwoVo164d/Pz8UFxcjODgYPTs2ROPPfYYZs2aZYo6EhGZBCdmJCKDWrcQKZVKfPjhh5gzZw5OnDiB4uJidO7cGW3atDFF/YiITIaDqonIoNaBKDU1VWohuntunvLycmlWaCIia8AxRERkUOsus969e6Njx45Vxu7k5+cbDZAmIrJ07DIjIoM6vcts2LBhiIiIwJo1a4zWP2gQNBGRJeGgaiIyqHUgUigUmDlzJv773/9i8uTJSExMlIIQ5/IgImtieLmrDe9dRLJX60BkCD+DBw/Gvn378NVXXyEmJgYFBQX1XTciIpMyvMuMgYiI6tRlZtC5c2ccOnQIBQUFiIiIqK86ERE1CHaZEZFBrQNRfHw8HB0dpWWNRoO9e/ciIiICLVu2rNfKERGZkuHlrjYMRESyV+vH7levXl1lnUqlwieffFIvFSIiaiiGp8yYh4ioRoHo+PHj6NChA2xsbHD8+PH7lg0NDa2XihERmZo0MSPHEBHJXo0CUadOnaDVauHl5YVOnTpBoVAYPWJvWFYoFKisrDRZZYmI6pNhDBG7zIioRoHo/Pnz8PT0lH4mImoM2GVGRAY1CkT+/v7V/kxEZM34lBkRGdT6KbNPPvkE27Ztk5Zfe+01uLq64rHHHsPFixfrtXJERKak5zxERHRbrQPRwoULpcfu09LS8P7772Px4sXw8PDAtGnT6r2CRESmUsmZqonotlo/dn/p0iW0bt0aALB582Y8++yzmDBhAh5//HH07t27vutHRGQy7DIjIoNatxA1bdoUeXl5AIAdO3bgqaeeAgA4ODjgxo0b9Vs7IiIT0nNQNRHdVusWoqeeegovvPACOnfujP/973/o168fAODUqVMICAio7/oREZkM32VGRAa1biFasWIFwsPDceXKFWzcuBHu7u4AgCNHjmD48OH1XkEiIlORJmZkExGR7NW6hcjV1RXvv/9+lfXz58+vlwoRETWUO11mDEREcveX3nZPRGTNOFM1ERkwEBGRbBlmqrZlHiKSPQYiIpItTsxIRAYMREQkW4ZB1ewyIyIGIiKSrTtdZgxERHJX60CUk5OD559/Hr6+vrCzs4Otra3Rh4jIWkhPmfGfhkSyV+vH7kePHo2srCzMnj0bPj4+UPBfVkRkpaQuM97HiGSv1oHoxx9/xL59+9CpUycTVIeIqOFU8l1mRHRbrQORn58fxO2bCBEBZ7J12Hf2irmrQXVw5GI+ALYQEVEdAtGyZcvw+uuv4z//+c9ffndZamoqlixZgiNHjiA7Oxtff/01nnnmGWm7EAJz587Fhx9+iIKCAjz++OP44IMP0KZNG6lMfn4+pkyZgi1btsDGxgZDhgzB8uXL0bRpU6nM8ePHkZCQgMOHD8PT0xNTpkzBa6+99pfqTmQw8bMjuJh33dzVoL/AUcnxj0RyV6NA1KxZM6OxQiUlJQgKCkKTJk1gb29vVDY/P7/GBy8pKUHHjh0xduxYDB48uMr2xYsX4//+7//wySefIDAwELNnz0Z0dDROnz4NBwcHAMDIkSORnZ2N5ORklJeXY8yYMZgwYQLWrVsHANDpdIiKikJkZCT+/e9/48SJExg7dixcXV0xYcKEGteV6F6uFpUCAKLbe8NJVet/Y5CZOSntMKq7v7mrQURmVqO797Jly0xy8JiYGMTExFS7TQiBZcuWYdasWRg4cCAA4NNPP4W3tzc2b96MYcOG4cyZM9i+fTsOHz6Mrl27AgDee+899OvXD2+//TZ8fX2xdu1alJWVYdWqVVAqlWjfvj3S09OxdOlSBiKqF+W3R+bOGdAezV0dzVwbIiKqixoFovj4eFPXo4rz589Dq9UiMjJSWufi4oKwsDCkpaVh2LBhSEtLg6urqxSGACAyMhI2NjY4ePAgBg0ahLS0NPTs2RNKpVIqEx0djbfeegvXrl1Ds2bNGvS8qPGpqNQDAOw5MJeIyGrVun3/u+++g62tLaKjo43W79ixA5WVlfds8aktrVYLAPD29jZa7+3tLW3TarXw8vIy2m5nZwc3NzejMoGBgVX2YdhWXSAqLS1FaWmptKzT6f7i2VBjVakX0qPbdraczIaIyFrV+g7++uuvo7Kyssp6vV6P119/vV4qZW5JSUlwcXGRPn5+fuauElmo8tutQwBgzzeEEhFZrVoHorNnzyI4OLjK+nbt2iEzM7NeKgUAGo0GwK2Zse+Wk5MjbdNoNMjNzTXaXlFRgfz8fKMy1e3j7mP82cyZM1FYWCh9Ll269NdPiBqlCv2dKSjs2UJERGS1an0Hd3FxwW+//VZlfWZmJpycnOqlUgAQGBgIjUaDlJQUaZ1Op8PBgwcRHh4OAAgPD0dBQQGOHDkildm1axf0ej3CwsKkMqmpqSgvL5fKJCcno23btvccP6RSqaBWq40+RNUpr7jTQmTHMURERFar1oFo4MCBmDp1Ks6dOyety8zMxPTp0/H000/Xal/FxcVIT09Heno6gFsDqdPT05GVlQWFQoGpU6fizTffxLfffosTJ04gLi4Ovr6+0lxFDz/8MPr27Yvx48fj0KFD2L9/PyZPnoxhw4bB19cXADBixAgolUqMGzcOp06dwoYNG7B8+XIkJibW9tSJqijX3wpECgVnOyYismqilgoKCkT37t2FnZ2dCAgIEAEBAcLOzk706dNHXLt2rVb72r17twBQ5RMfHy+EEEKv14vZs2cLb29voVKpREREhMjIyDDaR15enhg+fLho2rSpUKvVYsyYMaKoqMiozLFjx8QTTzwhVCqVaN68uVi0aFGt6llYWCgAiMLCwlp9jxq/P65dF/4ztoo2//jO3FUhIqI/qc3vb4UQtX8PhxACycnJOHbsGBwdHREaGoqePXvWa1CzJDqdDi4uLigsLGT3GRm5mFeCXkv2oInSFqcX9DV3dYiI6C61+f1d68fuP/30UwwdOhRRUVGIioqS1peVlWH9+vWIi4urfY2JrFR55a1/T3D8EBGRdav1GKIxY8agsLCwyvqioiKMGTOmXipFZC0qbo8hUtrxCTMiImtW67u4EMLovWYGv//+O1xcXOqlUkTWokJqIWIgIiKyZjXuMuvcuTMUCgUUCgUiIiJgZ3fnq5WVlTh//jz69uUYCpKXstsTM9pxUkYiIqtW40BkeNQ9PT0d0dHRaNq0qbRNqVQiICAAQ4YMqfcKElkyQwuRkpMyEhFZtRoHorlz5wIAAgICMHToUDg4OJisUkTWooItREREjUKtnzKLN8Ob74ksldRlxjFERERWrdaBqLKyEu+++y6++OILZGVloayszGh7fn5+vVWOyNIZuszs+ZQZEZFVq/VdfP78+Vi6dCmGDh2KwsJCJCYmYvDgwbCxscG8efNMUEUiy2V47N6e8xAREVm1WgeitWvX4sMPP8T06dNhZ2eH4cOH46OPPsKcOXNw4MABU9SRyGKVGR675xgiIiKrVutApNVqERISAgBo2rSpNElj//79sW3btvqtHZGFMwyqtudTZkREVq3Wd/EWLVogOzsbABAUFIQdO3YAAA4fPgyVSlW/tSOycNIYIgYiIiKrVuu7+KBBg5CSkgIAmDJlCmbPno02bdogLi4OY8eOrfcKElmycr3hKTN2mRERWbNaP2W2aNEi6eehQ4eiZcuWSEtLQ5s2bTBgwIB6rRyRpSuvYJcZEVFjUOtA9Gfh4eEIDw+vj7oQWZ0KvaHLjC1ERETWrNaBKC8vD+7u7gCAS5cu4cMPP8SNGzfw9NNPo0ePHvVeQSJLVi49ZcYWIiIia1bju/iJEycQEBAALy8vtGvXDunp6Xj00Ufx7rvvYuXKlejTpw82b95swqoSWZ5y6SkzthAREVmzGgei1157DSEhIUhNTUXv3r3Rv39/xMbGorCwENeuXcOLL75oNL6ISA742D0RUeNQ4y6zw4cPY9euXQgNDUXHjh2xcuVKvPTSS7C5/Q6nKVOmoHv37iarKJElKr89hojvMiMism41vovn5+dDo9EAuDUho5OTE5o1ayZtb9asGYqKiuq/hkQW7M5TZuwyIyKyZrX6Z61CobjvMpHc3HnKjC1ERETWrFZPmY0ePVqajfrmzZuYOHEinJycAAClpaX1XzsiC2cYVM13mRERWbcaB6L4+Hij5VGjRlUpExcX99drRGRF+OoOIqLGocaBaPXq1aasB5FVklqI+OoOIiKrxn/WEv0F5RxDRETUKPAuTvQXVHBiRiKiRoGBiOgvuDOomn+ViIisGe/iRH9BOQdVExE1CryLE/0FFXp2mRERNQYMRER/QXkFX91BRNQY8C5O9BeUs4WIiKhRsPhAFBAQAIVCUeWTkJAAAOjdu3eVbRMnTjTaR1ZWFmJjY9GkSRN4eXnh1VdfRUVFhTlOhxoZTsxIRNQ41OrVHeZw+PBhVFZWSssnT57EU089hb/97W/SuvHjx2PBggXScpMmTaSfKysrERsbC41Gg59++gnZ2dmIi4uDvb09Fi5c2DAnQY0WX91BRNQ4WHwg8vT0NFpetGgRgoKC0KtXL2ldkyZNoNFoqv3+jh07cPr0aezcuRPe3t7o1KkT3njjDcyYMQPz5s2DUqk0af2pcbszUzVbiIiIrJlV3cXLysrw2WefYezYsVAo7vyLfO3atfDw8ECHDh0wc+ZMXL9+XdqWlpaGkJAQeHt7S+uio6Oh0+lw6tSpao9TWloKnU5n9CGqjuFt90o7thAREVkzi28hutvmzZtRUFCA0aNHS+tGjBgBf39/+Pr64vjx45gxYwYyMjKwadMmAIBWqzUKQwCkZa1WW+1xkpKSMH/+fNOcBDUqhjFEbCEiIrJuVhWIPv74Y8TExMDX11daN2HCBOnnkJAQ+Pj4ICIiAufOnUNQUFCdjjNz5kwkJiZKyzqdDn5+fnWvODVaZRxDRETUKFhNILp48SJ27twptfzcS1hYGAAgMzMTQUFB0Gg0OHTokFGZnJwcALjnuCOVSgWVSlUPtabGzvAuMyWfMiMismpWcxdfvXo1vLy8EBsbe99y6enpAAAfHx8AQHh4OE6cOIHc3FypTHJyMtRqNYKDg01WX5IHqcuMgYiIyKpZRQuRXq/H6tWrER8fDzu7O1U+d+4c1q1bh379+sHd3R3Hjx/HtGnT0LNnT4SGhgIAoqKiEBwcjOeffx6LFy+GVqvFrFmzkJCQwFYg+sukLjMbdpkREVkzqwhEO3fuRFZWFsaOHWu0XqlUYufOnVi2bBlKSkrg5+eHIUOGYNasWVIZW1tbbN26FZMmTUJ4eDicnJwQHx9vNG8RUV3decqMLURERNbMKgJRVFQUhBBV1vv5+WHv3r0P/L6/vz++++47U1SNZEwIgUq94SkzthAREVkz/rOWqI7KK++EdI4hIiKybryLE9WRYZZqgC93JSKydgxERHVUcVcLEV/uSkRk3XgXJ6qjcv2dFiKOISIism4MRER1VH7XI/d3v1uPiIisDwMRUR0ZuszYXUZEZP14Jyeqo3K+x4yIqNFgICKqo3K2EBERNRq8kxPVkaGFiI/cExFZPwYiojqqkGap5l8jIiJrxzs5UR2xhYiIqPFgICKqozuDqvnXiIjI2vFOTlRHfOyeiKjx4J2cqI4q9OwyIyJqLBiIiOqorMIwqJqBiIjI2jEQEdXRnRYi/jUiIrJ2vJMT1RHHEBERNR68kxPVURlf3UFE1GgwEBHVEVuIiIgaD97JieqIT5kRETUeDEREdVRWcbvLjK/uICKyeryTE9WR9C4zthAREVk9BiKiOqq4PahayTFERERWj3dyojoqr2QLERFRY8FARFRH0stdOYaIiMjq8U5OVEeGMURKO/41IiKydryTE9XRnRYidpkREVk7BiKiOpICEQdVExFZPd7JierIMFO1koOqiYisHgMRUR3decqMf42IiKwd7+REdcQxREREjYeduStwP/PmzcP8+fON1rVt2xa//vorAODmzZuYPn061q9fj9LSUkRHR+Nf//oXvL29pfJZWVmYNGkSdu/ejaZNmyI+Ph5JSUmws7PoU29U8kvKcL2swtzVqHdFN8sB8OWuRESNgcWngvbt22Pnzp3S8t1BZtq0adi2bRu+/PJLuLi4YPLkyRg8eDD2798PAKisrERsbCw0Gg1++uknZGdnIy4uDvb29li4cGGDn4scbT+pxaS1RyCEuWtiOpyYkYjI+ll8ILKzs4NGo6myvrCwEB9//DHWrVuHJ598EgCwevVqPPzwwzhw4AC6d++OHTt24PTp09i5cye8vb3RqVMnvPHGG5gxYwbmzZsHpVLZ0KcjO79cugYhAFsbRaPsWvJoqkL3Vu7mrgYREf1FFh+Izp49C19fXzg4OCA8PBxJSUlo2bIljhw5gvLyckRGRkpl27Vrh5YtWyItLQ3du3dHWloaQkJCjLrQoqOjMWnSJJw6dQqdO3eu9pilpaUoLS2VlnU6nelOsJG7UVYJAEjoHYTEqLZmrg0REVH1LHrwQ1hYGNasWYPt27fjgw8+wPnz59GjRw8UFRVBq9VCqVTC1dXV6Dve3t7QarUAAK1WaxSGDNsN2+4lKSkJLi4u0sfPz69+T0xGrt8ORA5KWzPXhIiI6N4suoUoJiZG+jk0NBRhYWHw9/fHF198AUdHR5Mdd+bMmUhMTJSWdTodQ1Ed3Si/FYia2DMQERGR5bLoFqI/c3V1xUMPPYTMzExoNBqUlZWhoKDAqExOTo405kij0SAnJ6fKdsO2e1GpVFCr1UYfqhtDl1kTpUVnbyIikjmrCkTFxcU4d+4cfHx80KVLF9jb2yMlJUXanpGRgaysLISHhwMAwsPDceLECeTm5kplkpOToVarERwc3OD1l6Mb7DIjIiIrYNH/bH/llVcwYMAA+Pv74/Lly5g7dy5sbW0xfPhwuLi4YNy4cUhMTISbmxvUajWmTJmC8PBwdO/eHQAQFRWF4OBgPP/881i8eDG0Wi1mzZqFhIQEqFQqM5+dPFxnlxkREVkBiw5Ev//+O4YPH468vDx4enriiSeewIEDB+Dp6QkAePfdd2FjY4MhQ4YYTcxoYGtri61bt2LSpEkIDw+Hk5MT4uPjsWDBAnOdkuzcuD0hYxO2EBERkQVTCNGYp8yrHzqdDi4uLigsLOR4olrqsXgXLuXfwKaXHsMjLZuZuzpERCQjtfn9bVVjiMj63BlUzRYiIiKyXAxEZFKGQOTIMURERGTBGIjIZIQQ0qBqR7YQERGRBWMgIpMprdBLL3XlPERERGTJGIjIZAzdZQC7zIiIyLIxEJHJGLrLlHY2sG2Eb7onIqLGg4GITMYwBxFbh4iIyNIxEJHJ3CjTA+Aj90REZPkYiMhkrhtaiBiIiIjIwjEQkcncKOccREREZB0YiMhkOEs1ERFZCwYiMpnrtwORA1uIiIjIwjEQkckYuszYQkRERJaOgYhM5k6XGWepJiIiy8ZARCZjaCFilxkREVk6BiIymescVE1ERFaCgYhMhjNVExGRtWAgIpOR5iFiCxEREVk4BiIyGXaZERGRtWAgIpO5yZmqiYjISjAQkckYWojYZUZERJaOgYhMRgpEbCEiIiILx0BEJnOznBMzEhGRdWAgIpNhlxkREVkLBiIymRscVE1ERFaCgYhM5gYfuyciIivBQEQmIYTA9dszVTMQERGRpWMgIpMoq9RDL2797MBAREREFo6BiEzC0F0GcAwRERFZPgYiMgnDgGp7WwXsbfnHjIiILJtF/6ZKSkrCo48+CmdnZ3h5eeGZZ55BRkaGUZnevXtDoVAYfSZOnGhUJisrC7GxsWjSpAm8vLzw6quvoqKioiFPRXY4KSMREVkTi54xb+/evUhISMCjjz6KiooK/OMf/0BUVBROnz4NJycnqdz48eOxYMECablJkybSz5WVlYiNjYVGo8FPP/2E7OxsxMXFwd7eHgsXLmzQ85GTO0+YWfQfMSIiIgAWHoi2b99utLxmzRp4eXnhyJEj6Nmzp7S+SZMm0Gg01e5jx44dOH36NHbu3Alvb2906tQJb7zxBmbMmIF58+ZBqVSa9BwsTVbedew4rYVeCJMe5/drNwBwUkYiIrIOFh2I/qywsBAA4ObmZrR+7dq1+Oyzz6DRaDBgwADMnj1baiVKS0tDSEgIvL29pfLR0dGYNGkSTp06hc6dOzfcCViA1zYew4Hf8hvseGpH+wY7FhERUV1ZTSDS6/WYOnUqHn/8cXTo0EFaP2LECPj7+8PX1xfHjx/HjBkzkJGRgU2bNgEAtFqtURgCIC1rtdpqj1VaWorS0lJpWafT1ffpmE1W3nUAQEQ7L7g0MW1YsVEo8FxXP5Meg4iIqD5YTSBKSEjAyZMn8eOPPxqtnzBhgvRzSEgIfHx8EBERgXPnziEoKKhOx0pKSsL8+fP/Un0tkRACeSVlAIB5T7eHn1uTB3yDiIhIHiz6KTODyZMnY+vWrdi9ezdatGhx37JhYWEAgMzMTACARqNBTk6OURnD8r3GHc2cOROFhYXS59KlS3/1FCxCSVklSiv0AAD3pvIaO0VERHQ/Fh2IhBCYPHkyvv76a+zatQuBgYEP/E56ejoAwMfHBwAQHh6OEydOIDc3VyqTnJwMtVqN4ODgavehUqmgVquNPo1BfvGt1iEHexs+/UVERHQXi/6tmJCQgHXr1uGbb76Bs7OzNObHxcUFjo6OOHfuHNatW4d+/frB3d0dx48fx7Rp09CzZ0+EhoYCAKKiohAcHIznn38eixcvhlarxaxZs5CQkACVSmXO02tweSW3xkW5O8nrvImIiB7EoluIPvjgAxQWFqJ3797w8fGRPhs2bAAAKJVK7Ny5E1FRUWjXrh2mT5+OIUOGYMuWLdI+bG1tsXXrVtja2iI8PByjRo1CXFyc0bxFcpF/e/yQmxO7y4iIiO5m0S1E4gFz5fj5+WHv3r0P3I+/vz++++67+qqW1cq73WXG8UNERETGLLqFiOpXHluIiIiIqsVAJCP50hgiBiIiIqK7MRDJyJ0WIg6qJiIiuhsDkYxIY4jYQkRERGSEgUhGDE+ZcVA1ERGRMQYiGeFj90RERNVjIJKJW+8x48SMRERE1WEgkonrZZW4WX7rPWZu7DIjIiIywkAkE4buMqWdDZyUtmauDRERkWVhIJIJwyP3Hk5KKBQKM9eGiIjIsjAQyYRhUkZ2lxEREVXFQCQTV4s5KSMREdG9WPTLXeneKvUC2YU3alz+Yl4JAE7KSEREVB0GIis1bGUaDl+4VuvvcQ4iIiKiqhiIrFDhjXIpDKnsat7r2VRlh6eCvU1VLSIiIqvFQGSFMnOLAAA+Lg5Imxlh5toQERFZPw6qtkL/yykGALT2amrmmhARETUODERW6OztQPSQt7OZa0JERNQ4MBBZobO3u8zasIWIiIioXjAQWSFDC1EbbwYiIiKi+sBAZGV0N8uh1d0EALT2YpcZERFRfWAgsjKG1iFvtQoujvZmrg0REVHjwEBkZQyP3HNANRERUf3hPEQW7PCFfPySZTwb9d7/XQHAR+6JiIjqEwORhSq4XoaRHx1EWYW+2u3tNGwhIiIiqi8MRBZqx6kclFXo4a1W4fHWHkbb3J2UGNDR10w1IyIianwYiCzUthPZAIBRYf6YEtHGzLUhIiJq3Dio2gIVXC/D/syrAIB+oT5mrg0REVHjxxYiM6rUC2QX3qiyfsepHFToBdppnBHkycHTREREpsZAZEZ5JaV44q3d99we04GtQ0RERA2BgcjMVHbV91p6qVX4W9cWDVwbIiIieZJVIFqxYgWWLFkCrVaLjh074r333kO3bt3MVh8vZwdkvBljtuMTERHRLbIZVL1hwwYkJiZi7ty5OHr0KDp27Ijo6Gjk5uaau2pERERkZrIJREuXLsX48eMxZswYBAcH49///jeaNGmCVatWmbtqREREZGayCERlZWU4cuQIIiMjpXU2NjaIjIxEWlpalfKlpaXQ6XRGHyIiImq8ZBGIrl69isrKSnh7exut9/b2hlarrVI+KSkJLi4u0sfPz6+hqkpERERmIItAVFszZ85EYWGh9Ll06ZK5q0REREQmJIunzDw8PGBra4ucnByj9Tk5OdBoNFXKq1QqqFSqhqoeERERmZksWoiUSiW6dOmClJQUaZ1er0dKSgrCw8PNWDMiIiKyBLJoIQKAxMRExMfHo2vXrujWrRuWLVuGkpISjBkzxtxVIyIiIjOTTSAaOnQorly5gjlz5kCr1aJTp07Yvn17lYHWREREJD8KIYQwdyUsnU6ng4uLCwoLC6FWq81dHSIiIqqB2vz+lsUYIiIiIqL7YSAiIiIi2WMgIiIiItljICIiIiLZk81TZn+FYdw532lGRERkPQy/t2vy/BgDUQ0UFRUBAN9pRkREZIWKiorg4uJy3zJ87L4G9Ho9Ll++DGdnZygUinrdt06ng5+fHy5dusRH+k2A19f0eI1Ni9fX9HiNTcuc11cIgaKiIvj6+sLG5v6jhNhCVAM2NjZo0aKFSY+hVqv5F9GEeH1Nj9fYtHh9TY/X2LTMdX0f1DJkwEHVREREJHsMRERERCR7DERmplKpMHfuXKhUKnNXpVHi9TU9XmPT4vU1PV5j07KW68tB1URERCR7bCEiIiIi2WMgIiIiItljICIiIiLZYyAiIiIi2WMgMqMVK1YgICAADg4OCAsLw6FDh8xdJas1b948KBQKo0+7du2k7Tdv3kRCQgLc3d3RtGlTDBkyBDk5OWassWVLTU3FgAED4OvrC4VCgc2bNxttF0Jgzpw58PHxgaOjIyIjI3H27FmjMvn5+Rg5ciTUajVcXV0xbtw4FBcXN+BZWLYHXePRo0dX+TPdt29fozK8xtVLSkrCo48+CmdnZ3h5eeGZZ55BRkaGUZma3BOysrIQGxuLJk2awMvLC6+++ioqKioa8lQsVk2uce/evav8GZ44caJRGUu6xgxEZrJhwwYkJiZi7ty5OHr0KDp27Ijo6Gjk5uaau2pWq3379sjOzpY+P/74o7Rt2rRp2LJlC7788kvs3bsXly9fxuDBg81YW8tWUlKCjh07YsWKFdVuX7x4Mf7v//4P//73v3Hw4EE4OTkhOjoaN2/elMqMHDkSp06dQnJyMrZu3YrU1FRMmDChoU7B4j3oGgNA3759jf5Mf/7550bbeY2rt3fvXiQkJODAgQNITk5GeXk5oqKiUFJSIpV50D2hsrISsbGxKCsrw08//YRPPvkEa9aswZw5c8xxShanJtcYAMaPH2/0Z3jx4sXSNou7xoLMolu3biIhIUFarqysFL6+viIpKcmMtbJec+fOFR07dqx2W0FBgbC3txdffvmltO7MmTMCgEhLS2ugGlovAOLrr7+WlvV6vdBoNGLJkiXSuoKCAqFSqcTnn38uhBDi9OnTAoA4fPiwVOb7778XCoVC/PHHHw1Wd2vx52sshBDx8fFi4MCB9/wOr3HN5ebmCgBi7969Qoia3RO+++47YWNjI7RarVTmgw8+EGq1WpSWljbsCViBP19jIYTo1auX+Pvf/37P71jaNWYLkRmUlZXhyJEjiIyMlNbZ2NggMjISaWlpZqyZdTt79ix8fX3RqlUrjBw5EllZWQCAI0eOoLy83Oh6t2vXDi1btuT1roPz589Dq9UaXU8XFxeEhYVJ1zMtLQ2urq7o2rWrVCYyMhI2NjY4ePBgg9fZWu3ZswdeXl5o27YtJk2ahLy8PGkbr3HNFRYWAgDc3NwA1OyekJaWhpCQEHh7e0tloqOjodPpcOrUqQasvXX48zU2WLt2LTw8PNChQwfMnDkT169fl7ZZ2jXmy13N4OrVq6isrDT6QwAA3t7e+PXXX81UK+sWFhaGNWvWoG3btsjOzsb8+fPRo0cPnDx5ElqtFkqlEq6urkbf8fb2hlarNU+FrZjhmlX359ewTavVwsvLy2i7nZ0d3NzceM1rqG/fvhg8eDACAwNx7tw5/OMf/0BMTAzS0tJga2vLa1xDer0eU6dOxeOPP44OHToAQI3uCVqttto/44ZtdEd11xgARowYAX9/f/j6+uL48eOYMWMGMjIysGnTJgCWd40ZiKhRiImJkX4ODQ1FWFgY/P398cUXX8DR0dGMNSOqm2HDhkk/h4SEIDQ0FEFBQdizZw8iIiLMWDPrkpCQgJMnTxqNKaT6da9rfPd4tpCQEPj4+CAiIgLnzp1DUFBQQ1fzgdhlZgYeHh6wtbWt8kRDTk4ONBqNmWrVuLi6uuKhhx5CZmYmNBoNysrKUFBQYFSG17tuDNfsfn9+NRpNlQcEKioqkJ+fz2teR61atYKHhwcyMzMB8BrXxOTJk7F161bs3r0bLVq0kNbX5J6g0Wiq/TNu2Ea33OsaVycsLAwAjP4MW9I1ZiAyA6VSiS5duiAlJUVap9frkZKSgvDwcDPWrPEoLi7GuXPn4OPjgy5dusDe3t7oemdkZCArK4vXuw4CAwOh0WiMrqdOp8PBgwel6xkeHo6CggIcOXJEKrNr1y7o9Xrppki18/vvvyMvLw8+Pj4AeI3vRwiByZMn4+uvv8auXbsQGBhotL0m94Tw8HCcOHHCKHQmJydDrVYjODi4YU7Egj3oGlcnPT0dAIz+DFvUNW7wYdwkhBBi/fr1QqVSiTVr1ojTp0+LCRMmCFdXV6PR9lRz06dPF3v27BHnz58X+/fvF5GRkcLDw0Pk5uYKIYSYOHGiaNmypdi1a5f4+eefRXh4uAgPDzdzrS1XUVGR+OWXX8Qvv/wiAIilS5eKX375RVy8eFEIIcSiRYuEq6ur+Oabb8Tx48fFwIEDRWBgoLhx44a0j759+4rOnTuLgwcPih9//FG0adNGDB8+3FynZHHud42LiorEK6+8ItLS0sT58+fFzp07xSOPPCLatGkjbt68Ke2D17h6kyZNEi4uLmLPnj0iOztb+ly/fl0q86B7QkVFhejQoYOIiooS6enpYvv27cLT01PMnDnTHKdkcR50jTMzM8WCBQvEzz//LM6fPy+++eYb0apVK9GzZ09pH5Z2jRmIzOi9994TLVu2FEqlUnTr1k0cOHDA3FWyWkOHDhU+Pj5CqVSK5s2bi6FDh4rMzExp+40bN8RLL70kmjVrJpo0aSIGDRoksrOzzVhjy7Z7924BoMonPj5eCHHr0fvZs2cLb29voVKpREREhMjIyDDaR15enhg+fLho2rSpUKvVYsyYMaKoqMgMZ2OZ7neNr1+/LqKiooSnp6ewt7cX/v7+Yvz48VX+wcRrXL3qrisAsXr1aqlMTe4JFy5cEDExMcLR0VF4eHiI6dOni/Ly8gY+G8v0oGuclZUlevbsKdzc3IRKpRKtW7cWr776qigsLDTajyVdY4UQQjRcexQRERGR5eEYIiIiIpI9BiIiIiKSPQYiIiIikj0GIiIiIpI9BiIiIiKSPQYiIiIikj0GIiIiIpI9BiIiotvmzZuHTp06meXYAQEBWLZsmVmOTUQMRETUQEaPHg2FQoGJEydW2ZaQkACFQoHRo0c3fMVMQKFQYPPmzeauBhHVAgMRETUYPz8/rF+/Hjdu3JDW3bx5E+vWrUPLli1NdtyysjKT7ZuIGgcGIiJqMI888gj8/PywadMmad2mTZvQsmVLdO7cWVq3fft2PPHEE3B1dYW7uzv69++Pc+fOGe3r999/x/Dhw+Hm5gYnJyd07doVBw8eBHCn6+ujjz5CYGAgHBwcAABZWVkYOHAgmjZtCrVajeeeew45OTlV6vmf//wHfn5+aNKkCZ577jkUFhZK2w4fPoynnnoKHh4ecHFxQa9evXD06FFpe0BAAABg0KBBUCgU0jIAbNmyBY8++igcHBzg4eGBQYMGGR33+vXrGDt2LJydndGyZUusXLmylleYiOqKgYiIGtTYsWOxevVqaXnVqlUYM2aMUZmSkhIkJibi559/RkpKCmxsbDBo0CDo9XoAQHFxMXr16oU//vgD3377LY4dO4bXXntN2g4AmZmZ2LhxIzZt2oT09HTo9XoMHDgQ+fn52Lt3L5KTk/Hbb79h6NChRsfOzMzEF198gS1btmD79u345Zdf8NJLL0nbi4qKEB8fjx9//BEHDhxAmzZt0K9fPxQVFQG4FZgAYPXq1cjOzpaWt23bhkGDBqFfv3745ZdfkJKSgm7duhkd+5133kHXrl2lY06aNAkZGRl/9ZITUU2Y5ZWyRCQ78fHxYuDAgSI3N1eoVCpx4cIFceHCBeHg4CCuXLkiBg4cKOLj46v97pUrVwQAceLECSGEEP/5z3+Es7OzyMvLq7b83Llzhb29vcjNzZXW7dixQ9ja2oqsrCxp3alTpwQAcejQIel7tra24vfff5fKfP/998LGxqbKm9ANKisrhbOzs9iyZYu0DoD4+uuvjcqFh4eLkSNH3vP6+Pv7i1GjRknLer1eeHl5iQ8++OCe3yGi+sMWIiJqUJ6enoiNjcWaNWuwevVqxMbGwsPDw6jM2bNnMXz4cLRq1QpqtVrqdsrKygIApKeno3PnznBzc7vncfz9/eHp6SktnzlzBn5+fvDz85PWBQcHw9XVFWfOnJHWtWzZEs2bN5eWw8PDodfrpZaanJwcjB8/Hm3atIGLiwvUajWKi4ulut1Leno6IiIi7lsmNDRU+lmhUECj0SA3N/e+3yGi+mFn7goQkfyMHTsWkydPBgCsWLGiyvYBAwbA398fH374IXx9faHX69GhQwdpcLSjo+MDj+Hk5FS/lb4tPj4eeXl5WL58Ofz9/aFSqRAeHv7Agds1qbO9vb3RskKhMOoGJCLTYQsRETW4vn37oqysDOXl5YiOjjbalpeXh4yMDMyaNQsRERF4+OGHce3aNaMyoaGhSE9PR35+fo2P+fDDD+PSpUu4dOmStO706dMoKChAcHCwtC4rKwuXL1+Wlg8cOAAbGxu0bdsWALB//368/PLL6NevH9q3bw+VSoWrV68aHcve3h6VlZVV6pySklLj+hJRw2IgIqIGZ2trizNnzuD06dOwtbU12tasWTO4u7tj5cqVyMzMxK5du5CYmGhUZvjw4dBoNHjmmWewf/9+/Pbbb9i4cSPS0tLueczIyEiEhIRg5MiROHr0KA4dOoS4uDj06tULXbt2lco5ODggPj4ex44dw759+/Dyyy/jueeeg0ajAQC0adMG//3vf3HmzBkcPHgQI0eOrNL6ExAQgJSUFGi1WinMzZ07F59//jnmzp2LM2fO4MSJE3jrrbf+0nUkovrDQEREZqFWq6FWq6ust7Gxwfr163HkyBF06NAB06ZNw5IlS4zKKJVK7NixA15eXujXrx9CQkKwaNGiKuHqbgqFAt988w2aNWuGnj17IjIyEq1atcKGDRuMyrVu3RqDBw9Gv379EBUVhdDQUPzrX/+Stn/88ce4du0aHnnkETz//PN4+eWX4eXlZbSPd955B8nJyfDz85OmE+jduze+/PJLfPvtt+jUqROefPJJHDp0qNbXjYhMQyGEEOauBBEREZE5sYWIiIiIZI+BiIiIiGSPgYiIiIhkj4GIiIiIZI+BiIiIiGSPgYiIiIhkj4GIiIiIZI+BiIiIiGSPgYiIiIhkj4GIiIiIZI+BiIiIiGSPgYiIiIhk7/8BFamIlMfzJH0AAAAASUVORK5CYII=”,”,” “text/plain”: [“,” “

“”,” ]”,” },”,” “metadata”: {},”,” “output_type”: “display_data””,” }”,” ],”,” “source”: [“,” “plt.plot(range(macrobatch_count), [get_batch_size(m) for m in range(macrobatch_count)])\n”,”,” “plt.title(\”Batch size schedule\”)\n”,”,” “plt.xlabel(\”Macrobatch\”)\n”,”,” “plt.ylabel(\”Batch size\”)\n”,”,” “plt.show()””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “135dd4c6-7497-4559-bc7d-25ee9b3c099d”,”,” “metadata”: {},”,” “source”: [“,” “The optimizer is configured so that regularization is only applied to weights, not biases:””,” ]”,” },”,” {“,” “cell_type”: “code”,”,” “execution_count”: 35,”,” “id”: “be266c48-482d-4a01-9983-1256f24bc04e”,”,” “metadata”: {“,” “tags”: []”,” },”,” “outputs”: [],”,” “source”: [“,” “param_groups = [{‘params’: [p for p in list(bert.parameters()) + list(mlm_head.parameters()) if p.dim() >= 2], ‘weight_decay’: 0.01},\n”,”,” ” {‘params’: [p for p in list(bert.parameters()) + list(mlm_head.parameters()) if p.dim() < 2], 'weight_decay': 0}]\n","," "optimizer = optim.AdamW(param_groups, lr = get_lr(0), betas = (0.9, 0.98), eps = 1e-12, fused = True)\n","," "scaler = GradScaler() # This is for automatic mixed precision""," ]"," },"," {"," "cell_type": "markdown","," "id": "37b3c0a5-6bf0-441e-b042-f784027e7495","," "metadata": {},"," "source": ["," "And finally we get to the training loop. A small amount of complexity is introduced by the use of automatic mixed precision, but this is worthwhile as it speeds up training approximately two-fold (!). Note that the code below will run for 2 iterations only, to do a full training run comment out/remove the indicated lines. Later on in the notebook there will be a sleight of hand where we actually load weights fitted during a full run.""," ]"," },"," {"," "cell_type": "code","," "execution_count": 36,"," "id": "ee60f6c0-6f40-4103-913b-9253bdfe99f3","," "metadata": {"," "tags": []"," },"," "outputs": ["," {"," "name": "stderr","," "output_type": "stream","," "text": ["," " 1%|▎ | 2/256 [08:46<18:34:23, 263.24s/it]\n""," ]"," }"," ],"," "source": ["," "cumulative_samples = 0\n","," "mbs = macrobatches(macrobatch_size)\n","," "\n","," "f_log = \"BERT.csv\"\n","," "\n","," "with open(f_log, \"w\") as f:\n","," " f.write(\"macrobatch,cumulative_samples,duration,loss,lr\")\n","," "\n","," "for macrobatch in tqdm(range(macrobatch_count)):\n","," " \n","," " # REMOVE THE BELOW TWO LINES IF YOU WANT TO DO A FULL TRAINING RUN\n","," " if macrobatch == 2:\n","," " break\n","," " \n","," " # Set chunk training parameters\n","," " batch_size = get_batch_size(macrobatch)\n","," " lr = get_lr(macrobatch)\n","," " for g in optimizer.param_groups:\n","," " g['lr'] = get_lr(macrobatch)\n","," " \n","," " # Load a new macrobatch\n","," " xs, ys = next(mbs)\n","," " torch_xs = torch.LongTensor(xs).to(device)\n","," " torch_ys = torch.LongTensor(ys).to(device)\n","," " \n","," " # Iterate over the batches in the macrobatch\n","," " for i in range(0, xs.shape[0] // batch_size):\n","," " batch_start_time = time.time()\n","," " \n","," " batch_loss = 0\n","," " \n","," " batch_start_idx = i * batch_size\n","," " batch_data_torch_xs = torch_xs[batch_start_idx:batch_start_idx+batch_size, :]\n","," " batch_data_torch_ys = torch_ys[batch_start_idx:batch_start_idx+batch_size, :]\n","," "\n","," " optimizer.zero_grad(set_to_none = True)\n","," " \n","," " # Iterate over the minibatches in the batch\n","," " for j in range(0, batch_size // minibatch_size):\n","," " mb_start_idx = minibatch_size * j\n","," " mb_end_idx = mb_start_idx + minibatch_size\n","," "\n","," " # Use automatic mixed precision for (much) better performance\n","," " with autocast(device_type='cuda', dtype=torch.float16):\n","," " _, loss = mlm_head(bert(batch_data_torch_xs[mb_start_idx:mb_end_idx]), batch_data_torch_ys[mb_start_idx:mb_end_idx])\n","," "\n","," " # Correct for the fact that we are minibatching\n","," " corrected_loss = loss / (batch_size // minibatch_size)\n","," " batch_loss += corrected_loss\n","," "\n","," " # Need to use scaler.scale for automatic mixed precision\n","," " scaler.scale(corrected_loss).backward()\n","," " \n","," " cumulative_samples += minibatch_size\n","," "\n","," " # If we don't scaler.unscale_ here, gradient clipping will fail spectacularly, because it will act on arbitrarily scaled gradients\n","," " scaler.unscale_(optimizer)\n","," " torch.nn.utils.clip_grad_norm_(bert.parameters(), 0.5)\n","," " scaler.step(optimizer)\n","," " scaler.update()\n","," " \n","," " batch_duration = time.time() - batch_start_time\n","," " \n","," " with open(f_log, \"a\") as f:\n","," " f.write(f\"{macrobatch:03d},{cumulative_samples:09d},{batch_duration:05.3f},{batch_loss.item():05.2f},{lr:0.6f}\")\n","," " \n","," " torch.save(bert.state_dict(), f\"BERT.weights\")\n","," " torch.save(mlm_head.state_dict(), f\"MLMHead.weights\")""," ]"," },"," {"," "cell_type": "markdown","," "id": "b46ceabe-5be7-4231-b7da-8441b3b98ab8","," "metadata": {},"," "source": ["," "### Fine-tuning""," ]"," },"," {"," "cell_type": "markdown","," "id": "b84246f0-eead-4d52-ba7b-cf0a19cff44d","," "metadata": {},"," "source": ["," "We have a pre-trained BERT! That's great, but now let's do something real with it (because who cares about guessing tokens that we hid on purpose?).\n","," "\n","," "We will now train on two tasks from the GLUE benchmark. There are more tasks in the benchmark (BERT was evaluated on 6 other tasks as well) and you can find code for those in this repository, but for this notebook we will keep it simple.\n","," "\n","," "The first task we will use is STS-B, the Semantic Textual Similary Benchmark. In this task, the model gets two input sentences, and has to predict how similar they are in meaning on a scale of 1 to 5. An example from the data set is: (sentence 1) \"People are playing cricket.\", (sentence 2) \"Men are playing cricket.\", which has a similarity score of 3.2 in the data, indicating that the sentences are fairly similar, but not perfectly so (people could include women, after all). This is the only regression task in the GLUE benchmark on which BERT is evaluated, all other tasks are classification tasks.\n","," "\n","," "The second task is SST-2, the Stanford Sentiment Treebank. This task takes only a single sentence as input, and the requirement is to determine whether the sentence has a positive (1) or negative (0) sentiment. A positive example from the dataset is \"a gorgeous , witty , seductive movie . \", while a negative example is \"unflinchingly bleak and desperate \".\n","," "\n","," "Because we are using different datasets than during pretraining, we need to redo some of the logic for cleaning input sentences:""," ]"," },"," {"," "cell_type": "code","," "execution_count": 37,"," "id": "ad990858-dbc5-44db-bbaf-220a3cfe49c2","," "metadata": {"," "tags": []"," },"," "outputs": [],"," "source": ["," "def encode_sentence(sentence, bpe):\n","," " \"\"\"Take a string sentence and turn it into a list of BPE tokens.\"\"\"\n","," " encoded = []\n","," " for atom in atomize(clean_string(sentence)):\n","," " if atom.isalpha():\n","," " encoded += [tok for tok in bpe.encode('_' + atom)]\n","," " else:\n","," " encoded.append(atom)\n","," " return encoded""," ]"," },"," {"," "cell_type": "markdown","," "id": "cd15a973-30c8-4ee6-8a12-b0b8e17f3f9a","," "metadata": {},"," "source": ["," "For a number of downstream tasks (including STS-B), the structure of the input data is also different because now we have _2_ sentences as input. This is where the `[SEP]` token that we introduced earlier comes in. Training data is fed in in the form `['[CLS]'] + sentence1 + ['[SEP]'] + sentence2 + ['[PAD]'] * x` where the number of `[PAD]` tokens at the end is chosen such that the total number of tokens has the right length.\n","," "\n","," "Some downstream tasks also include data that requires more than 128 tokens to represent. This is where we can benefit from the fact that our BERT uses relative position embeddings rather than absolute: we can use the model on longer samples by need.""," ]"," },"," {"," "cell_type": "code","," "execution_count": 38,"," "id": "fe263ed3-4613-4890-a2b5-83512fe06dbe","," "metadata": {"," "tags": []"," },"," "outputs": [],"," "source": ["," "def prep_data(left_sentences, right_sentences, targets, bpe, length = 128, classification_target = True):\n","," " \"\"\"\n","," " Take two lists of string sentences and a list of targets and generate Torch matrices for training.\n","," " If the targets are not categorical (i.e. we're doing regression), set classification_target = False.\n","," " \"\"\"\n","," " assert len(left_sentences) == len(right_sentences) == len(targets)\n","," " num_samples = len(left_sentences)\n","," " tok2idx = bpe.token_mapping()\n","," " xs = []\n","," " ys = []\n","," " skipped = 0\n","," " for i in range(num_samples):\n","," " left_encoded = encode_sentence(left_sentences[i], bpe)\n","," " right_encoded = encode_sentence(right_sentences[i], bpe)\n","," " x = ([tok2idx[\"[CLS]\"]] + \n","," " [tok2idx[e] for e in left_encoded] +\n","," " [tok2idx[\"[SEP]\"]] +\n","," " [tok2idx[e] for e in right_encoded] +\n","," " [tok2idx[\"[PAD]\"]] * (length - len(left_encoded) - len(right_encoded) - 2))\n","," " if len(x) == length:\n","," " xs.append(x)\n","," " ys.append(targets[i])\n","," " else:\n","," " print(f\"WARNING: Skipping sample of length {len(x)} at index {i}\")\n","," " skipped += 1\n","," " print(f\"Skipped {skipped} samples ({skipped/num_samples * 100}%)\")\n","," " joint = list(zip(xs, ys))\n","," " random.shuffle(joint)\n","," " xs, ys = zip(*joint)\n","," " xs = torch.LongTensor(xs).to(device)\n","," " if classification_target:\n","," " ys = torch.LongTensor(ys).to(device)\n","," " else:\n","," " ys = torch.tensor(ys, device = device)\n","," " return xs, ys""," ]"," },"," {"," "cell_type": "markdown","," "id": "8f131088-6d65-4649-b9a4-b05e9103a77e","," "metadata": {},"," "source": ["," "This helper function implements the training loop for fine tuning. The datasets for the GLUE benchmark all fit in GPU memory entirely, so there is no need for \"macrobatching\" logic here. The implementation also uses a much simpler learning rate schedule, and a constant batch size. The current settings are likely suboptimal, and are probably one of the first places to look if you want to get a higher GLUE score out of this model (and you don't just want to train for longer).""," ]"," },"," {"," "cell_type": "code","," "execution_count": 39,"," "id": "44ba075a-3ab8-47a0-b51c-d5faf486842c","," "metadata": {"," "tags": []"," },"," "outputs": [],"," "source": ["," "def finetune(bert, head, xs, ys):\n","," " \"\"\"\n","," " Fairly simple training procedure going through xs and ys for 5 epochs.\n","," " Batch size is constant, learning rate is warmed up and decayed but is constant per epoch.\n","," " `bert` and `head` are modified in-place (you might not want to do this at home), this function does not return anything.\n","," " \"\"\"\n","," " batch_size = 16\n","," " total_samples = xs.shape[0]\n","," " \n","," " param_groups = [{'params': [p for p in list(bert.parameters()) + list(head.parameters()) if p.dim() >= 2], ‘weight_decay’: 0.01},\n”,”,” ” {‘params’: [p for p in list(bert.parameters()) + list(head.parameters()) if p.dim() < 2], 'weight_decay': 0}]\n","," " optimizer = optim.AdamW(param_groups, lr = 4e-5, betas = (0.9, 0.98), eps = 1e-12, fused = True)\n","," " scaler = GradScaler()\n","," " \n","," " # Poor man's warmup and decay\n","," " lrs = [1e-5, 4e-5, 4e-5, 2e-5, 1e-5]\n","," " \n","," " for epoch in tqdm(range(5)):\n","," " \n","," " for g in optimizer.param_groups:\n","," " g['lr'] = lrs[epoch]\n","," " \n","," " i = 0\n","," " while i < total_samples:\n","," "\n","," " batch_xs = xs[i:min(i+batch_size, total_samples), :]\n","," " batch_ys = ys[i:min(i+batch_size, total_samples)]\n","," "\n","," " optimizer.zero_grad(set_to_none = True)\n","," "\n","," " with autocast(device_type='cuda', dtype=torch.float16):\n","," " _, loss = head(bert(batch_xs), batch_ys)\n","," "\n","," " scaler.scale(loss).backward()\n","," " scaler.step(optimizer)\n","," " scaler.update()\n","," "\n","," " i += batch_size""," ]"," },"," {"," "cell_type": "markdown","," "id": "cee62797-fc05-4bf0-bb48-b027613067ad","," "metadata": {},"," "source": ["," "Once we have a finetuned trained model, we need to evaluate its performance on test data (we will use the validation datasets for that, we haven't used them for any other purpose so that can give a safe performance estimate).""," ]"," },"," {"," "cell_type": "code","," "execution_count": 40,"," "id": "997cf039-b295-418a-b7e0-0cde43f9f946","," "metadata": {"," "tags": []"," },"," "outputs": [],"," "source": ["," "def cls_predict(bert, cls_head, xs):\n","," " \"\"\"Take a trained BERT and CLSHead and generate predictions for the inputs xs.\"\"\"\n","," " pred = []\n","," " for i in tqdm(range(xs.shape[0])):\n","," " with torch.no_grad():\n","," " logits, _ = cls_head(bert(xs[i:i+1]))\n","," " pred.append(torch.argmax(logits))\n","," " return torch.LongTensor(pred).to(device)""," ]"," },"," {"," "cell_type": "code","," "execution_count": 41,"," "id": "90840e68-ca60-4f52-ac82-32247cc6032f","," "metadata": {"," "tags": []"," },"," "outputs": [],"," "source": ["," "def reg_predict(bert, reg_head, xs):\n","," " \"\"\"Take a trained BERT and RegHead and generate predictions for the inputs xs.\"\"\"\n","," " pred = []\n","," " for i in tqdm(range(xs.shape[0])):\n","," " with torch.no_grad():\n","," " y_hat, _ = reg_head(bert(xs[i:i+1]))\n","," " pred.append(y_hat)\n","," " return torch.tensor(pred, device = device)""," ]"," },"," {"," "cell_type": "markdown","," "id": "f0fa9b3f-64b9-4591-88fd-b322e0d890af","," "metadata": {},"," "source": ["," "And once we have predictions for our validation data, we need a way to quantify how good those predictions are. STS-B uses Spearman correlation in the GLUE benchmark, and SST-2 uses plain old accuracy.""," ]"," },"," {"," "cell_type": "code","," "execution_count": 42,"," "id": "7a8a669a-a400-4279-a6c2-db58373f73ae","," "metadata": {"," "tags": []"," },"," "outputs": [],"," "source": ["," "def accuracy(pred, true):\n","," " \"\"\"Calculate accuracy from predictions and ground truth.\"\"\"\n","," " return (torch.sum(pred == true) / pred.shape[0]).item()""," ]"," },"," {"," "cell_type": "code","," "execution_count": 43,"," "id": "224ca233-f31e-4370-85a2-178b2ea8eae8","," "metadata": {"," "tags": []"," },"," "outputs": [],"," "source": ["," "def spearman(pred, true):\n","," " \"\"\"Return Spearman correlation for predictions and ground truth.\"\"\"\n","," " return scipy.stats.spearmanr(np.array(pred.cpu()), np.array(true.cpu())).correlation""," ]"," },"," {"," "cell_type": "markdown","," "id": "a36d486d-8aee-49b0-a4ff-1cb47742247d","," "metadata": {},"," "source": ["," "We now have all the components we need to evaluate the performance of our pretrained model after finetuning:""," ]"," },"," {"," "cell_type": "code","," "execution_count": 44,"," "id": "abec54ab-d8ff-43ab-b5e9-b2f06f02bd40","," "metadata": {"," "tags": []"," },"," "outputs": [],"," "source": ["," "def eval_stsb(bert, bpe, length = 192):\n","," " \"\"\"Take a pre-trained BERT, finetune on STS-B, and return performance.\"\"\"\n","," " reg_head_stsb = RegHead(config).to(device)\n","," " \n","," " stsb_train = load_dataset(\"glue\", \"stsb\", split = \"train\")\n","," " stsb_train_xs, stsb_train_ys = prep_data([s['sentence1'] for s in stsb_train],\n","," " [s['sentence2'] for s in stsb_train],\n","," " [s['label'] for s in stsb_train],\n","," " bpe,\n","," " length = length,\n","," " classification_target = False)\n","," "\n","," " finetune(bert, reg_head_stsb, stsb_train_xs, stsb_train_ys)\n","," "\n","," " stsb_val = load_dataset(\"glue\", \"stsb\", split = \"validation\")\n","," " stsb_val_xs, stsb_val_ys = prep_data([s['sentence1'] for s in stsb_val],\n","," " [s['sentence2'] for s in stsb_val],\n","," " [s['label'] for s in stsb_val],\n","," " bpe,\n","," " length = length,\n","," " classification_target = False)\n","," "\n","," " return spearman(reg_predict(bert, reg_head_stsb, stsb_val_xs), stsb_val_ys)""," ]"," },"," {"," "cell_type": "code","," "execution_count": 45,"," "id": "ff3d665c-4080-42e6-a7b3-dac57ec0facc","," "metadata": {"," "tags": []"," },"," "outputs": [],"," "source": ["," "def eval_sst2(bert, bpe):\n","," " \"\"\"Take a pre-trained BERT, finetune on SST2, and return performance.\"\"\"\n","," " cls_head_sst2 = CLSHead(config, 2).to(device)\n","," " \n","," " sst2_train = load_dataset(\"glue\", \"sst2\", split = \"train\")\n","," " sst2_train_xs, sst2_train_ys = prep_data([s['sentence'] for s in sst2_train],\n","," " ['' for s in sst2_train],\n","," " [s['label'] for s in sst2_train],\n","," " bpe)\n","," "\n","," " finetune(bert, cls_head_sst2, sst2_train_xs, sst2_train_ys)\n","," "\n","," " sst2_val = load_dataset(\"glue\", \"sst2\", split = \"validation\")\n","," " sst2_val_xs, sst2_val_ys = prep_data([s['sentence'] for s in sst2_val],\n","," " ['' for s in sst2_val],\n","," " [s['label'] for s in sst2_val],\n","," " bpe)\n","," "\n","," " return accuracy(cls_predict(bert, cls_head_sst2, sst2_val_xs), sst2_val_ys)""," ]"," },"," {"," "cell_type": "code","," "execution_count": 47,"," "id": "3ab3865f-c6cb-45d9-8802-d1d597b17b50","," "metadata": {"," "tags": []"," },"," "outputs": ["," {"," "name": "stdout","," "output_type": "stream","," "text": ["," "number of parameters: 110164992\n","," "BERT.weights -> Starting STS-B…\n””,” ]”,” },”,” {“,” “name”: “stderr”,”,” “output_type”: “stream”,”,” “text”: [“,” “Found cached dataset glue (/home/sam/.cache/huggingface/datasets/glue/stsb/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)\n””,” ]”,” },”,” {“,” “name”: “stdout”,”,” “output_type”: “stream”,”,” “text”: [“,” “Skipped 0 samples (0.0%)\n””,” ]”,” },”,” {“,” “name”: “stderr”,”,” “output_type”: “stream”,”,” “text”: [“,” “100%|█████████████████████████████████████████████| 5/5 [05:18<00:00, 63.77s/it]\n","," "Found cached dataset glue (/home/sam/.cache/huggingface/datasets/glue/stsb/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)\n""," ]"," },"," {"," "name": "stdout","," "output_type": "stream","," "text": ["," "Skipped 0 samples (0.0%)\n""," ]"," },"," {"," "name": "stderr","," "output_type": "stream","," "text": ["," "100%|██████████████████████████████████████| 1500/1500 [00:13<00:00, 108.07it/s]\n""," ]"," },"," {"," "name": "stdout","," "output_type": "stream","," "text": ["," "BERT.weights -> STS-B score: 0.8353077198403909\n”,”,” “BERT.weights -> Starting SST-2…\n””,” ]”,” },”,” {“,” “name”: “stderr”,”,” “output_type”: “stream”,”,” “text”: [“,” “Found cached dataset glue (/home/sam/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)\n””,” ]”,” },”,” {“,” “name”: “stdout”,”,” “output_type”: “stream”,”,” “text”: [“,” “Skipped 0 samples (0.0%)\n””,” ]”,” },”,” {“,” “name”: “stderr”,”,” “output_type”: “stream”,”,” “text”: [“,” “100%|████████████████████████████████████████████| 5/5 [48:37<00:00, 583.49s/it]\n","," "Found cached dataset glue (/home/sam/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)\n""," ]"," },"," {"," "name": "stdout","," "output_type": "stream","," "text": ["," "Skipped 0 samples (0.0%)\n""," ]"," },"," {"," "name": "stderr","," "output_type": "stream","," "text": ["," "100%|████████████████████████████████████████| 872/872 [00:05<00:00, 147.42it/s]""," ]"," },"," {"," "name": "stdout","," "output_type": "stream","," "text": ["," "BERT.weights -> SST-2 score: 0.8841742873191833\n””,” ]”,” },”,” {“,” “name”: “stderr”,”,” “output_type”: “stream”,”,” “text”: [“,” “\n””,” ]”,” }”,” ],”,” “source”: [“,” “results = []\n”,”,” “bert = BERT(config).to(device)\n”,”,” “\n”,”,” “ws = \”BERT.weights\”\n”,”,” “\n”,”,” “print(f\”{ws} -> Starting STS-B…\”)\n”,”,” “bert.load_state_dict(torch.load(ws), strict = False)\n”,”,” “stsb_score = eval_stsb(bert, bert_bpe)\n”,”,” “print(f\”{ws} -> STS-B score: {stsb_score}\”)\n”,”,” “\n”,”,” “print(f\”{ws} -> Starting SST-2…\”)\n”,”,” “bert.load_state_dict(torch.load(ws), strict = False)\n”,”,” “sst2_score = eval_sst2(bert, bert_bpe)\n”,”,” “print(f\”{ws} -> SST-2 score: {sst2_score}\”)””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “41a47354-4d55-4160-9f37-b34d0b6175ab”,”,” “metadata”: {},”,” “source”: [“,” “The convention with GLUE is to scale these scores by 100, so we scored 83.5 on STS-B and 88.4 on SST-2. For reference, the original BERT-base scored 85.8 on STS-B and 93.5 on SST-2. Not bad! (Note though that BERT was evaluated on a held out test set, and in our case we’re evaluating on the validation set, so some caution is required).””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “618c5738-e9b9-4de4-8b10-364cfebea774″,”,” “metadata”: {},”,” “source”: [“,” “### Tinker time\n”,”,” “\n”,”,” “That’s it – you’ve now seen the whole process of training and evaluating a BERT lookalike. The training process is sufficiently fast that you can do some interesting experimentation even just on a laptop. Here are some results from variants that I’ve tried:\n”,”,” “\n”,”,” “| | Tokens seen | MLM loss | MNLI m | MNLI mm | QQP | QNLI | SST-2 | CoLA | CoLA run 2 | STS-B | MRPC | RTE | RTE run 2 | Average |\n”,”,” “| —————————————————- | ———– | ——– | —— | ——- | —- | —- | —– | —- | ———- | —– | —- | —- | ——— | ——- |\n”,”,” “| % samples longer than 128 tokens | | | 0.32 | 0.34 | 0.02 | 0.55 | 0 | 0 | 0 | 0 | 0 | 12.6 | 12.6 | |\n”,”,” “| —————————————————- | ———– | ——– | —— | ——- | —- | —- | —– | —- | ———- | —– | —- | —- | ——— | ——- |\n”,”,” “| Absolute position embeddings | 2^30 | 2.07 | 75.8 | 75.3 | 84.2 | 82.3 | 88.2 | 36.9 | 35.7 | 81 | 82.2 | 52.5 | 50 | 73.0 |\n”,”,” “| Relative position embeddings | 2^30 | 1.99 | 76.5 | 77.4 | 86.2 | 85.2 | 87.6 | 37.2 | 37.3 | 83.5 | 84.2 | 53.1 | 57 | 74.8 |\n”,”,” “| Relative position embeddings | 2^31 | 1.82 | 77.8 | 77.5 | 86.2 | 86.3 | 88.2 | 42.2 | 45.2 | 84.7 | 85.5 | 52.3 | 50.9 | 75.7 |\n”,”,” “| Relative position embeddings, [Sophia optimizer](https://arxiv.org/abs/2305.14342) | 2^30 | 1.92 | 76.4 | 76.4 | 85.2 | 84.1 | 88 | 44.4 | 25.2 | 80.2 | 85.1 | 46.2 | 56 | 73.5 |\n”,”,” “| Relative position embeddings, Sophia, span objective | 2^30 | 3.74 | 75.6 | 76.5 | 84.6 | 83.9 | 84.7 | 29.5 | 30.4 | 83.9 | 83.4 | 58.1 | 63.5 | 73.7 |\n”,”,” “| Relative position embeddings, Sophia, span objective | 2^31 | 3.58 | 76.1 | 76.2 | 85.2 | 85.1 | 87.4 | 37.3 | 42.9 | 83.7 | 87 | 64.6 | 53.8 | 75.6 |\n”,”,” “| —————————————————- | ———– | ——– | —— | ——- | —- | —- | —– | —- | ———- | —– | —- | —- | ——— | ——- |\n”,”,” “| Cramming results on 2080 TI (Arxiv version) | 2^32-ish | 1.84 | 82.8 | 83.4 | 87.2 | 89 | 91.5 | 47.2 | – | 83.1 | 86.2 | 54 | – | 78.3 |\n”,”,” “\n”,”,” “As mentioned above, the finetuning process could probably be improved quite a bit – in particular MNLI scores seem low compared to what’s reported in the Cramming paper (also for runs with poor MLM loss, where the Cramming paper still manages to obtain good MNLI performance). It might also be possible to still do better with Sophia – in my tests Sophia improved MLM performance but that did not translate to better GLUE performance. However, I didn’t really do any hyperparameter optimization, using only a single set of (mostly default) settings.\n”,”,” “\n”,”,” “For Cramming, the authors also implemented \”sparse token prediction\” which improves efficiency by only generating token predictions for masked tokens. This wouldn’t affect the accuracy of the model, but it would make it faster to train. Similarly, something like [FlashAttention](https://arxiv.org/abs/2205.14135) could bring some welcome performance gains that make laptop training more feasible. Both of these changes would add some complexity, however.\n”,”,” “\n”,”,” “What will you try?””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “67c96f0c-366a-4ef4-ab78-29f92f4a150a”,”,” “metadata”: {“,” “tags”: []”,” },”,” “source”: [“,” “### Bloopers\n”,”,” “\n”,”,” “A variety of things went wrong as I went through the process of getting this BERT to train. I learned a lot from making, finding and solving these mistakes so it seems worth at least mentioning them:\n”,”,” “* `torch.optim.Adam` and `torch.optim.AdamW` are *not* the same thing, and `Adam` (without `W`) actually fails to converge on this model. The difference is in the way weight decay (~ L2 regularization) is implemented in the optimizer.\n”,”,” “* Similarly I had convergence issues when I forgot to apply the layernorm right after the embedding layer. This modification to the original BERT architecture is mentioned in the Cramming paper as improving training stability, so it was interesting to see that play out.\n”,”,” “* I also faced issues when I used Torch default random initializations for most of the model weights. The current initialization scheme comes from [nanoGPT](https://github.com/karpathy/nanoGPT) and again it makes the difference between convergence and divergence.\n”,”,” “\n”,”,” “Needless to say, these seemingly subtle differences can be hard to identify: [this blog post](http://karpathy.github.io/2019/04/25/recipe/) by Andrej Karpathy was very helpful for strategizing through the debugging process.””,” ]”,” },”,” {“,” “cell_type”: “markdown”,”,” “id”: “a321cbe6-169c-4a55-9bb4-8527e4500772″,”,” “metadata”: {},”,” “source”: [“,” “### Have fun, and good luck!””,” ]”,” }”,” ],”,” “metadata”: {“,” “kernelspec”: {“,” “display_name”: “Python 3 (ipykernel)”,”,” “language”: “python”,”,” “name”: “python3″”,” },”,” “language_info”: {“,” “codemirror_mode”: {“,” “name”: “ipython”,”,” “version”: 3″,” },”,” “file_extension”: “.py”,”,” “mimetype”: “text/x-python”,”,” “name”: “python”,”,” “nbconvert_exporter”: “python”,”,” “pygments_lexer”: “ipython3″,”,” “version”: “3.8.10””,” }”,” },”,” “nbformat”: 4,”,” “nbformat_minor”: 5″,”}”],”stylingDirectives”:null,”csv”:null,”csvError”:null,”dependabotInfo”:{“showConfigurationBanner”:false,”configFilePath”:null,”networkDependabotPath”:”/samvher/bert-for-laptops/network/updates”,”dismissConfigurationNoticePath”:”/settings/dismiss-notice/dependabot_configuration_notice”,”configurationNoticeDismissed”:null,”repoAlertsPath”:”/samvher/bert-for-laptops/security/dependabot”,”repoSecurityAndAnalysisPath”:”/samvher/bert-for-laptops/settings/security_analysis”,”repoOwnerIsOrg”:false,”currentUserCanAdminRepo”:false},”displayName”:”BERT_for_laptops.ipynb”,”displayUrl”:”https://notebooks.githubusercontent.com/view/ipynb?browser=chrome&bypass_fastly=true&color_mode=auto&commit=1f904b870c455e909d2858428779b657e69445aa&device=unknown_device&docs_host=https%3A%2F%2Fdocs.github.com&enc_url=68747470733a2f2f7261772e67697468756275736572636f6e74656e742e636f6d2f73616d766865722f626572742d666f722d6c6170746f70732f316639303462383730633435356539303964323835383432383737396236353765363934343561612f424552545f666f725f6c6170746f70732e6970796e62&logged_in=false&nwo=samvher%2Fbert-for-laptops&path=BERT_for_laptops.ipynb&platform=windows&repository_id=673536158&repository_type=Repository&version=41″,”headerInfo”:{“blobSize”:”157 KB”,”deleteInfo”:{“deleteTooltip”:”You must be signed in to make or propose changes”},”editInfo”:{“editTooltip”:”You must be signed in to make or propose changes”},”ghDesktopPath”:”https://desktop.github.com”,”gitLfsPath”:null,”onBranch”:true,”shortPath”:”3ebb8d2″,”siteNavLoginPath”:”/login?return_to=https%3A%2F%2Fgithub.com%2Fsamvher%2Fbert-for-laptops%2Fblob%2Fmain%2FBERT_for_laptops.ipynb”,”isCSV”:false,”isRichtext”:false,”toc”:null,”lineInfo”:{“truncatedLoc”:”2163″,”truncatedSloc”:”2163″},”mode”:”file”},”image”:false,”isCodeownersFile”:null,”isPlain”:false,”isValidLegacyIssueTemplate”:false,”issueTemplateHelpUrl”:”https://docs.github.com/articles/about-issue-and-pull-request-templates”,”issueTemplate”:null,”discussionTemplate”:null,”language”:”Jupyter Notebook”,”languageID”:185,”large”:false,”loggedIn”:false,”newDiscussionPath”:”/samvher/bert-for-laptops/discussions/new”,”newIssuePath”:”/samvher/bert-for-laptops/issues/new”,”planSupportInfo”:{“repoIsFork”:null,”repoOwnedByCurrentUser”:null,”requestFullPath”:”/samvher/bert-for-laptops/blob/main/BERT_for_laptops.ipynb”,”showFreeOrgGatedFeatureMessage”:null,”showPlanSupportBanner”:null,”upgradeDataAttributes”:null,”upgradePath”:null},”publishBannersInfo”:{“dismissActionNoticePath”:”/settings/dismiss-notice/publish_action_from_dockerfile”,”dismissStackNoticePath”:”/settings/dismiss-notice/publish_stack_from_file”,”releasePath”:”/samvher/bert-for-laptops/releases/new?marketplace=true”,”showPublishActionBanner”:false,”showPublishStackBanner”:false},”renderImageOrRaw”:false,”richText”:null,”renderedFileInfo”:{“identityUUID”:”b676dd1d-ab16-4794-abf4-6ce7b1f90426″,”renderFileType”:”ipynb”,”size”:161162},”shortPath”:null,”tabSize”:8,”topBannersInfo”:{“overridingGlobalFundingFile”:false,”globalPreferredFundingPath”:null,”repoOwner”:”samvher”,”repoName”:”bert-for-laptops”,”showInvalidCitationWarning”:false,”citationHelpUrl”:”https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-repository-on-github/about-citation-files”,”showDependabotConfigurationBanner”:false,”actionsOnboardingTip”:null},”truncated”:false,”viewable”:true,”workflowRedirectUrl”:null,”symbols”:{“timedOut”:false,”notAnalyzed”:true,”symbols”:[]}},”copilotInfo”:null,”csrf_tokens”:{“/samvher/bert-for-laptops/branches”:{“post”:”HBql4DVjmCCCqK-S-NYaXOL4qnQLiY6XL3t87k_RLo5QcZjrL7V6TMDxzdDpZXplgTxjCDORDcqjpWc0fX67VQ”},”/repos/preferences”:{“post”:”5BOnpR9OqcoTdoMIqy38WZkttHnWWiidXLcGl_AkTC2yLzrWGUrBD90gEOJk_oKQCoFwVVnvQH1HtMuNAIzDlA”}}},”title”:”bert-for-laptops/BERT_for_laptops.ipynb at main · samvher/bert-for-laptops”}

Read More