Hacker News from Y Combinator

Links for the intellectually curious, ranked by readers.

The Sound So Loud That It Circled the Earth Four Times

29 September 2014 - 7:00pm

A lithograph of the massive 1883 eruption of Krakatoa. (The Eruption of Krakatoa, and Subsequent Phenomena, 1888; Parker & Coward; via Wikipedia)

On 27 August 1883, the Earth let out a noise louder than any it has made since.

It was 10:02 AM local time when the sound emerged from the island of Krakatoa, which sits between Java and Sumatra in Indonesia. It was heard 1,300 miles away in the Andaman and Nicobar islands (“extraordinary sounds were heard, as of guns firing”); 2,000 miles away in New Guinea and Western Australia (“a series of loud reports, resembling those of artillery in a north-westerly direction”); and even 3,000 miles away in the Indian Ocean island of Rodrigues, near Mauritius (“coming from the eastward, like the distant roar of heavy guns.”1). In all, it was heard by people in over 50 different geographical locations, together spanning an area covering a thirteenth of the globe.

Think, for a moment, just how crazy this is. If you’re in Boston and someone tells you that they heard a sound coming from New York City, you’re probably going to give them a funny look. But Boston is a mere 200 miles from New York. What we’re talking about here is like being in Boston and clearly hearing a noise coming from Dublin, Ireland. Travelling at the speed of sound (766 miles or 1,233 kilometers per hour), it takes a noise about 4 hours to cover that distance. This is the most distant sound that has ever been heard in recorded history.

So what could possibly create such an earth-shatteringly loud bang? A volcano on Krakatoa had just erupted with a force so great that it tore the island apart, emitting a plume of smoke that reached 17 miles into the atmosphere, according to a geologist who witnessed it1. You could use this observation to calculate that stuff spewed out of the volcano at over 1,600 miles per hour—or nearly half a mile per second. That’s more than twice the speed of sound.

This explosion created a deadly tsunami with waves over a hundred feet (30 meters) in height. One hundred sixty-five coastal villages and settlements were swept away and entirely destroyed. In all, the Dutch (the colonial rulers of Indonesia at the time) estimated the death toll at 36,417, while other estimates exceed 120,0002,3.

The British ship Norham Castle was 40 miles from Krakatoa at the time of the explosion. The ship’s captain wrote in his log, “So violent are the explosions that the ear-drums of over half my crew have been shattered. My last thoughts are with my dear wife. I am convinced that the Day of Judgement has come.”2

A map showing the area in which the Krakatoa explosion could be heard. (The Eruption of Krakatoa, and Subsequent Phenomena, 1888)

In general, sounds are caused not by the end of the world but by fluctuations in air pressure. A barometer at the Batavia gasworks (100 miles away from Krakatoa) registered the ensuing spike in pressure at over 2.5 inches of mercury1,2. That converts to over 172 decibels of sound pressure, an unimaginably loud noise. To put that in context, if you were operating a jackhammer you’d be subject to about 100 decibels. The human threshold for pain is near 130 decibels, and if you had the misfortune of standing next to a jet engine, you’d experience a 150 decibel sound. (A 10 decibel increase is perceived by people as sounding roughly twice as loud.) The Krakatoa explosion registered 172 decibels at 100 miles from the source. This is so astonishingly loud that it’s inching up against the limits of what we mean by sound.
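
For the curious, the conversion from that barometric spike to decibels is simple to check. A quick sketch, using the standard 20 micropascal reference pressure for sound in air:

import math

pressure_spike_inhg = 2.5                    # the spike recorded at the Batavia gasworks
pressure_pa = pressure_spike_inhg * 3386.39  # inches of mercury to pascals
reference_pa = 20e-6                         # standard reference pressure for sound in air

spl_db = 20 * math.log10(pressure_pa / reference_pa)
print(round(spl_db, 1))                      # ~172.5, matching the figure above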

When you hum a note or speak a word, you’re wiggling air molecules back and forth dozens or hundreds of times per second, causing the air pressure to be low in some places and high in other places. The louder the sound, the more intense these wiggles, and the larger the fluctuations in air pressure. But there’s a limit to how loud a sound can get. At some point, the fluctuations in air pressure are so large that the low pressure regions hit zero pressure—a vacuum—and you can’t get any lower than that. This limit happens to be about 194 decibels for a sound in Earth’s atmosphere. Any louder, and the sound is no longer just passing through the air, it’s actually pushing the air along with it, creating a pressurized burst of moving air known as a shock wave.

Closer to Krakatoa, the sound was well over this limit, producing a blast of high pressure air so powerful that it ruptured the eardrums of sailors 40 miles away. As this sound travelled thousands of miles, reaching Australia and the Indian Ocean, the wiggles in pressure started to die down, sounding more like a distant gunshot. Over 3,000 miles into its journey, the wave of pressure grew too quiet for human ears to hear, but it continued to sweep onward, reverberating for days across the globe. The atmosphere was ringing like a bell, imperceptible to us but detectable by our instruments.


By 1883, weather stations in scores of cities across the world were using barometers to track changes in atmospheric pressure. Six hours and 47 minutes after the Krakatoa explosion, a spike of air pressure was detected in Calcutta. By 8 hours, the pulse reached Mauritius in the west and Melbourne and Sydney in the east. By 12 hours, St. Petersburg noticed the pulse, followed by Vienna, Rome, Paris, Berlin, and Munich. By 18 hours the pulse had reached New York, Washington DC, and Toronto1. Amazingly, for as many as 5 days after the explosion, weather stations in 50 cities around the globe observed this unprecedented spike in pressure recurring like clockwork, approximately every 34 hours. That is roughly how long it takes sound to travel around the entire planet.

In all, the pressure waves from Krakatoa circled the globe three to four times in each direction. (Each city felt up to seven pressure spikes because they experienced shock waves travelling in opposite directions from the volcano1.) Meanwhile, tidal stations as far away as India, England, and San Francisco measured a rise in ocean waves simultaneous with this air pulse, an effect that had never been seen before. It was a sound that could no longer be heard but that continued moving around the world, a phenomenon that people nicknamed “the great air-wave.”

Recently, an incredible home video of a volcanic eruption taken by a couple on vacation in Papua New Guinea started making the rounds on the Internet. If you watch closely, this video gives you a sense for the pressure wave created by a volcano.

When the volcano erupts, it produces a sudden spike in air pressure; you can actually watch as it moves through the air, condensing water vapor into clouds as it travels. The people taking the video are (fortunately) far enough away that the pressure wave takes a while to reach them. When it does finally hit the boat, some 13 seconds after the explosion, you hear what sounds like a huge gunshot accompanied by a sudden blast of air. Multiplying 13 seconds by the speed of sound tells us that the boat was about 4.4 kilometers, or 2.7 miles, away from the volcano. This is somewhat akin to what happened at Krakatoa, except the ‘gunshot’ in that case could be heard not just three but three thousand miles away, a mind-boggling demonstration of the immense destructive power that nature can unleash.

References:

[1] Judd, John Wesley, et al. The Eruption of Krakatoa: And Subsequent Phenomena. Trübner & Company, 1888. (a comprehensive data-filled report of the Krakatoa eruption commissioned by the Royal Society, accessible for free under public domain)

[2] Winchester, Simon. Krakatoa: The day the world exploded. Penguin UK, 2004.

[3] Simkin, Tom, and Richard S. Fiske. Krakatau 1883: The Volcanic Eruption and Its Effects. Smithsonian Institution Press, 1983.

Thanks to Nicole Sharp and Will Slaton for helpful discussions about the physics of the Krakatoa explosion.

Aatish Bhatia is a recent physics Ph.D. working at Princeton University to bring science and engineering to a wider audience. He writes the award-winning science blog Empirical Zeal and is on Twitter as @aatishb.

OS X Bash Update 1.0

29 September 2014 - 7:00pm

This update fixes a security flaw in the bash UNIX shell. 

For more information on the security content of this update, see http://support.apple.com/kb/HT1222.

Trends in the Silk Road 2.0

29 September 2014 - 7:00pm

Written by Daryl Lau

Impetus

Last Friday, one of the top articles on Hacker News was called Breaking the Silk Road’s Captcha.

This sounded pretty cool to me, though not necessarily applicable because the current Silk Road 2.0 (I’ll just be calling it SR from now on) isn’t using anything nearly as sophisticated.

I thought it would be really interesting to scrape SR for, let’s say, a month or two. I could do cool stuff like make a stock ticker and display values for COK, XTC, LSD, etc.

Disclaimer

The following information is for educational purposes only. I have no affiliation with the Silk Road 2.0, nor have I ever purchased anything off the site. As far as I know, visiting the site and writing about it with no intention to buy (commit a crime) is perfectly legal.

Some implementation quirks


Before we begin: I only wanted to spend an hour or two doing this. I was late for a dinner and wanted it to run overnight while I was sleeping. If you are looking to build a robust system, you should consider a different solution.

Captcha

Simply download the captcha, run it through some OpenCV transforms, then feed it to Tesseract. If it doesn’t work, just keep trying until we get a relatively easy one. I think my success rate was >90% with some very simple transforms using OpenCV.
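
A rough sketch of that pipeline (not my exact transforms, and assuming the captcha image has already been saved to disk) looks something like this:

import cv2
import pytesseract

def solve_captcha(path):
    # a couple of simple transforms: grayscale then a hard threshold
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, clean = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
    # hand the cleaned-up image to tesseract
    text = pytesseract.image_to_string(clean).strip()
    # if the result doesn't look like a plausible answer, fetch a new captcha and retry
    return text if text.isalnum() else None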

Connecting through tor

The SR site is an anonymous hidden service reachable only through the tor network. You run the tor client daemon on your machine, then use it as a SOCKS5 proxy.

This has some complications, because DNS requests also have to go through Tor.

The quick and dirty solution is to just spawn the scraper through torsocks, which wraps all the network requests made by the scraper.
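
The same thing can also be done inside the scraper itself; here is a rough sketch using the requests library pointed at Tor's default SOCKS5 port, with the socks5h scheme so that DNS resolution happens through Tor as well (requires the requests[socks] extra):

import requests

# Tor's client daemon exposes a SOCKS5 proxy on localhost:9050 by default
TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

def fetch(url):
    # all traffic, including hostname resolution, goes through the Tor proxy
    return requests.get(url, proxies=TOR_PROXY, timeout=60)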

Automatic logouts/timeouts

The SR site seems to be very eager to automatically log out users. When logged out, I simply create a new user. When I am back on the site, I make sure to traverse to the last known point from the root node of our crawl tree. This is to avoid detection.

The nature of web crawling through tor:

Crawling through Tor already obfuscates your identity to a certain degree, so we don’t really have to do anything other than cycle User-Agent strings to look different from any other client.
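
A rough sketch of the User-Agent cycling (the strings below are just illustrative placeholders, not the ones I actually used):

import random

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 6.1; rv:31.0) Gecko/20100101 Firefox/31.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.94 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Firefox/24.0",
]

def random_headers():
    # pick a different browser identity for each request
    return {"User-Agent": random.choice(USER_AGENTS)}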

I’ve made a one day snapshot available at github.com/dlau/sr-data

I will release the source code for the crawler when I am done, with the SR specific portions removed if anyone is interested. This will all go to the same repo.

Findings

Alright enough technical details, let’s see what useful information we can get out of this.

Knowing very little about recreational drug use, I visited the National Institute on Drug Abuse’s website, which conveniently provided the names of what the US considers to be the most widely used drugs.

I thought: if I know them, they must be a big deal, right?! I guess so. Here are the drugs I picked out:

Total number of listings (sorted by number of listings)
--------------------------------------------------------
MDMA            1321
Weed             761
LSD              523
Cocaine          475
Amphetamine      215
Heroin           150
Ketamine          67
Opium             53
Mescaline         20
--------------------------------------------------------
Total           3585

weed is simply marijuana that is smoked, not any other derivative such as hash

To put things in perspective, at the moment of writing this SR has approximately 13,000 listings for drugs. Just a guess, but it looks like prescription drugs account for a large portion of SR drug listings.

Nothing much to say here, other than the fact that MDMA seems to have the most listings.

Highest number of ratings

Just like buying off Amazon, users can review the specific product. SR gives a rating from 1-5 stars and the total number of reviews per product listing.

The average number of ratings per product as shown here seems to be rather uniform: there are, on average, 29 reviews per product.

              Total ratings   Ratings per listing (avg)
--------------------------------------------------------
MDMA                33822          25
Weed                28213          37
LSD                 12122          23
Cocaine             16591          34
Amphetamine          6251          29
Heroin               3132          20
Ketamine             1504          22
Opium                1256          23
Mescaline              62           3
--------------------------------------------------------
Total              102953

Top 100 Most Reviewed Items

              Listings in the top 100
--------------------------------------
MDMA               48
Weed               22
LSD                10
Cocaine             9
Amphetamine         7
Ketamine            1
Opium               1
Mescaline           1
Heroin              1

In case you are wondering, there were some outliers:

  • One had 100g of MDMA for $1510.77. It had 392 ratings.
  • Another was selling 100g of MDMA for $1186 and 50g for $659. They had 293 ratings and 279 ratings respectively.
  • The other was for 1/4lb of bulk medical marijuana for $619.10. It had 378 ratings.

I somehow doubt this guy has sold half a million dollars worth of MDMA at $1.5k a pop in such a huge quantity, but the price seems to be in line with other sellers for an equivalent amount. I’m not entirely sure what the rules are regarding who can give feedback, but if a user must buy a product to be able to review it, then there seem to be people buying huge quantities. I have never purchased anything from the site, and I wasn’t presented with any option to review an item.

If only people who purchase the item can review it, then I am a bit less skeptical. I saw one Canadian seller listing 1 kilo of MDMA for USD $8k with 1 review!

========================================================

The average price of the top 100 items is $129.
The average price of the top 500 items is $188.
The average price of the top 1000 items is $236.

Prices are converted to USD at the time of crawl using exchange rates from the Coinbase API.

Countries

Sellers on SR can specify where they ship from and where they ship to.

isocode   number of listings
--------------------------------
us                93
au                45
gb                40
de                39
nl                35
ca                32
se                10
cn                 6
za                 4
be                 4
it                 2
es                 2
nz                 2
no                 2
ie                 2
pl                 2
dk                 2
sk                 1
cz                 1
fi                 1
fr                 1
ch                 1
at                 1
in                 1

Observations

Total Sales Volume

If, indeed, every sale maps to a transaction, some vendors are doing huge amounts of business through mail-order drugs. While the number of listings sampled is small, if we sum up all the product reviews x product prices, we get a huge number: USD $20,668,330.05.

REMEMBER! This is on Silk Road 2.0 with a very small subset of their entire inventory.

sqlite> SELECT SUM(review_count * price_usd) FROM silkroad_data WHERE review_count > 0;
20668330.0569627
sqlite> SELECT COUNT(*) FROM silkroad_data;
3579

Comparing to Agora

The Agora marketplace seems to have more or less the same number of listings. It would be interesting to see whether the sellers are the same or different between the two sites.

Junk listings

SR has quite a lot of junk listings; there are all sorts of listings unrelated to the product category. I had to filter out quite a lot of listings which deviated too far from the mean price per unit volume. The Agora Marketplace seems to be a bit better curated and moderated, and I suspect that it has more real inventory than SR.

Closing

I won’t tell you that I know what I’m doing

This is simply a collection of observations from someone who knows pretty much nothing about the drug world. It is probably among the longest articles I have ever written; any suggestions with regard to the writing would be greatly appreciated. It must have taken me at least 3 times as long to write this article as it did to gather all the data!

Need more data

I’ve set up a cron-type job to crawl SR daily and crunch some numbers. It will be interesting to see how things change over time, though a month may not be enough time to see any significant shifts.

This was a bit much for me

It was really creepy looking through all those drug listings, with rocks of all sorts of shapes and colors. I spent way too much time writing this article; I hope someone finds it educational.

To be continued …

Part 1 gave us an overall, albeit superficial view of the numbers behind SR 2.0.

Part 2 will focus on pricing, trends and predictions.

In the last part of the series, I will report on changes over time.


Adobe joins the Chromebook party, starting with Photoshop

29 September 2014 - 7:00pm
[Cross-posted on the Google for Education blog]

Chromebooks are fast, easy to use and secure. They bring the best of the cloud right to your desktop, whether that’s Google Drive, Google+ Photos or Gmail. Today, in partnership with Adobe, we’re welcoming Creative Cloud onto Chromebooks, initially with a streaming version of Photoshop. This will be available first to U.S.-based Adobe education customers with a paid Creative Cloud membership—so the Photoshop you know and love is now on Chrome OS. No muss, no fuss.

This streaming version of Photoshop is designed to run straight from the cloud to your Chromebook. It’s always up-to-date and fully integrated with Google Drive, so there’s no need to download and re-upload files—just save your art directly from Photoshop to the cloud. For IT administrators, it’s easy to manage, with no long client installation and one-click deployment to your team’s Chromebooks.

Head to Adobe.com to apply for access!

Posted by Stephen Konig, Product Manager & Sunset Photographer

The Extraordinary California Drought of 2013-2014

29 September 2014 - 7:00pm

A note from the author

This special update is a little different from what I typically post on the California Weather Blog. In the paragraphs below, I discuss results from and context for a study that my colleagues and I recently published in a special issue of the Bulletin of the American Meteorological Society (Swain et al. 2014).

The Ridiculously Resilient Ridge at its peak during January 2014. (Daniel Swain)

Unlike the majority of content on this blog, this report has undergone scientific peer review—an important distinction to make in the science blogosphere—and claims made on the basis of our peer-reviewed findings are marked with an asterisk (*) throughout this post. A reference list is provided at the end of the post, and the full BAMS report is available here.  I would like to thank my co-authors—Michael Tsiang, Matz Haugen, Deepti Singh, Allison Charland, Bala Rajaratnam, and Noah Diffenbaugh—all of whom played critical roles in bringing this paper together.

The really short version 

In 2013 and 2014, a vast region of persistently high atmospheric pressure over the northeastern Pacific Ocean–known as the “Ridiculously Resilient Ridge”–prevented typical winter storms from reaching California, bringing record-low precipitation and record-high temperatures. These extremely dry and warm conditions have culminated in California’s worst drought in living memory, and likely the worst in over 100 years. Human-caused climate change has increased the likelihood of extremely high atmospheric pressure over the North Pacific Ocean, which suggests an increased risk of atmospheric patterns conducive to drought in California.

The 12-month Modified Palmer Drought Severity Index for California. The current value is the lowest in more than 100 years, and is part of a century-long downward trend. (NOAA/NCDC)

What are the effects of the ongoing extreme drought in California?

The impacts of the drought are wide-ranging, and continue to intensify with each passing month. Curtailment of state and federal water project deliveries for agricultural irrigation has already resulted in multi-billion dollar losses as thousands of acres of farmland are fallowed. Small communities in some regions have started to run out of water entirely, and increasingly stringent urban conservation measures have been enacted over the summer as reservoir storage drops to critically low levels. Thousands of new water wells have been constructed on an emergency basis over the past year, and skyrocketing rates of groundwater pumping have led to rapid land subsidence in the San Joaquin Valley. Not to be outdone, snowpack in the Sierra Nevada Mountains was almost nonexistent for much of 2013-2014, and at least one of California’s major rivers is no longer reaching the Pacific Ocean.

Explosive pyrocumulus cloud development atop the King Fire as it burned through thick high-elevation forest in the Sierra Nevada in September 2014 (looking west from Lake Tahoe). Photo courtesy of Steve Ellsworth, Professor at Sierra Nevada College.

The severity of California’s drought is so great that it is starting to change the physical geography of the state. The Sierra Nevada’s mountain peaks have risen measurably since 2012 as the Earth’s crust rebounds from the net loss of 63 trillion gallons of water—an amount equivalent to the entire annual ice melt of the Greenland Ice Sheet. Intense, destructive wildfires are burning throughout the state, and while September and October are the peak of the typical fire season in California, the number of fires exhibiting extreme behavior and “dangerous” rates of spread has been far higher than usual due to the ubiquity of tinder-dry, drought-cured brush and trees. Conditions have been so warm and dry that at least one glacial outburst flood has occurred on the slopes of Mt. Shasta as winter ice accumulation decreases and summer melt accelerates. The overall visibility and severity of these impacts have brought the drought to the forefront of California politics: landmark legislation regarding the regulation of groundwater was recently passed by the state legislature and has now been signed by the governor, and a “water bond” will feature prominently as Proposition 1 on the California ballot this November.

Just how severe is the current drought relative to others in California’s past?

A smooth 12-month average of California precipitation shows that the current drought encompasses the driest year on record in California. (Swain et al. 2014)

California is currently experiencing its third consecutive year of unusually dry conditions, but the intensity of California’s long-term drought has increased dramatically over the past 18 months. 2013 was the driest calendar year in at least 119 years of record keeping—but even more impressively, the current drought now encompasses the driest consecutive 12-month period since at least 1895.* This means that the maximum 12-month magnitude of the precipitation deficits in California during the current drought has exceeded those during all previous droughts in living memory—including both the 1976-1977 and 1987-1992 events.* As of September 2014, 3-year precipitation deficits now exceed average annual precipitation across most of California, and most of these anomalies stem from the exceptional dryness during 2013 and early 2014. For many practical purposes, 2013 was a “year without rain” in California—an extraordinary occurrence in a region with a traditionally very well defined winter rainy season.

2014 has thus far been California’s warmest year on record, part of a long-term warming trend. (NOAA/NCDC)

In addition to extremely low precipitation, California has also been experiencing exceptional warmth over much of the past year. 2014 is currently California’s record warmest year to date by a wide margin—meaning that it has been warmer during the current drought than during any previous drought since at least the 1800s. Warm temperatures increase the rate of evaporation from parched soils and critically dry rivers, lakes, and streams—exacerbating the impacts of existing precipitation deficits. In fact, primary metrics of overall drought severity—including the widely-used Palmer Drought Severity Index (PDSI)—have now reached their lowest levels since at least the 1800s. All of this evidence points consistently toward an increasingly inescapable reality: that the 2013-2014 drought in California is the worst in living memory, and likely in well over a century.

What’s causing these incredibly warm and dry conditions in California?

The atmospheric pattern over much of North America has been exhibiting a remarkable degree of persistence over the past 12-18 months. This very unusual atmospheric configuration—in which the large-scale atmospheric wave pattern appears to be largely “stuck” in place—has been characterized by a seemingly ever-present West Coast ridge and a similarly stubborn trough over the central and eastern United States (commonly referred to in media coverage as the “Polar Vortex,” though this terminology is arguably problematic). This so-called “North American dipole” (highlighted by Wang et al., 2014) has resulted in persistent warm/dry anomalies along the West Coast and persistent cool/wet anomalies over the Midwest and Eastern Seaboard.

The white region on this map plot depicts the areas where 500mb geopotential heights during 2013 were unprecedented (higher than any previous value since at least 1948). Note that much of this region corresponds to the location of the Triple R.  (Swain et al. 2014)

Because of the extraordinary persistence and strength of the western half of this wave pattern and its conspicuous impacts in California, I started referring to this anomaly as the “Ridiculously Resilient Ridge” (or “Triple R”) in December 2013. Since that time, the Triple R has waxed and waned—and for a brief period during February and March 2014, faded away almost entirely. But the anomalous Ridge returned during the spring months, and has continued to be a notable feature of the large-scale pattern through summer 2014.

It’s important to note that the Triple R is not a feature that has necessarily been present every single day for the entire duration of the California drought. The “resilience” of the Triple R is key to its significance: despite occasional, transient disruptions of the persistent high pressure on daily to weekly timescales, the much-maligned Triple R has been in place more often than not since early January 2013. Averaged over multiple months (and now up to a year or more), the Ridge pops out as a strikingly prominent feature in map plots of the large-scale atmosphere.* In fact, the region of historically unprecedented (since at least 1948) annual-scale geopotential height anomalies associated with the Triple R extend over a truly vast geographic region—from central California westward across the entire North Pacific to the Kamchatka Peninsula in the Russian Far East.* Extremely high geopotential heights (a vertically aggregated measure of atmospheric temperature) over the northeastern Pacific Ocean are historically linked to very low precipitation in California.* This is consistent with previous work by other researchers, and highlights the fact that such extreme values are usually linked to a northward shift in the storm track, which directs storms away from California.

Top: zonal (west-to-east) wind anomalies at 250mb during 2013. Bottom: same as top, but for meridional (south-north) winds. Note that the westerly winds associated with the Pacific storm track are shifted well to the north. (Swain et al. 2014)

Over these many months—and especially during the second half of the 2012-2013 rainy season during January 2013-May 2013 and the first half of the 2013-2014 rainy season during October 2013-January 2014—the Triple R induced persistent shifts in the large-scale wind patterns near and west of California.* During California’s rainy season, which typically runs from late October through early May, winter storms approach California from the west and northwest, bringing Pacific moisture to the region in the form of periodic rainfall and mountain snowfall. The latitude of the “storm track” along the West Coast—largely defined by the position of the jet stream—varies from day-to-day, month-to-month, and even year-to-year. During the 2013-2014 California drought, however, the Triple R pushed the jet stream well to the north of California (and, for much of that period, even north of Oregon and Washington).* This northward deflection of the storm track prevented precipitation-bearing low pressure systems from reaching California for large portions of two consecutive rainy seasons, ultimately resulting in the lowest 12-month precipitation on record in California.*

In addition to causing extremely low precipitation in California, the Triple R is also largely responsible for California’s record warmth over the past 9 months. During the cool season, the Ridge brought long stretches of cloudless days, which caused daytime temperatures during winter to be well above average (and, at the same time, the position of the ridge also prevented major cold air outbreaks from occurring after December 2013). During the warm season, the Ridge has helped to shut down the typical northwesterly prevailing winds along the coast (and thus the upwelling) that are responsible for northern and central California’s legendarily cold ocean surface temperatures. This combination of endless clear skies and far warmer than usual near-shore ocean temperatures has allowed California’s air temperatures thus far in 2014 to be the warmest on record since at least 1895–and by a considerable margin.

Why has the North Pacific/West Coast ridge been so “ridiculously resilient?” Has climate change increased the risk of events like the 2013-2014 California drought? (Continued on the next page.)


Using Machine Learning and Node.js to detect the gender of Instagram Users

29 September 2014 - 7:00pm

September 25, 2014 Reading time: 13 minutes

The goal of this article is to provide a very practical guide to deploying a machine learning solution at scale. Not everything is proven right or optimal, and as with any real-life deployment, we made some trade-offs and took some shortcuts on the go without necessarily building all the evidence that would have been required in an academic setting. We apologize for that, and we will try to clearly point out throughout the post the places where we did so and hope that it will be helpful to you nonetheless.

Let’s start with a little bit of context: TOTEMS Analytics provides analytics on Instagram (audiences and communities around hashtags). Over the past year, we noticed an ever increasing need of our clients for demographics information on their Instagram audience, which led us to decide 6 months ago to invest time to build a gender classifier based on social signals we could find on the platform (Instagram does not disclose demographics information of their users on their API). We invested about two man-months of work in this project. After trying basic methods such as using name extraction and census data (which yielded a 0.65 success rate, barely above the 0.5 success rate of a random classifier), we came up with a rather simple neural-network approach that now enables us to provide to our clients unique information that they can’t find anywhere else. So we figured we’d share how we did it, so that you can also leverage simple machine learning techniques to enhance and differentiate your customer/user experiences.

To give you a more explicit idea of what we came up with, we’ve embedded the resulting classifier in a simple demo available right from this post. The classifier uses the hashtags used in the most recent posts of a user. Try it out now! It should work with most Instagram users (provided they’re public and relatively active).

The constraints we had

Our platform retrieves or refreshes around 400 user profiles per second (this is managed using 4 high-bandwidth servers co-located with Instagram’s API servers on AWS). These profiles are stored in a sharded MySQL table and used to compute aggregated information about audiences (follower/followee relationships) or communities (contributors to a particular hashtag). This context led us to set the following prerequisites for the gender classifier we wanted to build:

  • Real-time classification: We already knew the load we would have and it was important for the classifier not to drastically increase the number of machines needed to process these profiles.
  • >0.9 success rate: This was our initial target. That number is quite arbitrary, but we figured the aggregated data computed based on a classifier with this kind of success rate would be good enough for all of our clients, and a good first milestone.

The training set

This is probably the most crucial part of the overall exercise. If you’re building a classifier, it is likely because you’re lacking the data you want to infer in the first place. Yet, you’ll need a fairly big data set properly classified to use as a training set. In our case, we needed to find the gender of a large population of Instagram accounts. We knew that this gender information was readily available on other platforms, in particular Facebook[1] where most users provide their gender publicly. Focusing on Instagram profiles that included links to a Facebook URL in their profile (of which there are many) was the way forward.

We built a few simple regular expressions to rule out anything that was not a proper Facebook profile URL and went over all the user ids we ever came across… and went even a little bit further by looking at their followers when relevant. Upon inspection of several hundred million Instagram users (most of the user base at that time), we were able to extract the Instagram username to Facebook profile relationships whenever relevant. The next step was to retrieve gender information on Facebook when available. With 570k profiles, we had our training set.
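
For illustration, a pattern along these lines (not our exact expressions) keeps only canonical Facebook profile URLs, in both the vanity-name and numeric-id forms, and extracts the identifier:

import re

# illustrative pattern: facebook.com/some.username or facebook.com/profile.php?id=123...
FB_PROFILE = re.compile(
    r"^https?://(?:www\.)?facebook\.com/"
    r"(?:profile\.php\?id=(?P<id>\d+)|(?P<username>[A-Za-z0-9.]{5,50}))/?$"
)

def extract_profile(url):
    m = FB_PROFILE.match(url)
    if m is None:
        return None  # not a proper Facebook profile URL, rule it out
    return m.group("id") or m.group("username")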

Extracting the proper input signals

Before building our neural network, we needed to get a better understanding of which input signal to use. Several approaches could have suited our goal here:

  • full-words from a user’s posts caption and/or their bio and full-name
  • n-grams from a user’s posts caption and/or their bio and full-name

The main problem here is that the input signal space is vast (set of all n-grams, or set of all words), and it is hard to design coherent fixed-sized input vectors (expected by most classifiers) that work on all profiles.

An interesting solution to that problem is to rely on mutual information to order the input space[2], and select the top N n-grams or full-words that have the greatest mutual-information[3] with the gender random variable. Intuitively, the mutual information between two random variables measures how much knowing one of these variables reduces uncertainty about the other. Here’s how we computed the mutual information in our case:
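
Schematically, for each candidate feature we compute the mutual information between a binary "feature present" variable and the gender label. A rough Python sketch of that computation from raw counts (not our actual implementation):

import math

def mutual_information(n_pf, n_pm, n_af, n_am):
    # n_pf: female users with the feature present, n_pm: male users with it present,
    # n_af: female users without it,               n_am: male users without it
    total = n_pf + n_pm + n_af + n_am
    mi = 0.0
    for present, gender, n in [(1, "f", n_pf), (1, "m", n_pm), (0, "f", n_af), (0, "m", n_am)]:
        if n == 0:
            continue
        p_xy = n / total
        p_x = (n_pf + n_pm) / total if present else (n_af + n_am) / total
        p_y = (n_pf + n_af) / total if gender == "f" else (n_pm + n_am) / total
        mi += p_xy * math.log(p_xy / (p_x * p_y))
    return mi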

We extracted the hashtags, full-words and n-grams used in the captions of recent posts from the users we had in our training set and ran the computation described above.

Out of curiosity, we ordered hashtags by the conditional probability of being classified as female given the use of a given hashtag. These are the hashtags most strongly associated with female users:

We also computed the hashtags most strongly associated with male users:

And finally, we generated the top 10k hashtags most mutually dependent with gender:

Using this list, we were then able to generate fixed-sized binary input vectors of arbitrary size (1k, 2k, 4k, 10k), representing the presence – or absence – of these most mutually dependent hashtags in the recent posts’ captions of a user. We repeated this operation with n-grams and full words and left the evaluation of which input signal would be more efficient to after having built our neural network.
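
Schematically, building one of these binary input vectors is just a membership test against the ranked feature list (rough sketch):

def input_vector(user_hashtags, top_features):
    # 1.0 if the feature appears in the user's recent captions, 0.0 otherwise
    used = set(user_hashtags)
    return [1.0 if feature in used else 0.0 for feature in top_features]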

Intuitively, the conditional probability of being male or female could be seen as a more efficient marker than mutual information (these lists are certainly more expressive), but mutual information takes into account the probability of finding the feature (see how the top mutually dependent terms are more probable than the top conditional ones; their counts are much higher). In other words, it’s more efficient for a classifier to have a less strongly associated but much more probable term rather than very strongly associated but highly improbable terms. The latter will simply be useless for most of the users to be classified.

Building a neural network using nodeJS

We implemented our own neural network based on popular references[4][5] using nodeJS. We started with small training sets and progressively increased their size until we hit the limitations of Javascript’s garbage collector (which stops the world too frequently) and rewrote it in C++ as a NodeJS native add-on (available here: https://github.com/totemstech/neuraln).

A neural network is a graph composed of layers. The first layer is set to the input vector value that needs to be classified: in our case, the presence or absence of the top N most mutually dependent hashtags or n-grams in a user’s recent posts. Each layer is linked to the next by weight values; any node in a layer is linked to all the nodes in the next layer. There can be any number of layers between the input layer and the output layer; these are called inner layers. Finally, the output layer represents the output vector, whose dimension depends on the value that needs to be inferred.

A neural network is therefore defined by its layer structure, `layers_[l]` (the number of nodes in each layer), and the weight values between layers, `W_[l][i][j]`. Omitting other members, this is exactly how our neural network is defined:
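
Schematically, and as a rough Python sketch rather than the actual member definitions, that structure boils down to a list of layer sizes plus a weight matrix between each pair of consecutive layers:

import random

class NeuralNetwork:
    def __init__(self, layers):
        # layers_: number of nodes in each layer, e.g. [10000, 20, 1]
        self.layers_ = layers
        # W_[l][i][j]: weight between node i of layer l and node j of layer l + 1
        self.W_ = [
            [[random.uniform(-1, 1) for _ in range(layers[l + 1])]
             for _ in range(layers[l])]
            for l in range(len(layers) - 1)
        ]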

Additionally, each node within the network is defined by its activation function. This function defines what a node outputs to the next layer given the sum of the inputs it received from the previous layer. It applies to all layers except the input layer whose value is set by the input vector. In our implementation we use the following code for our nodes’ activation function:
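
A very common choice (sketched here for illustration, not necessarily our exact code) is the logistic sigmoid, whose simple derivative is convenient for backpropagation:

import math

def activation(z):
    # logistic sigmoid: squashes the weighted sum of inputs into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def activation_derivative(z):
    # sigma'(z) = sigma(z) * (1 - sigma(z)), used when backpropagating the error
    s = activation(z)
    return s * (1.0 - s)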

Now that we have the proper data structure to represent our neural network, we need to be able to train it. The network we described is a feed-forward network: values are propagated from the input vector layer to the output layer, along the weights, using at each node its activation function to feed the next layer. Training of such networks relies on backpropagation[6].

Backpropagation alone deserves a whole blogpost[7] and is very well described in textbooks[4], so we suggest you refer to them if you want to understand the internals of it. What’s important to remember is that backpropagation lets you update the weights of the neural network by propagating the error (the difference between the output value and the expected value from the training set) along the gradient of the network. Computing that gradient, by the way, requires your activation function to be differentiable. Backpropagation was invented in 1970 but only popularized in 1986, enabling a much broader use of neural networks. It has since then become the go-to solution to train neural networks.

Using backpropagation, we can repeatedly iterate on the elements of the training set and reduce the error rate until it reaches the desired value. In our implementation, we compute the sum of the quadratic errors between the computed output `res` and the expected one `train[i]`, average it over the entire training set, and repeat that computation until it reaches the target value `error` passed as an argument:
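
Schematically, the whole procedure looks something like this compact numpy sketch (one hidden layer, sigmoid activations; an illustration of the idea, not our actual implementation):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, Y, hidden=20, rate=0.5, error=0.01, max_iterations=10000):
    # X: (n_samples, n_features) binary input vectors, Y: (n_samples, 1) labels in {0, 1}
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.1, size=(hidden, 1))
    for _ in range(max_iterations):
        # feed-forward pass
        H = sigmoid(X @ W1)
        O = sigmoid(H @ W2)
        err = np.mean((O - Y) ** 2)  # average quadratic error over the training set
        if err < error:
            break
        # backpropagation: push the error back along the gradient and update the weights
        dO = (O - Y) * O * (1 - O)
        dH = (dO @ W2.T) * H * (1 - H)
        W2 -= rate * (H.T @ dO)
        W1 -= rate * (X.T @ dH)
    return W1, W2, err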

Choosing the best input signal

Once we had a functioning neural network and training algorithm, the next step was to test the different network structures and input signals to pick the most promising one. Here are the initial results we got:

  • `Type` is the input type (n-grams, words or hashtags) taken over the user’s recent post captions (early results had shown that using the bio and full name did not yield better results, contrary to the findings in [2])
  • `L1,L2,L3` is the network structure, that is, the size of each layer. We restricted ourselves to 2- and 3-layer networks. `L1` also defines the size of the input vector, computed as described before using the features with the best mutual information.
  • `Cov` is an indicative value equal to the average number of features found on each element of the training set (the number of matching words/n-grams/hashtags on average)
  • `Err` is the target error rate for training. (Setting an exceedingly low value can cause the network to overfit, that is, become too specific to the training set)
  • `Train` is the size of the training set
  • `It` is the number of iterations over the training set needed to reach the target error rate
  • `Res` is the success rate on the test set (generally 10% of the size of the training set). It’s important to remember that the baseline is 0.5, not 0: a random function would be expected to perform at ~0.5 for gender classification over a random set of individuals.

It is worth noting here that networks without an inner layer (called perceptrons) are linear classifiers. Their prediction is based on a simple linear combination of the input features. We were quite surprised by the efficiency of these networks given the simplicity of the model they encode.

The best results were obtained with networks with one hidden layer, and they hinted that using a large (10k) hashtag-based input vector would potentially yield the best results, especially if we were able to use a larger training set.

Tweaking and Results

We were able to train a few networks using a 200k-element training set. The process became quite painful, as training on such large data sets would take up to 5h on the servers we had available. Using a 200k-element training set, we were able to increase the success rate to 0.83.

Finally, inspired by boosting[8], we experimented with training a male network and a female network using two distinct training sets of 200k elements each, and using both networks to classify, picking whichever network had the strongest response to the given input. This is a purely operational solution that is not rooted in anything other than the non-exhaustive experimental approach we described here, but it is what worked best for us. Using this technique we were able to reach a success rate of 0.88 on the 70k-element test set, close enough to our initial target.
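
The decision rule itself is then trivial; as a sketch (forward() here stands in for whatever method runs the feed-forward pass):

def classify_gender(vector, male_net, female_net):
    # run the same input through both networks and keep the stronger response
    male_score = male_net.forward(vector)
    female_score = female_net.forward(vector)
    return "male" if male_score >= female_score else "female"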

We added serialization (`to_string`) and deserialization (constructor) functions to our library and built the resulting classifier into our product. The nice property of feed-forward neural networks is that classification is really fast once the network is loaded in memory. Adding the classifier to our infrastructure had no visible impact on the load of our aggregation servers.

Analysis of the resulting Neural Networks

To prepare this post, we added a few scripts to generate SVG representations of the neural network serialized format in order to visualize the structure of the network. We ran these scripts on the networks (male and female) we have been using for the past few months in production.

You can see the result of this visualisation below (click to enlarge). We only display “heavy” links, that is links with an absolute weight above 0.8. Positive weights are in blue, negative weights are in red. The heavier the link is, the more colored it is (semi-transparent links are close to 0.8 while others have a higher absolute weight).

We were surprised by the simplicity of the resulting structure, but aware that lower-weight links not displayed here might play a significant role, especially since it seems that a number of inner-layer nodes without any incoming link in the visualization do have a significant output link.

The other surprising aspect is the similarity of the two networks. They both have a mainly positive component feeding an inner-layer node with a positive impact on the result, and a mixed positive/negative component feeding an inner-layer node with a negative impact on the result. The former component seemingly plays the role of an enabler, and the latter that of an inhibitor.

Conclusion

We hope that this description of our experience deploying a machine learning solution in production will serve as a useful practical example to illustrate the numerous theoretical resources available online as well as in textbooks. The code we used to build and train our neural network, currently used in production at TOTEMS has been open-sourced on our company GitHub account: https://github.com/totemstech/neuraln. We hope it can serve in other production settings where a pure Javascript network implementation may lack the speed of a C++ implementation.

-stan


[1] Facebook Graph API https://developers.facebook.com/docs/graph-api/reference/v2.1/user
[2] Discriminating Gender on Twitter – The MITRE Corporation https://www.mitre.org/sites/default/files/pdf/11_0170.pdf
[3] Mutual information – Wikipedia http://en.wikipedia.org/wiki/Mutual_information
[4] Artificial Intelligence: A Modern Approach – S. Russell, P. Norvig http://www.amazon.com/Artificial-Intelligence-Modern-Approach-Edition/dp/0136042597
[5] Brain.js – @harthur https://github.com/harthur/brain
[6] Backpropagation – Wikipedia http://en.wikipedia.org/wiki/Backpropagation
[7] How the backpropagation algorithm works – M.Nielsen http://neuralnetworksanddeeplearning.com/chap2.html
[8] Boosting (Machine Learning) – Wikipedia http://en.wikipedia.org/wiki/Boosting_(machine_learning)

About the Author Stanislas Polu

After graduating from Polytechnique and Stanford, Stan co-founded TOTEMS. Aside from heading up technical operations at TOTEMS, he’s building the web-browser of the future, Breach. No big deal!

Follow @spolu on Twitter for more updates

Interview with Terence Tao

29 September 2014 - 7:00am

Postgres full text search is good enough

29 September 2014 - 7:00am

08 Mar 2014 on postgres | full-text search

When you have to build a web application, you are often asked to add search. The magnifying glass is something that we now add to wireframes without even knowing what we are going to search.

Search has become an important feature and we've seen a big increase in the popularity of tools like Elasticsearch and Solr, which are both based on Lucene. They are great tools, but before going down the road of Weapons of Mass Destruction Search, maybe what you need is something a bit lighter which is simply good enough!

What do I mean by 'good enough'? I mean a search engine with the following features:

  • Stemming
  • Ranking / Boost
  • Support for multiple languages
  • Fuzzy search for misspellings
  • Accent support

Luckily PostgreSQL supports all these features.

This post is aimed at people who:

  • use PostgreSQL and don't want to install an extra dependency for their search engine.
  • use an alternative database (e.g. MySQL) and have the need for better full-text search features.

In this post we are going to progressively illustrate some of the full-text search features in Postgres based on the following tables and data:

CREATE TABLE author(
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL
);

CREATE TABLE post(
  id SERIAL PRIMARY KEY,
  title TEXT NOT NULL,
  content TEXT NOT NULL,
  author_id INT NOT NULL references author(id)
);

CREATE TABLE tag(
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL
);

CREATE TABLE posts_tags(
  post_id INT NOT NULL references post(id),
  tag_id INT NOT NULL references tag(id)
);

INSERT INTO author (id, name)
VALUES
  (1, 'Pete Graham'),
  (2, 'Rachid Belaid'),
  (3, 'Robert Berry');

INSERT INTO tag (id, name)
VALUES
  (1, 'scifi'),
  (2, 'politics'),
  (3, 'science');

INSERT INTO post (id, title, content, author_id)
VALUES
  (1, 'Endangered species', 'Pandas are an endangered species', 1),
  (2, 'Freedom of Speech', 'Freedom of speech is a necessary right missing in many countries', 2),
  (3, 'Star Wars vs Star Trek', 'Few words from a big fan', 3);

INSERT INTO posts_tags (post_id, tag_id)
VALUES
  (1, 3), (2, 2), (3, 1);

It's a traditional blog-like application with post objects, which have a title and content. A post is associated with an author via a foreign key, and a post itself can have multiple tags.

What is Full-Text Search?

First, let's look at the definition:

In text retrieval, full-text search refers to techniques for searching a single computer-stored document or a collection in a full text database. Full-text search is distinguished from searches based on metadata or on parts of the original texts represented in databases.

-- Wikipedia

This definition introduces the concept of a document, which is important. When you run a search across your data, you are looking for meaningful entities to search within: these are your documents! The PostgreSQL documentation explains it amazingly well.

A document is the unit of searching in a full text search system; for example, a magazine article or email message.

-- Postgres documentation

This document can span multiple tables and it represents a logical entity which we want to search for.

Build our document

In the previous section we introduced the concept of document. A document is not related to our table schema but to data; together these represent a meaningful object.
Based on our example schema, the document is composed of:

  • post.title
  • post.content
  • author.name of the post
  • all tag.names associated to the post

To create our document based on this criteria imagine this SQL query:

SELECT post.title || ' ' ||
       post.content || ' ' ||
       author.name || ' ' ||
       coalesce((string_agg(tag.name, ' ')), '') as document
FROM post
JOIN author ON author.id = post.author_id
JOIN posts_tags ON posts_tags.post_id = posts_tags.tag_id
JOIN tag ON tag.id = posts_tags.tag_id
GROUP BY post.id, author.id;

                             document
--------------------------------------------------
 Endangered species Pandas are an endangered species Pete Graham politics
 Freedom of Speech Freedom of speech is a necessary right missing in many countries Rachid Belaid politics
 Star Wars vs Star Trek Few words from a big fan Robert Berry politics
(3 rows)

As we are grouping by post and author, we are using string_agg() as the aggregate function because multiple tags can be associated with a post. Even though author is a foreign key and a post cannot have more than one author, it is required either to add an aggregate function for author or to add author to the GROUP BY.

We also used coalesce(). When a value can be NULL then it's good practice to use the coalesce() function, otherwise the concatenation will result in a NULL value too.

At this stage our document is simply a long string and this doesn't help us; we need to transform it into the right format via the function to_tsvector().

SELECT to_tsvector(post.title) ||
       to_tsvector(post.content) ||
       to_tsvector(author.name) ||
       to_tsvector(coalesce((string_agg(tag.name, ' ')), '')) as document
FROM post
JOIN author ON author.id = post.author_id
JOIN posts_tags ON posts_tags.post_id = posts_tags.tag_id
JOIN tag ON tag.id = posts_tags.tag_id
GROUP BY post.id, author.id;

                             document
--------------------------------------------------
 'endang':1,6 'graham':9 'panda':3 'pete':8 'polit':10 'speci':2,7
 'belaid':16 'countri':14 'freedom':1,4 'mani':13 'miss':11 'necessari':9 'polit':17 'rachid':15 'right':10 'speech':3,6
 'berri':13 'big':10 'fan':11 'polit':14 'robert':12 'star':1,4 'trek':5 'vs':3 'war':2 'word':7
(3 rows)

This query will return our document as tsvector which is a type suited to full-text search. Let's try to convert a simple string into a tsvector.

SELECT to_tsvector('Try not to become a man of success, but rather try to become a man of value');

The query will return the following result:

                             to_tsvector
----------------------------------------------------------------------
 'becom':4,13 'man':6,15 'rather':10 'success':8 'tri':1,11 'valu':17
(1 row)

Something weird just happened. First, there are fewer words than in the original sentence, some of the words are different ('try' became 'tri'), and they are all followed by numbers. Why?

A tsvector value is a sorted list of distinct lexemes which are words that have been normalized to make different variants of the same word look alike.
For example, normalization almost always includes folding upper-case letters to lower-case and often involves removal of suffixes (such as 's', 'es' or 'ing' in English). This allows searches to find variant forms of the same word without tediously entering all the possible variants.

The numbers represent the location of the lexeme in the original string. For example, "man" is present at position 6 and 15. Try counting the words and see for yourself.

By default Postgres uses 'english' as the text search configuration for the function to_tsvector and it will also ignore English stopwords.
That explains why the tsvector results have fewer elements than our sentence has words. We will see a bit more about languages and text search configurations later.

Querying

We have seen how to build a document but the goal here is to find the document. For running a query against a tsvector we can use the @@ operator which is documented here. Let's see some examples on how to query our document.

> select to_tsvector('If you can dream it, you can do it') @@ 'dream';
 ?column?
----------
 t
(1 row)

> select to_tsvector('It''s kind of fun to do the impossible') @@ 'impossible';
 ?column?
----------
 f
(1 row)

The second query returns false because the @@ operator simply casts the right-hand string to a tsquery without normalizing it into lexemes: 'impossible' stays 'impossible' and does not match the stemmed 'imposs' stored in the tsvector. To get matching lexemes we need to build the query with to_tsquery(). The following shows the difference between casting and using the function to_tsquery():

SELECT 'impossible'::tsquery, to_tsquery('impossible');

   tsquery    | to_tsquery
--------------+------------
 'impossible' | 'imposs'
(1 row)

But in the case of 'dream' the stem is equal to the word.

SELECT 'dream'::tsquery, to_tsquery('dream');

 tsquery | to_tsquery
---------+------------
 'dream' | 'dream'
(1 row)

From now on we will use to_tsquery for querying documents.

SELECT to_tsvector('It''s kind of fun to do the impossible') @@ to_tsquery('impossible');

 ?column?
----------
 t
(1 row)

A tsquery value stores lexemes that are to be searched for, and combines them honoring the Boolean operators & (AND), | (OR), and ! (NOT). Parentheses can be used to enforce grouping of the operators.

> SELECT to_tsvector('If the facts don''t fit the theory, change the facts') @@ to_tsquery('! fact');
 ?column?
----------
 f
(1 row)

> SELECT to_tsvector('If the facts don''t fit the theory, change the facts') @@ to_tsquery('theory & !fact');
 ?column?
----------
 f
(1 row)

> SELECT to_tsvector('If the facts don''t fit the theory, change the facts.') @@ to_tsquery('fiction | theory');
 ?column?
----------
 t
(1 row)

We can also run 'starts with' (prefix) queries by using :*.

> SELECT to_tsvector('If the facts don''t fit the theory, change the facts.') @@ to_tsquery('theo:*');
 ?column?
----------
 t
(1 row)

Now that we know how to make a full-text search query, we can come back to our initial table schema and try to query our documents.

SELECT pid, p_title
FROM (
    SELECT post.id as pid,
           post.title as p_title,
           to_tsvector(post.title) ||
           to_tsvector(post.content) ||
           to_tsvector(author.name) ||
           to_tsvector(coalesce(string_agg(tag.name, ' '))) as document
    FROM post
    JOIN author ON author.id = post.author_id
    JOIN posts_tags ON posts_tags.post_id = posts_tags.tag_id
    JOIN tag ON tag.id = posts_tags.tag_id
    GROUP BY post.id, author.id
) p_search
WHERE p_search.document @@ to_tsquery('Endangered & Species');

 pid |      p_title
-----+--------------------
   1 | Endangered species
(1 row)

This will find our document which contains Endangered and Species or lexemes close enough.

Language support

Postgres provides built-in text search configurations for many languages: Danish, Dutch, English, Finnish, French, German, Hungarian, Italian, Norwegian, Portuguese, Romanian, Russian, Spanish, Swedish, Turkish.

SELECT to_tsvector('english', 'We are running');

 to_tsvector
-------------
 'run':3
(1 row)

SELECT to_tsvector('french', 'We are running');

        to_tsvector
----------------------------
 'are':2 'running':3 'we':1
(1 row)

A column can also be used to choose the text search configuration used to create the tsvector. Based on our starting model, let's assume that a post can be written in different languages and that post contains a column language.

ALTER TABLE post ADD language text NOT NULL DEFAULT('english');

We can now rebuild our document to use this language column.

SELECT to_tsvector(post.language::regconfig, post.title) ||
       to_tsvector(post.language::regconfig, post.content) ||
       to_tsvector('simple', author.name) ||
       to_tsvector('simple', coalesce((string_agg(tag.name, ' ')), '')) as document
FROM post
JOIN author ON author.id = post.author_id
JOIN posts_tags ON posts_tags.post_id = posts_tags.tag_id
JOIN tag ON tag.id = posts_tags.tag_id
GROUP BY post.id, author.id;

Without the explicit cast ::regconfig, the query would have generated an error:

ERROR: function to_tsvector(text, text) does not exist

regconfig is the object identifier type which represents the text search configuration in Postgres: http://www.postgresql.org/docs/9.3/static/datatype-oid.html

From now on, the lexemes of our document will be built using the right language based on post.language.

We also used simple, which is one of the built-in text search configurations that Postgres provides. simple doesn't ignore stopwords and doesn't try to find the stem of a word: every group of characters separated by a space is a lexeme. The simple configuration is practical for data like a person's name, for which we may not want stemming.

SELECT to_tsvector('simple', 'We are running');

         to_tsvector
----------------------------
 'are':2 'running':3 'we':1
(1 row)

Accented characters

When you build a search engine supporting many languages you will also hit the accent problem. In many languages accents are very important and can change the meaning of a word. Postgres ships with an extension called unaccent, which removes accents from content.

CREATE EXTENSION unaccent;

SELECT unaccent('èéêë');

 unaccent
----------
 eeee
(1 row)

Let's add some accented content to our post table.

INSERT INTO post (id, title, content, author_id, language)
VALUES (4, 'il était une fois', 'il était une fois un hôtel ...', 2, 'french');

If we want to ignore accents when we build our document, then we can simply do the following:

SELECT to_tsvector(post.language::regconfig, unaccent(post.title)) ||
       to_tsvector(post.language::regconfig, unaccent(post.content)) ||
       to_tsvector('simple', unaccent(author.name)) ||
       to_tsvector('simple', unaccent(coalesce(string_agg(tag.name, ' '), ''))) AS document
FROM post
JOIN author ON author.id = post.author_id
JOIN posts_tags ON posts_tags.post_id = post.id
JOIN tag ON tag.id = posts_tags.tag_id
GROUP BY post.id, author.id;

That works, but it's a bit cumbersome and leaves more room for mistakes. We can also build a new text search configuration with support for unaccented characters.

CREATE TEXT SEARCH CONFIGURATION fr ( COPY = french );

ALTER TEXT SEARCH CONFIGURATION fr
ALTER MAPPING FOR hword, hword_part, word
WITH unaccent, french_stem;

Using this new text search configuration, we can see the difference in the lexemes:

SELECT to_tsvector('french', 'il était une fois');

 to_tsvector
-------------
 'fois':4
(1 row)

SELECT to_tsvector('fr', 'il était une fois');

    to_tsvector
--------------------
 'etait':2 'fois':4
(1 row)

This gives us the same result as applying unaccent first and building the tsvector from the result.

SELECT to_tsvector('french', unaccent('il était une fois'));

    to_tsvector
--------------------
 'etait':2 'fois':4
(1 row)

The number of lexemes differs because il, était and une are stopwords in French: the plain french configuration drops them, while fr unaccents était to etait first, which is no longer recognized as a stopword. Is it an issue to keep such a word in our document? I don't think so: etait is technically a misspelling of était, so indexing it does little harm.

SELECT to_tsvector('fr', 'Hôtel') @@ to_tsquery('hotels') AS result;

 result
--------
 t
(1 row)

If we create an unaccented search configuration for each language our posts can be written in, and store the configuration name in post.language, then we can keep our previous document query.

SELECT to_tsvector(post.language::regconfig, post.title) ||
       to_tsvector(post.language::regconfig, post.content) ||
       to_tsvector('simple', author.name) ||
       to_tsvector('simple', coalesce(string_agg(tag.name, ' '), '')) AS document
FROM post
JOIN author ON author.id = post.author_id
JOIN posts_tags ON posts_tags.post_id = post.id
JOIN tag ON tag.id = posts_tags.tag_id
GROUP BY post.id, author.id;

If you need to create unaccented text search configurations for each language supported by Postgres, then you can use this gist.
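
As a sketch of what such a configuration could look like for English (the name en is arbitrary, mirroring the fr example above):

CREATE TEXT SEARCH CONFIGURATION en ( COPY = english );

ALTER TEXT SEARCH CONFIGURATION en
ALTER MAPPING FOR hword, hword_part, word
WITH unaccent, english_stem;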

Our document will now likely increase in size because it can include unaccented stopwords, but we can query it without caring about accented characters. This can be useful, e.g., for somebody with an English keyboard searching French content.

Ranking

When you build a search engine you want to be able to get search results ordered by relevance. The ranking of documents is based on many factors which are roughly explained in this documentation.

Ranking attempts to measure how relevant documents are to a particular query, so that when there are many matches the most relevant ones can be shown first. PostgreSQL provides two predefined ranking functions, which take into account lexical, proximity, and structural information; that is, they consider how often the query terms appear in the document, how close together the terms are in the document, and how important is the part of the document where they occur.

-- PostgreSQL documentation

To order our results by relevance, PostgreSQL provides a few functions, but in our example we will only use two of them: ts_rank() and setweight().

The function setweight allows us to assign a weight value to a tsvector; the value can be 'A', 'B', 'C' or 'D'.
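
To get a feel for what setweight does before using it in the full query, here is a small standalone example; the weight label is simply attached to each lexeme's positions:

SELECT setweight(to_tsvector('english', 'Endangered species'), 'A');

        setweight
-------------------------
 'endang':1A 'speci':2A
(1 row)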

SELECT pid, p_title
FROM (SELECT post.id AS pid,
             post.title AS p_title,
             setweight(to_tsvector(post.language::regconfig, post.title), 'A') ||
             setweight(to_tsvector(post.language::regconfig, post.content), 'B') ||
             setweight(to_tsvector('simple', author.name), 'C') ||
             setweight(to_tsvector('simple', coalesce(string_agg(tag.name, ' '), '')), 'A') AS document
      FROM post
      JOIN author ON author.id = post.author_id
      JOIN posts_tags ON posts_tags.post_id = post.id
      JOIN tag ON tag.id = posts_tags.tag_id
      GROUP BY post.id, author.id) p_search
WHERE p_search.document @@ to_tsquery('english', 'Endangered & Species')
ORDER BY ts_rank(p_search.document, to_tsquery('english', 'Endangered & Species')) DESC;

In the query above, we have assigned different weights to the different fields of a document: post.title is more important than post.content and as important as the associated tags; the least important is author.name.

This means that if we were to search for the term 'Alice', a document containing that term in its title would be returned before a document containing it in its content, and a document whose author has that name would be returned last.

Based on the weights assigned to the parts of our document, ts_rank() returns a floating-point number representing the relevancy of the document with respect to the query.

SELECT ts_rank(to_tsvector('This is an example of document'), to_tsquery('example | document')) AS relevancy;

 relevancy
-----------
 0.0607927
(1 row)

SELECT ts_rank(to_tsvector('This is an example of document'), to_tsquery('example')) AS relevancy;

 relevancy
-----------
 0.0607927
(1 row)

SELECT ts_rank(to_tsvector('This is an example of document'), to_tsquery('example | unknown')) AS relevancy;

 relevancy
-----------
 0.0303964
(1 row)

SELECT ts_rank(to_tsvector('This is an example of document'), to_tsquery('example & document')) AS relevancy;

 relevancy
-----------
 0.0985009
(1 row)

SELECT ts_rank(to_tsvector('This is an example of document'), to_tsquery('example & unknown')) AS relevancy;

 relevancy
-----------
 1e-20
(1 row)

However, the concept of relevancy is vague and very application-specific. Different applications might require additional information for ranking, e.g., document modification time. The built-in ranking functions such as ts_rank are only examples. You can write your own ranking functions and/or combine their results with additional factors to fit your specific needs.

To illustrate the paragraph above: if we wanted to promote newer posts over older ones, we could divide the ts_rank value by the age of the document plus one (to avoid dividing by zero).
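
Here is a minimal sketch of that idea, assuming a hypothetical created_at timestamp column on post (our example schema does not have one); the exact decay formula is entirely application-specific:

-- created_at is an assumed column, not part of the example schema
SELECT id, title
FROM post
WHERE to_tsvector(language::regconfig, title || ' ' || content)
      @@ to_tsquery('english', 'Endangered & Species')
ORDER BY ts_rank(to_tsvector(language::regconfig, title || ' ' || content),
                 to_tsquery('english', 'Endangered & Species'))
         / (EXTRACT(EPOCH FROM (now() - created_at)) / 86400 + 1) DESC;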

Optimization and indexing

Optimizing search over a single table is straightforward: PostgreSQL supports function-based (expression) indexes, so you can simply create a GIN index over the to_tsvector() expression.

CREATE INDEX idx_fts_post ON post
USING gin((setweight(to_tsvector(language::regconfig, title), 'A') ||
           setweight(to_tsvector(language::regconfig, content), 'B')));

GIN or GiST indexes? These two index types could be the subject of a blog post by themselves. GiST can produce false matches, which then require an extra table-row lookup to confirm the match. On the other hand, GIN is faster to query but bigger and slower to build.

As a rule of thumb, GIN indexes are best for static data because lookups are faster. For dynamic data, GiST indexes are faster to update. Specifically, GiST indexes are very good for dynamic data and fast if the number of unique words (lexemes) is under 100,000, while GIN indexes will handle 100,000+ lexemes better but are slower to update.

-- Postgres doc : Chap 12 Full Text Search

For our example we will be using GIN, but the choice can be argued and you should make your own decision based on your data.

We have a problem with our schema example: the document is spread across multiple tables with different weights. For better performance it is necessary to denormalize the data via triggers or a materialized view.

You don't always need to denormalise, and in some cases you can add a function-based index as we did above. Alternatively, you can easily denormalise data from the same table via the Postgres trigger functions tsvector_update_trigger(...) or tsvector_update_trigger_column(...). See the Postgres doc for more detailed information.
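
As a rough sketch of the trigger approach on the post table alone: the document_tsv column name is made up for this example, and tsvector_update_trigger is limited to a fixed configuration, so it cannot use the per-row language column (that is what tsvector_update_trigger_column is for).

ALTER TABLE post ADD COLUMN document_tsv tsvector;

CREATE TRIGGER post_tsv_update
BEFORE INSERT OR UPDATE ON post
FOR EACH ROW EXECUTE PROCEDURE
  tsvector_update_trigger(document_tsv, 'pg_catalog.english', title, content);

-- touch existing rows once so the trigger fills the column for them too
UPDATE post SET title = title;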

For our application, some delay before new content shows up in search results is acceptable. This is a good use case for a materialized view, on which we can add an extra index.

CREATE MATERIALIZED VIEW search_index AS
SELECT post.id,
       post.title,
       setweight(to_tsvector(post.language::regconfig, post.title), 'A') ||
       setweight(to_tsvector(post.language::regconfig, post.content), 'B') ||
       setweight(to_tsvector('simple', author.name), 'C') ||
       setweight(to_tsvector('simple', coalesce(string_agg(tag.name, ' '), '')), 'A') AS document
FROM post
JOIN author ON author.id = post.author_id
JOIN posts_tags ON posts_tags.post_id = post.id
JOIN tag ON tag.id = posts_tags.tag_id
GROUP BY post.id, author.id;

Then reindexing the search engine will be as simple as periodically running REFRESH MATERIALIZED VIEW search_index;.
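
On newer Postgres versions (9.4 and later, if I'm not mistaken) the refresh can also be done without blocking reads, at the cost of requiring a unique index on the view; a sketch:

CREATE UNIQUE INDEX idx_search_index_id ON search_index (id);

REFRESH MATERIALIZED VIEW CONCURRENTLY search_index;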

We can now add an index on the materialized view.

CREATE INDEX idx_fts_search ON search_index USING gin(document);

And querying will become much simpler too.

SELECT id AS post_id, title
FROM search_index
WHERE document @@ to_tsquery('english', 'Endangered & Species')
ORDER BY ts_rank(document, to_tsquery('english', 'Endangered & Species')) DESC;

If you cannot afford delay then you may have to investigate the alternative method using triggers.

There is not one way to build your document store; it will depend on what comprises your documents: a single table, multiple tables, multiple languages, the amount of data...

Thoughtbot.com published a good article on the subject which I advise reading.

Misspelling

PostgreSQL comes with a very useful extension called pg_trgm. See the pg_trgm doc.

CREATE EXTENSION pg_trgm;

This provides support for trigrams, which are N-grams with N == 3. N-grams are useful because they allow us to find strings with similar characters, and in essence that's what a misspelling is: a word similar to, but not quite, the intended one.
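
pg_trgm also exposes show_trgm(), which shows the trigrams a string is broken into; purely an illustration, not needed for the rest of the setup (the output below is what I would expect, lower-cased and padded):

SELECT show_trgm('Something');

                    show_trgm
-------------------------------------------------
 {"  s"," so",eth,hin,ing,met,"ng ",ome,som,thi}
(1 row)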

SELECT similarity('Something', 'something');

 similarity
------------
          1
(1 row)

SELECT similarity('Something', 'samething');

 similarity
------------
   0.538462
(1 row)

SELECT similarity('Something', 'unrelated');

 similarity
------------
          0
(1 row)

SELECT similarity('Something', 'everything');

 similarity
------------
   0.235294
(1 row)

SELECT similarity('Something', 'omething');

 similarity
------------
   0.583333
(1 row)

With the examples above you can see that similarity returns a float representing how similar two strings are. Detecting a misspelling is then a matter of collecting the lexemes used by our documents and comparing them against the search input. I found 0.5 to be a good similarity threshold for catching misspellings. First we need to create the list of unique lexemes used by our documents.

CREATE MATERIALIZED VIEW unique_lexeme AS
SELECT word FROM ts_stat($$
  SELECT to_tsvector('simple', post.title) ||
         to_tsvector('simple', post.content) ||
         to_tsvector('simple', author.name) ||
         to_tsvector('simple', coalesce(string_agg(tag.name, ' '), ''))
  FROM post
  JOIN author ON author.id = post.author_id
  JOIN posts_tags ON posts_tags.post_id = post.id
  JOIN tag ON tag.id = posts_tags.tag_id
  GROUP BY post.id, author.id
$$);

The query above builds a view with one column called word from all the unique lexemes of our documents. We used simple because our content can be in multiple languages. Once we create this materialized view we need to add an index to make a similarity query faster.

CREATE INDEX words_idx ON unique_lexeme USING gin(word gin_trgm_ops);

Luckily, the set of unique lexemes used in a search engine is not something that changes rapidly, so we shouldn't have to refresh the materialized view too often:

REFRESH MATERIALIZED VIEW unique_lexeme;

Once we have built this view, finding the closest match is very simple.

SELECT word FROM unique_lexeme
WHERE similarity(word, 'samething') > 0.5
ORDER BY word <-> 'samething'
LIMIT 1;

This query returns a lexeme which is similar enough (> 0.5) to the search input samething, ordered closest first. The operator <-> returns the "distance" between the arguments, that is, one minus the similarity() value.

When you decide to handle misspellings in your search, you may not want to look for them on every query. Instead, you could look for misspellings only when a search returns no results, and use that lookup to offer suggestions to the user. Your data may also contain misspellings, for instance if it comes from informal communication such as a social network; in that case you may get good results by appending the similar lexeme to your tsquery.
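
A minimal sketch of that fallback, assuming the search_index and unique_lexeme materialized views built above and an application-side check that the first query returned no rows:

-- 1. the normal search; suppose it comes back empty for the misspelled input
SELECT id, title FROM search_index
WHERE document @@ to_tsquery('english', 'samething');

-- 2. look up the closest known lexeme
SELECT word FROM unique_lexeme
WHERE similarity(word, 'samething') > 0.5
ORDER BY word <-> 'samething'
LIMIT 1;

-- 3. retry the search with the suggestion (assumed here to be 'something')
SELECT id, title FROM search_index
WHERE document @@ to_tsquery('english', 'something');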

There is a good reference article about using trigrams to handle misspellings and search with Postgres.

In my use case the unique lexeme view has never grown beyond 2,000 rows, but from my understanding, if you have more than 1M unique lexemes across your documents then you may hit performance issues with this technique.

About MySQL and RDS

Does it work on Postgres RDS?

All the examples illustrated work on RDS. As far as I'm aware, the only restrictions on search features imposed by RDS are those that require access to the file system, such as custom dictionaries, ispell, synonyms and thesaurus files. See the related issue on the AWS forum.

I'm using MySQL, should I use the built-in full-text search?

I wouldn't. Without starting a flame war: MySQL's full-text search features are very limited. By default there is no support for stemming and no language support. I came across a stemming function which can be installed, but MySQL doesn't support function-based indexes.

Then what can you do? Based on what we have discussed above, if Postgres fulfills your use case then think about moving to Postgres. This can easily be done via tools like py-mysql2pgsql. Or you can investigate more advanced solutions like SOLR and Elasticsearch.

Conclusion

We have seen how to build a decent multi-language search engine based on a non-trivial document. This article is only an overview, but it should give you enough background and examples to get started on your own. I may have made some mistakes in this article, and I would appreciate it if you reported them to blog@lostpropertyhq.com.

The full-text search feature included in Postgres is awesome and quite fast (enough). It will allow your application to grow without depending on another tool. Is Postgres search the silver bullet? Probably not if your core business needs revolve around search.

Some features are missing, but in a lot of use cases you won't need them. It goes without saying that it's critical to analyze and understand your needs to know which road to take.

Personally, I hope to see full-text search continue to improve in Postgres, and maybe a few of these features being included:

  • Additional built-in language support, e.g. Chinese, Japanese...
  • A foreign data wrapper around Lucene. Lucene is still the most advanced tool for full-text search and it would be a great benefit to see it integrated with Postgres.
  • More boosting or scoring features for ranking results would be first-rate. Elasticsearch and SOLR already offer advanced solutions.
  • A way to do fuzzy tsquery matching without having to use trigrams would be nice. Elasticsearch offers a simple way to do fuzzy search queries.
  • Being able to create and edit features such as dictionary content, synonyms and thesauruses dynamically via SQL, thus removing the need to add files to the filesystem.

Postgres is not as advanced as Elasticsearch and SOLR, but these two are dedicated full-text search tools, whereas full-text search is only one feature of PostgreSQL, and a pretty good one.

Author: Rach Belaid (@rachbelaid) from Lost Property



For Science: Does ZFS deduplication work on intros of TV shows?

29 September 2014 - 7:00am

Today from the “What am I doing with my life?”-department: I finally set out to find a definite answer to something I’ve always wondered about ever since hearing about the deduplication feature of ZFS – does it work on the intros of TV shows? TL;DR: Nope!

Didn’t see that one coming, did you?

I never even expected it to work. Plus, you’re always advised against using deduplication anyway. The infamous “1GB RAM per 1TB storage in the pool” rule which is often incorrectly applied to ZFS in general stems from it. So even if I had found out that it worked, I probably couldn’t have benefitted from that. But still, not knowing for sure always bugged me.

As I’m currently building a new NAS and will switch my home storage from ext4 once it is fully operational, it was time to simply run some tests and be done with the matter once and for all. Establishing the test setup: one season of Dexter in 1080p from the iTunes Store weighs roughly 26GB, exactly 28171821903 bytes in the case of my test data. The episodes of said season run for 54:27 on average, while the brilliant intro of the hit-turned-shit show lasts a whopping 1:45 – i.e. 3.213957759% of each episode. That means we could hope to save around 800MB per season in an ideal scenario.

First I created four different ZFS pools:

  • zfs_blank – neither compression nor deduplication turned on
  • zfs_dedup – deduplication turned on
  • zfs_compr – compression turned on
  • zfs_both – both compression and deduplication turned on
$ truncate -s 32G /var/lib/zfs_img/zfs_blank.img
$ truncate -s 32G /var/lib/zfs_img/zfs_dedup.img
$ truncate -s 32G /var/lib/zfs_img/zfs_compr.img
$ truncate -s 32G /var/lib/zfs_img/zfs_both.img

$ zpool create zfs_blank /var/lib/zfs_img/zfs_blank.img

$ zpool create zfs_dedup /var/lib/zfs_img/zfs_dedup.img
$ zfs set dedup=on zfs_dedup

$ zpool create zfs_compr /var/lib/zfs_img/zfs_compr.img
$ zfs set compression=on zfs_compr

$ zpool create zfs_both /var/lib/zfs_img/zfs_both.img
$ zfs set compression=on zfs_both
$ zfs set dedup=on zfs_both

After creating the pools, there is the exact same amount of free space on each of them:

$ df /zfs_*
Filesystem     1K-blocks      Used Available Use% Mounted on
zfs_blank       32771968         0  32771968   0% /zfs_blank
zfs_dedup       32771968         0  32771968   0% /zfs_dedup
zfs_compr       32771968         0  32771968   0% /zfs_compr
zfs_both        32771968         0  32771968   0% /zfs_both

After copying the files into each pool, let’s see what we got:

$ df /zfs_*
Filesystem     1K-blocks      Used Available Use% Mounted on
zfs_blank       32771712  27531008   5240704  85% /zfs_blank
zfs_dedup       32709632  27532672   5176960  85% /zfs_dedup
zfs_compr       32771712  27527424   5244288  84% /zfs_compr
zfs_both        32708480  27529088   5179392  85% /zfs_both

Well, this is odd. Suddenly there is a different number of (total, not just free) 1K-blocks in each of the filesystems. I have no idea why that is happening, please let me know if you can explain it. (I did stumble upon these df/ZFS troubles while researching, but either this was fixed meanwhile or never an issue with the ZoL implementation, as the script there gave me the same numbers as df/du.) To make certain this doesn’t influence the results for the purpose of the test, I also tried it with a set of highly compressible and a set of highly dedupable files. In doing so I encountered the same 1K-blocks issue but still got exactly the results I would expect.

So let’s compare how much space is used in each scenario:

$ du /zfs_*
27531052 /zfs_blank/
27532697 /zfs_dedup/
27527385 /zfs_compr/
27529012 /zfs_both/

With deduplication turned on, the files actually use up more space than when it is turned off. Even though these are H.264-encoded videos, turning compression on saves a little space. Adding deduplication on top of compression increases the required space just as it did without compression. Between the most (dedup on) and least (compression on) space the files could use there is a difference of 5312 1K-blocks, roughly 5MB. The gains from compression compared to using no compression are 3667 1K-blocks, roughly 3.5MB. You would have to store more than 750 such seasons before the savings added up to even a single episode’s file size.

Just as I always expected, deduplication does not work on TV show intros, even though they are “just the same”. Due to the nature of modern video encoding, the underlying data is rarely the same: in an episode with lots of explosions a high amount of bitrate will be dedicated to those scenes and less of it will be left for the intro, and therefore the resulting data will differ from another episode. I’m guessing the gains from compression come from compressible metadata of the container format (and possibly subtitles), but that’s just a wild guess. As others have written before: compression never hurts you, dedup almost certainly does.

Even on a show with an incredibly long intro like Dexter, you’ll gain nothing from ZFS’ deduplication feature. On the bright side: usually you won’t be “wasting” more than 3% of a file on the intro – an episode of The Simpsons (average length 22:49) only uses 1.826150475% for it. You can calculate that percentage for Lost on your own, I guess.

Now you know.

With New Ad Platform, Facebook Opens Gates to Its Vault of User Data

29 September 2014 - 7:00am



SAN FRANCISCO — Facebook built itself into the No. 2 digital advertising platform in the world by analyzing the vast amount of data it had on each of its 1.3 billion users to sell individually targeted ads on its social network.

Now it is going to take those targeted ads to the rest of the Internet, mounting its most direct challenge yet to Google, the leader in digital advertising with nearly one-third of the global market.

On Monday, Facebook will roll out a rebuilt ad platform, called Atlas, that will allow marketers to tap its detailed knowledge of its users to direct ads to those people on thousands of other websites and mobile apps.

“We are bringing all of the people-based marketing functions that marketers are used to doing on Facebook and allowing them to do that across the web,” David Jakubowski, the company’s head of advertising technology, said in an interview.


For example, if PepsiCo, one of the first advertisers to sign on to the service, wanted to reach college age men with ads for its Mountain Dew Baja Blast, it could use Atlas to identify several million of those potential customers and show each of them a dozen ads for the soft drink on game apps, sports and video sites. Atlas would also provide Pepsi with information to help it assess which ads were the most effective.

If successful, such cross-platform advertising could create a new revenue stream for Facebook and offer marketers an attractive alternative to ad networks run by Google, Yahoo, Apple and others.

“Facebook has deep, deep data on its users. You can slice and dice markets, like women 25 to 35 who live in the Southeast and are fans of ‘Breaking Bad,’ ” said Rebecca Lieb, a digital advertising and media analyst at the Altimeter Group, a research firm. The new Atlas platform, she said, “can track people across devices, weave together online and offline.”

But such detailed tracking of Facebook users on and off the service also raises privacy concerns.

In June, the company warned that it was doing extended tracking for advertising purposes and announced a new tool that allows individuals to see and change some of the information that Facebook has collected on them.

“There is a Big Brother perception that is a side effect of this kind of precision targeting,” Ms. Lieb said. “People are worried that you know them.”

She said consumer concerns about privacy are helping to drive interest in alternative social networks, such as the start-up Ello, that are promising not to use customer information for advertising.

Facebook says it never discloses the identity of individuals to marketers and that any matching of, say, Pepsi’s own database of its fans to Facebook’s data is done on a blind basis.

Many other Internet companies, from Google and Yahoo to little-known data brokers, collect data on individuals based on their web browsing and other activities and use it to target ads.

But Facebook’s combination of real identity and voluntarily disclosed personal information makes it a particularly valuable tool for marketers.

The Facebook login is most useful on mobile devices, where traditional web tracking tools like cookies and pixel tags do not work. If a person is logged into the Facebook app on a smartphone, the company has the ability to see what other apps he or she is using and could show ads within those apps.

“Nobody else besides Facebook has the depth of data about individuals,” said Debra Aho Williamson, a principal analyst at the research firm eMarketer. “That’s where the power of this ad platform is going to come from.”

The Omnicom Group, one of the largest advertising companies in the world, will be the first to sign up to use Atlas.

“Mobile has been a very hard thing for us to do,” said Jonathan Nelson, chief executive of Omnicom Digital. “This Atlas solution is a huge step forward in making mobile marketing more effective.”

Atlas has its roots in a company of the same name that Facebook bought from Microsoft last year, and Facebook has signaled for months that it would use the acquisition as the basis for a broader ad offering.

The revamped Atlas platform positions Facebook to compete more directly with Google, which scans user information from sources like email and web searches to help marketers target ads across the Internet and, increasingly, mobile devices.

Facebook has a strong track record of delivering results for advertisers on its social network, but now the company must prove it can do the same on other sites.

“A lot of these digital media companies are working on different ways to go after the same people,” Ms. Williamson said. “We’re going to see Facebook being a very strong competitor, but if I were an advertiser, I would look at all of them.”

A version of this article appears in print on September 29, 2014, on page B7 of the New York edition with the headline: With New Ad Platform, Facebook Opens Gates To Its Vault of User Data.


A Simple Guide to Five Normal Forms in Relational Database Theory (1982)

29 September 2014 - 7:00am
A Simple Guide to Five Normal Forms in Relational Database Theory

William Kent, "A Simple Guide to Five Normal Forms in Relational Database Theory", Communications of the ACM 26(2), Feb. 1983, 120-125. Also IBM Technical Report TR03.159, Aug. 1981. Also presented at SHARE 62, March 1984, Anaheim, California. Also in A.R. Hurson, L.L. Miller and S.H. Pakzad, Parallel Architectures for Database Systems, IEEE Computer Society Press, 1989. [12 pp]

Copyright 1996 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept, ACM Inc., fax +1 (212) 869-0481, or permissions@acm.org.

William Kent
Sept 1982


1 INTRODUCTION
2 FIRST NORMAL FORM
3 SECOND AND THIRD NORMAL FORMS
   3.1 Second Normal Form
   3.2 Third Normal Form
   3.3 Functional Dependencies
4 FOURTH AND FIFTH NORMAL FORMS
   4.1 Fourth Normal Form
      4.1.1 Independence
      4.1.2 Multivalued Dependencies
   4.2 Fifth Normal Form
5 UNAVOIDABLE REDUNDANCIES
6 INTER-RECORD REDUNDANCY
7 CONCLUSION
8 ACKNOWLEDGMENT
9 REFERENCES

1 INTRODUCTION

The normal forms defined in relational database theory represent guidelines for record design. The guidelines corresponding to first through fifth normal forms are presented here, in terms that do not require an understanding of relational theory. The design guidelines are meaningful even if one is not using a relational database system. We present the guidelines without referring to the concepts of the relational model in order to emphasize their generality, and also to make them easier to understand. Our presentation conveys an intuitive sense of the intended constraints on record design, although in its informality it may be imprecise in some technical details. A comprehensive treatment of the subject is provided by Date [4].

The normalization rules are designed to prevent update anomalies and data inconsistencies. With respect to performance tradeoffs, these guidelines are biased toward the assumption that all non-key fields will be updated frequently. They tend to penalize retrieval, since data which may have been retrievable from one record in an unnormalized design may have to be retrieved from several records in the normalized form. There is no obligation to fully normalize all records when actual performance requirements are taken into account.

2 FIRST NORMAL FORM

First normal form [1] deals with the "shape" of a record type.

Under first normal form, all occurrences of a record type must contain the same number of fields.

First normal form excludes variable repeating fields and groups. This is not so much a design guideline as a matter of definition. Relational database theory doesn't deal with records having a variable number of fields.
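
As a minimal sketch of the distinction (with made-up employee data), a variable repeating SKILLS field violates the fixed-field rule, while the flat form gives every occurrence of the record type the same fields:

    unnormalized = {"EMPLOYEE": "Smith", "SKILLS": ["cook", "type"]}   # repeating group

    first_normal_form = [
        {"EMPLOYEE": "Smith", "SKILL": "cook"},
        {"EMPLOYEE": "Smith", "SKILL": "type"},
    ]
    print(first_normal_form)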

3 SECOND AND THIRD NORMAL FORMS

Second and third normal forms [2, 3, 7] deal with the relationship between non-key and key fields.

Under second and third normal forms, a non-key field must provide a fact about the key, the whole key, and nothing but the key. In addition, the record must satisfy first normal form.

We deal now only with "single-valued" facts. The fact could be a one-to-many relationship, such as the department of an employee, or a one-to-one relationship, such as the spouse of an employee. Thus the phrase "Y is a fact about X" signifies a one-to-one or one-to-many relationship between Y and X. In the general case, Y might consist of one or more fields, and so might X. In the following example, QUANTITY is a fact about the combination of PART and WAREHOUSE.

3.1 Second Normal Form

Second normal form is violated when a non-key field is a fact about a subset of a key. It is only relevant when the key is composite, i.e., consists of several fields. Consider the following inventory record:

---------------------------------------------------
| PART | WAREHOUSE | QUANTITY | WAREHOUSE-ADDRESS |
====================-------------------------------

The key here consists of the PART and WAREHOUSE fields together, but WAREHOUSE-ADDRESS is a fact about the WAREHOUSE alone. The basic problems with this design are:

  • The warehouse address is repeated in every record that refers to a part stored in that warehouse.
  • If the address of the warehouse changes, every record referring to a part stored in that warehouse must be updated.
  • Because of the redundancy, the data might become inconsistent, with different records showing different addresses for the same warehouse.
  • If at some point in time there are no parts stored in the warehouse, there may be no record in which to keep the warehouse's address.

To satisfy second normal form, the record shown above should be decomposed into (replaced by) the two records:

-------------------------------   ---------------------------------
| PART | WAREHOUSE | QUANTITY |   | WAREHOUSE | WAREHOUSE-ADDRESS |
====================-----------   =============--------------------

When a data design is changed in this way, replacing unnormalized records with normalized records, the process is referred to as normalization. The term "normalization" is sometimes used relative to a particular normal form. Thus a set of records may be normalized with respect to second normal form but not with respect to third.

The normalized design enhances the integrity of the data, by minimizing redundancy and inconsistency, but at some possible performance cost for certain retrieval applications. Consider an application that wants the addresses of all warehouses stocking a certain part. In the unnormalized form, the application searches one record type. With the normalized design, the application has to search two record types, and connect the appropriate pairs.
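
A minimal sketch of that retrieval cost, using hypothetical data and names: with the normalized design, the application must connect matching PART/WAREHOUSE/QUANTITY records to the WAREHOUSE/WAREHOUSE-ADDRESS records that carry the addresses.

    # Hypothetical normalized data: inventory records and warehouse records.
    inventory = [                         # (PART, WAREHOUSE, QUANTITY)
        ("wheel", "WH-1", 100),
        ("wheel", "WH-2", 30),
        ("seat",  "WH-1", 50),
    ]
    warehouse_address = {                 # WAREHOUSE -> WAREHOUSE-ADDRESS
        "WH-1": "123 Main St., New York",
        "WH-2": "321 Center St., San Francisco",
    }

    def addresses_stocking(part):
        # Search one record type, then connect each match to the other.
        return [warehouse_address[w] for (p, w, _qty) in inventory if p == part]

    print(addresses_stocking("wheel"))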

3.2 Third Normal Form

Third normal form is violated when a non-key field is a fact about another non-key field, as in

------------------------------------
| EMPLOYEE | DEPARTMENT | LOCATION |
============------------------------

The EMPLOYEE field is the key. If each department is located in one place, then the LOCATION field is a fact about the DEPARTMENT -- in addition to being a fact about the EMPLOYEE. The problems with this design are the same as those caused by violations of second normal form:

  • The department's location is repeated in the record of every employee assigned to that department.
  • If the location of the department changes, every such record must be updated.
  • Because of the redundancy, the data might become inconsistent, with different records showing different locations for the same department.
  • If a department has no employees, there may be no record in which to keep the department's location.

To satisfy third normal form, the record shown above should be decomposed into the two records:

-------------------------   -------------------------
| EMPLOYEE | DEPARTMENT |   | DEPARTMENT | LOCATION |
============-------------   ==============-----------

To summarize, a record is in second and third normal forms if every field is either part of the key or provides a (single-valued) fact about exactly the whole key and nothing else.

3.3 Functional Dependencies

In relational database theory, second and third normal forms are defined in terms of functional dependencies, which correspond approximately to our single-valued facts. A field Y is "functionally dependent" on a field (or fields) X if it is invalid to have two records with the same X-value but different Y-values. That is, a given X-value must always occur with the same Y-value. When X is a key, then all fields are by definition functionally dependent on X in a trivial way, since there can't be two records having the same X value.

There is a slight technical difference between functional dependencies and single-valued facts as we have presented them. Functional dependencies only exist when the things involved have unique and singular identifiers (representations). For example, suppose a person's address is a single-valued fact, i.e., a person has only one address. If we don't provide unique identifiers for people, then there will not be a functional dependency in the data:

----------------------------------------------
| PERSON     | ADDRESS                       |
-------------+--------------------------------
| John Smith | 123 Main St., New York        |
| John Smith | 321 Center St., San Francisco |
----------------------------------------------

Although each person has a unique address, a given name can appear with several different addresses. Hence we do not have a functional dependency corresponding to our single-valued fact.
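
A small sketch of the test implied by the definition (the function and sample rows are illustrative): Y is functionally dependent on X only if no two records share an X-value while differing on Y, which fails for the table above.

    def functionally_dependent(records, x, y):
        seen = {}
        for rec in records:
            if rec[x] in seen and seen[rec[x]] != rec[y]:
                return False              # same X-value, different Y-values
            seen[rec[x]] = rec[y]
        return True

    rows = [
        {"PERSON": "John Smith", "ADDRESS": "123 Main St., New York"},
        {"PERSON": "John Smith", "ADDRESS": "321 Center St., San Francisco"},
    ]
    print(functionally_dependent(rows, "PERSON", "ADDRESS"))   # False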

Similarly, the address has to be spelled identically in each occurrence in order to have a functional dependency. In the following case the same person appears to be living at two different addresses, again precluding a functional dependency.

---------------------------------------
| PERSON     | ADDRESS                |
-------------+-------------------------
| John Smith | 123 Main St., New York |
| John Smith | 123 Main Street, NYC   |
---------------------------------------

We are not defending the use of non-unique or non-singular representations. Such practices often lead to data maintenance problems of their own. We do wish to point out, however, that functional dependencies and the various normal forms are really only defined for situations in which there are unique and singular identifiers. Thus the design guidelines as we present them are a bit stronger than those implied by the formal definitions of the normal forms.

For instance, we as designers know that in the following example there is a single-valued fact about a non-key field, and hence the design is susceptible to all the update anomalies mentioned earlier.

----------------------------------------------------------
| EMPLOYEE  | FATHER     | FATHER'S-ADDRESS              |
|============------------+-------------------------------|
| Art Smith | John Smith | 123 Main St., New York        |
| Bob Smith | John Smith | 123 Main Street, NYC          |
| Cal Smith | John Smith | 321 Center St., San Francisco |
----------------------------------------------------------

However, in formal terms, there is no functional dependency here between FATHER'S-ADDRESS and FATHER, and hence no violation of third normal form.

4 FOURTH AND FIFTH NORMAL FORMS

Fourth [5] and fifth [6] normal forms deal with multi-valued facts. The multi-valued fact may correspond to a many-to-many relationship, as with employees and skills, or to a many-to-one relationship, as with the children of an employee (assuming only one parent is an employee). By "many-to-many" we mean that an employee may have several skills, and a skill may belong to several employees.

Note that we look at the many-to-one relationship between children and fathers as a single-valued fact about a child but a multi-valued fact about a father.

In a sense, fourth and fifth normal forms are also about composite keys. These normal forms attempt to minimize the number of fields involved in a composite key, as suggested by the examples to follow.

4.1 Fourth Normal Form

Under fourth normal form, a record type should not contain two or more independent multi-valued facts about an entity. In addition, the record must satisfy third normal form.

The term "independent" will be discussed after considering an example.

Consider employees, skills, and languages, where an employee may have several skills and several languages. We have here two many-to-many relationships, one between employees and skills, and one between employees and languages. Under fourth normal form, these two relationships should not be represented in a single record such as

-------------------------------
| EMPLOYEE | SKILL | LANGUAGE |
===============================

Instead, they should be represented in the two records

--------------------   -----------------------
| EMPLOYEE | SKILL |   | EMPLOYEE | LANGUAGE |
====================   =======================

Note that other fields, not involving multi-valued facts, are permitted to occur in the record, as in the case of the QUANTITY field in the earlier PART/WAREHOUSE example.

The main problem with violating fourth normal form is that it leads to uncertainties in the maintenance policies. Several policies are possible for maintaining two independent multi-valued facts in one record:

(1) A disjoint format, in which a record contains either a skill or a language, but not both:

-------------------------------
| EMPLOYEE | SKILL | LANGUAGE |
|----------+-------+----------|
| Smith    | cook  |          |
| Smith    | type  |          |
| Smith    |       | French   |
| Smith    |       | German   |
| Smith    |       | Greek    |
-------------------------------

This is not much different from maintaining two separate record types. (We note in passing that such a format also leads to ambiguities regarding the meanings of blank fields. A blank SKILL could mean the person has no skill, or the field is not applicable to this employee, or the data is unknown, or, as in this case, the data may be found in another record.)

(2) A random mix, with three variations:

(a) Minimal number of records, with repetitions:

-------------------------------
| EMPLOYEE | SKILL | LANGUAGE |
|----------+-------+----------|
| Smith    | cook  | French   |
| Smith    | type  | German   |
| Smith    | type  | Greek    |
-------------------------------

(b) Minimal number of records, with null values:

-------------------------------
| EMPLOYEE | SKILL | LANGUAGE |
|----------+-------+----------|
| Smith    | cook  | French   |
| Smith    | type  | German   |
| Smith    |       | Greek    |
-------------------------------

(c) Unrestricted:

-------------------------------
| EMPLOYEE | SKILL | LANGUAGE |
|----------+-------+----------|
| Smith    | cook  | French   |
| Smith    | type  |          |
| Smith    |       | German   |
| Smith    | type  | Greek    |
-------------------------------

(3) A "cross-product" form, where for each employee, there must be a record for every possible pairing of one of his skills with one of his languages:

-------------------------------
| EMPLOYEE | SKILL | LANGUAGE |
|----------+-------+----------|
| Smith    | cook  | French   |
| Smith    | cook  | German   |
| Smith    | cook  | Greek    |
| Smith    | type  | French   |
| Smith    | type  | German   |
| Smith    | type  | Greek    |
-------------------------------

Other problems caused by violating fourth normal form are similar in spirit to those mentioned earlier for violations of second or third normal form. They take different variations depending on the chosen maintenance policy:

  • If there are repetitions, then updates have to be done in multiple records, and they could become inconsistent.
  • Insertion of a new skill may involve looking for a record with a blank skill, or inserting a new record with a possibly blank language, or inserting multiple records pairing the new skill with some or all of the languages.
  • Deletion of a skill may involve blanking out the skill field in one or more records (perhaps with a check that this doesn't leave two records with the same language and a blank skill), or deleting one or more records, coupled with a check that the last mention of some language hasn't also been deleted.

Fourth normal form minimizes such update problems.
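
A brief sketch, with made-up data, of why the decomposed form avoids these ambiguities: each multi-valued fact lives in its own record type, so inserting or deleting a skill touches exactly one record and never disturbs the languages.

    employee_skill = {("Smith", "cook"), ("Smith", "type")}
    employee_language = {("Smith", "French"), ("Smith", "German"), ("Smith", "Greek")}

    # Insert a new skill: one record, no decision about which language to pair it with.
    employee_skill.add(("Smith", "weld"))

    # Delete a skill: one record, no risk of losing the last mention of a language.
    employee_skill.discard(("Smith", "cook"))

    print(sorted(employee_skill))
    print(sorted(employee_language))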

4.1.1 Independence

We mentioned independent multi-valued facts earlier, and we now illustrate what we mean in terms of the example. The two many-to-many relationships, employee:skill and employee:language, are "independent" in that there is no direct connection between skills and languages. There is only an indirect connection because they belong to some common employee. That is, it does not matter which skill is paired with which language in a record; the pairing does not convey any information. That's precisely why all the maintenance policies mentioned earlier can be allowed.

In contrast, suppose that an employee could only exercise certain skills in certain languages. Perhaps Smith can cook French cuisine only, but can type in French, German, and Greek. Then the pairings of skills and languages become meaningful, and there is no longer an ambiguity of maintenance policies. In the present case, only the following form is correct:

-------------------------------
| EMPLOYEE | SKILL | LANGUAGE |
|----------+-------+----------|
| Smith    | cook  | French   |
| Smith    | type  | French   |
| Smith    | type  | German   |
| Smith    | type  | Greek    |
-------------------------------

Thus the employee:skill and employee:language relationships are no longer independent. These records do not violate fourth normal form. When there is an interdependence among the relationships, then it is acceptable to represent them in a single record.

4.1.2 Multivalued Dependencies

For readers interested in pursuing the technical background of fourth normal form a bit further, we mention that fourth normal form is defined in terms of multivalued dependencies, which correspond to our independent multi-valued facts. Multivalued dependencies, in turn, are defined essentially as relationships which accept the "cross-product" maintenance policy mentioned above. That is, for our example, every one of an employee's skills must appear paired with every one of his languages. It may or may not be obvious to the reader that this is equivalent to our notion of independence: since every possible pairing must be present, there is no "information" in the pairings. Such pairings convey information only if some of them can be absent, that is, only if it is possible that some employee cannot perform some skill in some language. If all pairings are always present, then the relationships are really independent.
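
A short sketch of that cross-product criterion on illustrative rows: the check passes when every one of an employee's skills appears paired with every one of his languages, and fails for the interdependent rows of Section 4.1.1.

    from itertools import product

    def satisfies_cross_product(rows):
        by_emp = {}
        for emp, skill, lang in rows:
            skills, langs, pairs = by_emp.setdefault(emp, (set(), set(), set()))
            skills.add(skill); langs.add(lang); pairs.add((skill, lang))
        return all(pairs == set(product(skills, langs))
                   for skills, langs, pairs in by_emp.values())

    independent = [("Smith", s, l) for s, l in
                   product(["cook", "type"], ["French", "German", "Greek"])]
    dependent = [("Smith", "cook", "French"), ("Smith", "type", "French"),
                 ("Smith", "type", "German"), ("Smith", "type", "Greek")]
    print(satisfies_cross_product(independent))   # True
    print(satisfies_cross_product(dependent))     # False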

We should also point out that multivalued dependencies and fourth normal form apply as well to relationships involving more than two fields. For example, suppose we extend the earlier example to include projects, in the following sense:

  • An employee uses certain skills on certain projects.
  • An employee uses certain languages on certain projects.

If there is no direct connection between the skills and languages that an employee uses on a project, then we could treat this as two independent many-to-many relationships of the form EP:S and EP:L, where "EP" represents a combination of an employee with a project. A record including employee, project, skill, and language would violate fourth normal form. Two records, containing fields E,P,S and E,P,L, respectively, would satisfy fourth normal form.

4.2 Fifth Normal Form

Fifth normal form deals with cases where information can be reconstructed from smaller pieces of information that can be maintained with less redundancy. Second, third, and fourth normal forms also serve this purpose, but fifth normal form generalizes to cases not covered by the others.

We will not attempt a comprehensive exposition of fifth normal form, but illustrate the central concept with a commonly used example, namely one involving agents, companies, and products. If agents represent companies, companies make products, and agents sell products, then we might want to keep a record of which agent sells which product for which company. This information could be kept in one record type with three fields:

-----------------------------
| AGENT | COMPANY | PRODUCT |
|-------+---------+---------|
| Smith | Ford    | car     |
| Smith | GM      | truck   |
-----------------------------

This form is necessary in the general case. For example, although agent Smith sells cars made by Ford and trucks made by GM, he does not sell Ford trucks or GM cars. Thus we need the combination of three fields to know which combinations are valid and which are not.

But suppose that a certain rule was in effect: if an agent sells a certain product, and he represents a company making that product, then he sells that product for that company.

-----------------------------
| AGENT | COMPANY | PRODUCT |
|-------+---------+---------|
| Smith | Ford    | car     |
| Smith | Ford    | truck   |
| Smith | GM      | car     |
| Smith | GM      | truck   |
| Jones | Ford    | car     |
-----------------------------

In this case, it turns out that we can reconstruct all the true facts from a normalized form consisting of three separate record types, each containing two fields:

-------------------   ---------------------   -------------------
| AGENT | COMPANY |   | COMPANY | PRODUCT |   | AGENT | PRODUCT |
|-------+---------|   |---------+---------|   |-------+---------|
| Smith | Ford    |   | Ford    | car     |   | Smith | car     |
| Smith | GM      |   | Ford    | truck   |   | Smith | truck   |
| Jones | Ford    |   | GM      | car     |   | Jones | car     |
-------------------   | GM      | truck   |   -------------------
                      ---------------------

These three record types are in fifth normal form, whereas the corresponding three-field record shown previously is not.

Roughly speaking, we may say that a record type is in fifth normal form when its information content cannot be reconstructed from several smaller record types, i.e., from record types each having fewer fields than the original record. The case where all the smaller records have the same key is excluded. If a record type can only be decomposed into smaller records which all have the same key, then the record type is considered to be in fifth normal form without decomposition. A record type in fifth normal form is also in fourth, third, second, and first normal forms.

Fifth normal form does not differ from fourth normal form unless there exists a symmetric constraint such as the rule about agents, companies, and products. In the absence of such a constraint, a record type in fourth normal form is always in fifth normal form.

One advantage of fifth normal form is that certain redundancies can be eliminated. In the normalized form, the fact that Smith sells cars is recorded only once; in the unnormalized form it may be repeated many times.

It should be observed that although the normalized form involves more record types, there may be fewer total record occurrences. This is not apparent when there are only a few facts to record, as in the example shown above. The advantage is realized as more facts are recorded, since the size of the normalized files increases in an additive fashion, while the size of the unnormalized file increases in a multiplicative fashion. For example, if we add a new agent who sells x products for y companies, where each of these companies makes each of these products, we have to add x+y new records to the normalized form, but xy new records to the unnormalized form.
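
A quick check of that additive-versus-multiplicative growth, with assumed values:

    x, y = 5, 3                  # a new agent sells 5 products for 3 companies
    print(x + y)                 # 8 new records in the normalized form
    print(x * y)                 # 15 new records in the unnormalized form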

It should be noted that all three record types are required in the normalized form in order to reconstruct the same information. From the first two record types shown above we learn that Jones represents Ford and that Ford makes trucks. But we can't determine whether Jones sells Ford trucks until we look at the third record type to determine whether Jones sells trucks at all.
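
A compact sketch of that reconstruction, using the Smith/Jones rows shown earlier: a triple is accepted only when all three two-field record types support it, which yields exactly the original five rows.

    agent_company = {("Smith", "Ford"), ("Smith", "GM"), ("Jones", "Ford")}
    company_product = {("Ford", "car"), ("Ford", "truck"), ("GM", "car"), ("GM", "truck")}
    agent_product = {("Smith", "car"), ("Smith", "truck"), ("Jones", "car")}

    reconstructed = {
        (a, c, p)
        for (a, c) in agent_company
        for (c2, p) in company_product if c2 == c
        if (a, p) in agent_product   # without this record type, Jones/Ford/truck would wrongly appear
    }
    for triple in sorted(reconstructed):
        print(triple)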

The following example illustrates a case in which the rule about agents, companies, and products is satisfied, and which clearly requires all three record types in the normalized form. Any two of the record types taken alone will imply something untrue.

-----------------------------
| AGENT | COMPANY | PRODUCT |
|-------+---------+---------|
| Smith | Ford    | car     |
| Smith | Ford    | truck   |
| Smith | GM      | car     |
| Smith | GM      | truck   |
| Jones | Ford    | car     |
| Jones | Ford    | truck   |
| Brown | Ford    | car     |
| Brown | GM      | car     |
| Brown | Toyota  | car     |
| Brown | Toyota  | bus     |
-----------------------------

         -------------------   ---------------------   -------------------
         | AGENT | COMPANY |   | COMPANY | PRODUCT |   | AGENT | PRODUCT |
         |-------+---------|   |---------+---------|   |-------+---------|
         | Smith | Ford    |   | Ford    | car     |   | Smith | car     |
Fifth    | Smith | GM      |   | Ford    | truck   |   | Smith | truck   |
Normal   | Jones | Ford    |   | GM      | car     |   | Jones | car     |
Form     | Brown | Ford    |   | GM      | truck   |   | Jones | truck   |
         | Brown | GM      |   | Toyota  | car     |   | Brown | car     |
         | Brown | Toyota  |   | Toyota  | bus     |   | Brown | bus     |
         -------------------   ---------------------   -------------------

Observe that:

  • Jones sells cars and GM makes cars, but Jones does not represent GM.
  • Brown represents Ford and Ford makes trucks, but Brown does not sell trucks.
  • Brown represents Ford and Brown sells buses, but Ford does not make buses.

Fourth and fifth normal forms both deal with combinations of multivalued facts. One difference is that the facts dealt with under fifth normal form are not independent, in the sense discussed earlier. Another difference is that, although fourth normal form can deal with more than two multivalued facts, it only recognizes them in pairwise groups. We can best explain this in terms of the normalization process implied by fourth normal form. If a record violates fourth normal form, the associated normalization process decomposes it into two records, each containing fewer fields than the original record. Any of these that still violates fourth normal form is again decomposed into two records, and so on until the resulting records are all in fourth normal form. At each stage, the set of records after decomposition contains exactly the same information as the set of records before decomposition.

In the present example, no pairwise decomposition is possible. There is no combination of two smaller records which contains the same total information as the original record. All three of the smaller records are needed. Hence an information-preserving pairwise decomposition is not possible, and the original record is not in violation of fourth normal form. Fifth normal form is needed in order to deal with the redundancies in this case.

5 UNAVOIDABLE REDUNDANCIES

Normalization certainly doesn't remove all redundancies. Certain redundancies seem to be unavoidable, particularly when several multivalued facts are dependent rather than independent. In the example shown in Section 4.1.1, it seems unavoidable that we record the fact that "Smith can type" several times. Also, when the rule about agents, companies, and products is not in effect, it seems unavoidable that we record the fact that "Smith sells cars" several times.

The normal forms discussed here deal only with redundancies occurring within a single record type. Fifth normal form is considered to be the "ultimate" normal form with respect to such redundancies.

6 INTER-RECORD REDUNDANCY

Other redundancies can occur across multiple record types. For the example concerning employees, departments, and locations, the following records are in third normal form in spite of the obvious redundancy:

-------------------------   -------------------------
| EMPLOYEE | DEPARTMENT |   | DEPARTMENT | LOCATION |
============-------------   ==============-----------

-----------------------
| EMPLOYEE | LOCATION |
============-----------

In fact, two copies of the same record type would constitute the ultimate in this kind of undetected redundancy.

Inter-record redundancy has been recognized for some time [1], and has recently been addressed in terms of normal forms and normalization [8].

7 CONCLUSION

While we have tried to present the normal forms in a simple and understandable way, we are by no means suggesting that the data design process is correspondingly simple. The design process involves many complexities which are quite beyond the scope of this paper. In the first place, an initial set of data elements and records has to be developed, as candidates for normalization. Then the factors affecting normalization have to be assessed:

  • Single-valued vs. multi-valued facts.
  • Dependency on the entire key.
  • Independent vs. dependent facts.
  • The presence of mutual constraints.
  • The presence of non-unique or non-singular representations.

And, finally, the desirability of normalization has to be assessed, in terms of its performance impact on retrieval applications.

8 ACKNOWLEDGMENT

I am very grateful to Ted Codd and Ron Fagin for reading earlier drafts and making valuable comments, and especially to Chris Date for helping clarify some key points.

9 REFERENCES

  1. E.F. Codd, "A Relational Model of Data for Large Shared Data Banks", Comm. ACM 13 (6), June 1970, pp. 377-387.

    The original paper introducing the relational data model.

  2. E.F. Codd, "Normalized Data Base Structure: A Brief Tutorial", ACM SIGFIDET Workshop on Data Description, Access, and Control, Nov. 11-12, 1971, San Diego, California, E.F. Codd and A.L. Dean (eds.).

    An early tutorial on the relational model and normalization.

  3. E.F. Codd, "Further Normalization of the Data Base Relational Model", R. Rustin (ed.), Data Base Systems (Courant Computer Science Symposia 6), Prentice-Hall, 1972. Also IBM Research Report RJ909.

    The first formal treatment of second and third normal forms.

  4. C.J. Date, An Introduction to Database Systems (third edition), Addison-Wesley, 1981.

    An excellent introduction to database systems, with emphasis on the relational model.

  5. R. Fagin, "Multivalued Dependencies and a New Normal Form for Relational Databases", ACM Transactions on Database Systems 2 (3), Sept. 1977. Also IBM Research Report RJ1812.

    The introduction of fourth normal form.

  6. R. Fagin, "Normal Forms and Relational Database Operators", ACM SIGMOD International Conference on Management of Data, May 31-June 1, 1979, Boston, Mass. Also IBM Research Report RJ2471, Feb. 1979.

    The introduction of fifth normal form.

  7. W. Kent, "A Primer of Normal Forms", IBM Technical Report TR02.600, Dec. 1973.

    An early, formal tutorial on first, second, and third normal forms.

  8. T.-W. Ling, F.W. Tompa, and T. Kameda, "An Improved Third Normal Form for Relational Databases", ACM Transactions on Database Systems, 6(2), June 1981, 329-346.

    One of the first treatments of inter-relational dependencies.

Mining Bitcoin with pencil and paper: 0.67 hashes per day

29 September 2014 - 7:00am
I decided to see how practical it would be to mine Bitcoin with pencil and paper. It turns out that the SHA-256 algorithm used for mining is pretty simple and can in fact be done by hand. Not surprisingly, the process is extremely slow compared to hardware mining and is entirely impractical. But performing the algorithm manually is a good way to understand exactly how it works.

A pencil-and-paper round of SHA-256

The mining process

Bitcoin mining is a key part of the security of the Bitcoin system. The idea is that Bitcoin miners group a bunch of Bitcoin transactions into a block, then repeatedly perform a cryptographic operation called hashing zillions of times until someone finds a special, extremely rare hash value. At this point, the block has been mined and becomes part of the Bitcoin block chain. The hashing task doesn't accomplish anything useful in itself, but because finding a successful block is so difficult, it ensures that no individual has the resources to take over the Bitcoin system. For more details on mining, see my Bitcoin mining article.

A cryptographic hash function takes a block of input data and creates a smaller, unpredictable output. The hash function is designed so there's no "short cut" to get the desired output - you just have to keep hashing blocks until you find one by brute force that works. For Bitcoin, the hash function is a function called SHA-256. To provide additional security, Bitcoin applies the SHA-256 function twice, a process known as double-SHA-256.
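As a quick illustration of double-SHA-256 (a minimal sketch; the input bytes here are arbitrary, not a real block header), Python's hashlib can simply apply the function twice:

    import hashlib

    def double_sha256(data: bytes) -> bytes:
        """Apply SHA-256 twice, as Bitcoin does when hashing a block."""
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    # Arbitrary example input -- a real miner feeds in the 80-byte block header.
    print(double_sha256(b"hello").hex())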

In Bitcoin, a successful hash is one that starts with enough zeros.[1] Just as it is rare to find a phone number or license plate ending in multiple zeros, it is rare to find a hash starting with multiple zeros. But Bitcoin is exponentially harder. Currently, a successful hash must start with approximately 17 zeros, so only one out of 1.4x10^20 hashes will be successful. In other words, finding a successful hash is harder than finding a particular grain of sand out of all the grains of sand on Earth.

The following diagram shows a block in the Bitcoin blockchain along with its hash. The yellow bytes are hashed to generate the block hash. In this case, the resulting hash starts with enough zeros so mining was successful. However, the hash will almost always be unsuccessful. In that case, the miner changes the nonce value or other block contents and tries again.
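The nonce search can be sketched in a few lines of Python. This is purely illustrative: the header bytes, target, and search range below are made up (a real header is an 80-byte structure holding the version, previous block hash, Merkle root, timestamp, difficulty bits, and nonce, and the real target is vastly harder):

    import hashlib
    import struct

    def double_sha256(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    # Hypothetical stand-in for the fixed 76 bytes of a block header.
    header_prefix = b"\x00" * 76
    target = 1 << 240   # made-up difficulty, far easier than Bitcoin's real target

    for nonce in range(1_000_000):                         # a real miner iterates billions of nonces
        header = header_prefix + struct.pack("<I", nonce)  # nonce is a 4-byte little-endian field
        digest = double_sha256(header)
        # Bitcoin treats the digest as a little-endian number and displays it byte-reversed,
        # which is why the zeros show up at the "end" of the raw hash in the photos below.
        if int.from_bytes(digest, "little") < target:
            print("found nonce", nonce, digest[::-1].hex())
            break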

Structure of a Bitcoin block

The SHA-256 hash algorithm used by Bitcoin

The SHA-256 hash algorithm takes input blocks of 512 bits (i.e. 64 bytes), combines the data cryptographically, and generates a 256-bit (32-byte) output. The SHA-256 algorithm consists of a relatively simple round repeated 64 times. The diagram below shows one round, which takes eight 4-byte inputs, A through H, performs a few operations, and generates new values of A through H.

One round of the SHA-256 algorithm showing the 8 input blocks A-H, the processing steps, and the new blocks. Diagram created by kockmeyer, CC BY-SA 3.0.

The blue boxes mix up the values in non-linear ways that are hard to analyze cryptographically. Since the algorithm uses several different functions, discovering an attack is harder. (If you could figure out a mathematical shortcut to generate successful hashes, you could take over Bitcoin mining.)

The Ma majority box looks at the bits of A, B, and C. For each position, if the majority of the bits are 0, it outputs 0. Otherwise it outputs 1. That is, for each position in A, B, and C, look at the number of 1 bits. If it is zero or one, output 0. If it is two or three, output 1.
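In code, the majority function reduces to three ANDs and two XORs on 32-bit words (a standard formulation, shown here as a sketch rather than anything taken from the article's worksheet):

    def maj(a: int, b: int, c: int) -> int:
        """Bitwise majority: each output bit is 1 iff at least two of a, b, c have a 1 there."""
        return (a & b) ^ (a & c) ^ (b & c)

    assert maj(0b1100, 0b1010, 0b1001) == 0b1000  # only the top bit is set in two or more inputs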

The Σ0 box rotates the bits of A to form three rotated versions, and then sums them together modulo 2. In other words, if the number of 1 bits is odd, the sum is 1; otherwise, it is 0. The three values in the sum are A rotated right by 2 bits, 13 bits, and 22 bits.

The Ch "choose" box chooses output bits based on the value of input E. If a bit of E is 1, the output bit is the corresponding bit of F. If a bit of E is 0, the output bit is the corresponding bit of G. In this way, the bits of F and G are shuffled together based on the value of E.

The next box Σ1 rotates and sums the bits of E, similar to Σ0 except the shifts are 6, 11, and 25 bits.
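The rotate-and-XOR boxes and the choose box can likewise be written directly as 32-bit word operations. The helper names below (rotr, big_sigma0, big_sigma1, ch, MASK32) are our own; the rotation amounts are the ones just described:

    MASK32 = 0xFFFFFFFF

    def rotr(x: int, n: int) -> int:
        """Rotate a 32-bit word right by n bits."""
        return ((x >> n) | (x << (32 - n))) & MASK32

    def big_sigma0(a: int) -> int:
        # XOR of A rotated right by 2, 13, and 22 bits.
        return rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)

    def big_sigma1(e: int) -> int:
        # Same idea for E, with rotations of 6, 11, and 25 bits.
        return rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)

    def ch(e: int, f: int, g: int) -> int:
        # "Choose": where E has a 1 bit, take the bit from F; where it has a 0, take it from G.
        return (e & f) ^ ((~e) & g & MASK32)

    assert rotr(1, 1) == 0x80000000  # the low bit wraps around to the top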

The red boxes perform 32-bit addition, generating new values for A and E. The input Wt is based on the input data, slightly processed. (This is where the input block gets fed into the algorithm.) The input Kt is a constant defined for each round.[2]

As can be seen from the diagram above, only A and E are changed in a round. The other values pass through unchanged, with the old A value becoming the new B value, the old B value becoming the new C value and so forth. Although each round of SHA-256 doesn't change the data much, after 64 rounds the input data will be completely scrambled.[3]
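Putting the pieces together, one round of the compression step can be sketched as a self-contained Python function (the function and variable names are ours; the starting words are the standard SHA-256 initial hash values and the first round constant, with W_t set to zero purely for illustration):

    MASK32 = 0xFFFFFFFF

    def rotr(x, n):
        return ((x >> n) | (x << (32 - n))) & MASK32

    def sha256_round(state, w_t, k_t):
        """One round: take the eight 32-bit words A..H plus W_t and K_t, return the new eight words."""
        a, b, c, d, e, f, g, h = state
        s1 = rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)      # Sigma-1 of E
        choose = (e & f) ^ ((~e) & g & MASK32)           # Ch(E, F, G)
        temp1 = (h + s1 + choose + k_t + w_t) & MASK32   # the red adders feeding new A and E
        s0 = rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)      # Sigma-0 of A
        majority = (a & b) ^ (a & c) ^ (b & c)           # Ma(A, B, C)
        temp2 = (s0 + majority) & MASK32
        # Only A and E get genuinely new values; the other words shift down one position.
        return ((temp1 + temp2) & MASK32, a, b, c,
                (d + temp1) & MASK32, e, f, g)

    state = (0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,
             0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19)
    print([hex(x) for x in sha256_round(state, w_t=0x00000000, k_t=0x428a2f98)])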

Manual mining

The video below shows how the SHA-256 hashing steps described above can be performed with pencil and paper. I perform the first round of hashing to mine a block. Completing this round took me 16 minutes, 45 seconds.

To explain what's on the paper: I've written each block A through H in hex on a separate row and put the binary value below. The maj operation appears below C, and the shifts and Σ0 appear above row A. Likewise, the choose operation appears below G, and the shifts and Σ1 above E. In the lower right, a bunch of terms are added together, corresponding to the first three red sum boxes. In the upper right, this sum is used to generate the new A value, and in the middle right, this sum is used to generate the new E value. These steps all correspond to the diagram and discussion above.

I also manually performed another hash round, the last round to finish hashing the Bitcoin block. In the image below, the hash result is highlighted in yellow. The zeroes in this hash show that it is a successful hash. Note that the zeroes are at the end of the hash. The reason is that Bitcoin inconveniently reverses all the bytes generated by SHA-256.[4]

Last pencil-and-paper round of SHA-256, showing a successfully-mined Bitcoin block.

What this means for mining hardware

Each step of SHA-256 is very easy to implement in digital logic - simple Boolean operations and 32-bit addition. (If you've studied electronics, you can probably visualize the circuits already.) For this reason, custom ASIC chips can implement the SHA-256 algorithm very efficiently in hardware, putting hundreds of rounds on a chip in parallel. The image below shows a mining chip that runs at 2-3 billion hashes/second; Zeptobars has more photos.

The silicon die inside a Bitfury ASIC chip. This chip mines Bitcoin at 2-3 Ghash/second. Image from Zeptobars. (CC BY 3.0)

In contrast, Litecoin, Dogecoin, and similar altcoins use the scrypt hash algorithm, which is intentionally designed to be difficult to implement in hardware. It stores 1024 different hash values into memory, and then combines them in unpredictable ways to get the final result. As a result, much more circuitry and memory is required for scrypt than for SHA-256 hashes. You can see the impact by looking at mining hardware, which is thousands of times slower for scrypt (Litecoin, etc) than for SHA-256 (Bitcoin).

Conclusion

The SHA-256 algorithm is surprisingly simple, easy enough to do by hand. (The elliptic curve algorithm for signing Bitcoin transactions would be very painful to do by hand since it has lots of multiplication of 32-byte integers.) Doing one round of SHA-256 by hand took me 16 minutes, 45 seconds. At this rate, hashing a full Bitcoin block (128 rounds)[3] would take 1.49 days, for a hash rate of 0.67 hashes per day (although I would probably get faster with practice). In comparison, current Bitcoin mining hardware does several terahashes per second, about a quintillion times faster than my manual hashing. Needless to say, manual Bitcoin mining is not at all practical.[5]

Notes

[1] It's not exactly the number of zeros at the start of the hash that matters. To be precise, the hash must be less than a particular value that depends on the current Bitcoin difficulty level.

[2] The source of the constants used in SHA-256 is interesting. The NSA designed the SHA-256 algorithm and picked the values for these constants, so how do you know they didn't pick special values that let them break the hash? To avoid suspicion, the initial hash values come from the square roots of the first 8 primes, and the Kt values come from the cube roots of the first 64 primes. Since these constants come from a simple formula, you can trust that the NSA didn't do anything shady (at least with the constants).
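As a quick check of that claim, a few lines of Python (a sketch, not part of the article; the helper name frac_bits is ours) reproduce the initial hash values from the fractional parts of the square roots of the first 8 primes:

    from math import sqrt, floor

    def frac_bits(x: float) -> int:
        """First 32 bits of the fractional part of x."""
        return floor((x - floor(x)) * 2**32)

    primes = [2, 3, 5, 7, 11, 13, 17, 19]
    print([hex(frac_bits(sqrt(p))) for p in primes])
    # The first value comes out to 0x6a09e667, matching the published SHA-256 constant.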

[3] Unfortunately the SHA-256 hash works on a block of 512 bits, but the Bitcoin block header is more than 512 bits. Thus, a second set of 64 SHA-256 hash rounds is required on the second half of the Bitcoin block. Next, Bitcoin uses double-SHA-256, so a second application of SHA-256 (64 rounds) is done to the result. Adding this up, hashing an arbitrary Bitcoin block takes 192 rounds in total. However there is a shortcut. Mining involves hashing the same block over and over, just changing the nonce which appears in the second half of the block. Thus, mining can reuse the result of hashing the first 512 bits, and hashing a Bitcoin block typically only requires 128 rounds.
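The round counting in this note can be written out explicitly (a trivial sketch; the variable names are ours):

    rounds_per_sha256_block = 64
    header_blocks = 2        # the 80-byte header plus padding spans two 512-bit blocks
    second_pass_blocks = 1   # double-SHA-256 then hashes the 32-byte result again

    full_cost = (header_blocks + second_pass_blocks) * rounds_per_sha256_block   # 192 rounds
    mining_cost = (1 + second_pass_blocks) * rounds_per_sha256_block             # 128: the first block's result is reused
    print(full_cost, mining_cost)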

[4] Obviously I didn't just have incredible good fortune to end up with a successful hash. I started the hashing process with a block that had already been successfully mined. In particular I used the one displayed earlier in this article, #286819.

[5] Another problem with manual mining is new blocks are mined about every 10 minutes, so even if I did succeed in mining a block, it would be totally obsolete (orphaned) by the time I finished.

Writing a simple operating system from scratch (2010) [pdf]

28 September 2014 - 7:00pm

–ÄóZ¤9^`hzmŠ¾???> {]󠋘7÷@ÈØ}åòBcþÙâ}µ}ùF -¿0úoí»íÆØáJ®¤ŽÁàfye"‚4¦l®~¹:¦aßRÊ\Q/ÊS‘xü›¹zÛ]ýþüš¿èf¶*ývkù+)ÿÛ-ã£SÊ™£ßni0¿Y*ƒ›sš©\<8)C)ñ_BAHMU23)ÊÇËÊ¿#£¼z³öŸxöo¦/¡ endstream endobj 465 0 obj << /Type /XObject /Subtype /Image /Width 1490 /Height 622 /BitsPerComponent 8 /ColorSpace /DeviceRGB /SMask 473 0 R /Length 132315 /Filter /FlateDecode >> stream xÚìXÇ÷÷ƒ¨‘¢{ïÝHŒ½wc5ön°¢±7{ìúÃÞKDìÆ(ìŠ QÁŠbÆb£(¢"ðžÇMæÝÿÎî²{¹®òý=22ª‹™0xð`‰“ðǘa=¯]»Æw§Â…«ª]»vü§V¯^ ÕªTÀW­ºäΝÛÏϪ‹9ðÓO?IZ»Aƒß€ê"ž¹œ?}úÕªÀÌU ¨lŽ…ê’’/^\ÒÚ­[·†êÕªT€ù«.D¶lÙ^½zÕª‹©UjÕª=~üªT€ù«.DÞ¼yß¼yÕªT¨.P]P]’³vrr’-væÌ™þ/îîîhm¨.ß%lÙ²iêܼysÇýû÷¡ºÒˆêBи'[¬‹‹‹dxܳgZªT—oLu!Ÿ÷…îܹÕÕ…¨T©Ò£ÿxøðáùóç§OŸž9sfÞ2W®\ hF¨.«.3fìò…N:5kÖ¬råÊôŠŠ_¬X±ÐÐPÜt¨.€ÔR]ê֭˜ؽ½½'MšdmmÍ[-ZmÕªKšU]ÒP]¨.5kÖÔèz¯_¿6]õ^¾|Iî܁ÈhóæÍ›   ÿ'NìÝ»÷ðáÃ>>>äK&ŠŒŒ¼|ù2ÕíèÑ£©{òÅXªKxx¸ŸŸ]Î_ýuêÔ)ò>~ühjÕ%[¶l³˜˜OOÏV­Z) /¿‘Šº~ýúéӧ銎;F÷ëýû÷×ÿÝ»wW¯^=yò¤Ð>oß¾5õ͍ˆˆ8þüþýû©ò*Q…©7^ºtÉÃÃÀ_ ]ÝöK9tèýRè·£ñ—Հ´¦ºüòË/¼åƍeGlãÎ#BBBΝ;GCßÝ»wãããõŽ®4ÐѬwüøqÁI ÁöÑ£Gƪ9!T8Þ4û<}úôP]BCCi¢¡ š&AšXiz¥Iö«V]hš£žC~&ݦƒ^¸pÁ¤>­–úgN1ùÑÑÑFQ]bccoܸ!øBäã%Ó"W„êF^ó…”×òõõ¥ÛD¾ 9c*¾ýèh@ Z%ߢŽqÿþ}T€±T´Ó§OÏËn¬Q£†$mýðáÃy³üQb6yòdá-š6lhaa!ÎhS·n]õÔÕ4Œ?¾Y³fyòäQZ³gÎœ¹qãÆþùç‡TŠ2dˆ¤nôíÂ[W®\¡•¦xKÆàÁƒ§M›&±¯V­Z\\œŠ T¦LÉGÖ¯_¯ÝùaŸâÈ[[[—æX¼x±lQÁÁÁcǎ-_¾¼¸µÙF”Zµj¹ºº&g‚Ö«º0vïÞmkk+{ÿùç•/"'Íš55²²²â¯ˆúÕ–-[´OÇÔ>Ô©~úé§téÒñ5)Uª”““y’O‘Ïßré%fÛ·oçÍÈ;š«{÷îânF?~ýú=|øPâ¦véÒ%C†ÌŒ®º~ýúä¡©\Ux„ Í›7Ï›7¯Ò/…ŸÚŠVRêwªP]„õŽìHòìÙ3ÞØÞÞ^2î9;;Kl>}úď³gÏÞ=pàMÊâ/¢¡†}ZD«\Ύ;h¾kÚ´i®\¹”†>;;;ò"¶mÛ¦.)ôéÓGR7Q…·hñîàà –ÇGß+±¯W¯žŠ¸MC=ùT+7îÉ“'ìSâšØØØð…+)T“#FÇ·¹tV¬Xam-ÅTjvZSÿúë¯4[I>HŽPÅŠÿø㥧*ä6óM§”8rÙ²e¼1ùŸ¼%u6úÒ‚Š+“5kVr/É]4Lu‰ŽŽ&¯²I“&²¾Pƒ 6mÚ¤â¦Jxúôéĉ©q,--ùš”,Y’Üæ}ûöI>uúôi¾¼¼¼x7#g;ñK>ñž={Š]ú›~}AAAâÈqêÖ­›¸««Á¡RÂÃÃ.ªE‹ùòåSèÇBmµaÃõ|šP]ÆR]Èÿ‘=ú!›Rc4]Þøí·ßbccU² ¯ZµJér ( ý¼yÑ¢Eù‘ŸÑ¶m[>0 ½N“ÕAòÖ Aƒüýýù¯à' y)cZÔ“›¤ñƽxñBïùzšYx%môèÑü]ÕF¶lْªqøðaY­£eË–JÙºu+ßýd=•$\>| /BVl”@mȀZ¢éRgæÍ.^¼H=‡ækÙóõõ>NŽ%¯¹±ˆÊ/…ô©žöÔ©SP]€ê¢¢ºÐ’„—îi&›~NK4]š¡ø¯>|8-ð{ô衲D’]Õ Ð2VûÐWªT©.(Õ°aCޞ^_¹r%?«Ž3æìÙ³üWœ8qB©üyóæñö÷ïß×ë$ðé¤iÙîää$»Ê–3gNZ8ªykeË–MòŠ¨«(¹=]ºtᵚ3gÎHÌø~Nëw~ïÄÇUªD“>ùêzU—íÛ·ÓMIò2É —C½…é7lØ0-¾"!ù¬ÆhºëׯçÍ|||<<<ììì”üRoooöqÞ3g«†åË—+]ZéÒ¥uåeSɉÕ`,Õ…†>Þ²H‘"²Å¬ºôêÕËÁÁ!É¡ïÒ¥KÉW]Áœ ÚZT—Çó^%1pà@úµ›äõ-Z(µ|½zõ$ÆM›6Õ~ã’¯º„‡‡W«VMW 4í¦°êBôîÝ[6ŒìŒßÿ]ûåXYYýý÷ßJߤ}:6®ê²hÑ"õà6Ù³g'ß{ݺuÿRt©.ž>}ªP]”TZåñ–?þø£l±«.4ÛÒ\©>^¥K—îæÍ›ÉW]„£´H×®ºìÞ½[¶œÑ£G'~Ùá#y½S§NJ-_©R%‰qûöíµß¸ä«./_¾äÏ/<Ù|Twww¥GW%~ÙÖÅÏ¡TOñSHê½|ëåÌ™3$$DRÚÓ§O*”¤³*ë-(©.Ôß´_c¦L™TbY«+B&U].\¨~³èçLÎðæÍ›“¬{V•Õ…HŸ>½’ðÕ`€êB‹ñ7ÿqëÖ­%K–Ð꘷œ?¾qU-û ˆ&MšEu!òåË'»‹˜W]lmmeèß¿â—­Ë·¯Ð´Ëo᠏§˜êòéÓ§ bÖ¬Y)¬º\¹rE¶&‡’XN›6MïåXYY [X%„††–(QB—(aDÕE˃ÅÚµk«+3lÇ{2U!t¶ìr¨.¤5Õ¥aÆÌI ÕŸìö¼•+WWuÑè$( zUa3€ì‰T^uùá‡dó£F’]ZÒ0.»}EV3ág=Ó©.ÑÑÑÕ«W7ÀI wѸªKž<==ù¢~ùåë}ªÝG½×˜)S&Ùg4ôëÖ«KQuÑâÕªUK‹ŒÖ¸qc£¨.‚‡#ûÔªÀÕE#4ˆ)z6XuG_iÔ¨y5J›e'Au)UªÔÔ©S·nÝzöìÙ   ÈÈÈ'Ož\¼xqúôé²'7e£©ðªÛ÷âèè(H|ùT¾ð¥K—òÞš®¨tä§þ¾©Ñ sˆ%2~벀½½ýÈ‘#©z¿ýö›llºä«W¯¦¤êBÈÀ—è?çÏŸ—õRÈ×½páBXXØåË—Éâ'qê-ü¹f^I` ÛºuëñãÇÓR‚þOD¨›qUVyrzÛ´i“ä)rh‰Q´hQÙwɏUR]J”(áââB¿”3gο”àà`úeÍœ9SVÀ\±bT ºh„FKÙãEÉQ]Ä+š¦M›Ö«WOiÍ%ÛJP]Ê•+7cÆŒmÛ¶yyy=|ø†¾ÇÓú}òäɲӍì3^uQqFŒ‘øå¤ÿìFvá<{ölþ ‘®è =b³?¿`·²²âñOM!{E•*U=zô’%Kúôé#;7Ñ$«ZG¯êb¼êBw™â"ôÒS§N½xñ‚ªMŸâoPÆŒ%!D\]]ùÒ„ oÞÞÞü“5ٍ@²®‚°+¦K—.ÔK»víªƒˆï<äëÊô£(øBW®\™4iß%Š+ÆÿZéÛ•öÞ´jÕJð…èºrçÎmtÕ…ùBÕªU#_(ÉSäΑ/¤ôPIöP¹ ºÐµS‡wss#_èþýû‚/Döüñ‡ì6$YiªÀª­=i°U‰±™LÕeÀ€lðöì™l7Y©„†M>`©ò¬øù‹s]ª½EËç—/_R%i|^»v-{œGËUþ(ï)ÕªUˈ‡wôæ0ŠŠŠ’u–œœœÄú9!²GÚµk—ª‹ì~rÁ‰e4jÔˆßhÄ?WÚ»w/_Ô† Ä6þþþ²çÈè®ñÎdtt4¹^4®êR¢D 5÷Õ«WJ^„µµ5»Fºwäoð6²'Ó§Nªt¶NàéÓ§¼„جY3¨.@uI›)S¦ˆ3°Wu¡ñŸM¬4TÒ¢I£TB®‹Ò¹Käùý0´žÒ¥ºtîÜùäÉ“¯_¿¦K W¬XÁfª¿TäçÏ"ÉuшÞFááá²›vƎ+ŽIB+S¾žDϞ=ÍPuáÝ3bÒ¤I3êN|ÿ^½zÉV˜¼G~ShPPÿ¤fÍš²qky×…¨ZµªAW€:RíÚµ5ª.4MóÒßÁƒ%fû÷ïO²ÑèvÈúB5jÔä}¡¥K—’Û`\Õ…Z’I^¡¡¡ôcQÚ·Ìv‚Q;wêÔ‰·Ù¸q#ßþÓ§O÷òòR 
jÂ?„¢»Õ2ª­ÙgϞ­’*.9ªK—.]$fGÕ2Wj„Õ+»ðWR]æÍ›§R8-ygR²+˜~"3x‰ªJx€%»ô戤K—Ž\²”T]ºwï®î×=yòDVD’-­qãƼ;$6 ò¥•-[Vé¡­àoQu¡¦¶’ȉ²½qÓ¦MIº¬3fÌ0¬_ 2„߁Æ;'P]€ê"!wîÜ´TIðšÕEØ\*FVNŸ3gŽa—ܹsgIQ… Ò®º¨„îLüòè‡ßryöìY±Í;wøbïÝ»—bª‹lаzõêÉŠT²ÙÕsD¦ŠêÂ+!ööö²Û‡x}†œUÙçŒaaaü’œ?j÷Ã?È6'‘¯9µû…¾Hvã·Duyþü9o#äiÞ¼¹Ä²J•*bƒáÇó¥•,YRec6ï%Gu¡®%yÔ5}útÙÛ½víZ±ÙÍ›7y¥]ˆ8_¶lÌ¥Ò”HŽê¦;|ø°ìaC^³¶¶Ö»‹ÜÁÚ¼yóòåËـ ®™ÇÕ`€êBE}öööJèòæÍûêÕ+#ª.%J”­'¿–T õøñãqãÆé õìÙ3-ª-¥“lR???~"?¹à#Ô©?3ºêR¹reþÒøg+*²ÀäÉ“SRu‘5'>a$›ºˆ.³º…æYæÄÈÈHÙàiz/69ª>E6å¨ìÉ8þ²R–íààà &èÊM@H6á@u ª.?üðƒØI¨P¡‚R¦{oÅë¸ä«.²)>}úÄ[:88È^QPPШQ£ôFç¯BVuѲ&=sæ%C¼ƒ´|ùòZ¦¤ê"{ŽCœ G-üyã¹sçš•ê"»ªdÉ’²Nõg-gyêÑk‡®ôA^^S:>OÈfê‘ÔJ6uQ¥J•d/³H‘"¼1;¤/û¸ÓÎÎNï=MŽêâêê*1“à'›5€ïóJyBÉÿwvv.W®œ®vçΨ.€ä«.|æè„„š³òæÍËÓ`eDÕEö°$Á{t²–³gÏV‰Ð«Âýû÷µ¨.=ÒÒª| iúõñãǼ»‘’ª/;¤OŸ^|X[Œì“…Aƒ¥¤ê"×wæÌ™*“^Ø)0ÙJÊæR7ê‹{²‰œ†¢Eu‘M_¾páB-)xø|¬P]HkªŸ9šfíÛ·ËF“=éc°ê"Ù;Êàƒ—ò–äɸ¸¸Å†xñâ…Õ%44TK«òëzŸ“?‘9sf•0z¦P]øˆ²T]=d̘1ÆR]òçÏÿHÙç2ÕeÍš5Étƍ§Tmêüüþ(¦xP7Vú ì¥mBä?$©ºð{­õ"DNüò\Æ°ýÞFT]vîÜ)1àÍ„,¢øM˲ÉO]]]ueWÙ–Õ`ÕEÀËËKVúæ7¬º(…,cÑÑUTZH<ÑhQ]ÈUÓ˜A€œR¤Ha2ÿT¨k׮ɼ‰zU~çRÎœ9•Œýýýùæ’ -h"Õåúõë²·ÌÃÃÙȞÓÅÖ­[…¢®^½Ê¿Û±cÇ”T]|||$f¼™lo>X4¯º,[¶Ìà†‚êT^uUéóäÉcDÕEVm&lll’T]ø#ÆU]lmm5¶*¿Æ,_¾¼ðÖÔ©Sµ,*MªºðÂTÑ¢E•Œù­;ß}ɉ`,Õ¥páÂ꟒="Q]æÏŸŸL'Aý.Ènøù.©D ²Ç½eSí$*<|‘¨.ɼL¶«J64ŠÒþ1©.Ôµ$f²ùÐeS…æÏŸ?IÕE%‰$T@êª.²r ñøñcc©.4É~oÁ‚ÕU—ÐÐPþ§ uíÚuôèÑÓþC6b¼ÕEÖu”E6…´°ñ˜?ÝïÄM­ºðÛJ---•¥C‡ñÍ5xðàS]zõêÅŠzŽ8‹là]¸¹¹ E=|ø·zõê)©º°ãNꪋ8¸vÕåíÛ·²É)È=èҥ˨Q£Ø/E6uT º(©. ²qXX˜±T¥óüÓ‰êòôéSÙ ~4Ñ2fÌ6ôÕ©SÇ0ÕEéˆ4l iáp>Ú××7…UÞ…S”vîÜ©kgHª¨.k×®M¦“ ä zˆÒ~ I>1#FŒà?røðaYcÙh3FßëÂýÐïEvëNJª.ü(YÕå?þ0@uy÷î-øÒÈAêܹ³ØâÓBAu¤€ê"‚4šòªË† d·"óar'Nœh˜ê’7o^í ËÇÀ§àer'”Žö˜Nu©R¥Š–È6+W®ä§L™’2ª¹+ü¾q~ö”ݦ»wï^OÍ°°6QQQ²A¾ÕE6Ú^óæÍùì²Ï…¡ºÕEIu‘¬eß]u¡QZ]uY²d _Z×®]ù¬+Æ 3Lu‘Í­ŸBšZ€ßÛINWòo¢^Õ¥L™2Z"ÛÈn#‘žRQu‘‘iŽÓî$È~;-ÞK–,©¢c?¬ä_É&³–¤ãalܸѰ¸.ôÖ~™¬ªô»U$¾ÕE6›gÓ¦MÅOôf͚Րª¹@²QS¼¼¼R]u‘§Ì%°ÅèªË«W¯$ÕèbùçJ!ôuQ©R%I±µjÕR±8p ßîîî²Æ²¹öíÛ—ªËÞ½{mmmeÝɾSÙFJ ’D63ï$|¥ªËرcù¢‚‚‚ø¢x9ªP]TTZòy„‰+W®¤ºêÒ¯_?¾4q[†ì†X£«.| ikkë¡C‡JÊ\¸paòo"/¨èf‰ ¡Òhù,kLSŒ–É.uUY÷ƒfÃä·­–­¶õêÕ“ÝN,«H(:—Ý÷«%‡‘a[¢\Tg•nðÕ©.ÎÎÎ|Q·oßÖøs€ê0©ê2}útÙ …Ok’òªKϞ=%éҥヘ=}úTöx…ÑUÙ)RâŽÒ?5†çU§U«V’/Ê”)¿ÉGÝ‹–MÌŸï&G‘OlDÕ%!!Á××·}ûöJï.>yòDv§“–ZÑ­—̞²;gJ”(£TÿÐlUÙì“üFZbÈf.ƒêT¥5»¬¨Kð¹S^uá×æVVV|>hšhMÕ…à§9‰“@ó¯lšH½Ô¯_Ÿ?|§d/»­B6{õÝ»wynîǏÍJu!Š-*±)T¨Ò1'Ož”}Kv—õ€4Æyýú5ßztÓù'†Ô d1IT—çÏŸó6-[¶äû9OPPD3f_5#¿=LÅ2[ÕeðàÁZ6tQË󇡺Œ¥ºT«V퍈‡ž9s¦{÷î²Ï°hÚâ‹MyÕEV’hò´´”uo"Õ…O!dh/ÃprrâŸ}âí?|øÀG'þîKf"±ý³gÏdÏ"uêÔÉ°zʪ.äHú9*]ºtiܸ±RšršIe“\ScòƳfÍRr6âãã©WwíÚÕÒÒrݺuâ·nܸ!ÛÕ©5ø=Æä[Λ7ÚókQ]æΝ˵gÏÉEñRT º4lØPì$ÐØNËRÙÝqD¹råøbS^u‘U„$ç£###eƒº˜Hu‘C+¦]»vF¹‰²á>h­zýúuYí…֞²ñ¨Q£Ä{6=z$›d¹_¿~†ÕÓ¤ª‹l~ç6mÚ¨D·nÝ7n\æÌ™ÉæߥÙÐÊÊJv§ ¿û:]ºtüa|B6~ÚO?ý$>”ôüùóªU«Êö>ŸµìÖ£iÓ¦)ùBôº——] ùB+W®”8²G¼+UªD=GRNttôÂ…ùÙßlU—E‹ñEmß¾]rQ²Û㡺Œ¥ºèBöŒLÊ«.ÿý·ì> WWWšh±¼yóæ%J(]…)T—D¹Ò*ûÁ(Ïö¥|ÿò Ä¡f‘µ/]º´££ãÌ™3»téB³§lä‡QuÑ…­­íåË—e¿té’¬TR±bō7z{{?~üøöíÛôÇš5kú÷ï/ž‘%ªK¢r*êŒ36nÜxȐ!sæÌ$¿sIV”&åV­Z3¦M›6²í¬¤º\¹rEV*±··ß°a¹@=ºs玏ÏÚµk ö®%ªK¢rx^òyÈÿ|!ú?ýÍ<ƯEu¡;Ε%KjRê‡ä躹¹ñ·ª µT—²eËÊn5LyÕ%,,LV%à‘Í°`"ÕeûöíJÕ iTe—¦.ÈÑMÍ 9s$þH||¼Òfu–.]jp=“©ºh˜ú¤¯ºDDDȆTâ+R]È'äm´ÿR ºÕE;•*U’Ýr™òªKHHˆìùbCŸ‰Tš}”ªA.‡Ê! 
]Ð"‰!ÃCn›ø#t×øÔK‚cnªqîÜ9¥dCêðªŸô9]ºtgϞeׯ_ç¿«Y³füžÎ˜lXE^uIT~¬–$¼êòöí[Ùè.*|-ªË»wïòäÉcð€Õ’ªË?þøüùsÙbS^uÑèÖ¨QcòäÉ)¦ºßBŸ’­ÉСCxGŽ©KufœÆk¿Ý...É©¤ÁªKÆŒ‡¢©†1eÊÙ/zU—Ä/¡ÕŸB~¥ªK¢ÂÆ0 •+W–=µÕ¨.¡a$44T¶Ø”W]¢pð‡§dÁ˜Hu‘M!-0aÂ#ÞÇþýûëR]¿<}¨]»¶ö۝.]º¹sç&§’¦V]ˆýû÷kßTTwww-÷KVý˜7ožÄŒébÅŠ©WÀÊÊJV—U]ˆ3fàñªKâ—ø~|¢Ìo@uIü’%3ÉË©X±"rRQu)S¦Ì²eËd`¥¢êBLž<™þ*ö¦È”=Ûk"Õ%Q!3 ¡tRÆ0Þ¿?iÒ$þˆ±Šê’øeÇUÏÆÆ&É;^ @òU’YITrEh¾ÓNÐÓÓ“š§¾.ðõõ•-*::šÜoÙÈŠ_µê"¸d²ÎêÖ­ûòåË…Buª‹aeÖ®]+›·%UbÔ¨Q²ç/ØhùæÍ›‰'¦˜êBŒ7N¶2wïÞ5â}ŒŒŒ3fŒÊN^u!âââÔ]F‘"EŽ=šÌJ¦€êBÜ»wæ8흙*°~ýzöqº/|vEÙ=] |,AòQy—#88ØÞÞ^©¹sç>yòä­[·´«.ù|Æp~þùgoooÙ¢bbb¨—jñ…¾.Õ…˜={¶Ê^ñZµj=þ\V=ƒêÐË…ê'¹­[·¦ñdìرþùç;w’,ÖÁÁARˆlüöƍKÌ”fN:I,i9,kI«Brœ$sb•*UV®\Ÿøåp7âÀeäiHld…£$áSH÷%Tš)î&]Åš5kÈc¤›EwM\y•4‘ááá4gÕ®]›¯§]Ë–-7oÞ¬¢°i'$$D¶ƒ5jÔˆ:XçΝûõë7tèÐ &¬[·îüùóZ’ÈB7zÿþý]ºt)P €¬BR¦L™^½z‘WƧßâyýú5ÍËTO~—©……yJÔyoÓËË‹¿R‰Õ“7ãs>~ü˜7Û±c‡¬ÿ)1Sz`JßBÍ.yêG®ã²eË„µÒΝ;ù/å³nM›6Mb£1{Àü9sæL’NámÚ´éÞ½ûøñãÝÜÜø§<§‘¼fÍš6lNÐÄ/ŸczÔ¨Q›ž={ÐÂ| i¢N:¦¸›ô]«V­¢IæG‰“@n›ŠCã|5x­>[¶ltëÝÝݍr*((ˆoyªªú§\\\øO%™àØÏÏoðàÁåË—ç÷„Ð+… ¢™?þ¥K—$9r¤ä»4h ›q8ñK6@êlû>}úî¨DàZ´h‘ähsžTu0•\Ò_OŸ>¥_ʵk×”ÎÀ·ç$ÐGCߍ7"""R·2|â¤M›6™a£}üøñÞ½{ÿüóM‚W®\yò䉖”ÄfNddä;w|}}Ož”á!Ÿáýû÷htêÔIòkØ°ah´hÑBâ$8;;£YTðóósuu]°`A=J”(Á§ãyúô)Z Hƒøúú’“0þünݺñ±ßmllpÀ@yóæ©DwWÊ¿€o'aÒ¤Ih"uTT—úõëÇÆÆ¢‰€´‰ŠêÒ¢Eƒã½¤”T—–-[¦z¢¤"JªKûöí£¢¢Ð>I²zõêÂÿQºtéæÍ›;99?-¤q/^Ìœ„²e˶hÑbØ°a~~~h–ù—جǿÿùú£=ŒCtLâwþýïï£hãÕÀ@u0P]LTSÕÀ@u0P]LTSÕÀ@u0P]LA\\â ‰ÿþwíÚøfyûöíµk×®~áÁƒ†òþýûëׯ…ܹsǀBBBV­Z5vìØÞ½{5ÊÕÕÕ°ÊÄÆÆÞ¼yS¨ ]—a—xõ?>þl@!ñññwïÞe…DGGPHBBBPP+$**ʀk9|øðôéÓ|þ Ò‹TæéÓ§¬&¡¡¡”pþüùÙ³g;99õë×oòäÉ;vìøðáƒå„……±NœüžÿðáCJ .ºdÉê®Ôi©ë®\¹’º±å|üøñƍBMnݺeØåˆ{>ý (áÍ›7›7ož8qbŸ>}† 6oÞ<* #$à›„æhoÝÜÜ\ÿ# À€9š&úíÛ·³B.^¼hÀܺ`Á‚_~ù¥|ùòEŠ©Y³¦££ãÉ“'©†ºÊ¡Ùðï¿ÿ¦YI¨ Íõ4Ë‹/<È Ù³g…‡pôèÑeË– …lٲŰ)‰aùòåB!«W¯Ö[§OŸ<<<<|Íš5mÚ´ù駟¨«T­Zµ[·nT&yV*óîÝ;Ö/^LCJÉ’%-,,¾ûšô5B‹eZA/]º”Ö›åÊ•K—.+„†,]õyýúuǎÅ%0š5kFci’sЕ+WÖ¯_?hРʕ+g̘‘}œÊÔ>‘»EKNZlÖ¨Q#S¦Lâjhœ iV½s玻»ûÈ‘#i²±±BsœÆÊíܹsܸq 6´³³rêÔ)]mKSsÎœ9ù†Í•+—/($$„<–©S§’£’;wnq 4»éª ùN%J”àkbkk;sæÌ$u­ˆˆˆ'N̝;÷×_%§T\Âo¿ý¦±‘‘‘gϞ]´hQ×®]©2âžßªU+]—C~¹LüåP—ëÒ¥KXX˜úÇ£££É3_¹re¿~ýÈ´´´d%,XP»÷uùòåuëÖ 8°R¥J2d`…|ÿý÷º.‡~ÎÔóéSüQÉôû° øxöìÙÖ­[eçh‚†SNÍÑcǎ¥9:K–,’Bh)­Kù™3gŽxôS»vmõUùû÷ï80eÊš£iZ—|¼S§NZê@ëÊ#GŽÐDܶmÛüùóK ©_¿¾Æ9~ü¸0GÓ¢UR-Ì56ÍÑ.äçhâ‡~Ðu¯¯^½Z¾|yÙ†%ÇcÆ I~|ÅŠ4GW¨PA}T Ñ®º$$$ôíÛ÷;U RÂâÅ‹U>«Ku!'Ji‚f~Ê^ òåT>«Qu¹víšJt©.ÞÞÞJ´@Þ¼yïÞ½«¢–(ùغTÞÇ6Lu¡î]®\9•Ëɘ1#-10x3äÅëÿ‹ÁI )S¦¨ŒuUZŒ«¢KuéÙ³'?z‹ÿ™-[¶{÷î)}üܹs*5ѨºÜ¹sG}vÖRÈ›7oT Ñ®ºˆœIÐ¥ºœ?^"Pð&Mš¤RB•*UT®H»êL.dJ•x#*ûšúõë§RªËĉU Ñ®ºÄÅÅI$ºÉ]+R¤ÈË—/•Jؾ}»JM4ª.ä«¢Kuùã?ԃ䰝>}ƒ'@/ׯ_W©4ª.4œª¢]uyýúµxßBÆ iyNs%¹4´ fêPÎœ9U¶PÊîåЫºüøãÉW]7nœ|Õ…&£¨.bß2_¾|C‡=xðà¡C‡è±N¥âΝ;×(ªË®]»˜ƒagg×»wošsi # lÙ²¬ÀΝ;+•pøðáä«.þþþ*…hW]‚‚‚²gÏÎ:}pùòåÔi-ZÔ¨Q#V`‰%ÂÃÕT•šhW]²fÍš|Õ%66¶V­Zâ-Ó§O'Ož|°.]ºXYY±é1~³":&ñ»ÿþ÷÷Q´0DuË׫.âB´«.âI¶L™2ôí´B§1ùäÉ“ƒ Ï&JçYxÕ…æ¦ê¬ºØÚÚ²Áß`Õ…J r’©ºÐÔÃvüjW]ž={–#G¦ Œ;–*..Ž¦oºw… båoÛ¶M£êbiiɦ~íªÝÍ *ˆÕ€#GŽ|üø‘¼è-[¶T®\™½E‘vÕEì÷¬ºˆ;­vÕ…|Hö©jÕª¹¹¹½zõêÇä^’ÏÀÞª^½ºÒf^u¡Ëa^¢ÁªuÖóµ«.ûöíc_M âêêzûöíøøøþùÇÅÅ%}úôìgeØ9tTšhy5`À€5kÖÐj”M ºTšƒèƒ}ûö¥YŒ¨† êU]†ÂÌ#FHÞïµhß¾½ŠêBc&¹%4à/\¸–óƒ6Lu)\¸ð¯¿þ:gÎœãǏÓxk˜ê’?þ¶mÛΘ1ãðáÃÿûßÿS]hðoÑ¢ù‡û÷里Y¯êBNÛŒ”7o^‰{@I¾|ù„wiNQzŽ&8„Y²diР9-;vìðññÑ«º¼ÿž¹™2e’l["߃fmVæTTú8Íã4ãoÚ´‰º1ó©t©.äÑ‘Ÿ3pàÀµk×^¾|™\A½ªK›6mX…,X y—öî°aÃTTêœeË–íÕ«µ$5,õpTêù%K–ìÚµë¢E‹Îœ9Ó¿½ªË²eËX…©ËIܤƒ2ñ³|ùòzƒÕ˜›êBs´ƒƒÃÌ™3i¹JÆ ºäΝ»eË–S§N¥i+$$„f:½ªËëׯÙ'*?LMNQ’ež;wÎÎΎ¼¯qãÆíܹóþýû ÅŠÓ«ºØÚÚÖ­[wäÈ‘[·nV5jÔЫºXYYѧhŽ¦%ü7h6¡vÖ«ºdÎœ™æèAƒÑ½¸rå M—4_ëU]Äšyt’wØnUò…È‘-„üråÊõîÝ{éÒ¥çÏŸÿðáƒXmÓ¨º¬\¹ReFXXXñâÅÙ† ¥`†ää-Z´cǎóæÍ;qâDDD„———ªKÚµkG}ÉÓÓ“ºŸXmÓ¨º2— T©RTñ»Ô÷¨Ë±2ÉUSR]ÈkÖ¬Ù¤I“öíÛ'ô|¦¶iW]ÈjÔ¨ÑøñãwíÚD/’û¤Ku‰‹‹c?ÞA%h‰dð©%xûö­ŸŸ_LLŒøÅŸþY—êB÷õõ•LUÍ›7×¥º³a–_î °U$AÕ–-‡fCšîůû¡Wu¹pá‚DZ¡e¬^Õ…Vñ/^¼¿BŒ^Õåúõë’Ù\|¾F£êÂ6ÌÐ<"ÛnTUö$«[·n²…Ð,v÷î]šFÙ+tuzUrØGhªå ¨Lòƒ 
*ˆ¿N¬ò]»vMrð™‰9Uê$þþþ’Àhl“FÕE¼aFö{ÉY¥¢Ø.bÙ¨DdC®²äÙ%»eÚUú9‹_!ŸY—êÍv>—/_þÝ»w¼Xü¤þŒ!@u_/´Š—¼b€êÂb€ê"Þ{ {Œšœ"ö0ËÎÎNv»M‹ü¤©Wu¡/âu½ªUƒNb€ê·­^ÕåÙ³glσÒÚýû÷'éÌð5Ñ«ºPò­ÅäKoÛ·o3OXÉ™ákb€êÂb€êÂÄ‘o)ûÀŽÖÌ­*R¤ˆì“¾&¨.²=_¯êòçŸ&ÙLÁ£Ý,¡€d¢Wu‘E¯ê2gÎ6Ü)Åü§%9Ûû7tèP51@uá1@uá1@uáÑ«ºDFF²ù«K—.I*3d,»Ü–½zU—R¥J öeË–Õ¢Ì(ik<Ä<==5zDÏT¦qÕ…<Á¸iÓ¦=¢¬Y³šBu!o%hP·góÔ˜ åUò¥Y%4zDJaüS]uë™ÿûßÿ4zDã º¨.ÉT]„Œ‡Z4ˆ3!^¾|ª‹:BÆC7ªXÆÇdzt<=zô0ºêB*9:%ØÛÛ³œf«º”.]ZËÖbôèѬäW¯^™§ê">ÿ%I„!)ZŽí€yª.BVbõPºoÞ¼a–NNNP]ԍccc™q’MX¼B6º~2U6G—+WNÝÒÕÕUo@Ý”W]Ø1p"$$DÅRÐFc@Ý”W]Äg´·lÙ¢nœ'OÔP]T£¨.+VÔxnHœfEcЏ´¬º¸¹¹i?7”-[6Á²eË–FW]®]»¦ý‰aƒ Ë ˜­êÂâê×­[WÝò?þ`%kÌû“òªK·nÝX%•Bé Ð"…Y._¾c&àëR]&MšÄ±   Ë„„KKKÁ²oß¾P]ԍô‹TcǎeÆ*`V]Xؐ$O0ѪŸ•Lî„yª.½{÷Ö(R=þ\o|à”W]ĝóàÁƒêÆåÊ•Ó¨.ª‹,ŋן“g½aµÒ²ê²jÕ*Ö\çΝS7.Y²¤®kÔ¥ºx{{3cú ºq‡ËlÙ²™­êÂæ´k×NÝrõêÕz7h¥¼êÒºukÁØÆÆFÝòþýûìræÏŸ1ðu©.Ç׾Þg{çΝ¡º¨?~üXûz_œ¼[)rrT—Ì™3ÆmÛ¶U·<÷öö6OÕ…í/JÒé g—3mÚ4Œ™€¯Ku¨|ûö­ºq®\¹t¹iYuyøð!3ž:uªº±8?”–h*zU[[[Á¸}ûöê–ä6°’>lžªK»vícº.uËOŸ>±ËÑèB¤¼ê²råJVÉ.¨³=Ôb0TÕ%9ªK4j§OŸf%ïÙ³ª‹:ºÂà°s^5kÖ4ºê"N‘ó矪³´Iî»H-Õ%!!%ZJ2Žøœ×?ÿücžªK“&M4»¾“L€ù¨.ÎÎÎl{øð¡ºq†Ë>}ú@uQ7¦ª2ãaÆ©?ž‡‡‡]uÉ›7¯ÆìâñgÎœ1OÕ¥W¯^O¿|ù’YNž<ٝU’*Kq½õë×›§ê"äÝdaô4ÆV¨.J°(aIÆñ'|¤ªBuÑ>¯­ZµJÅRüˆjèСFW]ÄO¬n\ªT)éRQu©V­š`\¢D uK–ð‘œð?š§ê"Nê¡îvþý÷ßÌòСC3_—ê"ž·mÛ¦béççÇ,wïÞ ÕEûÌX¹reuK"Ucb½ª÷ý÷ß'$$h™£mllÔ-SQuÙ¹s§ÆDKâ9:É@µ©¥ºˆÄM™2Eã‚bÆŒ0P]’£ºˆW|ê¡íXêkkk“cZV]Äóšzh;qêu}Æ0Õ%Q´GT=éÝV;;;];ºSEuéÓ§{N§Þ™¨˜¤>“Šª‹Ø?W>#NÉDÝc&àëR]Ä3£úSñùPõ P]$3#Íb*Û&£££ÙöZGGGS¨.â6T?GF}FW*TQ]®_¿Î.gîܹ*–â”Ü‘‘‘橺ˆÛPýTûÑ£GÙåüõ×_0P]’£ºlÛ¶ÙïÝ»WÅ’E€Ñ>9¦eÕ%Q”J=F.Ýkv|||L¡º4mÚT0Κ5k\\œ’™8ŒFý'UTq~(™‚®”î”ÆL ©¨º\¹rEão–mÏ’%‹FåÌGu/x*¤bÆ’»åÊ•+>>ªK’ö›6mÒâÎyxx03777S¨.þþþZž:‰‘;ÖlU—ÏŸ?³$æµjÕR±,Q¢„`V±bE5IÕ…|rMNåyëàÁƒÙêÕ«W0P]’£º¼~ýš†SÁ¾wïÞZÖųCuaž¨…©•ÌZ¶l)˜ÙÙÙ}úôɪ‹Ø<}ú´’Ùĉ™Ùýû÷ÍVu?xš4i’’ÙÉ“' ˆ‚’òª-(˜@WµjU%³?Z[[f:tÀ€ øUq¸­k×®ÉÚ„……YZZê]\§qÕ…jËMŝ£ölȘÚÙª‹xTÑÄۙΝ;g¶ªÑ¯_¿$Ý9q¸{õ“;©®ºˆŸ·*LHH`Û¤5¦×¨.ê°¾9räøüù³¬Í¼yóX±/^„ê¢Å^|ÂwÓ¦M²6QQQ4Kê V¦Wuw-1mg*[¶¬Æš¤Šê"îZ*)† ÆŠ}öì™Ùª.â®eaañüùsY›hÏ橺ܾ}›=ëQʶ#Nm¬Ep€ê"кukvü–&VÞàÞ½{,è®FÁÁ0ÕE|zýèQ™Ü®oß¾eqäŠ-ªä|š‰êrúôé$·åôèуu¿ÀÀ@sV]Þ½{—5kV–:&&†·!gŒ]2ùœ-P]’¯ºˆEogggÞ ((({öì‚9iڏ6¤qÕ%66–¥„¦?^¾|ÉÛ°§Nê»P’©ºMš4a1ë®\¹ÂˆŸ:it«RQu2Z¹r%opéÒ%+++Á€Qí•IÕE¼»C‡üOìÍ›7L#ŸS=Õ˜­ê"^ŸÊ:*[¶laï6jÔH{M º\¾|™}¤D‰’­,ááá,§³……ÅÕ«WM§ºÐw1y‡þ4B\\;ø¬%qª«.‰¢#o²ÏÑfϞÍÞUÙhd&ªñÇ° ÓïQò®··7ó…Š)BÞ,FKT—ä«.ñññåË—WzŒ.^î)=³€ê¢ÄúõëYÓU¯^=::ZI:Ðåô ºˆc¶äÏŸ?$$Düî¡C‡ØÃÇÂ…kŸaSKuùðá;•cii)é–ÁÁÁ,»4]׍7Ì\uIÅl!&Nœ(qP™hF¸ººb´è…\‚úÿ[[[6©IÞZ¶l™Òì#±Ì–-“ô%o)å=yòäI®\¹Øò¿[·n´î&hïÞ½ŽŽŽlÎœ9³ŠdA3ŽäëØÀ+8búôé#[È?ÿü#±,^¼8+¤V­Zâ·Ú¶m«4…I {MU«V•¼+¦æÕ«W3{{{VÈO?ý$yWép““ûTÙ²eé¦ûùùùûûÓåÊ•coÑ:]¥«´jÕJü]5kÖ‹9’š\ºtI¶5kÖ°OÑŒ<~üøÓ§O.Y²¤N:ì-šÝTâö>\òu2d>˜'OÉ[;vì-dÚ´iKO˜z¯ä-¥ vׯ_gŸ¢:PGݳguZwwwšè©³+UÙ[»eËÉ×±Â÷ß/yKiS‡‡‡Ä²@ì×$yK)`2-˜ô$d§ûuïÞ=OOÏ#F°˜xä_ìڀùÀæ-(zeƒ­hŽ-äòåËl6¡A»R¥J“'O¦¸cǎl"A³¹Êå°ø$ZP:ôÚ·o_í…4H¶ &h/D)­Ïÿþ÷?í…(z%_¢M›6Ì,GŽ4;¯_¿ž&Sñ|—;wn•”4âø$IB÷Q©qØr¶É$ÿ™¼bª<;Nó¾···R ä~§‡‡Ê–£«¥x¹ÇŽKŸ>=sÈ=&¯rùòå­[·f™˜3gŽJ§»µI²víZÙBÄÏì’dæÌ™²…„„„ˆË¥K—&Ïgãƍ´X ßR¬Î©ÄC%ºuë¦}¤¢Õ®l!쩺ºté¢T™.0ÍGšŒh¨r94<ÉŸ?¿Êgi9€³E¨.ÆU]¿<6bÛAe!‡M}š†ê"Ëû÷ï5j¤òYZS«§V4–꒐0`À•ÏZ[[«ôsS]¿<6bQqdIr¿Šù¨.‰_¥©ÿœëÔ©£1%˜³ê’øåð²xσ˜Š+&¹Gª‹RËPù³gÏΔ)ÿ)++«ùóç'™ÊXªKâ—3ìl7”KKK''§>¨×ĬTâêÕ«*Tý`ýúõ=z¤~9f¥º$~ [­Ô8… :qâ†MT£«.‰_²øµoߞ¯UÉ’%iy›äå@uQ‚<œÅ‹ówŠüŸ>}ú$yrÜXª‹õñ2æÿÐí»{÷®úgÍMu!š4iŽG‰Ÿ <üöíۍ7<¸Íèú§FÝÞÛÛ[{MhÚR/.í¯¿þúý÷ß;tèТEGGGWWW¥¸ñ|÷Ð^“àà`ÙB¨j/„zx’µ¢.:qâDê®Ôi©ë’C«”KÂãǏµ×äúõë²…PÓi/Dý¡žÀƒæΝKîw³fÍ:vì8räHOOOø`jÂÂÂöíÛ·|ùrZåmÙ²Eˈ ´pþüùÍ›7Óì¶jÕ*TŒ‰J³¹»»û‚–.]ºwï^r_¿ê¶ 
Ù³gÏ’%KèŠèº¾êËyÿþýáÇW¯^M]…üd///ø?£óùs¢ëúÿzŒö0Ñ1‰ßø÷¿¿¢=ŒTðuñêÕ«¿þúíÀüê¾"Ž9’+W®2øûû£5˜9P]À×ÑQ££‡ÎÒÑ–.]úÇh¤q>}úýúÃàB^¾|ùè¡¡¡òäÉ“Û_xõêîüÿÅ,T`öT¨Pá;¿þúëÛ·oÑ2@šbáÂ…ß¡T©Râââ"R­ZµÔ½œ>}úŽM³fÍ.¤K—.B!ô‡Á…P„B¨JR¿~}¡#F¤nÃnܸÑõ~~~©[“õëׯþ;w.dó{öŒ\ߐ½å|úô‰>xõêÕ¯´Ž=zñâEu›{÷îÑ5¾yóÝæcîܹ‚?P¸paƒ™0a‚PÈO?ý”º—ÕÅDP÷jB&ukòý÷ß5Ù¼y³Á…07øï¿ÿNÅkyôè«É?ÿücX!OŸ>möô7Æ4F€êÌ–W¯^µjÕJ¼Å¥råÊwïÞEˈ ­[·î–-[¢££Ñ˜íïÔÙÙ™~ª  µ@õêÕ¼cǎ„„]åÐhkkËþ¹iÓ&éçÏ^¹s玖'¡aaa¦A]ßþþýûÛ·o‡‡‡›¢‰–-[æð-C™¥¥e:uÔm&MšD×xþüy៟>}¢Ê¿xñ½ñkªT¨.¦À(ª ŬúcH#@u执§gîٰܹœ.]:ò‘’s`ù[eÁ‚BåÈ‘#"" ÂóîÝ;4HE³dÉ"lÕkÚ´i×®]«U«–!CzEï˜&Q]vìØAãäîÝ»Ù+õêÕK²+胎ŽŽº¾ýÈ‘#Tçùó罉üýýÓ§O/eïß¿OÒ^‹ê2gκFv¸àÁƒTøo¿ý†ùµÕ…ªTSÕƒê̍˜˜š¬iíÀÆäüùóŸ>}-ӐP¼xq¡•4h€‘ðúõkZáÚÛÛÇÅÅ¡5@jQ¥Jú…N:UÜCCC—.]úùóg]EITª‹a˜Hu‰ýñÇ©Úùòå3¢ê"ªË7T¨.

%lr{÷îý-œÄ¼úJƒççÏŸé]òŸÅ¢½’@ãÂ…6l8zô¨XyöìÙŠ+„Å+_¬#Ñw:ujݺuôA¾æ*Lš4ÉÊÊêÞ½{ùóç׫º„……9rÄÍ͍wÝŸ?N5²Ý…††äó$'wÀ7Éÿþ÷?æê߸q b"^¿~-ŒfêéNž®‹°à"¸X±bìã%K–d?œáÇóå³ä°6l Š‰k¾hÑ"-QküüüÒ§O¿`Áú[¯êâîîN°/íÛ·oLLŒXÌaq]V®\ÉWþܹsèŸP] º@uêÕ¨. í,LZ·nÀ¹º a‘œÑ <ä¤K—Nh¢ºuëê\ú þÃ?ˆã&õë×ïÙ³ghSP¶lYjäæÍ›:tHé°› ºdË–­|ùò.\ˆŠŠòõõ-S¦½èáá¡Eu‰ŽŽ&Ø¢ZµjþCiƒ’êR°`ÁiÓ¦=xðÀÏϯGâ• õZ2Ð+'NdåÇÇÇÓ[{öì¡×‹+vêÔ©ÈÈHúÝýüóÏôÊŠ+Ô[FF«T©"œ´Ò¥ºØÙÙeÍšuçΝoÞ¼¹wï^§Nè³NNN²ªË»wï¼¼¼èŸ;wf•K4ªT¨.P] º`:bb×ø÷¿c^h =z4Ož}z‰êÒªU+É7nL¯¿~ý:eT—[·n‰Í„ $7oÞTQ]5£eË–’¯èÛ·/½îíí­T???KKË©S§²Wt©.ÖÖÖq^Hè¶uëV¨.P] º@uêÕAî=y;âå^ûöí8W#`íF£ATðôôdmÕ¶m[4ˆ„àààÞ½{³£X¥J•Ú·oÇè$$$x{{wïÞ]Èíææ&V]F-±0`½îëë›2ª‹{V²2e®µ¬ê"¼8|øpÉWÌ›7O²ELLLLù/ÄÆƪ¨.=zô¨/Âßߟ©.åÊ•“”¹wï^úø´iÓ º@uêÕªT«V­²²²b£®µµõºuëÐ,ÚiÑ¢…ÐtÔ›|6 "D¨Øµk„çêÕ«Mš4‘„­S§ŽÁnPgäÈ‘B°±ê2vìX‰ÙÀéuŸ”Q]¢££õª.‡¢ér$_!ì<±Í½{÷„ŒÌ’¯véxyÉŒ;räÈ÷,ãA5T©¿¥¥¥DpÒr±ÖU]úõ뇾ժT¨.P]àæøñãyóæÎ7nœx‡9Ђ³³3 ñüùs4ˆÜÝÝÅfÑ JÄÇÇoܸQ8ëÁȐ!¹Ðêa KLLK¯,FÈeܵkW±êBÎù‹/˜Mpp0µ|™2eØ+Iª.9sæÌ•+—‰T—K—.ñ?!ÆÕ\4éÝ»wY³fµ³³{óæö¶ÒMwǎâiùL¯3…P¢ºDFFÒ?kÕª…> ÕªT¨.P]à›$66vôèÑâÀ¹ùò壅ZF/Ÿ>}ʝ;·Ð†:t@ƒh‡Ë"Nœ8QádzfÍb¡_YþùóçKæ@Œ3öèÑÃÃÃãþýûÔzOž}úùóç¯^½ª)Ë0Õ%""‚ª'Ož 6\¾|™Ê¢.»¹¹ «Wz1>>žj^§NzeáÂ…ºÚJ—êbcc“#GŽcǎ}þü™.§ÿþ’DÕ…(T¨Ý‹eË–]¼x‘*¯å‹T¨.P] º@u€¯‚ÀÀÀŠ+Š—oíÚµÃssà%kÆãǏ£A´C«]Z± MW¬X1IàPÀóêÕ+''§ôéÓ‹¼´t¥UvBBÚGoÞ¼©R¥Š$X1Q @mÛ¶13Au™:uª6H{eaa1gÎqiIª.W¯^-Y²$û¥°*†©.Ć ²dÉÂʶñPgX´h‘°²z]¯‹‹‹ÞLXºT—:uê¬ZµŠšˆ…kÓ¦8Q5¯ºxzzŠ7[ž;wýªT¨.P] ºÀ7ÀêÕ«%s×®]‹f1˜† -Y¢D ¬|õ²lÙ2ÖÇŒƒÑ;wÚµk' ~þùgìUÓNhh聨ûM›6ÀãǏK²ªË¬Y³âããýýý—.]J#'KÙÌ ·Ä¹˜_¼xqæÌõEàóçÏ>ôññ¡·Äç•ÄÄÅÅÑ»âò¯_¿N¯Ð·‹Í66öÞ½{çΝ£·Þ¾}‹ž ÕªT¨.P]àk_h888HVj`“¹þe©w÷>Hü´¤fÍšBZZZúùù¡M4BÕêÕ«K´—-Z q’S]ШՅªTSÕ̍'NHçŽ;s“ɨQ£XšÑ2Œ[·ne̘QhF{{{½‡ Ò8»ví*V¬˜XxI—.££cHH'9@u@#P]x ºð@u1P]À|ˆ;v¬$p.‚—&Ÿèèè~øAhÒ=z Afúôé¬sΞ= ¢÷îêêš={v±öbmm=eÊqT  ¨.hªT¨.¦ ™ª‹p(•W]$Gn$ Ÿ?ÿü³x9æàà€]FaË–-¬U}||Ð ÉÑ Ê—/Ïv ݹsm¢—7oÞŒ7Ž9“¹råZµjUZØ>} €ÔªT¨.¦ ™ªËÖ­[oܸ!Q]6n܈‘

Protestors in Hong Kong Take to Tech to Publicize OccupyCentral and OccupyHK

28 September 2014 - 7:00pm

Amid reports of a crackdown by the Chinese Government on social media outlets like Instagram in mainland China to suppress distribution of images of student protests in Hong Kong, the hashtag #OccupyCentral has become one of the top trends on Twitter.

Images shared on Twitter (embedded in the original article) give a sense of the scale of the protests that have swept Hong Kong since China’s National People’s Congress Standing Committee ruled that only candidates approved by Beijing would be able to run in the elections for Hong Kong’s chief executive — despite earlier reforms opening voting to all Hong Kong citizens.

The Independent has a great synopsis of the background behind today’s protests, which have seen 34 people injured according to CNN.

The protest movement, officially called Occupy Central With Love & Peace, has an English language website, and earlier this month a White House petition sympathetic to the students’ goals was posted here.

News outlets sympathetic to Beijing’s position have responded to the student protest movement by questioning the group’s ties to the U.S., a charge that Joshua Wong, founder of the student movement "Scholarism," has denied. Wong was arrested by Hong Kong police two days ago as protests got underway.

Image by Flickr user Natasha Causse, under a Creative Commons license.

LibreSSL: More Than 30 Days Later

28 September 2014 - 7:00pm
LibreSSL: More Than 30 Days Later

Ted Unangst

tedu@openbsd.org

LibreSSL was officially announced to the world just about exactly five months ago. Bob spoke at BSDCan about the first 30 days. For those who weren't there, I'll quickly rehash some of that material. Also, it's always best to start at the beginning, but then I'll try to focus on some new material and updates.

LibreSSL is a fork of the popular OpenSSL crypto and TLS library. TLS is the standard name for the successor to SSL, that other secure transport protocol developed in the 90s. Most notably used for https, but also secure imap/smtp/etc. I'd guess there's far more traffic protected by TLS than any other means. There are several implementations around, but two that dominate, at least in the C universe. OpenSSL is the de facto standard for servers and many clients. The alternative is NSS, the Netscape Security Services library, used by both Firefox and Chrome, but not many other clients. Oh, if you didn't upgrade NSS last week, you should go do that.

Cryptography in general and TLS in particular are pretty difficult to get right. So a monoculture isn't strictly a bad thing. Put all your eggs in one basket, and then watch that basket very carefully, right? Unfortunately, the OpenSSL basket was being watched somewhat less than very carefully. Yeah, it has bugs, but surely somebody else will fix them. And worst case scenario, since everybody uses the same library, everybody will be affected by the bugs. Nobody wants to be alone.

Fast forward past a hundred other vulns to Heartbleed, also known as the worst bug ever, though that title is heavily contested. I hear even bash has announced it is entering the contest.

What was unusual about Heartbleed? It's a vulnerability, not an exploit, with a name (and a website!). Previously we'd seen The Internet Worm (when there was only one), then Code Red, Blaster, Stuxnet. They all used various exploits, but the vulnerabilities didn't have names. Heartbleed can't even be considered the worst OpenSSL vuln. Previous bugs have resulted in remote code execution. Anybody remember the Slapper worm? That worm exploited an OpenSSL bug (which was apparently titled the SSLv2 client master key buffer overflow) which revealed not only encrypted data or your private key, but also gave up a remote shell on the server, and then it propagated itself. Yeah, I'd say that's worse. But no headlines.

I mention this just to reinforce that LibreSSL is not the result of "the worst bug ever". I may call you dirty names, but I'm not going to fork your project on the basis of a missing bounds check.

Instead, libressl is here because of a tragic comedy of other errors. Let's start with the obvious. Why were heartbeats, a feature only useful for the DTLS protocol over UDP, built into the TLS protocol that runs over TCP? And why was this entirely useless feature enabled by default? Then there's some nonsense with the buffer allocator and freelists and exploit mitigation countermeasures, and we keep on digging and we keep on not liking what we're seeing. Bob's talk has all the gory details.

But why fork? Why not start from scratch? Why not start with some other contender? We did look around a bit, but sadly the state of affairs is that the other contenders aren't so great themselves. Not long before Heartbleed, you may recall Apple dealing with goto fail, aka the worst bug ever, but actually about par for the course.

What did we do? We gutted the junk. We started rewriting lots of functions. We added some cool new crypto support, for things like ChaCha20.

I could spend an hour explaining why supporting obsolete broken systems is detrimental, but if I told you all the things I have learned about VMS, it would probably violate your human rights. Instead, I think one example will suffice.

#ifdef FIONBIO
        /* working code */
#else
        /* crappy workaround */
#endif

In theory, this looks like we're going to use the good code on a posix system. But there's just one problem. The code is testing for FIONBIO, which is only defined if you include the right header. If you forget to include the right header, the compiler falls back to the workaround. The mere existence of workarounds means they can be picked up accidentally, and you'll never know.
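
As a concrete illustration (a hypothetical sketch, not the actual OpenSSL source), forgetting the one header that defines FIONBIO is enough to compile the fallback everywhere, with no warning:

    /* Hypothetical illustration of the trap, not OpenSSL's code. */
    #include <fcntl.h>      /* forgot <sys/ioctl.h>, which defines FIONBIO */

    static int set_nonblocking(int fd)
    {
    #ifdef FIONBIO
            int on = 1;
            return ioctl(fd, FIONBIO, &on); /* intended path; never compiled here */
    #else
            /* silently selected, because the #ifdef saw an undefined macro */
            int flags = fcntl(fd, F_GETFL, 0);
            return flags < 0 ? -1 : fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    #endif
    }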

OK, I lied. The socklen_t workaround is just too horrible to skip over. Also, I love this picture.

Here's a problem. You want to create a variable the same size as socklen_t. One fairly obvious solution would be to declare a variable of type socklen_t. That's not how OpenSSL does things, however. Instead, let's create a union of a couple different ints, call accept(), then inspect the different union members to determine which ones were overwritten by the kernel. Oh, and don't forget to check for big endian versus little endian.
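
For contrast, here is the obvious approach next to a caricature of the union trick. This is a simplified sketch of the pattern described above, not the verbatim OpenSSL code:

    #include <sys/socket.h>

    /* The obvious way: declare the variable with the type the kernel expects. */
    int accept_client(int srv)
    {
            struct sockaddr_storage ss;
            socklen_t len = sizeof(ss);
            return accept(srv, (struct sockaddr *)&ss, &len);
    }

    /* A caricature of the workaround: guess socklen_t's width after the fact. */
    int accept_client_guessing(int srv)
    {
            struct sockaddr_storage ss;
            union { size_t s; int i; } len;
            len.s = sizeof(ss);
            int fd = accept(srv, (struct sockaddr *)&ss, (socklen_t *)&len);
            /* ...then inspect len.s and len.i (minding endianness) to work out
             * which member the kernel actually wrote. */
            return fd;
    }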

The problem extends beyond just legacy compat code. Even new code in OpenSSL can be a byzantine mess. I'm going to point out two config options for OpenSSL, but they're pretty much all like this.

#define OPENSSL_NO_HEARTBEATS
#define OPENSSL_NO_BUF_FREELISTS

First, the naming convention alone reveals the on by default mentality. Everything is on, you have to pick and choose what to disable. Second, this makes testing for such options problematic. Old versions without the feature don't have the define to disable the feature. Even more bizarrely, this means that future releases of LibreSSL will have to track these defines and continue adding more of them for every new feature we decide not to import. We plan on not adding support for many more features to come.
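
A minimal sketch of why the negative naming bites downstream code (illustrative; not taken from any particular project):

    /* The OPENSSL_NO_* switches live in the generated configuration header. */
    #include <openssl/opensslconf.h>

    #ifndef OPENSSL_NO_HEARTBEATS
    /* Reached both by builds that really enable heartbeats and by old releases
     * that predate the feature entirely (and so never defined the off-switch).
     * The absence of the "NO_" macro proves nothing. */
    # define HAVE_TLS_HEARTBEATS 1
    #endif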

Slowing down development is a big part of what we're doing. We're trying to present a smaller target, not a bigger target. So we've applied the brakes on new development.

I'm going to pick on the FreeBSD guys here for a bit, but it's not their fault.

2014-08-06 OpenSSL advisory
2014-09-09 FreeBSD advisory

What were you guys doing? Oh, that's right. You were probably wading through the 13,000 lines of diff that OpenSSL decided to drop as part of the new release.

Projects need to consider how downstream users will actually deal with their copious volume of security patches. This goes back to the question of how heartbeats got into the ecosystem in the first place. Because nobody could keep track of what's going on.

That's not to say LibreSSL development is frozen. We've added support for a few new ciphers, notably chacha20. As we do so, though, we consider what new possible failure modes we may introduce. I wouldn't put a buffer overflow outside the realm of possibility when implementing a cipher, but it's pretty unusual. The inputs to a cipher are quite well defined with little room for error.

Let's look at another timeline.

May 5 - Remove libssl SRP
Jul 2 - CVE-2014-5139 (ssl) discovered
Jul 28 - Remove libcrypto SRP
Jul 31 - CVE-2014-5139 (crypto) discovered
Aug 6 - OpenSSL 1.0.1i
Aug 8 - Bug report SRP is broken

On May 5, I removed all the SRP code from libssl, along with kerberos and some other protocol extensions. The problem is that this code is integrated by sprinkling a dozen ifdefs and crazy nested if/else chains into functions that do some pretty critical things, like decide how to exchange keys between client and server. Auditing these 1000 line monoliths was clearly impossible, so we cut down to the basics. The SRP code in libcrypto was left alone because it wasn't directly in the way.

On July 2, OpenSSL received notification about a crash causing bug in the TLS SRP code, but this was not publicly known. On July 28, I deleted the libcrypto SRP code as well. In my commit message, I mentioned "hey there's a bug in here, but the details are secret." This was actually kind of misleading because the secret bug was in a different library, in code that had already been removed.

Three days later, on July 31, two researchers found a remotely exploitable buffer overflow in the libcrypto SRP code. This is like "throw a rock; you're guaranteed to hit something." Even when you point people in the wrong direction, they still find bugs.

On August 6, 1.0.1i was released with fixes for both of the above issues along with about 12800 lines of diff for other stuff. On August 8, a user reported that after upgrading, SRP no longer worked. The fix for the first issue was broken. The bug was embargoed for over a month, and nobody tested the fix.

What's the lesson here? Don't drop jumbo security patches on users. Anybody actually using SRP was in an unfortunate pickle here. They couldn't upgrade to get the fix for the buffer overflow, but instead had to pick it out from the rest. If the patches had been issued separately, as they were discovered, this never would have happened. Nobody's perfect, and I've flubbed a few patches myself, but that's exactly why you don't combine them. If the libssl patch had been released at the beginning of July, the regression would have been discovered and a correct fix hammered out long before the buffer overflow was discovered.

OK, I still haven't talked too much about what we have done since the last update. We've mostly stopped deleting code. There's still some scary code left, but the good news is it's code you need to run. There was a bit of a summer lull, some post-hackathon decompression, and then the OpenBSD 5.6 freeze. But progress is picking up again. Dig into a directory, open a few files, rewrite a function or two. Look at all the points where memory is allocated, and then make sure it is freed, exactly one time, no more, no less.

I'm sorry to make it all sound so tame, but avoiding excitement is all part of the plan. The first 30 days were all about revolution. Now we've switched into evolution mode.

Due to quirks of release timing, the first release of LibreSSL was the portable version. The first native OpenBSD version won't be released until November, when 5.6 comes out.

I personally do not work on the portable version, but I have kept my eye on it. A few notes on that. First, you'll be happy to know that libressl portable should work on all BSD systems. It works on some other systems, too, but enough about them. Most of the extended interfaces in OpenBSD are already shared with other BSDs. You shouldn't even need the portable build system, if you don't like it. The simple BSD makefile build system from OpenBSD will probably suit you better.

That's the good news. The bad news, for now, is that libressl uses the arc4random function to generate random numbers. Theo has a talk coming up all about that, but it's enough for me to say that the arc4random implementations in FreeBSD and NetBSD aren't quite state of the art anymore. I know there are some patches to update FreeBSD at least, but they've stalled out. You're going to want to pick that work up.

We're currently still targeting posix platforms. Windows support isn't out of the question.

I'd like to turn now to what I hope is the future. The initial reactions to the announcement of LibreSSL included a lot of "the OpenSSL API is so bad it's not worth preserving". I'm inclined to agree. We've preserved the API because that's what we needed to do to succeed, but we're not married to it long term.

Joel and I have been working on a replacement API for OpenSSL, appropriately entitled ressl. Reimagined SSL is how I think of it. Our goals are consistency and simplicity. In particular, we answer the question "What would the user like to do?" and not "What does the TLS protocol allow the user to do?". You can make a secure connection to a server. You can host a secure server. You can read and write some data over that connection.

A few goals. First, no OpenSSL types or functions are exposed. In fact, not even any ressl internals are exposed. You should never need to contemplate X.509 or ASN.1. Those are implementation details far beyond the level of caring of most developers or users. As a consequence of that, the API is easy for other languages to bind to. The ressl interface could almost equally well describe transport over ssh tunnels. What do you want? Do you want a secure connection? We give you a secure connection.

Perhaps more importantly, it allows the implementation to evolve and change. It's not actually tied to LibreSSL. The ressl library will work with OpenSSL, and can be adapted to use other implementations as well. Previous efforts at replacing OpenSSL usually ended up with compat shims that replicated the OpenSSL API. But that's terrible. If we're going to have a universal API, it needs to be a good one. And it needs to be sufficiently abstract so that others are welcome to the party. Clearly, we've made a claim that LibreSSL is, or at least can be, the best quality TLS stack you can get. But I think the ecosystem benefits by breaking the monoculture.

The ressl API does provide one noteworthy feature. Hostname verification. In order to make a secure TLS connection, you must do two things. Validate the certificate and its trust chain. Then verify that the hostname in the cert matches the hostname you've connected to. Lots of people don't do the latter because OpenSSL doesn't do the latter. You have to do it yourself, which requires knowing about things like CommonNames and SubjectAltNames. The good news is that popular bindings for languages like python and ruby include a function to verify the hostname. The bad news is if you pick a python or ruby project at random, they probably forget to do it. Another funny fact is that since everybody has to write this code themselves, everybody does it a little bit differently. Especially regarding handling of wildcard certificates and everybody's favorite, embedded nul bytes. Hostname verification is on by default in ressl, and the API is designed so that you always provide a hostname; there's no way to accidentally call the function that doesn't do verification.
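
For flavor, here is a minimal client sketch in the spirit of that API. The tls_* names are those of LibreSSL's libtls interface and are used here as an assumed stand-in for ressl, not as the author's code; the point to notice is that the hostname is a required argument to the connect call, so verification cannot be skipped by accident:

    /* Minimal client sketch. Assumed libtls-style names standing in for the
     * ressl interface described above; not the author's code. */
    #include <tls.h>
    #include <stdio.h>

    int main(void)
    {
            struct tls_config *cfg;
            struct tls *ctx;

            if (tls_init() != 0)
                    return 1;
            if ((cfg = tls_config_new()) == NULL || (ctx = tls_client()) == NULL)
                    return 1;
            if (tls_configure(ctx, cfg) != 0) {
                    fprintf(stderr, "configure: %s\n", tls_error(ctx));
                    return 1;
            }
            /* The hostname is part of the connect call itself; the library both
             * validates the certificate chain and matches the hostname. */
            if (tls_connect(ctx, "www.example.com", "443") != 0) {
                    fprintf(stderr, "connect: %s\n", tls_error(ctx));
                    return 1;
            }
            tls_close(ctx);
            tls_free(ctx);
            tls_config_free(cfg);
            return 0;
    }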

We have the advantage that we can evolve the client and server APIs in sync with at least two test programs, the OpenBSD ftp client and httpd. We're not nearly ready to call for third party support; that's probably a few months away.

Q: Where should we discuss changes to the ressl API?

A: The code is in cvs and can be discussed on the tech@ mailing list.

Songdo, South Korea: City of the Future?

28 September 2014 - 7:00pm

Jeffrey Tripp/Flickr

Speeding across one of the longest cable bridges in the world, jet-lagged but unwilling to close our eyes, we asked the taxi driver what he thought of the brand new city whose skyscrapers rose hopefully ahead of us. In his limited English, he responded without much enthusiasm: “It is nice.”

John Winthrop's famous sermon to the Massachusetts Bay colonists, as they approached what would become Boston in 1630, referred grandly to “a city upon a hill,” Winthrop's Christian vision of an ideal community. Nearly 400 years later, another exemplary city—this one secular, high-tech, and on the northwestern coast of South Korea—has appeared, on a landfill.

And there it was ahead of us.

Some of the developers of Songdo (which means “island of pine trees”) call it “The City of the Future.” Others have dubbed it “The World’s Smartest City” and “Korea’s High-Tech Utopia.” What, if anything, might such a city have in store for a tourist?

By the time we touched down at Incheon Airport in May, we knew Songdo’s short history. In 2000, it was still a marshy stretch of tidal flats in the Yellow Sea, home to a scattering of fishermen. Three years later, the Korean government filled it with 500 million tons of sand in an effort to build a business district near the international airport. (Seoul, the capital, is more than an hour away from the Incheon airport by bus; Songdo, an “aerotropolis” that boasts of being a short flight from one-third of the world’s population, is a mere 15-minute drive.) In addition to luring foreign business, the government hoped to create a sustainable city that demonstrated Korea’s technological prowess. Eleven years, $35 billion, and a few economic downturns later, Songdo has completed some 60 percent of its planned infrastructure and buildings, developers say, and reached a population of about 70,000—a third of the number expected by 2018, when the city will be “done.”

Songdo, from the 12th floor of the Sheraton Incheon (Ross Arbes)

Like most travelers, we spent our first night there in a hotel. Viewed from the 12th floor of the brand new, environmentally conscious Sheraton Incheon Hotel (the first LEED-certified hotel in South Korea), Songdo resembled an architect's model. Unlike the crowded and colorful streets of Seoul, the scene below was polished, spacious, sparse—not quite artificial, but not quite broken in yet either. It was more like the manifestation of a designer’s master plan than an evolved metropolis, with layers of lived-in depth. In the middle of the city—which, at 13,195 acres, is almost half the size of Boston proper—sat the 101-acre Central Park, where a few joggers enjoyed the morning sun. North of the park, a number of undulating, blue-glass skyscrapers towered over us. Beyond stood rows of plain concrete buildings and, farther still, large plots of dirt. Construction cranes swung in all directions.

Venturing into the busiest section of town for dinner, we struck up a conversation with an Australian pilot-trainer who spends two weeks a month in Songdo. “Is this the city of the future?” we asked. “I wouldn’t quite call it that,” he said. “Of course, this is a great place to be. And it’s unbelievable that it was all just a pile of sand 10 years ago.”

The city was built for a future that hasn’t yet arrived. Songdo’s wide sidewalks and roads—evoking a movie set—are still waiting for pedestrians and cars to fill them. (A number of music videos and television shows, most notably Psy’s “Gangnam Style,” have indeed been filmed in Songdo, taking advantage of its relative vacancy.) The quiet was almost eerie.

Songdo is connected by an underground system of pipes; garbage is sucked directly from people’s apartments into the “Third Zone Automated Waste Collection Plant,” where it is automatically processed. (Ross Arbes)

But this quiet lends itself to some nice surprises: You can hear birds, for instance. (Try that in Seoul.) An impressive 40 percent of the city will be park space—one of the highest percentages in the world, in keeping with Songdo's design as a green city. (New York City, by comparison, leads the United States with almost 20 percent green space.) There are bicycles everywhere: A significant portion of the residents are bike commuters, and they park their rides in long neat rows in front of their apartment buildings at night. There are lovely pedestrian thoroughfares flanking clothing boutiques and restaurants with outdoor seating. There are even small plots of land for urban farming, many of which were given to Songdo's former fishermen as reparation for the destruction of their fisheries. (Some now subsist as farmers.) Squinting at the green space, you could almost mistake the city for Portland, Oregon. Almost.

One morning, we found ourselves at Songdo’s waste-management center—the “Third Zone Automated Waste Collection Plant”—where we watched a video about the city’s garbage-removal system. Narrated like a Hollywood blockbuster, it explained that all of Songdo’s trash is sucked into underground pipes, and is automatically sorted and recycled, buried, or burned for fuel. These pipes connect all apartment buildings and offices; consequently, there are no street-corner trash cans or garbage trucks. Among the first of its kind in the world, the system currently requires just seven employees for the entire city.

Next, at a nearby office building, we learned about Songdo’s so-called “telepresence” system, which is currently being tested by 100 of the city’s residents. A joint venture by Cisco and the developers of the city, the system allows Songdo residents to sit in front of custom television screens and chat with English tutors in Hawaii or take fitness classes from instructors elsewhere in Korea. While the video-chatting technology itself was familiar to us—it's not a great leap from Skype—its integration into televisions and the subscription-oriented menu of classes was something we had not seen. (The system will be rolled out to 3,000 residents by year’s end, as well as some hotels.)

Finally, we visited Cisco’s Global Innovation Lab, where more technologies-in-development were on display, including mobile phone-controlled home appliances and even micro-chip tracking of Songdo’s children—so they don’t get lost. (Chips would be implanted in children's bracelets, bringing to mind a 1984-type future.) These and other technologies were being pitched to the Songdo government by Cisco, but had not been adopted yet, meaning that for the time being at least, the city’s children were free to sneak over to friends’ houses without fear of surveillance.

At Songdo’s U-Life Center, a wall of screens streams real-time footage from the CCTV cameras located throughout Songdo, so that government officials can monitor traffic and spot crime. (Ross Arbes)

This was all pretty slick, but where were the levitating buildings and flying cars we had envisioned? The city’s futurism was incremental, as it turned out, coexisting with the familiar and mundane. We had expected a city 25 or even 50 years ahead of the rest of the world; instead, Songdo felt like 2017—still the future, perhaps, but not the promised land of science fiction. There were mostly just subtle, somewhat odd differences from the cities of the present—for example, in Central Park, a small island filled with rabbits, a cordoned-off section with captive deer, and the occasional hidden speaker playing relaxing classical music.

Walking around Songdo with a guide, we had passed a vacant exhibition hall with a large “Tomorrow City” sign in front. In 2009, the place had apparently showcased the future—such as it was imagined then. But it was closed in 2011, our guide said, “because our predictions have been realized. Tomorrow is yesterday.” The exhibition hall's windows were dark, and no one we spoke with remembered exactly what technologies were featured inside. Were we merely visiting yesterday’s tomorrow?

Is this the future? A cordoned-off section of Songdo’s 101-acre Central Park features captive deer in front of ultra-modern residential buildings. (Ross Arbes)

In any case, Songdo did not fail to offer us Luddite pleasures, like the serenity of a canoe gliding through the park’s saltwater canal. The clear blue water reflected the green trees on the banks, behind which stood the tallest building in South Korea. There, the city felt like the “breath of fresh air” that Richard Nemeth, one of its designers, explained they had intended it to be, in contrast to stiflingly dense Asian cities.

Songdo offers a host of familiar transit options—buses, subways, pedestrian thoroughfares—but, on our last day, we chose the one most popular among its residents: bicycling. Pedaling the island’s 20 miles of paths, past the popular NC Cube outdoor shopping mall, past the tennis courts in the shadow of the Incheon bridge, past modest residential buildings on the outskirts of town, past the international campuses of the University of Utah and George Mason University, past fences covered in red roses, biotech labs, empty plots, half-completed buildings, and even a purveyor of Dippin' Dots ("ice cream of the future"), we finally arrived at the lonely sea wall where this unlikely and incomplete city began.

Tourists often seek out history and natural beauty, but here was a history-less and especially unnatural city. These qualities, in many ways, are actually what make Songdo appealing. It's not a utopia, nor a vision of the future; Songdo is “an ideal test bed,” as one Cisco employee put it, a massive blank slate, still able to become whatever the dreamers can convince the realists and the financiers to make it.

Rob Pike: Reflections on Window Systems (2008) [video]

28 September 2014 - 7:00pm
DGPis40 Talk Session 1: Intelligent Interfaces. "Reflections on Window Systems: A Personal History of Software Engineering" (KMDI Desire2Learn Capture Portal)

Material Design for Bootstrap

28 September 2014 - 1:00am

Inside the Starbucks at CIA HQ

28 September 2014 - 1:00am
By Emily Wax-Thibodeaux September 27 at 7:32 PM

The new supervisor thought his idea was innocent enough. He wanted the baristas to write the names of customers on their cups to speed up lines and ease confusion, just like other Starbucks do around the world.

But these aren’t just any customers. They are regulars at the CIA Starbucks.

“They could use the alias ‘Polly-O string cheese’ for all I care,” said a food services supervisor at the Central Intelligence Agency, asking that his identity remain unpublished for security reasons. “But giving any name at all was making people — you know, the undercover agents — feel very uncomfortable. It just didn’t work for this location.”

This purveyor of skinny lattes and double cappuccinos is deep inside the agency’s forested Langley, Va., compound.

Welcome to the “Stealthy Starbucks,” as a few officers affectionately call it.

Or “Store Number 1,” as the receipts cryptically say.

The baristas go through rigorous interviews and background checks and need to be escorted by agency “minders” to leave their work area. There are no frequent-customer award cards, because officials fear the data stored on the cards could be mined by marketers and fall into the wrong hands, outing secret agents.

It is one of the busiest Starbucks in the country, with a captive caffeine-craving audience of thousands of analysts and agents, economists and engineers, geographers and cartographers working on gathering intelligence and launching covert operations inside some of the most vexing and violent places around the world.

“Obviously,” one officer said, “we are caffeine-addicted personality types.”

Because the campus is a highly secured island, few people leave for coffee, and the lines, both in the morning and mid-afternoon, can stretch down the hallway. According to agency lore, one senior official, annoyed by the amount of time employees were wasting, was known to approach someone at the back of the line and whisper, “What have you done for your country today?”

This coffee shop looks pretty much like any other Starbucks, with blond wooden chairs and tables, blueberry and raspberry scones lining the bakery cases, and progressive folk rock floating from the speakers. (There are plans to redecorate, possibly including spy paraphernalia from over the decades.)

But the manager said this shop “has a special mission,” to help humanize the environment for employees, who work under high pressure, often in windowless offices, and can’t fiddle with their smartphones during downtime. For security, they have to leave them in their cars.

Amid pretty posters for Kenyan beans and pumpkin spice latte, nestled in the corner where leather armchairs form a cozy nook, the supervisor said he often hears customers practicing foreign languages, such as German or Arabic.

The shop is also the site of many job interviews for agents looking to move within the CIA, such as from a counterterrorism post to a nuclear non-proliferation gig. “Coffee goes well with those conversations,” one officer said.

The chief of the team that helped find Osama bin Laden, for instance, recruited a key deputy for the effort at the Starbucks, said another officer who could not be named.

One female agent said she occasionally runs into old high school and college friends in line at Starbucks. Until then, they hadn’t known they worked together. Such surprise reunions are not uncommon. Working at the agency is not something you e-mail or write Facebook posts about, she said.

Normally, during the day, the bestsellers are the vanilla latte and the lemon pound cake. But for officers working into the night, whether because of a crisis or because they are dealing with someone in a different time zone, double espressos and sugary Frappuccinos are especially popular.

“Coffee culture is just huge in the military, and many in the CIA come from that culture,” said Vince Houghton, an intelligence expert and curator at the International Spy Museum. “Urban myth says the CIA Starbucks is the busiest in the world, and to me that makes perfect sense. This is a population who have to be alert and spend hours poring through documents. If they miss a word, people can die.”

The nine baristas who work here are frequently briefed about security risks.

“We say if someone is really interested in where they work and asks too many questions, then they need to tell us,” the supervisor said.

A female barista who commutes from the District before sunrise said she initially applied to work for a catering company that services federal buildings in the region, not knowing where she might be assigned. She said she underwent extensive vetting “that was more than just a credit check.”

The 27-year-old woman was offered a job and told that she would be working in food services in Langley. On her first morning of work, she recalled, she put a location in her GPS and nothing came up. So she called the person who had hired her and got an explanation of the address. “Before I knew it, I realized I was now working for the Starbucks at the CIA,” she said.

Unfortunately, she can’t boast about where she works at parties. “The most I can say to friends is that I work in a federal building,” she said.

She said she has come to recognize people’s faces and their drinks. “There’s caramel-macchiato guy” and “the iced white mocha woman,” she said.

“But I have no idea what they do,” she added, fastening her green Starbucks apron and adjusting her matching cap. “I just know they need coffee, a lot of it.”

Emily Wax-Thibodeaux is a National staff writer who covers veterans affairs and the culture of government. She’s an award-winning former foreign correspondent who covered Africa and India for nearly a decade. She also covered immigration, crime, and education for the Metro staff.

Counting bytes fast

28 September 2014 - 1:00am
An apparently trivial and uninteresting task nonetheless received special optimization care within FSE: counting the bytes (or 2-byte shorts when using the U16 variant).

It seems a trivial task, and could indeed be realized by a single-line function, such as this one (assuming the count table is properly allocated and reset to zero):

while (ptr<end) count[*ptr++]++;
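
For reference, a fully self-contained version of this naive counter might look like the sketch below (a minimal illustration, not the FSE source; the function name count_bytes_naive is mine):

#include <stddef.h>
#include <string.h>

/* Naive byte histogram: one counter per possible byte value (0..255). */
static void count_bytes_naive(unsigned count[256], const unsigned char* ptr, size_t size)
{
    const unsigned char* const end = ptr + size;
    memset(count, 0, 256 * sizeof(unsigned));   /* "reset to zero" */
    while (ptr < end) count[*ptr++]++;          /* the single-line core loop */
}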

And it works. So what's the performance of such a small loop?
Well, when counting some random data, the loop performs at 1560 MB/s on the test system. Not bad.
But wait, data is typically not random, otherwise it wouldn't be compressible. Let's use a more typical compression scenario for FSE, with a distribution ratio of 20%. With this distribution, the counting algorithm works at 1470 MB/s. Still not bad, but why does it run slower? We are starting to notice a trend here.
So let's go to the other end of the spectrum, featuring highly compressible data with a distribution ratio of 90%. How fast does the counting algorithm run on such data? As you might have guessed, speed plummets, reaching a miserable 580 MB/s.

This is a 3x performance hit and, more importantly, it makes counting a sizable portion of the overall time to compress a block of data (recall that FSE targets speeds of 400 MB/s overall, so if counting alone costs that much, it drags the entire compression process down).

What is happening? This is where it becomes interesting: this is an example of CPU write-commit delay.

Because the algorithm writes into a table, this data can't be kept in registers: writing to a table cell means the result must eventually be committed to memory.
Of course, the data is probably cached in L1, and a clever CPU will not suffer any delay for this first write.
The situation becomes tricky for the following byte. In the 90% distribution example, there is a high probability of counting the same byte twice in a row. So when the CPU wants to add +1 to the appropriate table cell, the write-commit delay gets in the way: +1 means the CPU has to perform a read and then a write at this memory address. If the previous +1 is not yet fully committed, the cache makes the CPU wait a bit longer before delivering the data. And the impact is noticeable, as measured by the benchmark.

So, how can we escape this side effect?
A first idea: don't read and write the same memory address twice in a row. A proposed solution can be observed in the FSE_count() function, whose core loop (once cleaned up) is as follows:

Counting1[*ip++]++;
Counting2[*ip++]++;
Counting3[*ip++]++;
Counting4[*ip++]++;

The burden of counting bytes is now distributed over 4 tables. This way, when counting 2 identical consecutive bytes, they get added into 2 different memory cells, escaping the write-commit delay. Of course, with 5 or more identical consecutive bytes, the write-commit delay will still show up, but at least the latency has been put to use counting 3 other bytes instead of being wasted.
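
To make the structure concrete, here is a self-contained sketch of such a 4-table counter (my own illustration of the same idea, not the actual FSE_count() source; the function name, the tail handling, and the final merge are simplified):

#include <stddef.h>

/* Histogram split over 4 tables, so that 2 identical consecutive bytes
   update 2 different memory cells instead of the same one. */
static void count_bytes_4tables(unsigned count[256], const unsigned char* ip, size_t size)
{
    unsigned Counting1[256] = {0}, Counting2[256] = {0};
    unsigned Counting3[256] = {0}, Counting4[256] = {0};
    const unsigned char* const end  = ip + size;
    const unsigned char* const end4 = ip + (size & ~(size_t)3);

    while (ip < end4) {                     /* main loop: 4 bytes per iteration */
        Counting1[*ip++]++;
        Counting2[*ip++]++;
        Counting3[*ip++]++;
        Counting4[*ip++]++;
    }
    while (ip < end) Counting1[*ip++]++;    /* handle non-multiple-of-4 remainder */

    for (int i = 0; i < 256; i++)           /* regroup the 4 partial tables */
        count[i] = Counting1[i] + Counting2[i] + Counting3[i] + Counting4[i];
}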

The function is overall more complex: more tables, hence more memory to initialize, special-casing for input sizes that are not a multiple of 4, and a final pass to regroup all results, so intuitively there is a bit more work involved in this strategy. How does it compare with the naive implementation?

When compressing random data, FSE_count() reaches 1780 MB/s, which is already slightly better than the naive strategy. But obviously, that's not the target. It's when the distribution gets squeezed that the difference is largest: the 90% distribution is now counted at 1700 MB/s. The penalty is still there, but it is much smaller, and overall speed proves much more stable.

With an average speed above 1700 MB/s, counting may seem like a fast enough operation. But it is still the second largest contributor to overall compression time, gobbling up roughly 15% of the budget on its own. That's perceptible, and still a tad too much for such a simple task if you ask me. But barring another great find, it's the fastest solution I could come up with.