Hacker News from Y Combinator

Links for the intellectually curious, ranked by readers. // via fulltextrssfeed.com
Updated: 6 hours 33 min ago

Eccentric axe uses physics to make splitting wood a lot easier.

6 hours 33 min ago

If you've ever tried to split your own firewood, you know it's kind of a pain in the tookus. Swinging the axe with enough force to drive the wedge into the wood and also split said wood (rather than just getting the axe head stuck) is not easy. That's why lumberjacks have big arms.

So Finnish inventor Heikki Kärnä redesigned the axe. Instead of working as a wedge, his axe is a lever. And it's sort of mesmerizing to watch.

It works because the Kärnä axe's center of gravity is to the side, rather than in the center, of the blade.

Upon hitting the top of the log and penetrating it slightly, the leading edge of the axe head begins to slow down. Where the axe blade widens sharply, it stops the axe’s penetration. However, the mass of the axe head still has kinetic energy, and the offset center of gravity forces it to rotate eccentrically down towards the wood. This rotational movement causes the leading edge, or sharp edge, of the blade to turn in a lever action, forcing a split with all the force of the kinetic energy of the axe multiplied by the leverage of the axe head. The widening blade edge also has a benefit in that it helps to prevent the axe from penetrating into the wood and getting stuck there, as is often the case with traditional axes.

The 1.9 kg axe head has a significant amount of kinetic energy when it begins the rotational movement. While the centre of gravity of the head continues first to the right and then downwards, the edge moves in a rotational direction to the left. This movement uses the rotational torque to split a log and push it away from the wood. In total the edge opens the wood by 8 cm. When the axe has rotated sideways it has used most of its energy and ends up on top of the log, lying sideways. This safety feature ensures that the axe does not continue towards your legs and remains totally under control. In addition, the axe holds the log steady on the chopping block, ready for the next swing.
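For a rough sense of the numbers (a back-of-the-envelope sketch; the ~10 m/s head speed is an assumed figure, not one given by the manufacturer), the head's kinetic energy and the average prying force it could exert while opening the split by 8 cm are on the order of:

\[
E = \tfrac{1}{2} m v^{2} \approx \tfrac{1}{2}\,(1.9\,\mathrm{kg})(10\,\mathrm{m/s})^{2} \approx 95\,\mathrm{J},
\qquad
F \approx \frac{E}{d} \approx \frac{95\,\mathrm{J}}{0.08\,\mathrm{m}} \approx 1.2\,\mathrm{kN}.
\]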

Also, the official company "Tale of the Vipukirves Axe" is sort of hilarious, in a Lake Wobegon kind of way.

Throughout his arduous work the axe often swung close to the hard-working man's calves. The axe struck him more than once, but luckily the man was wearing protective overalls with his hems stuffed into his rubber boots. After receiving a few mighty blows from the axe, he was forced to toss his boots into the trash. When the hard day's work was over, the man collected all the resinous branches into one pile and the trunks, cut with a power saw, into another. They would wait to be cut into firewood.

“Darn it!” the man said in despair. “Making firewood is so much work, and it's dangerous too!”

He sat down on a stump, threw his gloves in the moss, wiped the sweat from his forehead and started cogitating. He grabbed the axe that the hardware salesman proclaimed to be the best on the market and began to examine the blade and the handle, turning the piece of metal in his sap-covered hands. Then it came to him.

”Eureka! I need to work on this!”

Video Link

Raindrop.io – Smart Bookmarks

6 hours 33 min ago

For creative ideas

Keep pictures from Dribbble, articles from Mashable, videos from YouTube, or anything else from the web. Create your own library of knowledge and inspiration!

Travellers

Places of interest, advice, tips, beautiful pictures, and travellers' reviews. Collect all the travel information you need from different sources.

Your project

Every project starts with collecting the required information and related material. Raindrop helps you keep it safe and always at hand.

All

Your personal space in the cloud for everything that you find on the Internet.
Fast and graceful!

Easy to save

Raindrop.io automatically recognizes the type of page and stores the associated content along with the bookmark.

Articles, links, photos, videos, presentations, everything will be securely stored in your collection!

Smart search

Write queries however you wish. No matter how many bookmarks you have, you will always find them easily!

More than just bookmarks

Share

Create public collections, and share bookmarks with the world!

Subscribe

Find and subscribe to interesting collections of other users.

Read comfortably

Concentrate on reading your favorite articles in a convenient way.

Start to collect!

10,000 users and 200,000 bookmarks.

Illumina Accelerator Program

6 hours 33 min ago

At Illumina, we’re committed to unlocking the power of the genome, but we know we can’t do it alone. The Illumina Accelerator Program is our way of catalyzing innovation in the entrepreneurial community. With extensive mentorship, financial support, and access to sequencing systems, reagents, and lab space, we’ve created a dynamic genomic ecosystem to help startups launch. Together we’ll advance the solutions that will transform medicine and improve human health.

The resources, mentorship, and momentum to make it happen.
  • Financial support, including $100,000 instrument access (MiSeq® System and NextSeq 500™ System), sequencing reagents, 20% research assistant time, $100,000 convertible notes, and an equity line of $20,000 or more
  • Fully operational plug-and-play accelerator lab space in close proximity to Illumina R&D labs
  • Validation of concept, technology, market, or application
  • Pitch preparation and access to our global customer network and established venture network
  • Extensive partner support, including financial modeling, forecasting, legal, recruiting, licensing, go to market strategy, and technical expertise
  • Bi-weekly workshops on industry trends, business models, and building companies, led by experienced entrepreneurs
  • Potential non-exclusive rights to Illumina Intellectual Property for your specific application

The description of the Illumina Accelerator Program presented on this website is for informational purposes only, and is subject to change. A full description of the Illumina Accelerator Program, the terms and conditions of participation, and the rights and obligations of Illumina and selected participants will be available in the near future.

Gabriel García Márquez, Literary Pioneer, Dies at 87

6 hours 33 min ago


Slide show (7 photos): Gabriel García Márquez, Novelist and Exponent of Magic Realism. Credit: Miguel Tovar/Associated Press

Gabriel García Márquez, the Colombian novelist whose “One Hundred Years of Solitude” established him as a giant of 20th-century literature, died on Thursday at his home in Mexico City. He was 87.

His death was confirmed by Cristobal Pera, his former editor at Random House.

Mr. García Márquez, who received the Nobel Prize for Literature in 1982, wrote fiction rooted in a mythical Latin American landscape of his own creation, but his appeal was universal. His books were translated into dozens of languages. He was among a select roster of canonical writers — Dickens, Tolstoy and Hemingway among them — who were embraced both by critics and by a mass audience.

“Each new work of his is received by expectant critics and readers as an event of world importance,” the Swedish Academy of Letters said in awarding him the Nobel.

Mr. García Márquez was considered the supreme exponent, if not the creator, of the literary genre known as magic realism, in which the miraculous and the real converge. In his novels and stories, storms rage for years, flowers drift from the skies, tyrants survive for centuries, priests levitate, and corpses fail to decompose. And, more plausibly, lovers rekindle their passion after a half century apart.

Magic realism, he said, sprang from Latin America’s history of vicious dictators and romantic revolutionaries, of long years of hunger, illness and violence. In accepting his Nobel, Mr. García Márquez said: “Poets and beggars, musicians and prophets, warriors and scoundrels, all creatures of that unbridled reality, we have had to ask but little of imagination. For our crucial problem has been a lack of conventional means to render our lives believable.”

“One Hundred Years of Solitude” would sell more than 20 million copies. The Chilean poet Pablo Neruda called it “the greatest revelation in the Spanish language since ‘Don Quixote.’ ” The novelist William Kennedy hailed it as “the first piece of literature since the Book of Genesis that should be required reading for the entire human race.”

Mr. García Márquez made no claim to have invented magic realism; he pointed out that elements of it had appeared before in Latin American literature. But no one before him had used the style with such artistry, exuberance and power. Magic realism would soon inspire writers on both sides of the Atlantic, most notably Isabel Allende in Chile and Salman Rushdie in Britain.

Suffering from lymphatic cancer, which was diagnosed in 1999, Mr. García Márquez devoted most of his subsequent writing to his memoirs. One exception was the novel “Memories of My Melancholy Whores,” about the love affair between a 90-year-old man and a 14-year-old prostitute, published in 2004.

In July 2012, his brother, Jaime, was quoted as saying that Mr. García Márquez had senile dementia and had stopped writing. But Jaime Abello, director of the Gabriel García Márquez New Journalism Foundation in Cartagena, said that the condition had not been clinically diagnosed.

Mr. Pera, the author’s editor at Random House Mondadori, said at the time that Mr. García Márquez had been working on a novel, “We’ll See Each Other in August,” but that no publication date had been scheduled. The author seemed disinclined to have it published, Mr. Pera said: “He told me, ‘This far along I don’t need to publish more.’ ”

Besides his wife, Mercedes, he is survived by two sons, Rodrigo and Gonzalo.


The Birth and Death of JavaScript [video]

6 hours 33 min ago
The Birth & Death of JavaScript — Destroy All Software Talks

A talk by Gary Bernhardt from PyCon 2014

This science fiction / comedy / absurdist / completely serious talk traces the history of JavaScript, and programming in general, from 1995 until 2035. It's not pro- or anti-JavaScript; the language's flaws are discussed frankly, but its ultimate impact on the industry is tremendously positive. For Gary's more serious (and less futuristic) thoughts on programming, try some Destroy All Software screencasts.

Another Big Milestone for Servo: Acid2

17 April 2014 - 7:00pm
jmoffitt


Servo, the next-generation browser engine being developed by Mozilla Research, has reached an important milestone by passing the Acid2 test. While Servo is not yet fully web compatible, passing Acid2 demonstrates how far it has already come.

Servo’s Acid2 Test Result

Acid2 tests common HTML and CSS features such as tables, fixed and absolute positioning, generated content, paint order, data URIs, and backgrounds. Just as an acid test is used to judge whether some metal is gold, the web compatibility acid tests were created to expose flaws in browser rendering caused by non-conformance to web standards. Servo passed the Acid1 test in August of 2013 and has rapidly progressed to pass Acid2 as of March 2014.

Servo’s goals are to create a new browser engine for modern computer architectures and security threat models. It is written in a new programming language, Rust, also developed by Mozilla Research, which is designed to be safe and fast. Rust programs should be free from buffer overflows, reusing already freed memory, and similar problems common in C and C++ code. On top of this added safety, Servo is designed to exploit the parallelism of modern computers making use of all available processor cores, GPUs, and vector units.

The early results are encouraging. Many kinds of browser security bugs, such as the recent Heartbleed vulnerability, are prevented automatically by the Rust compiler. In performance comparisons, many portions of the Web Platform that we have implemented are already substantially faster than traditional browsers in single-threaded mode, and multi-threaded performance is faster still.

Servo has a growing community of developers and is a great project for anyone looking to play with browsers and programming languages. Please visit us at the Servo project page to learn more.

Categories: Rust, Servo


The New Linode Cloud: SSDs, Double RAM and much more

17 April 2014 - 7:00pm
April 17, 2014 10:00 am

Over the last year, and very feverishly over the past five months, we’ve been working on a really big project: a revamp of the Linode plans and our hardware and network – something we have a long history of doing over our past 11 years. But this time it’s like no other. These upgrades represent a $45MM investment, a huge amount of R&D, and some exciting changes.

SSDs

Linodes are now SSD. This is not a hybrid solution – it’s fully native SSD servers using battery-backed hardware RAID. No spinning rust! And, no consumer SSDs either – we’re using only reliable, insanely fast, datacenter-grade SSDs that won’t slow down over time. These suckers are not cheap.

40 Gbps Network

Each and every Linode host server is now connected via 40 Gbps of redundant connectivity into our core network, which itself now has an aggregate bandwidth of 160 Gbps. Linodes themselves can receive up to 40 Gbps of inbound bandwidth, and our plans now go up to 10 Gbps outbound bandwidth.

Processors

Linodes will now receive Intel's latest high-end Ivy Bridge E5-2680 v2 full-power, server-grade processors.

New Plans

We’ve doubled the RAM on all Linode plans! We’ve also aligned compute and outbound bandwidth with the cost of each plan.

In other words, the number of vCPUs you get increases as you go through the plans. And on the networking side, Linodes are now on a 40 Gbit link, with outbound bandwidth that also increases through the plans. Inbound traffic is still free and restricted only by link speed (40 Gbps).

Plan         RAM     SSD       CPU       XFER    Outbound Bandwidth   Price
Linode 2G    2 GB    48 GB     2 cores   3 TB    250 Mbps             $0.03/hr | $20/mo
Linode 4G    4 GB    96 GB     4 cores   4 TB    500 Mbps             $0.06/hr | $40/mo
Linode 8G    8 GB    192 GB    6 cores   8 TB    1 Gbps               $0.12/hr | $80/mo
Linode 16G   16 GB   384 GB    8 cores   16 TB   2 Gbps               $0.24/hr | $160/mo
Linode 32G   32 GB   768 GB    12 cores  20 TB   4 Gbps               $0.48/hr | $320/mo
Linode 48G   48 GB   1152 GB   16 cores  20 TB   8 Gbps               $0.72/hr | $480/mo
Linode 64G   64 GB   1536 GB   20 cores  20 TB   10 Gbps              $0.96/hr | $640/mo
Linode 96G   96 GB   1920 GB   20 cores  20 TB   10 Gbps              $1.44/hr | $960/mo

And in case you missed it, we announced hourly billing recently, too.

Availability

All new Linodes will be created exclusively on the new Linode Cloud, using the new plan specs and on the new hardware and network.

Likewise, existing Linodes can upgrade free of charge via the “Pending Upgrades” link on your Linode’s Dashboard (bottom right); however, there are some temporary availability delays while we work through getting hundreds more machines into the pipeline:

  Datacenter    New Linodes   Upgrade Existing 64-bit   Upgrade Existing 32-bit
  Fremont, CA   Yes           Yes                       ETA 2 months
  Dallas, TX    Yes           Yes                       ETA 2 months
  Atlanta, GA   Yes           Yes                       ETA 2 months
  Newark, NJ    Yes           Yes                       ETA 2 months
  Tokyo, JP     Yes           ETA 3 weeks               ETA 2 months
  London, UK    ETA 1 week    ETA 1 week                ETA 2 months

Linodes that have configuration profiles that reference 32-bit kernels will need to wait while we ramp up 32-bit compatible availability. If you don’t want to wait, you can check out our switching kernels guide, or redeploy using a 64-bit distribution.

Also, new Linodes created on the new Linode cloud can only deploy 64-bit distributions, of which we support all popular versions. If you have a special need for legacy bitness, please open a support ticket and we’ll do our best to accommodate you.

TL;DR

Linode = SSDs + Insane network + Faster processors + Double the RAM + Hourly Billing

In conclusion………

HELL YEAH!

This is the largest single investment we’ve made in the company in our almost eleven year history. We think these improvements represent the highest quality cloud hosting available, and we’re excited to offer them to you. We have always been committed to providing upgrades for our customers and are excited about continuing our focus on simplicity, performance, and support.

Thank you for your continued loyalty and for choosing us as your cloud hosting provider.

Enjoy!

Dropbox acquires Hackpad (YC W12)

17 April 2014 - 7:00pm
Hackpad is teaming up with Dropbox! - hackpad.com

Hackpads are smart collaborative documents.


The Design Flaw That Almost Wiped Out an NYC Skyscraper

17 April 2014 - 7:00pm
601 Lexington in New York City.

Courtesy of Antonio Campoy via Flickr

Roman Mars’ podcast 99% Invisible covers design questions large and small, from his fascination with rebar to the history of slot machines to the great Los Angeles Red Car conspiracy. Here at The Eye, we cross-post new episodes and host excerpts from the 99% Invisible blog, which offers complementary visuals for each episode.

This week's edition—about the design flaw that almost wiped out one of New York City’s tallest buildings—can be played below. Or keep reading to learn more.

When it was built in 1977, Citicorp Center (later renamed Citigroup Center, now called 601 Lexington) was, at 59 stories, the seventh-tallest building in the world. You can pick it out of the New York City skyline by its 45 degree-angled top.

But it’s the base of the building that really makes the tower so unique. The bottom nine of its 59 stories are stilts.

A skyscraper on stilts.

Courtesy of Joel Werner

This thing does not look sturdy. But it has to be sturdy. Otherwise they wouldn’t have built it this way.

The architect of Citicorp Center was Hugh Stubbins, but most of the credit for this building is given to its chief structural engineer, William LeMessurier.

The design originated with the need to accommodate St. Peter’s Lutheran Church, which occupied one corner of the building site at 53rd Street and Lexington Avenue in midtown Manhattan. (LeMessurier called the church “a crummy old building … the lowest point in Victorian architecture." You can be the judge.)

The condition that St. Peter’s gave to Citicorp was that they build the church a new building in the same location. Provided that corner of the lot not be touched, the company was free to build their skyscraper around the church and in the airspace above it.

LeMessurier said he got the idea for the design while sketching on a napkin at a Greek restaurant.

Chief structural engineer William LeMessurier's napkin sketch of 601 Lexington.

Courtesy of David Billington

Here’s what’s going on with this building:

  • Nine-story stilts suspend the building over St. Peter’s church. But rather than putting the stilts in the corners, they had to be located at the midpoint of each side to avoid the church.

  • Having stilts in the middle of each side made the building less stable, so LeMessurier designed a chevron bracing structure—rows of eight-story V’s that served as the building’s skeleton.

  • The chevron bracing structure made the building exceptionally light for a skyscraper, so it would sway in the wind. LeMessurier added a tuned mass damper, a 400-ton device that keeps the building stable.

It was an ingenious, cutting edge design. And everything seemed just fine—until, as LeMessurier tells it, he got a phone call.

According to LeMessurier, in 1978 an undergraduate architecture student contacted him with a bold claim about LeMessurier’s building: that Citicorp Center could blow over in the wind.

The student (who has since been lost to history) was studying Citicorp Center and had found that the building was particularly vulnerable to quartering winds (winds that strike the building at its corners). Normally, buildings are strongest at their corners, and it’s the perpendicular winds (winds that strike the building at its faces) that cause the greatest strain. But this was not a normal building.

LeMessurier had accounted for the perpendicular winds, but not the quartering winds. He checked the math and found that the student was right. He compared the wind velocities the building could withstand against weather data and found that a storm strong enough to topple Citicorp Center hits New York City every 55 years.

But that’s only if the tuned mass damper, which keeps the building stable, is running. LeMessurier realized that a major storm could cause a blackout and render the tuned mass damper inoperable. Without the tuned mass damper, LeMessurier calculated, a storm powerful enough to take out the building hits New York every 16 years.

In other words, for every year Citicorp Center was standing, there was about a 1-in-16 chance that it would collapse.
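Read as a cumulative risk (a back-of-the-envelope gloss assuming independent years, not a calculation from the article), a 16-year return period compounds quickly:

\[
P(\text{storm within } n \text{ years}) \approx 1 - \left(1 - \tfrac{1}{16}\right)^{n},
\qquad
1 - \left(\tfrac{15}{16}\right)^{10} \approx 0.47.
\]

Over a single decade, that is close to a coin flip.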

View from below 601 Lexington.

Courtesy of Andrew Smith via Flickr

LeMessurier and his team worked with Citicorp to coordinate emergency repairs. With the help of the NYPD, they worked out an evacuation plan spanning a 10-block radius. They had 2,500 Red Cross volunteers on standby, and three different weather services employed 24/7 to keep an eye on potential windstorms. They welded throughout the night and quit at daybreak, just as the building occupants returned to work.

But all of this happened in secret, even as Hurricane Ella was racing up the eastern seaboard.

Hurricane Ella never made landfall. And so the public—including the building’s occupants—were never notified. And it just so happened that New York City newspapers were on strike at the time.

The story remained a secret until writer Joe Morgenstern overheard it being told at a party, and interviewed LeMessurier. Morgenstern broke the story in The New Yorker in 1995.

And that would have been the end of the story. But then this happened:

The BBC aired a special on the Citicorp Center crisis, and one of its viewers was Diane Hartley. It turns out that she was the student in LeMessurier’s story. She never spoke with LeMessurier; rather, she spoke with one of his junior staffers.

Hartley didn’t know that her inquiry about how the building handled quartering winds had led to any action on LeMessurier’s part. It was only after seeing the documentary that she began to learn about the impact that her undergraduate thesis had on the fate of Manhattan.

99% Invisible is distributed by PRX.

Boring Systems Build Badass Businesses

17 April 2014 - 7:00am

Let me tell you a story about systems do's and don'ts and how they relate to business success. Of all the mistakes I see businesses make, this is one of the most common. It's a critical failing that cripples or kills many businesses that could have otherwise been successful.

Background

Alice and Zola were rivals who both had dreams of building their own restaurant empires.

They each applied for and won $1 millon grants to open their restaurants - yay!

Alice's Build

Alice spent $500K to build a large restaurant and hired a handyman named Albert to lead the effort.

Albert was one of the most creative and smartest handymen in the world. Alice quizzed him directly from the manuals of all the top plumbing and electrical books, and he passed with flying colors!

So, when designing the plumbing and electrical systems for the restaurant, Albert chose all the most exciting and cutting edge technologies!

He put a different brand of plumbing system in each section of the restaurant because each area had slightly different needs. One system went in the bathrooms, the second went in the kitchen, the third went in the lobby, and the fourth went outside.

He was even more innovative with the electrical systems - and put in a total of 10 different systems throughout the restaurant.

They were now 6 months behind schedule, but Alice had the restaurant with the most innovative plumbing and electrical systems in the whole country. So, naturally, Alice and Albert busted open the champagne to celebrate!

Zola's Build

Zola's path to launch was a bit different.

She also spent $500K to build a large restaurant but hired a handyman named Zip to lead the effort.

Zip had a reputation for building simple systems that required little maintenance and just worked. Zola hired him based on his track record and they got to building.

When choosing the electrical and plumbing systems, Zip just chose the industry standard systems that had been around for years. These systems had great manuals and great companies backing them with plentiful support and spare parts.

Since they chose simple standard systems, they got done 2 months early and even had money left over to create a gorgeous atrium they knew the customers would love.

Alice's Run

When Alice's restaurant finally opened everything went great! Well, until the lunch rush. Then the power went out.

Albert spent the next 2 days without sleep trying to track down the problem. It turned out that the fancy electric toilets were used too frequently during the rush and burned out the relays in 4 of the 10 electrical systems.

Over the next 3 months the restaurant would open for a few days, then close to deal with some technical problem. Albert would heroically work nights and weekends to solve the problem so that the restaurant could stay open at least some of the time.

Alice was sooo grateful she had hired Albert since he was super smart and could always eventually figure out and fix even the toughest problems with the systems. In Alice's eyes, Albert was a real hero to work overtime to fix the problems.

However, Albert eventually got burned out and bored with Alice's restaurant, so he left. He figured all the problems were just bad luck - maybe next time he'd be luckier!

Alice now had to try to hire a replacement for Albert. The reputation of her restaurant was very poor now, so it was difficult to find applicants. Finally she found someone willing to take the job. Unfortunately he couldn't figure out the complex interactions between the systems since Albert hadn't left any notes. Alice hired more and more technicians to try and figure out the systems. Eventually after hiring 10 full-time technicians, they were able to figure out the systems and get them working again after a few months.

During that time, they discovered that 2 of the electrical systems and 1 of the plumbing systems had been abandoned by their creators and there was no longer support or parts for those systems. So, Alice had to hire 2 more technicians to support these now defunct systems.

All these technicians ate up the remaining money she had and made it impossible for her to ever get cash-flow positive.

The restaurant went bust and Alice decided to apply to grad school.

Zola's Run

Since the plumbing and electrical systems just worked, Zola was able to put all her focus into hiring great chefs, great entertainers, and great serving staff. She was able to innovate and come up with new exciting events and dishes for her dining guests.

Zip was rarely needed. He once fixed a cracked pipe, but it only took him 5 minutes. After a couple months he got another job and moved out of state.

Zola quickly found Zed as a replacement. He was eager to work there because of Zola's reputation and because he was very familiar with the standard systems they used.

Her restaurant's reputation grew every day and so did the demand to eat there. Soon there was nearly always an hour's wait to get in.

She still had $400K left from the grant and had earned another $1.2 million over the last year. With all that cash, she was able to start her true restaurant empire by opening another 2 restaurants.

Extreme?

This may seem like an extreme story - but I've seen much more drastic outcomes in the tech space.

I had a front-row seat to watch a company spend $14+ million on a system that was so complex and buggy it was eventually abandoned as a complete loss. In contrast, I was there when a startup scaled to over 100 million users with just a couple of good engineers and simple, standard systems.

If you follow tech news, you'll have heard of even more extreme scenarios - where the losses or wins were in the billions.

Common Objections

This makes sense for the underlying systems, but what about development of the actual products?

Build the most minimal solution you possibly can. See if customers like it, use it, and will pay enough for it. Only then build it into a full solution. Simplicity, great test coverage, and great documentation will ensure what you build retains its value long-term. You'll save a ton of time and money going this route, which you can then use to create even more profitable products for your customers. Always be asking "How can I do this faster, simpler, cheaper?"

But don't you want your developers to be engaged and working on interesting projects?

If your developers are desperate to play with novel technologies - just give them more time off work to play with their own projects. Google, 37Signals, and GitHub have all done this to great benefit. There are many ways to achieve developer happiness, but making your core business products a playground for developers seeking novelty is the path to hell.

But [some new unproven system] is really cool! Even [some big company] uses it!

Great! Then play with it to your heart's delight. However, do it on your own time. Don't jeopardize your business with it. Do you care more about playing with novel technologies than spending your energy and innovation on the products your customers actually care about? Remember, you're a business, not a college R&D lab.

But [some company] I know used a ton of crazy cool new tech and still got acquired for millions!

I've certainly seen this happen. However, often those companies are acquired for much less than they could have been and frequently dissolve once they've been bought. I worked for a startup that made these mistakes and lost roughly $2 million because of it (buying $1M of cool hardware they didn't need, hiring awesome data warehousing engineers when there were no data warehousing needs, etc.). They still got acquired, but for probably a third of what they could have fetched if they had spent that lost money on marketing and a better product. Within a year, the acquirer realized it had purchased a huge mess and dissolved the acquired company. Tens of millions of dollars down the drain.

Avoid the Pitfalls

In my contracting career, I've seen the inner workings of many different companies. Here are a couple rules to avoid the most common mistakes I see:

  • Innovate on your core product, not on your plumbing (this rule is extremely tempting for developers to break - see next rule)
  • Choose developers based on their track record and their commitment to ruthless simplicity and business growth

In the end, your business exists to create business value, not be a plumbing showcase.

Quantum Entanglement Drives the Arrow of Time

17 April 2014 - 7:00am

Coffee cools, buildings crumble, eggs break and stars fizzle out in a universe that seems destined to degrade into a state of uniform drabness known as thermal equilibrium. The astronomer-philosopher Sir Arthur Eddington in 1927 cited the gradual dispersal of energy as evidence of an irreversible “arrow of time.”

But to the bafflement of generations of physicists, the arrow of time does not seem to follow from the underlying laws of physics, which work the same going forward in time as in reverse. By those laws, it seemed that if someone knew the paths of all the particles in the universe and flipped them around, energy would accumulate rather than disperse: Tepid coffee would spontaneously heat up, buildings would rise from their rubble and sunlight would slink back into the sun.

“In classical physics, we were struggling,” said Sandu Popescu, a professor of physics at the University of Bristol in the United Kingdom. “If I knew more, could I reverse the event, put together all the molecules of the egg that broke? Why am I relevant?”

Surely, he said, time’s arrow is not steered by human ignorance. And yet, since the birth of thermodynamics in the 1850s, the only known approach for calculating the spread of energy was to formulate statistical distributions of the unknown trajectories of particles, and show that, over time, the ignorance smeared things out.

Now, physicists are unmasking a more fundamental source for the arrow of time: Energy disperses and objects equilibrate, they say, because of the way elementary particles become intertwined when they interact — a strange effect called “quantum entanglement.”

“Finally, we can understand why a cup of coffee equilibrates in a room,” said Tony Short, a quantum physicist at Bristol. “Entanglement builds up between the state of the coffee cup and the state of the room.”

Courtesy of Tony Short

A watershed paper by Noah Linden, left, Sandu Popescu, Tony Short and Andreas Winter (not pictured) in 2009 showed that entanglement causes objects to evolve toward equilibrium. The generality of the proof is “extraordinarily surprising,” Popescu says. “The fact that a system reaches equilibrium is universal.” The paper triggered further research on the role of entanglement in directing the arrow of time.

Popescu, Short and their colleagues Noah Linden and Andreas Winter reported the discovery in the journal Physical Review E in 2009, arguing that objects reach equilibrium, or a state of uniform energy distribution, within an infinite amount of time by becoming quantum mechanically entangled with their surroundings. Similar results by Peter Reimann of the University of Bielefeld in Germany appeared several months earlier in Physical Review Letters. Short and a collaborator strengthened the argument in 2012 by showing that entanglement causes equilibration within a finite time. And, in work that was posted on the scientific preprint site arXiv.org in February, two separate groups have taken the next step, calculating that most physical systems equilibrate rapidly, on time scales proportional to their size. “To show that it’s relevant to our actual physical world, the processes have to be happening on reasonable time scales,” Short said.

The tendency of coffee — and everything else — to reach equilibrium is “very intuitive,” said Nicolas Brunner, a quantum physicist at the University of Geneva. “But when it comes to explaining why it happens, this is the first time it has been derived on firm grounds by considering a microscopic theory.”

If the new line of research is correct, then the story of time’s arrow begins with the quantum mechanical idea that, deep down, nature is inherently uncertain. An elementary particle lacks definite physical properties and is defined only by probabilities of being in various states. For example, at a particular moment, a particle might have a 50 percent chance of spinning clockwise and a 50 percent chance of spinning counterclockwise. An experimentally tested theorem by the Northern Irish physicist John Bell says there is no “true” state of the particle; the probabilities are the only reality that can be ascribed to it.

Quantum uncertainty then gives rise to entanglement, the putative source of the arrow of time.

When two particles interact, they can no longer even be described by their own, independently evolving probabilities, called “pure states.” Instead, they become entangled components of a more complicated probability distribution that describes both particles together. It might dictate, for example, that the particles spin in opposite directions. The system as a whole is in a pure state, but the state of each individual particle is “mixed” with that of its acquaintance. The two could travel light-years apart, and the spin of each would remain correlated with that of the other, a feature Albert Einstein famously described as “spooky action at a distance.”
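A standard two-particle example (a textbook illustration, not drawn from the article) makes the pure-versus-mixed distinction concrete: the pair as a whole occupies a definite, pure state, yet tracing out one particle leaves the other described by a 50/50 mixture:

\[
|\psi\rangle = \frac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle_{A}|{\downarrow}\rangle_{B} - |{\downarrow}\rangle_{A}|{\uparrow}\rangle_{B}\bigr),
\qquad
\rho_{A} = \mathrm{Tr}_{B}\,|\psi\rangle\langle\psi| = \tfrac{1}{2}\bigl(|{\uparrow}\rangle\langle{\uparrow}| + |{\downarrow}\rangle\langle{\downarrow}|\bigr).
\]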

“Entanglement is in some sense the essence of quantum mechanics,” or the laws governing interactions on the subatomic scale, Brunner said. The phenomenon underlies quantum computing, quantum cryptography and quantum teleportation.

Dmitry Rozhkov

Seth Lloyd, an MIT professor, came up with the idea that entanglement might explain the arrow of time in the 1980s while in graduate school.

The idea that entanglement might explain the arrow of time first occurred to Seth Lloyd about 30 years ago, when he was a 23-year-old philosophy graduate student at Cambridge University with a Harvard physics degree. Lloyd realized that quantum uncertainty, and the way it spreads as particles become increasingly entangled, could replace human uncertainty in the old classical proofs as the true source of the arrow of time.

Using an obscure approach to quantum mechanics that treated units of information as its basic building blocks, Lloyd spent several years studying the evolution of particles in terms of shuffling 1s and 0s. He found that as the particles became increasingly entangled with one another, the information that originally described them (a “1” for clockwise spin and a “0” for counterclockwise, for example) would shift to describe the system of entangled particles as a whole. It was as though the particles gradually lost their individual autonomy and became pawns of the collective state. Eventually, the correlations contained all the information, and the individual particles contained none. At that point, Lloyd discovered, particles arrived at a state of equilibrium, and their states stopped changing, like coffee that has cooled to room temperature.

“What’s really going on is things are becoming more correlated with each other,” Lloyd recalls realizing. “The arrow of time is an arrow of increasing correlations.”

The idea, presented in his 1988 doctoral thesis, fell on deaf ears. When he submitted it to a journal, he was told that there was “no physics in this paper.” Quantum information theory “was profoundly unpopular” at the time, Lloyd said, and questions about time’s arrow “were for crackpots and Nobel laureates who have gone soft in the head,” as he remembers one physicist telling him.

“I was darn close to driving a taxicab,” Lloyd said.

Advances in quantum computing have since turned quantum information theory into one of the most active branches of physics. Lloyd is now a professor at the Massachusetts Institute of Technology, recognized as one of the founders of the discipline, and his overlooked idea has resurfaced in a stronger form in the hands of the Bristol physicists. The newer proofs are more general, researchers say, and hold for virtually any quantum system.

“When Lloyd proposed the idea in his thesis, the world was not ready,” said Renato Renner, head of the Institute for Theoretical Physics at ETH Zurich. “No one understood it. Sometimes you have to have the idea at the right time.”

Lidia del Rio

As a hot cup of coffee equilibrates with the surrounding air, coffee particles (white) and air particles (brown) interact and become entangled mixtures of brown and white states. After some time, most of the particles in the coffee are correlated with air particles; the coffee has reached thermal equilibrium.

In 2009, the Bristol group’s proof resonated with quantum information theorists, opening up new uses for their techniques. It showed that as objects interact with their surroundings — as the particles in a cup of coffee collide with the air, for example — information about their properties “leaks out and becomes smeared over the entire environment,” Popescu explained. This local information loss causes the state of the coffee to stagnate even as the pure state of the entire room continues to evolve. Except for rare, random fluctuations, he said, “its state stops changing in time.”

Consequently, a tepid cup of coffee does not spontaneously warm up. In principle, as the pure state of the room evolves, the coffee could suddenly become unmixed from the air and enter a pure state of its own. But there are so many more mixed states than pure states available to the coffee that this practically never happens — one would have to outlive the universe to witness it. This statistical unlikelihood gives time’s arrow the appearance of irreversibility. “Essentially entanglement opens a very large space for you,” Popescu said. “It’s like you are at the park and you start next to the gate, far from equilibrium. Then you enter and you have this enormous place and you get lost in it. And you never come back to the gate.”

In the new story of the arrow of time, it is the loss of information through quantum entanglement, rather than a subjective lack of human knowledge, that drives a cup of coffee into equilibrium with the surrounding room. The room eventually equilibrates with the outside environment, and the environment drifts even more slowly toward equilibrium with the rest of the universe. The giants of 19th century thermodynamics viewed this process as a gradual dispersal of energy that increases the overall entropy, or disorder, of the universe. Today, Lloyd, Popescu and others in their field see the arrow of time differently. In their view, information becomes increasingly diffuse, but it never disappears completely. So, they assert, although entropy increases locally, the overall entropy of the universe stays constant at zero.

“The universe as a whole is in a pure state,” Lloyd said. “But individual pieces of it, because they are entangled with the rest of the universe, are in mixtures.”

One aspect of time’s arrow remains unsolved. “There is nothing in these works to say why you started at the gate,” Popescu said, referring to the park analogy. “In other words, they don’t explain why the initial state of the universe was far from equilibrium.” He said this is a question about the nature of the Big Bang.

Despite the recent progress in calculating equilibration time scales, the new approach has yet to make headway as a tool for parsing the thermodynamic properties of specific things, like coffee, glass or exotic states of matter. (Several traditional thermodynamicists reported being only vaguely aware of the new approach.) “The thing is to find the criteria for which things behave like window glass and which things behave like a cup of tea,” Renner said. “I would see the new papers as a step in this direction, but much more needs to be done.”

Some researchers expressed doubt that this abstract approach to thermodynamics will ever be up to the task of addressing the “hard nitty-gritty of how specific observables behave,” as Lloyd put it. But the conceptual advance and new mathematical formalism is already helping researchers address theoretical questions about thermodynamics, such as the fundamental limits of quantum computers and even the ultimate fate of the universe.

“We’ve been thinking more and more about what we can do with quantum machines,” said Paul Skrzypczyk of the Institute of Photonic Sciences in Barcelona. “Given that a system is not yet at equilibrium, we want to get work out of it. How much useful work can we extract? How can I intervene to do something interesting?”

Sean Carroll, a theoretical cosmologist at the California Institute of Technology, is employing the new formalism in his latest work on time’s arrow in cosmology. “I’m interested in the ultra-long-term fate of cosmological space-times,” said Carroll, author of “From Eternity to Here: The Quest for the Ultimate Theory of Time.” “That’s a situation where we don’t really know all of the relevant laws of physics, so it makes sense to think on a very abstract level, which is why I found this basic quantum-mechanical treatment useful.”

Twenty-six years after Lloyd’s big idea about time’s arrow fell flat, he is pleased to be witnessing its rise and has been applying the ideas in recent work on the black hole information paradox. “I think now the consensus would be that there is physics in this,” he said.

Not to mention a bit of philosophy.

According to the scientists, our ability to remember the past but not the future, another historically confounding manifestation of time’s arrow, can also be understood as a buildup of correlations between interacting particles. When you read a message on a piece of paper, your brain becomes correlated with it through the photons that reach your eyes. Only from that moment on will you be capable of remembering what the message says. As Lloyd put it: “The present can be defined by the process of becoming correlated with our surroundings.”

The backdrop for the steady growth of entanglement throughout the universe is, of course, time itself. The physicists stress that despite great advances in understanding how changes in time occur, they have made no progress in uncovering the nature of time itself or why it seems different (both perceptually and in the equations of quantum mechanics) than the three dimensions of space. Popescu calls this “one of the greatest unknowns in physics.”

“We can discuss the fact that an hour ago, our brains were in a state that was correlated with fewer things,” he said. “But our perception that time is flowing — that is a different matter altogether. Most probably, we will need a further revolution in physics that will tell us about that.”

Go Performance Tales

17 April 2014 - 7:00am

This entry is cross-posted on the datadog blog. If you want to learn more about Datadog or how we deal with the mountain of data we receive, check it out!

The last few months I've had the pleasure of working on a new bit of intake processing at Datadog. It was our first production service written in Go, and I wanted to nail the performance of a few vital consumer, processing, and scheduling idioms that would form the basis for future projects. I wrote a lot of benchmarks and spent a lot of time examining profile output, learning new things about Go, and relearning old things about programming. Although intuition can be a flawed approach to achieving good performance, learning why you get certain behaviors usually proves valuable. I wanted to share a few of the things I've learned.

Use integer map keys if possible

Our new service was designed to manage indexes which track how recently metrics, hosts, and tags have been used by a customer. These indexes are used on the front-end for overview pages and auto-completion. By taking this burden off of the main intake processor, we could free it up for its other tasks, and add more indexes to speed up other parts of the site.

This stateful processor would keep a history of all the metrics we've seen recently. If a data point coming off the queue was not in the history, it'd be flushed to the indexes quickly to ensure that new hosts and metrics would appear on site as soon as possible. If it was in the history, then it was likely already in the indexes, and it could be put in a cache to be flushed much less frequently. This approach would maintain low latency for new data points while drastically reducing the number of duplicate writes.

We started out using a map[string]struct{} to implement these histories and caches. Although our metric names are generally hierarchical, and patricia tries/radix trees seemed a perfect fit, I couldn't find nor build one that could compete with Go's map implementation, even for sets on the order of tens of millions of elements. Comparing lots of substrings as you traverse the tree kills its lookup performance compared to the hash, and memory-wise, 8-byte pointers mean you need pretty large matching substrings to save space over a map. It was also trickier to expire entries to keep memory usage bounded.

Even with maps, we were still not seeing the types of throughput I thought we could achieve with Go. Map operations were prominent in our profiles. Could we get any more performance out of them? All of our existing indexes were based on string data which had associated integer IDs in our backend, so I benchmarked the insert/hashing performance for maps with integer keys and maps with string keys:

BenchmarkTypedSetStrings     1000000      1393 ns/op
BenchmarkTypedSetInts       10000000       275 ns/op

This looked pretty promising. Since the data points coming from the queue were already normalized to their IDs, we had the integers available for use as map keys without having to do extra work. Using a map[int]*Metric instead of a map[string]struct{} would give us that integer key we knew would be faster while keeping access to the strings we needed for the indexes. Indeed, it was much faster: the overall throughput doubled.
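A minimal benchmark sketch of that comparison (illustrative names and key shapes, not the actual Datadog benchmark code):

package mapkeys

import (
	"strconv"
	"testing"
)

const numKeys = 1 << 16

// BenchmarkStringKeySet inserts into a string-keyed set, paying the full
// string hashing cost on every assignment.
func BenchmarkStringKeySet(b *testing.B) {
	keys := make([]string, numKeys)
	for i := range keys {
		keys[i] = "app.some.metric.name." + strconv.Itoa(i)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		set := make(map[string]struct{}, numKeys)
		for _, k := range keys {
			set[k] = struct{}{}
		}
	}
}

// BenchmarkIntKeySet does the same insertion work keyed on the pre-normalized
// integer IDs, which hash far more cheaply.
func BenchmarkIntKeySet(b *testing.B) {
	for i := 0; i < b.N; i++ {
		set := make(map[int]struct{}, numKeys)
		for k := 0; k < numKeys; k++ {
			set[k] = struct{}{}
		}
	}
}

Running go test -bench=. over a harness like this gives per-op numbers comparable to the ones above; the exact ratio depends on key length and, as the next section shows, on AES-NI support.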

AES-NI processor extensions really boost string hash performance

Eventually, we wanted to add new indexes which track recently seen "apps". This concept is based on some ad-hoc structure in the metric names themselves, which generally looked like "app.throughput" or "app.latency". We had associated backend IDs for apps, so we restored the string-keyed map for them, and overall throughput dropped like a stone. Predictably, the string map assignment in the app history, which we already knew to be slow, was to blame:

In fact, the runtime·strhash → runtime·memhash path dominated the output, using more time than all other integer hashing and all of our channel communication. This is illustrated proof, if proof were needed, that one should prefer structs to maps wherever a simple collection of named values is required.

Still, the strhash performance here seemed pretty bad. How did hashing take up so much more time under heavy insertion than all other map overhead? These were not large keys. When I asked about improving string hash performance in #go-nuts, someone tipped me off to the fact that since Go 1.1, runtime·memhash has a fast-path that uses the AES-NI processor extensions.

A quick grep aes /proc/cpuinfo showed that the AWS c1.xlarge box I was on lacked these. After finding another machine in the same class with them, throughput increased by 50-65% and strhash's prominence was drastically reduced in the profiles.

Note that the string vs. int profiles on sets above were done on a machine without AES-NI support. It goes without saying that these extensions would bring those results closer together.

De-mystifying channels

The queue we read from sends messages which contain many individual metrics; in Go terms you can think of a message like type Message []Metric, where the length is fairly variable. I made the decision early on to standardize our unit of channel communication on the single metric, as they are all the same size on the wire. This allowed for much more predictable memory usage and simple, stateless processing code. As the program started to come together, I gave it a test run on the production firehose, and the performance wasn't satisfactory. Profiling showed a lot of time spent in the atomic ASM wrapper runtime·xchg (shown below) and runtime·futex.

These atomics are used in various places by the runtime: the memory allocator, GC, scheduler, locks, semaphores, et al. In our profile, they were mostly descendent from runtime·chansend and selectgo, which are part of Go's channel implementation. It seemed like the problem was a lot of locking and unlocking while using buffered channels.

While channels provide powerful concurrency semantics, their implementation is not magic. Most paths for sending, receiving, and selecting on async channels currently involve locking to maintain thread safety; though their semantics combined with goroutines change the game, as a data structure they're exactly like many other implementations of synchronized queues/ring buffers. There is an ongoing effort to improve channel performance, but this isn't going to result in an entirely lock free implementation.

Today, sending or receiving calls runtime·lock on that channel shortly after establishing that it isn't nil. Though the channel performance work being done by Dmitry looks promising, even more exciting for future performance improvements is his proposal for atomic intrinsics, which could reduce some overhead to all of these atomic locking primitives all over the runtime. At this time, it looks likely to miss 1.3, but will hopefully be revisited for 1.4.

My decision to send metrics one by one meant that we were sending, receiving, and selecting more often than necessary, locking and unlocking many times per message. Although it added some extra complexity in the form of looping in our metric processing code, re-standardizing on passing messages instead reduced the amount of these locking sends and reads so much that they virtually dropped off our subsequent profiles. Throughput improved by nearly 6x.
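A sketch of the before-and-after shape of that change (hypothetical types; the real processing code is more involved):

package pipeline

// Metric is a single normalized data point; Message is the unit that actually
// arrives off the queue: a variable-length batch of metrics.
type Metric struct {
	ID    int
	Name  string
	Value float64
}

type Message []Metric

// processPerMetric is the original design: one channel receive (and one
// lock/unlock inside the runtime) per data point.
func processPerMetric(in <-chan Metric, handle func(Metric)) {
	for m := range in {
		handle(m)
	}
}

// processPerMessage is the revised design: one channel receive per message,
// with a loop in the processing code, so the locking cost is amortized over
// every metric in the batch.
func processPerMessage(in <-chan Message, handle func(Metric)) {
	for msg := range in {
		for _, m := range msg {
			handle(m)
		}
	}
}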

Cgo and borders

One of the sources of slowness that I expected before joining the project was Go's implementation of zlib. I'd done some testing in the past that showed it was significantly slower than Python's for a number of file sizes in the range of the typical sizes of our messages. The zlib C implementation has a reputation for being well optimized, and when I discovered that Intel had contributed a number of patches to it quite recently, I was interested to see how it would measure up.

Luckily, the vitess project from YouTube had already implemented a really nice Go wrapper named cgzip, which performed quite a bit better than Go's gzip in my testing. Still, it was outperformed by Python's gzip, which puzzled me. I dove into the code both of Python's zlibmodule.c and cgzip's reader.go, and noticed that cgzip was managing its buffers from Go while Python was managing them entirely in C.

I'd vaguely remembered some experiments that showed there was a bit of overhead to cgo calls. Further research revealed some reasons for this overhead:

  • Cgo has to do some coordination with the Go scheduler so that it knows the calling goroutine is blocked, which might involve creating another thread to prevent deadlock. This involves acquiring and releasing a lock.
  • The goroutine's stack must be swapped out for a C stack, as the runtime has no idea what the memory requirements of the C code are, and then they must be swapped back upon return.
  • There's a C shim generated for C function calls which maps some of C and Go's call/return semantics together in a clean way; e.g. struct returns in C working as multi-value returns in Go.

Similar to the channel communication above, communication between Go function calls and C function calls was taxed. If I wanted to find more performance, I'd have to reduce the amount of communication by increasing the amount of work done per call. Because of the channel changes, entire messages were now the smallest processable unit in my pipeline, so the undoubted benefits of a streaming gzip reader were relatively diminished. I used Python's zlibmodule.c as a template to do all of the buffer handling in C, returning a raw char * I could copy into a []byte on the Go side, and did some profiling:

452 byte test payload (1071 orig)
BenchmarkUnsafeDecompress    200000      9509 ns/op
BenchmarkFzlibDecompress     200000     10302 ns/op
BenchmarkCzlibDecompress     100000     26893 ns/op
BenchmarkZlibDecompress       50000     46063 ns/op

7327 byte test payload (99963 orig)
BenchmarkUnsafeDecompress     10000    198391 ns/op
BenchmarkFzlibDecompress      10000    244449 ns/op
BenchmarkCzlibDecompress      10000    276357 ns/op
BenchmarkZlibDecompress        5000    495731 ns/op

359925 byte test payload (410523 orig)
BenchmarkUnsafeDecompress      1000   1527395 ns/op
BenchmarkFzlibDecompress       1000   1583300 ns/op
BenchmarkCzlibDecompress       1000   1885128 ns/op
BenchmarkZlibDecompress         200   7779899 ns/op

Above, "Fzlib" is my "pure-c" implementation of zlib for Go, "Unsafe" is a version of this where the final copy to []byte is skipped but the underlying memory of the result must be manually freed, "Czlib" is vitess' cgzip library modified to handle zlib instead of gzip, and "Zlib" is Go's built in library.

Measure everything

In the end, the differences for fzlib and czlib were only notable on small messages. This was one of the few times in the project I optimized prior to profiling, and as you might imagine it produced some of the least important performance gains. As you can see below, when at full capacity, the message processing code cannot keep up with the intake and parsing code, and the post-parsed channel (purple) stays full while the post-processed channel (blue) maintains some capacity.

You might think the obvious lesson to learn from this is that age-old nut about premature optimization, but this chart taught me something far more interesting. The concurrency and communication primitives you get in Go allow you to build single-process programs in the same style you'd use when building distributed systems, with goroutines as your processes, channels as your sockets, and select completing the picture. You can then measure ongoing performance using the same well understood techniques, tracking throughput and latency incredibly easily.

Seeing this pattern of expensive boundary crossing twice in quick succession impressed upon me the importance of identifying it quickly when investigating performance problems. I also learned quite a lot about cgo and its performance characteristics, which might save me from ill-fated adventures later on, and a fair amount about Python's zlib module, including some pathological memory allocation in its compression buffer handling.

The tools you have at your disposal to get the most performance out of Go are very good. The included benchmarking facilities in the testing library are simple but effective. The sampling profiler is low impact enough to be turned on in production and its associated tools (like the chart output above) highlight issues in your code with great clarity. The architectural idioms that feel natural in Go lend themselves to easy measurement. The source for the runtime is available, clean, and straightforward, and when you finally understand your performance issues, the language itself is amenable to fixing them.
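As a small illustration of those facilities, here is the general shape of the benchmarks behind the numbers above, written against the standard library's compress/zlib purely so it runs anywhere; it is a generic sketch, not this project's code.

// zlib_bench_test.go: a standard testing-package benchmark.
package main

import (
    "bytes"
    "compress/zlib"
    "io/ioutil"
    "testing"
)

// compressedFixture builds a small zlib-compressed payload to decompress.
func compressedFixture() []byte {
    var buf bytes.Buffer
    w := zlib.NewWriter(&buf)
    w.Write(bytes.Repeat([]byte("some.metric:1|c\n"), 200))
    w.Close()
    return buf.Bytes()
}

func BenchmarkStdlibZlibDecompress(b *testing.B) {
    payload := compressedFixture()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        r, err := zlib.NewReader(bytes.NewReader(payload))
        if err != nil {
            b.Fatal(err)
        }
        if _, err := ioutil.ReadAll(r); err != nil {
            b.Fatal(err)
        }
        r.Close()
    }
}

Running go test -bench . prints results in the same ns/op format shown earlier, and adding import _ "net/http/pprof" together with go http.ListenAndServe("localhost:6060", nil) to a long-running process exposes the sampling profiler to go tool pprof.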

Apr 9

The spectre haunting San Francisco

17 April 2014 - 7:00am

YESTERDAY the New York Times ran a piece on a brewing rent crisis in America:

For rent and utilities to be considered affordable, they are supposed to take up no more than 30 percent of a household’s income. But that goal is increasingly unattainable for middle-income families as a tightening market pushes up rents ever faster, outrunning modest rises in pay.

The strain is not limited to the usual high-cost cities like New York and San Francisco. An analysis for The New York Times by Zillow, the real estate website, found 90 cities where the median rent — not including utilities — was more than 30 percent of the median gross income.

The piece nods to the idea that rising rents—or housing costs generally, in America and elsewhere—are about more than supply and demand. Housing affordability activists like to point out that most new construction is for luxury housing, meaning that supply of non-luxury units is not growing by very much. Others love to say that price declines have historically gone hand in hand with falling construction.

These arguments are both nonsense. The latter point gets causation the wrong way around; given an unexpected decline in demand due to financial crisis or other shocks, prices fall and interest in new construction dries up until existing inventories are cleared. The former point misses the fundamental fungibility of housing. When new construction of luxury units lags, the very rich buy up older housing stock at exorbitant prices and pay to have them redone. You see this in London, for instance, where literally every house in the city is now being rehabilitated, including those that were rehabilitated last year. Residents have to actively shoo away the builders trying to erect scaffolding, on the assumption that the owners will be wanting an extra floor or two on their house. It is a headache. There is a team of wildcat subcontractors digging us a new wine cellar as we speak. The point is that if demand for high-end housing is not satisfied with new construction, that demand will flow to existing supply, putting upward pressure on prices right across the housing stock.

A new piece in TechCrunch makes this point nicely in a very good explainer of the housing crisis in San Francisco. The economy of the Bay Area is booming, but the region is one of the most difficult places to build in the country. Prices are therefore soaring and neighbourhoods are changing, touching off some occasionally nasty social conflicts.

But the author of the TechCrunch piece, Kim-Mai Cutler, puts her finger on the real problem. Yes, supply constraints are the cause of the affordability crisis. The trouble comes in trying to understand why those constraints are there and how to alleviate them.

The issue is not technical limitations. In oil markets you have a cheap source of supply, in the form of oil that more or less bubbles obligingly out of the ground. But as soaring demand reduces the available supply of oil of that sort prices soar. Then we all worry for a bit before technology comes to the rescue. Engineers find new ways to do things using less oil, new ways to dig deeper holes, and new ways to shoot water at rocks until the rocks can't take it any more and weep oily tears. Housing is not really like that. There are cheaper or more expensive ways to build homes, but in the cities facing these crises construction costs constitute a relatively small portion of the expense of housing. The rest is rent.

That's right, rent, in the economic sense of the word:

Economic rent is the cost of non-produced inputs or advantages; the result of natural or contrived exclusivity.

Thank you, Wikipedia. So, San Francisco is a nice place to live. It's a nice place to live for a lot of reasons: the weather isn't bad, the surrounding countryside is gorgeous, the city has all sorts of cool stuff in it, and (perhaps most important) living in San Francisco gives one access to the local labour market, which is a really excellent thing to have access to. If local regulations did not do much to discourage creation of new housing supply, then the market for San Francisco would be pretty competitive; anyone with land in San Francisco could make more San Francisco by building on that land. The price of San Francisco would then fall to the marginal cost, which is the expense of building another unit of housing, which is not very high.

Now because the cost of living in San Francisco would not be very high, the consumer surplus available from living there would be extraordinary, and everyone would want to move there. Inflows of people would stop when the cost of making more San Francisco rose to meet the value derived from San Francisco by the marginal resident. Costs would rise, because the denser the city became the more expensive it would be to build new units (building super-tall towers does cost more, per unit, than building relatively modest apartment buildings). And the value to the marginal resident would fall for two reasons. First, the marginal resident will definitionally be someone who is relatively indifferent between living in San Francisco and living somewhere else. Everyone more eager to live there would already have moved in. And second, as people move in, congestion costs within the city rise, reducing the value of San Francisco to everyone in San Francisco.

This, ostensibly, is why we have things like zoning codes. The welfare-maximising population of San Francisco may be higher (and possibly much, much higher) than the population which maximises the welfare of those already living in San Francisco. So the city devises a set of regulations that effectively make current residents monopolists, able to artificially limit supply and raise price. Society as a whole is slightly worse off; San Franciscans are slightly better off.

But in fact, the structure of local politics tends to magnify rent-seeking, generating enormous social costs. The benefits and costs of population growth occur in a way that practically guarantees highly restrictive building rules. The (large) potential benefits to would-be San Franciscans accrue to people who have no political power within San Francisco. The gains to San Franciscans from population growth are distributed very broadly; when a new building project allows more people to live in San Francisco, everyone in the city derives a small benefit from that growth—from the larger market size, greater opportunities for professional networking and knowledge spillovers, and so on. But the congestion costs associated with that new project are highly concentrated on the people living in the immediate vicinity of the new construction. There is a population level at which new growth entails net costs for all San Franciscans. But residents of San Francisco will limit new growth long before it reaches that level, because there will always be a strong constituency to block projects.

We therefore get highly restrictive building regulations. Tight supply limits mean that the gap between the marginal cost of a unit of San Francisco and the value to the marginal resident of San Francisco (and the market price of the unit) is enormous. That difference is pocketed by the rent-seeking NIMBYs of San Francisco. However altruistic they perceive their mission to be, the result is similar to what you'd get if fat cat industrialists lobbied the government to drive their competition out of business.

The New York Times quotes Tyler Cowen:

Tyler Cowen, a professor of economics at George Mason University, argues that the very definitions of labor and capital are arbitrary. Instead, he looks around the world to find the relatively scarce factors of production and finds two: natural resources, which are dwindling, and good ideas, which can reach larger markets than ever before.

If you possess one of those, then you will reap most of the rewards of growth. If you don’t, you will not.

Ideas and natural resources are scarce relative to unskilled labour and salt water, but they're not that scarce. We have machines to produce more ideas, and we call them things like "San Francisco". And the ideas, conveniently, allow us to extend the apparently limited life of natural resource reserves seemingly indefinitely. If you had vast oil reserves, you may have thought you had it made, since they weren't making any more of the stuff. Then fracking came along, and suddenly your ability to reap the rewards of economic growth was greatly reduced. And if you were the sort of smarty who worked for companies that came up with brilliant ideas like fracking, you might have thought you had it made, since coming up with ideas is hard and therefore valuable. Then you went to pay your rent.

It's useful to think about things like this within the context of Thomas Piketty's "Capital in the Twenty-First Century".

The ratio of wealth to national income is rising in America, and a meaningful part of that rise is associated with housing. (In Britain and France, housing is even more important.) Now it might be that increased housing supply growth would reduce housing values and housing wealth, but would not reduce total wealth. The very rich now forking over a huge share of their salary for housing might instead save that money and invest it elsewhere, leading to corresponding increases in other domestic capital or net foreign capital. But I suspect that would not be the whole story. In her piece, Ms Cutler makes a point I have also emphasised in the past:

UC Berkeley economist Enrico Moretti calculated that a single tech job typically produces five additional local-services jobs.

But in San Francisco, that spillover effect is much smaller. This is in no small part because so much of our incomes end up going toward housing costs. The city’s economist Ted Egan estimates that each San Francisco tech job likely creates somewhere slightly north of two extra jobs, not five.

The housing dynamic in San Francisco raises the capital intensity of consumption. That contributes to an increase in the capital share of income and to the stock of wealth in the economy. Zoning restrictions are a tool of the oligarchy, effectively. I'm only one-fourth kidding. But they are; they are a means by which owners of capital extract an outsized share of the surplus generated by job creation.

So, what is to be done? Well, one option is simply to use the levers of government to seize back the surplus for redistribution to the masses. That's not an ideal solution, in this case at least, as it piles nasty incentive effects onto the distortions already created by zoning restrictions. Better to fix the initial distortion, which takes us to the second option.

You could reform local institutions to generate better zoning outcomes. There are lots of good ideas for how to do this floating around. What is less clear is how one builds support for institutional reform. One shouldn't say that it can't be done, but first such ideas need to win intellectual battles, and then they need to win political battles, and so it is safe to conclude that such reforms represent part of a long-term strategy for improvement.

Maybe the market will fix itself? That's not entirely impossible. Assume that there is persistence to zoning regimes, such that relatively liberal-building cities tend to stay that way even after population growth begins ramping up. And assume that as San Francisco deflects away would-be migrants to other cities, critical masses of people begin to pile up, leading to the growth of new tech hubs, at least some of which will occur in liberal-building places. Then maybe one eventually generates a flip in technological leadership to a city that likes building more than San Francisco. On the other hand, if San Francisco zoning mostly deflects away non-techies who add to San Francisco congestion without adding much to its tech-centre synergies, then San Francisco's regulations may be reinforcing its status as technological leader.

That leaves technology as the saving grace. Maybe we invent really good holodecks, which make it much less critical to actually be in San Francisco. Maybe we invent teleportation, laws of physics be damned. Maybe we simply come up with better ways to build and design cities, which minimise the real or perceived downsides to residents of new building.

Or maybe we do nothing, and simply sit back and observe as housing remains an instrument of oligarchy. Who knows. But however one imagines this playing out, we should be clear about what is happening, and what its effects have been.


Piston X86-64 Assembler working in web browser and Node.js

17 April 2014 - 7:00am
About Piston Assembler

Piston X86-64 Assembler (PASM) is a NASM-syntax-based symbolic machine code compiler for the X86-64 architecture, fully working in the browser and in Node.js-based environments.

Written in CoffeeScript (and compiled to JavaScript) during the rainy Finnish summer holiday.

Thanks to

Karel Lejska, x86reference.xml, x86asm.net
Kenneth Falck, ASM code for testing
Vikas N. Kumar, ASM code for testing
Joni Salonen, toUTF8Array
Zachary Carter, Jison
John Tobey and Matthew Crumley, Javascript-biginteger
Jeremy Ashkenas, CoffeeScript

Google's Street View computer vision can beat reCAPTCHA with 99% accuracy

16 April 2014 - 7:00pm
Posted by Vinay Shet, Product Manager, reCAPTCHA 

Have you ever wondered how Google Maps knows the exact location of your neighborhood coffee shop? Or of the hotel you’re staying at next month? Translating a street address to an exact location on a map is harder than it seems. To take on this challenge and make Google Maps even more useful, we’ve been working on a new system to help locate addresses even more accurately, using some of the technology from the Street View and reCAPTCHA teams.

This technology finds and reads street numbers in Street View, and correlates those numbers with existing addresses to pinpoint their exact location on Google Maps. We’ve described these findings in a scientific paper at the International Conference on Learning Representations (ICLR). In this paper, we show that this system is able to accurately detect and read difficult numbers in Street View with 90% accuracy.

Street View numbers correctly identified by the algorithm
These findings have surprising implications for spam and abuse protection on the Internet as well. For more than a decade, CAPTCHAs have used visual puzzles in the form of distorted text to help webmasters prevent automated software from engaging in abusive activities on their sites. Turns out that this new algorithm can also be used to read CAPTCHA puzzles—we found that it can decipher the hardest distorted text puzzles from reCAPTCHA with over 99% accuracy. This shows that the act of typing in the answer to a distorted image should not be the only factor when it comes to determining a human versus a machine.

Fortunately, Google’s reCAPTCHA has taken this into consideration, and reCAPTCHA is more secure today than ever before. Last year, we announced that we’ve significantly reduced our dependence on text distortions as the main differentiator between human and machine, and instead perform advanced risk analysis. This has also allowed us to simplify both our text CAPTCHAs as well as our audio CAPTCHAs, so that getting through this security measure is easy for humans, but still keeps websites protected.

CAPTCHA images correctly solved by the algorithm 
Thanks to this research, we know that relying on distorted text alone isn’t enough. However, it’s important to note that simply identifying the text in CAPTCHA puzzles correctly doesn’t mean that reCAPTCHA itself is broken or ineffective. On the contrary, these findings have helped us build additional safeguards against bad actors in reCAPTCHA.

As the Street View and reCAPTCHA teams continue to work closely together, both will continue to improve, making Maps more precise and useful and reCAPTCHA safer and more effective. For more information, check out the reCAPTCHA site and the scientific paper from ICLR 2014.

Google Revenue Jumps, But Misses Forecasts

16 April 2014 - 7:00pm

Google has been on a buying binge that has had little to do with its core business of Internet search and advertising. Credit Jim Wilson/The New York Times

SAN FRANCISCO — The mighty Google machine faltered a bit in the first quarter.

Revenue came in $100 million short of expectations, while earnings per share missed by 12 cents.

Still, Larry Page, Google’s chief executive, called it “another great quarter” in the news release announcing the company’s results. He added, “We got lots of product improvements done, especially on mobile.”

Wall Street, which had pushed Google stock up $20 a share in earlier trading, swiftly took all that away and more. After the earnings report came out, the stock was down $27, to $530.

As Internet users migrate to mobile devices, Google earns less on its ads. Average cost-per-click, a key metric of what advertisers pay Google each time someone clicks on an ad, fell approximately 9 percent from the first quarter of 2013.


The search-and-advertising giant said it had revenue of $15.42 billion for the quarter that ended March 31, up 19 percent from the first quarter of 2013. Analysts had expected revenue of $15.52 billion.


Google shares, which were about $600 in March, came under pressure in the recent tech sell-off.

Earnings per share were $6.27. Analysts had forecast $6.39.

In absolute terms, Google is doing very well. Here is one way to measure its heft: The company is projected to increase its digital ad revenues this year by more than $5 billion, which is more than the total ad revenues of Yahoo or Microsoft. The only viable threat to Google comes from Facebook, whose ad revenues are forecast by eMarketer to jump 50 percent this year. Even so, Facebook’s revenues are only about a quarter of Google’s.

Google accounted for 32 percent of digital ad spending in 2013, eMarketer says, up from 31.3 percent in 2012.

Being the dominant player tends to restrict growth opportunities, however. So Google has been on a buying binge that has little to do with its core business of Internet search and advertising. It acquired several robotic companies, including Boston Dynamics, maker of BigDog, Cheetah and other mechanical creatures. It bought Nest Labs, which developed an innovative thermostat, for $3.2 billion.

And just this week it bought Titan Aerospace, which makes drone satellites. Google said Titan, which was founded in 2012 and has about 20 employees, could help bring Internet access to millions and help solve problems like deforestation. The purchase price was not disclosed but was likely to be around $75 million.

With $58 billion in cash in the bank as of last December and a well-oiled machine that every quarter generates billions more, Google can clearly afford to buy all sorts of companies. Generally Wall Street has applauded these acquisitions, seeing them as part of a long-term strategy to help Google develop new markets. But some analysts are raising questions.

“While one might say that it’s a natural fit for Google to have a robot army, we question how owning these businesses inside Google make sense,” wrote Colin Gillis of BGC Partners in a research note.

Google split its shares at the beginning of the month, a move that solidified the founders’ control over the company.


Coding the Angular Tutorial App in Backbone

16 April 2014 - 7:00pm

tl;dr
The people at AngularJS created their PhoneCat tutorial app with 48 lines of JavaScript. When we coded the same app using Backbone instead of Angular, we found it took 171 lines of JavaScript – 260% more code. Here, we present a step-by-step tutorial of how we replicated the Angular tutorial in Backbone.

A while ago I decided to check out Angular since everyone’s been talking about it. I went to the Angular website and worked through their fantastic tutorial. I could see the merits of their approach, but I was left with the nagging feeling that if I coded the same application in Backbone it would only take me a few extra lines of code — a binding here and a callback there and I’d be done, right?

It turns out that I was wrong. After actually reproducing the Angular tutorial in Backbone, I found that it took significantly more JavaScript as well as a good bit of finesse. I thought I would share my Backbone version of the Angular tutorial as a step-by-step guide so that it is easy to make a direct comparison between Angular and Backbone.

To get started, clone the backbone-phonecat repository on GitHub into your current directory.

git clone https://github.com/204NoContent/backbone-phonecat.git

We’re using Node as our back-end so make sure it is installed.

If Node isn’t there, download and install it.

In order to start up the web server that powers this tutorial, cd into the backbone-phonecat directory and run

Node should output Express server listening on port 8888, at which point we can navigate over to localhost:8888 to see a list of phones displayed on the screen.

What’s on the screen presently is the finished product of what we will be building in this tutorial. The tutorial is structured so that we can directly jump to any step along the way by checking out a previous commit using git. In all, there are 11 steps, step-0 through step-10. We’ll begin by resetting our code to step-0, but since we don’t necessarily want to take down our web server by killing the running Node process, it is best to leave the Node process running in the current terminal and open up a new terminal window or tab. Once again cd into the backbone-phonecat directory.

To reset our work-space to the first step run

Refresh the page at localhost:8888 and the text Nothing here yet! should be the only thing on the screen. Node is currently generating this output by serving up the following file

views/index.ejs


<!DOCTYPE html>

<html lang='en'>

    <head>

        <meta charset='utf-8'>

        <title><%= title %></title>

        <link rel="stylesheet" href="/stylesheets/app.css">

        <link rel="stylesheet" href="/stylesheets/bootstrap.css">

        <!-- These scripts would be a single cacheable file in production -->

        <script src="/javascripts/lib/jquery.js"></script>

        <script src="/javascripts/lib/underscore.js"></script>

        <script src="/javascripts/lib/backbone.js"></script>

        <script src="/javascripts/jst.js"></script>

        <script src="/javascripts/router.js"></script>

        <script src="/javascripts/init.js"></script>

    </head>

    <body>

        <section id='main'>

            Nothing here yet!

        </section>

    </body>

</html>

The text Nothing here yet! appears on the screen because it is the content of our main section. This static text is included on the server side for this step only. All future steps will use Backbone to dynamically generate all page content.

Since we’re just trying to bootstrap the app, Backbone does not have a large role so far. All we’re doing is loading Backbone’s requirements, then Backbone itself. After that we load jst.js, which is an Underscore.js templating dictionary that our Node server will automatically generate for us (which means you should ignore the specifics of the code in that file). The last two scripts are the only real parts of our Backbone app so far.

public/javascripts/router.js

Router = Backbone.Router.extend({

    routes: {

    }

});

 
router.js is a place for us to define Backbone routes in the future. Currently, no URLs will match since the routes object is empty.

public/javascripts/init.js

App = new Router();

$(document).ready(function () {

    Backbone.history.start({ pushState: true });

});

init.js creates a new instance of the Backbone router and then tells Backbone to start monitoring browser history changes (page navigation).

Reset the work-space to the next step by running

We’ll discuss the more important changes below, and you can see the full diff on github, or by running git diff step-0 step-1

The objective of this section is to use Backbone to fill in the contents of the main section by generating some static HTML. To do this we’re going to need to create a route that will match the root path of our application, http://localhost:8888.

public/javascripts/router.js

Router = Backbone.Router.extend({

    routes: {

        '': 'phonesIndex',

    },

    phonesIndex: function () {

        new PhonesIndexView({ el: 'section#main' });

    }

});

Here we match our application’s root path '' to the phonesIndex method of the router, which we also defined. The phonesIndex method instantiates a new Backbone view and tells it to place its content inside of <section id='main'>, which, by the way, is no longer populated with any placeholder text from the server.

Next, we need to create a Backbone view.

public/javascripts/views/phones/index_view.js

PhonesIndexView = Backbone.View.extend({

    initialize: function () {

        this.render();

    },

    render: function () {

        this.$el.html(JST['phones/index']());

    }

});

All this view does is render the content of a JavaScript template file, public/javascripts/templates/phones/index.jst, which we indicate by passing phones/index as the lookup key to our JST dictionary.

public/javascripts/templates/phones/index.jst

<ul>

    <li>

        <span>Nexus S</span>

        <p>

            Fast just got faster with Nexus S.

        </p>

    </li>

    <li>

        <span>Motorola XOOM™ with Wi-Fi</span>

        <p>

            The Next, Next Generation tablet.

        </p>

    </li>

</ul>

That’s pretty much it for this section; the only other change was to make sure we told our browser about the newly created JavaScript file. So in views/index.ejs, we added <script src="/javascripts/views/phones/index_view.js"></script>, but that isn’t very interesting and we won’t be mentioning script loading changes again.

Navigating over to localhost:8888 now shows the result of the Backbone generated static HTML.

Reset the work-space to the next step by running

The full diff can be found on github.

The aim of this section is to use Backbone to dynamically create a list of phones from some source data. To do this we’ll make a Backbone collection of Backbone models, and then pass each of those models to a template to be rendered out as HTML. First let’s create the phone model.

public/javascripts/models/phone.js

Phone = Backbone.Model;

Super simple, but now we can create new instances of Phone and we’ll have access to all the goodness that Backbone provides for models, like being members of Backbone collections and seamless integration into the event bus.

Next up, the Backbone collection to house Phone models

public/javascripts/collections/phones_collection.js

PhonesCollection = Backbone.Collection.extend({

    model: Phone

});

We tell Backbone that our PhonesCollection will be made up of phone models by setting the value of model to Phone. When a new collection is created, Backbone will automatically convert any initialization data to their equivalent representation as Phone models.

We’ll create a new phones collection with some hard coded data in our phones/index_view.js.

public/javascripts/views/phones/index_view.js

PhonesIndexView = Backbone.View.extend({

    initialize: function () {

        this.collection = new PhonesCollection([

            {'name': 'Nexus S',

             'snippet': 'Fast just got faster with Nexus S.'},

            {'name': 'Motorola XOOM™ with Wi-Fi',

             'snippet': 'The Next, Next Generation tablet.'},

            {'name': 'MOTOROLA XOOM™',

             'snippet': 'The Next, Next Generation tablet.'}

        ]);

        this.render();

        new PhonesIndexListView({

            el: this.$('ul.phones'),

            collection: this.collection

        });

    },

    render: function () {

        this.$el.html(JST['phones/index']());

    }

});

this.collection is a Backbone collection of Backbone Phone models, and all we had to do was input the data. We’ve also instantiated a Backbone view called PhonesIndexListView, which we have yet to define. We’ve altered the template of phones/index to contain only a single line, <ul class='phones'></ul>, so that we have a place for PhonesIndexListView to render its content.

The job of PhonesIndexListView will be to iterate over every phone in the phones collection and render out the corresponding phone template. Let’s create it now.

public/javascripts/views/phones/index_list_view.js

PhonesIndexListView = Backbone.View.extend({

    initialize: function () {

        this.render();

    },

    render: function () {

        this.$el.html('');

        this.collection.each(this.renderPhone, this);

    },

    renderPhone: function (phone) {

        this.$el.append(new PhonesIndexListPhoneView({

            tagName: 'li',

            model: phone

        }).el);

    }

});

Currently, on initialization, the view calls its own render method, which wipes the HTML of its assigned <ul>. Then it passes each model in the phones collection to renderPhone. The job of renderPhone is to pass each phone model to a new view that will process the model into HTML. That processed HTML will then be accessible through PhonesIndexListPhoneView‘s el property. The rendered HTML for each model view is then appended to the phone list’s <ul>. The end result is that the HTML for all of the phones shows up as a list.
PhonesIndexListPhoneView is quite simple since all it does is convert a phone model to its HTML equivalent.

public/javascripts/views/phones/index_list_phone_view.js

PhonesIndexListPhoneView = Backbone.View.extend({

    initialize: function () {

        this.render();

    },

    render: function () {

        this.$el.html(JST['phones/index_list_phone'](this.model));

    }

});

The only interesting bit is that we are passing the phone model to our JavaScript template by referencing it as the argument to the JST template method. The properties (methods) of the phone model are automatically placed in the local scope of the template.

public/javascripts/templates/phones/index_list_phone.jst

[[= get('name') ]]

<p>[[= get('snippet') ]]</p>

Here we are using the notation [[ for embedded JavaScript, where the inclusion of the equals sign [[= indicates that the evaluated expression will be interpolated into a string. The Backbone model method get is a convenience method for getting the value of one of the phone’s attributes, and is the complement of Backbone’s set method (which should almost always be used to set model attributes so that all model changes are picked up by the event bus).

Details aside, the name and snippet of each phone will now be dynamically written to the screen. We can see this in action by navigating over to localhost:8888.

Reset the work-space to the next step by running

The full diff can be found on github.

We want to add a search field to the page so that we can display only those phones that match the search criteria. To make this happen, we’ll give our phones collection the ability to filter itself in a similar fashion to Angular’s query filter. We can create this new method by customizing the out-of-the-box filter collection method that Backbone (Underscore) provides.

public/javascripts/collections/phones_collection.js

PhonesCollection = Backbone.Collection.extend({

    model: Phone,

    query: function (query) {

        if (!query || query === '') return this.models;

        return this.filter(function (phone) {

            return phone.values().join().match(new RegExp(query, 'i'));

        });

    }

});

Now when we call query on our PhonesCollection, we will get back an array of Phone models that match our search term.

Before we go any farther, let’s take a moment to think about the specifics of what we want to happen. We would like there to be an input field on the screen where the user can enter a search term. The list of phones should be updated with each keystroke to reflect the current state of the input field. This means that we are going to need the phones collection to update and render itself every time a user types a character in the input field. There are many techniques that can be used to accomplish this behavior, but the technique we are going to use here is to create a Backbone model called Filter to collect user input.

public/javascripts/models/filter.js

Filter = Backbone.Model;

The advantage of using a Backbone model is that Backbone models emit change events whenever any of their attributes change. This sort of behavior is exactly what we need. If we can link the state of our search input field to the state of Filter, then whenever the input field changes, our Filter model will emit those change events. We’ll listen to Filter change events and update our collection accordingly. First let’s alter the index view to create a new instance of our Filter and its corresponding view.

NOTE: vertical ellipses mean that code has been omitted for brevity.

PhonesIndexView = Backbone.View.extend({

    initialize: function () {

        this.filter = new Filter();

.

.

.

        new PhonesFilterView({

            el: this.$('.filter'),

            model: this.filter,

            collection: this.collection

        });

        new PhonesIndexListView({

            el: this.$('ul.phones'),

            model: this.filter,

            collection: this.collection

        });

    },

.

.

.

We created a filter instance and passed it down to both a new PhonesFilterView and a PhonesIndexListView. Creating the filter model in this view and passing it to both child views means that each view will have access to filter model events.

First, let’s examine the PhonesFilterView

public/javascripts/views/phones/filter_view.js

PhonesFilterView = Backbone.View.extend({

    events: {

        'keydown input.query': 'setQuery',

    },

    initialize: function () {

        this.render();

    },

    render: function () {

        this.$el.html(JST['phones/filter']());

    },

    setQuery: function (event) {

        // make it snappy using keydown and pushing it to next tick

        window.setTimeout($.proxy(function() {

            this.model.set('query', event.target.value.replace(/^\s+|\s+$/g, ''));

        }, this), 0);

    }

});

This view renders out an input tag in its template, but more importantly it uses Backbone’s event hash to set up a mapping between input keydown events and its own setQuery method. setQuery looks a little funky, but all it does is set the query attribute of our filter model to the current value of the input field. When an attribute of the filter model is set, Backbone will check whether the value has changed and, if so, trigger a 'change:attr' event on the model, where attr is the name of the attribute that changed. So in our case, whenever the query value of the filter changes, Backbone will trigger 'change:query' on the filter model.

We can utilize this behavior to update our phone collection by listening for those change events.

public/javascripts/views/phones/index_list_view.js

PhonesIndexListView = Backbone.View.extend({

    initialize: function () {

        this.filtered_collection = new PhonesCollection();

        this.listenTo(this.filtered_collection, 'add', this.renderPhone);

        this.render();

        this.listenTo(this.model, 'change:query', this.render);

    },

    render: function () {

        var filtered_phones = this.collection.query(this.model.get('query'));

        this.filtered_collection.set(filtered_phones);

    },

    renderPhone: function (phone) {

        var position = this.filtered_collection.indexOf(phone);

        this.phoneView = new PhonesIndexListPhoneView({

            tagName: 'li',

            model: phone

        });

        if (position === 0) {

            this.$el.prepend(this.phoneView.el);

        } else {

            $(this.$('li')[position - 1]).after(this.phoneView.el);

        }

    }

});

There is a bit going on here, so we’ll take it one step at a time. Toward the end of the initialization method, we set up a listener for filter changes. When a change is detected, the render method is called, which filters the collection based on the query. The filtered collection is then set with the models that made it through the query filter. A collection set in Backbone does a smart update of the collection, adding and removing models as necessary. In addition, Backbone triggers an add or a remove event for every model that was added to or removed from the collection. In the first part of our initialization, we set up a listener to render any models that are added to the filtered collection. The result is that each model added to the collection appears on the screen (the fancy bit in the render method just ensures that each model appears in the correct order). The only thing left to do is to remove a phone from the screen when it is removed from the collection. This is accomplished in the phone model view.

public/javascripts/views/phones/index_list_phone_view.js

PhonesIndexListPhoneView = Backbone.View.extend({

    initialize: function () {

        this.listenTo(this.model, 'remove', this.remove);

        this.render();

    },

.

.

.

Here we listen to the phone remove event and use Backbone’s built in View remove method to detach listeners and remove the element from the screen.

Navigating over to localhost:8888 now displays a list of phones that can be filtered with the search field.

Reset the work-space to the next step by running

The full diff can be found on github.

The idea behind ordering is very similar to text search. We’re going to have an order drop-down of possible ordering values. When one is selected, the Filter’s sortBy attribute will be updated to reflect the current state. The Filter will naturally emit a change event, which in this case will be 'change:sortBy'. We can listen for this event in the phones index list view and re-render the collection. We also want the sorting to default to displaying the newest phone first. To do that, we give the Filter model a default, since Backbone supports a defaults hash out-of-the-box.

public/javascripts/models/filter.js

Filter = Backbone.Model.extend({

    defaults: {

        sortBy: 'age'

    }

});

Next we alter the filter view to listen for drop down change events and respond accordingly

public/javascripts/views/phones/filter_view.js

PhonesFilterView = Backbone.View.extend({

    events: {

        'keydown input.query': 'setQuery',

        'change select.sort': 'setSort'

    },

.

.

.

    render: function () {

        // these next few lines massage data to make

        // the template more straightforward

        this.order_options = [{ value: 'name', text: 'Alphabetical'}, { value: 'age', text: 'Newest' }];

        var selected_option = _.findWhere(this.order_options, { value: this.model.get('sortBy') }, this);

        if (selected_option) selected_option.selected = true;

        this.$el.html(JST['phones/filter']({ order_options: this.order_options }));

    },

.

.

.

    setSort: function (event) {

        this.model.set('sortBy', event.target.value);

    }

});

The template reference here is somewhat interesting because we iterate over an array of drop down values and default the selection to the drop down value where the selected attribute is set to true.

public/javascripts/templates/phones/filter.jst

Search:

<input class='query'>

Sort by:

<select class='sort'>

    [[ _.each(order_options, function (option) { ]]

        <option value="[[= option.value ]]" [[= option.selected ? 'selected': '']]>[[= option.text ]]</option>

    [[ }) ]]

</select>

The last step is to have the phones index list view respond to filter changes.

public/javascripts/views/phones/index_list_view.js

PhonesIndexListView = Backbone.View.extend({

    initialize: function () {

.

.

.

        this.listenTo(this.model, 'change:sortBy', this.rerender);

        this.listenTo(this.model, 'change:query', this.render);

    },

    render: function () {

        var filtered_phones = this.collection.query(this.model.get('query'));

        filtered_phones = _.sortBy(filtered_phones, function (phone) {

            var attr_value = phone.get(this.model.get('sortBy'));

            return _.isString(attr_value) ? attr_value.toLowerCase() : attr_value;

        }, this);

        this.filtered_collection.set(filtered_phones);

    },

.

.

.

    rerender: function () {

        this.filtered_collection.set();

        this.render();

    }

});

And to actually get the phones to order by age, we have to update our data with an age attribute.

public/javascripts/views/phones/index_view.js

PhonesIndexView = Backbone.View.extend({

    initialize: function () {

        this.filter = new Filter();

        this.collection = new PhonesCollection([

            {'name': 'Nexus S',

             'snippet': 'Fast just got faster with Nexus S.',

             'age': 1},

            {'name': 'Motorola XOOM™ with Wi-Fi',

             'snippet': 'The Next, Next Generation tablet.',

             'age': 2},

            {'name': 'MOTOROLA XOOM™',

             'snippet': 'The Next, Next Generation tablet.',

             'age': 3}

        ]);

.

.

.

We can now sort to our heart’s content at localhost:8888

Reset the work-space to the next step by running

The full diff can be found on github.

API integration is one area where Backbone really shines. The aim of this section is to issue a request to our web server to fetch our phone model data instead of having it hard-coded in the view. Retrieving data in this fashion is commonplace in most production apps. Making this process work is super easy in Backbone. First we need to tell our PhonesCollection where it can expect to find phones list data.

public/javascripts/collections/phones_collection.js

PhonesCollection = Backbone.Collection.extend({

    model: Phone,

    url: '/api/phones',

.

.

.

});

Next, after we instantiate a new phone collection in the view, we need to tell it to fetch its data from the server.

public/javascripts/views/phones/index_view.js

PhonesIndexView = Backbone.View.extend({

    initialize: function () {

        this.filter = new Filter();

        this.collection = new PhonesCollection();

.

.

.

        this.collection.fetch();

    }

.

.

.

Finally, since our phones list data isn’t being pre-populated like before, we have to wait until the server responds with the data before we can render it out.

public/javascripts/views/phones/index_list_view.js

PhonesIndexListView = Backbone.View.extend({

    initialize: function () {

        this.filtered_collection = new PhonesCollection();

        this.listenTo(this.filtered_collection, 'add', this.renderPhone);

        // render when data is returned from server

        this.listenTo(this.collection, 'sync', this.render);

        this.listenTo(this.model, 'change:sortBy', this.rerender);

        this.listenTo(this.model, 'change:query', this.render);

    }

.

.

.

And that’s it. Our phones are now being fetched from our API back-end and displayed at localhost:8888

Reset the work-space to the next step by running

The full diff can be found on github.

Images and links in Backbone are nothing special, we just need to generate the appropriate HTML using the data already in each phone model.

public/javascripts/templates/phones/index_list_phone.jst

<a href="/phones/[[= get('id') ]]" class='thumb'><img src="[[= '/' + get('imageUrl') ]]"></a> <a href="/phones/[[= get('id') ]]">[[= get('name') ]]</a> <p>[[= get('snippet') ]]</p>

<a href="/phones/[[= get('id') ]]" class='thumb'><img src="[[= '/' + get('imageUrl') ]]"></a>

<a href="/phones/[[= get('id') ]]">[[= get('name') ]]</a>

<p>[[= get('snippet') ]]</p>

Here, we are creating the URL of an individual phone by appending its id to the phones path. The approach we usually take is to have our API return an absolute path to the phone show page, so all we have to do is display that URL; but in this case we are using the data provided by Angular. The same goes for the image source.

However, we have one small problem with our links. We have not told Backbone to intercept link click events, so the browser is going to do what it normally does and navigate with a full page refresh. Since our app is so simple, we’re just going to intercept and handle the click event where it happens – in our index list phone view.

public/javascripts/views/phones/index_list_phone_view.js


PhonesIndexListPhoneView = Backbone.View.extend({

    events: {

        'click a': 'navigate'

    },

.

.

.

    navigate: function (event) {

        event.preventDefault();

        App.navigate(event.currentTarget.pathname, { trigger: true });

    }

});

After we prevent the browser’s default link handling, we tell the instance of our Backbone router, App, to use pushState to update the browser history and to trigger whichever route handler matches the new URL. We have yet to implement any other routes, so right now clicking a link will get us nowhere.
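For App.navigate with { trigger: true } to work this way, Backbone’s history must have been started with pushState enabled somewhere in the app’s bootstrap code. That code isn’t shown in this step; a minimal sketch of what it is assumed to look like:

    // Hypothetical bootstrap: start the router with pushState so that
    // App.navigate(path, { trigger: true }) updates the URL and fires the
    // matching route handler without a full page reload.
    $(function () {
        window.App = new Router();
        Backbone.history.start({ pushState: true });
    });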

On the upside, since we added images and links, our index page at localhost:8888 now looks nice.


Reset the work-space to the next step by running

The full diff can be found on github.

We would like to make our phone links work so that clicking a phone on the index page will take you to the correct phone show page. The first step is to create the matching route.

public/javascripts/router.js


Router = Backbone.Router.extend({

    routes: {

        '': 'phonesIndex',

        'phones/:id': 'phonesShow'

    },

    phonesIndex: function () {

        new PhonesIndexView({ el: 'section#main' });

    },

    phonesShow: function (id) {

        new PhonesShowView({

            el: 'section#main',

            model: new Phone({ id: id })

        });

    }

});

Now, whenever the Backbone router detects that the URL has been changed to match phones/ followed by a specific phone id, it will call the phonesShow method and pass in whatever phone id happens to match. The phonesShow method creates a new PhonesShowView, tells it where it is allowed to render its content, and also creates a new phone model. Specifically, the phone model it creates has one attribute assigned to it, its id. Setting this id is important since it allows Backbone to know where to request information about the phone from the server. To make this magic happen, we need to specify the beginning part of the URL that our API responds to.

public/javascripts/models/phone.js


Phone = Backbone.Model.extend({

    urlRoot: '/api/phones'

});

With the urlRoot specified in the model, calling fetch on any model instance that has an id will automatically send a GET request to /api/phones/:id, where :id is the id we set in the router.
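As a quick illustration (a hypothetical console session, not part of the tutorial code), the id we set in the router is what Backbone combines with urlRoot to build the request URL:

    var phone = new Phone({ id: 'motorola-xoom' });  // 'motorola-xoom' is a made-up id
    phone.url();   // => "/api/phones/motorola-xoom"
    phone.fetch(); // issues GET /api/phones/motorola-xoom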

To get the model data fetching process started, we need to set it up in our phones show view.

public/javascripts/views/phones/show_view.js


PhonesShowView = Backbone.View.extend({

    initialize: function () {

        this.model.fetch();

        this.listenTo(this.model, 'sync', this.render);

    },

    render: function () {

        this.$el.html(JST['phones/show'](this.model));

    }

});

The first thing we do is tell the model to fetch its data from the API. Next, we attach a listener that will call the render method whenever the model data is returned from the server.

As an aside, technically we should attach the listener before calling fetch on the model, but because fetch is asynchronous the ordering doesn’t matter: the line that registers the listener is evaluated before any callback can fire. End aside.

Once the model is fetched from the server, Backbone automatically parses the response so that the model data can be accessed using the model’s get method. Just to make sure everything is working correctly, let’s create a super simple template that will display the clicked phone’s id on the screen.

public/javascripts/templates/phones/show.jst


TBD: detail view for [[= get('id') ]]

Now going to the phones index page at localhost:8888 and clicking on the first phone, for example, will display its id.

Reset the work-space to the next step by running

The full diff can be found on github.

In this step, our aim is to have the phone show view display all the phone details on the screen. This step has a lot of stuff going on, so it is helpful to first examine what we are trying to make.

There are different ways to break apart and render complicated views like this one, but the approach we are going to use here is to break it down into one parent view and three child views.

The first thing we’ll do is to adjust our main show template to display the name and description that appear in the upper right of the main view and to create HTML elements for the child views.

public/javascripts/templates/phones/show.jst


<div class='phone-images'></div>

<h1>[[= get('name') ]]</h1>

<p>[[= get('description') ]]</p>

<ul class='phone-thumbs'></ul>

<ul class='specs'></ul>

Next, we’ll create new child views and assign them their elements.

public/javascripts/views/phones/show_view.js


PhonesShowView = Backbone.View.extend({

    initialize: function () {

        this.model.fetch();

        this.listenTo(this.model, 'sync', this.render);

    },

    render: function () {

        this.$el.html(JST['phones/show'](this.model));

        new PhonesShowImageView({

            el: this.$('.phone-images'),

            model: this.model

        });

        new PhonesShowImagesListView({

            el: this.$('ul.phone-thumbs'),

            model: this.model

        });

        new PhonesShowSpecsView({

            el: this.$('ul.specs'),

            model: this.model

        });

    }

});

Working down the list, the PhonesShowImageView just initializes and renders

public/javascripts/views/phones/show_image_view.js


PhonesShowImageView = Backbone.View.extend({

    initialize: function () {

        this.render();

    },

    render: function () {

        this.$el.html(JST['phones/show_image'](this.model.get('mainImage')));

    }

});

The more interesting question is what it is rendering. The data included with Angular treats each image as nothing more than a path to its location on the server. While this is a fine approach, we’re going to promote images from simple strings to full-blown Backbone models.

public/javascripts/models/photo.js


Photo = Backbone.Model;

Inspecting the data that Angular has given us a little further, we notice that one of the Phone‘s attributes is a list of images stored in an array. Since we are promoting photos to Backbone models, a list of photos calls out to be converted to a Backbone collection. Let’s define that collection.

public/javascripts/collections/photos_collection.js


PhotosCollection = Backbone.Collection.extend({

    model: Photo

});

Since our data modeling is now in good shape, we turn to converting the image URLs to Photo models and converting the images array to a PhotosCollection. The place to do these conversions is wherever the data is being returned to us from the server, which in this case is in the Phone model. Backbone provides a hook to access the server response through a special model method called parse.

public/javascripts/models/phone.js


Phone = Backbone.Model.extend({

    urlRoot: '/api/phones',

    parse: function (res) {

        if (res.images) {

            this.photosCollection = new PhotosCollection(_.map(res.images, function (image_path) { return { path: image_path }; }));

            this.set('mainImage', this.photosCollection.models[0]);

        }

        return res;

    }

});

We directly assign a photos collection to the phone model and convert each of the image strings to a Photo model with one attribute called path. We also default the phone’s mainImage attribute to the first Photo model in the collection. We set it using the set writer method so that changes are automatically picked up by Backbone and the appropriate events fire.
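To see why using set matters, here is a small sketch (the photo path is made up) showing the change event that a plain property assignment would never fire:

    var phone = new Phone();
    phone.on('change:mainImage', function (model, photo) {
        console.log('main image changed to', photo.get('path'));
    });
    phone.set('mainImage', new Photo({ path: 'img/phones/example.0.jpg' }));
    // logs "main image changed to img/phones/example.0.jpg"; assigning to
    // phone.attributes.mainImage directly would trigger no event at all.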

Finally, getting back to the template of our PhonesShowImageView

public/javascripts/templates/phones/show_image.jst

<img src="[[= '/' + get('path') ]]" class='phone'>

1

<img src="[[= '/' + get('path') ]]" class='phone'>

The next view in our list is the PhonesShowImagesListView. The idea for this view is to render out a thumbnail for each phone photo. Since we have access to the photo collection through our phone model it’s easy to do using the same technique as we illustrated earlier.

public/javascripts/views/phones/show_images_list_view.js


PhonesShowImagesListView = Backbone.View.extend({

    initialize: function () {

        this.render();

    },

    render: function () {

        this.$el.html('');

        this.model.photosCollection.each(this.renderImage, this);

    },

    renderImage: function (photo) {

        this.$el.append(new PhonesShowImagesListImageView({

            tagName: 'li',

            model: photo

        }).el);

    }

});

We first make sure the view’s element is empty and then for each of the photos we create a separate photo model view and append the returned HTML to the list. The photo model view currently does nothing more than render the appropriate HTML for its photo model.

public/javascripts/views/phones/show_images_list_image_view.js

PhonesShowImagesListImageView = Backbone.View.extend({
    initialize: function () {
        this.render();
    },
    render: function () {
        this.$el.html(JST['phones/show_images_list_image'](this.model));
    }
});

with a corresponding template

public/javascripts/templates/phones/show_images_list_image.jst

<img src="[[= '/' + get('path') ]]">

1

<img src="[[= '/' + get('path') ]]">

The final view in our list is the phone’s specs. This view is really simple.

public/javascripts/views/phones/show_specs_view.js


PhonesShowSpecsView = Backbone.View.extend({

    initialize: function () {

        this.render();

    },

    render: function () {

        this.$el.html(JST['phones/show_specs'](this.model));

    }

});

With a corresponding template that is basically just a dump of the phone’s specs.

public/javascripts/templates/phones/show_specs.jst


<li>

    <span>Availability and Networks</span>

    <dl>

        <dt>Availability</dt>

        <dd>[[= get('availability') ]]</dd>

    </dl>

</li>

<li>

    <span>Battery</span>

    <dl>

        <dt>Type</dt>

        <dd>[[= get('battery').type ]]</dd>

        <dt>Talk Time</dt>

        <dd>[[= get('battery').talkTime ]]</dd>

        <dt>Standby time (max)</dt>

        <dd>[[= get('battery').standbyTime ]]</dd>

    </dl>

</li>

.

.

.

In fact, this view is too simple. It doesn’t have a real purpose and probably should just be part of the parent view. The reason we broke it off here was in anticipation of having some nicely formatted details data. We were then going to just loop over the data and programmatically generate all the details, possibly with the use of some string inflector methods. However, the data we’re using from Angular doesn’t lend itself well to that purpose, which is understandable.

Currently our main image view and image list view suffer from a similar problem of irrelevancy, but we will see the utility of breaking them off in the last section of this tutorial.

First let’s put a little polish on the way our specs look.

Reset the work-space to the next step by running

The full diff can be found on github.

The problem we are trying to solve is that some of our specs return a Boolean, so the words true and false are being printed on the screen. We speculate that users would rather see a ✓ to represent true and an ✘ to represent false than the words themselves. Making this happen is pretty easy. We first define a global phones helper object and give it a checkmark method.

public/javascripts/helpers/phones_helper.js


PhonesHelper = {

    checkmark: function (truthy) { return truthy ? '\u2713' : '\u2718'; }

};

We used a global object to store phone helper methods, but we could have encapsulated the helper if we wanted to, in order to lessen the possibility of name collisions.
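For example, one alternative (not used in this tutorial; the PhoneCat namespace below is made up) would be to hang every helper off a single app-level object instead of defining a new global per helper:

    // Hypothetical namespacing: one global object instead of one global per helper.
    window.PhoneCat = window.PhoneCat || {};
    PhoneCat.Helpers = PhoneCat.Helpers || {};
    PhoneCat.Helpers.phones = {
        checkmark: function (truthy) { return truthy ? '\u2713' : '\u2718'; }
    };
    // Templates would then call PhoneCat.Helpers.phones.checkmark(...) instead of PhonesHelper.checkmark(...).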

The only other thing to do is to use our new helper in the phone specs template.

public/javascripts/templates/phones/show_specs.jst


.

.

.

<li>

    <span>Connectivity</span>

    <dl>

        .

        .

        .

        <dt>Infrared</dt>

        <dd>[[= PhonesHelper.checkmark(get('connectivity').infrared) ]]</dd>

        <dt>GPS</dt>

        <dd>[[= PhonesHelper.checkmark(get('connectivity').gps) ]]</dd>

    </dl>

</li>

<li>

    <span>Display</span>

    <dl>

        .

        .

        .

        <dt>Touch screen</dt>

        <dd>[[= PhonesHelper.checkmark(get('display').touchScreen) ]]</dd>

    </dl>

</li>

<li>

    <span>Hardware</span>

    <dl>

        .

        .

        .

        <dt>FM Radio</dt>

        <dd>[[= PhonesHelper.checkmark(get('hardware').fmRadio) ]]</dd>

        <dt>Accelerometer</dt>

        <dd>[[= PhonesHelper.checkmark(get('hardware').accelerometer) ]]</dd>

    </dl>

</li>

.

.

.

And voilà, ✓’s and ✘’s on our page at localhost:8888/phones/motorola-xoom-with-wi-fi

Reset the work-space to the next step by running

git checkout -f step-10


The full diff can be found on github.

The last order of business is to let the user change out the main image for any thumbnail. We’ll let them swap out the image by clicking on whichever thumbnail image they want to see.

We’re already in good shape since we broke apart our views in step-8 and made our photos full-blown Backbone models. The first thing we need to do is trigger an event to let the Phone know which photo it is supposed to replace its mainImage with. The natural place to do this is in the photo model view.

public/javascripts/views/phones/show_images_list_image_view.js


PhonesShowImagesListImageView = Backbone.View.extend({

    events: {

        'click': 'selectImage'

    },

.

.

.

    selectImage: function (event) {

        this.model.trigger('imageSelected', this.model);

    }

});

This works by triggering a custom event 'imageSelected' on whichever Photo model was clicked, since we linked the click event to selectImage. We also make sure to pass the entire photo model along as data with the 'imageSelected' event. In Backbone, all model events are automatically triggered on their collection as well, so we can listen to the collection instead of trying to listen to every photo model. We’ll do this in the main image view.
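As a quick illustration of that event bubbling (a sketch with made-up paths, not part of the tutorial code):

    var photos = new PhotosCollection([{ path: 'a.jpg' }, { path: 'b.jpg' }]);
    photos.on('imageSelected', function (photo) {
        console.log('selected', photo.get('path'));       // fires even though we listened on the collection
    });
    photos.at(0).trigger('imageSelected', photos.at(0));  // logs "selected a.jpg"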

public/javascripts/views/phones/show_image_view.js


PhonesShowImageView = Backbone.View.extend({

    initialize: function () {

        this.listenTo(this.model.photosCollection, 'imageSelected', this.setMainImage);

        this.listenTo(this.model, 'change:mainImage', this.render);

        this.render();

    },

.

.

.

    setMainImage: function (photo) {

        this.model.set('mainImage', photo);

    }

});

Here, we listen for a PhotosCollection 'imageSelected' event and set the main image of the Phone model to be whichever photo model was clicked. Next, the main image is re-rendered since we have a listener monitoring changes to the phone’s main image that calls the view’s render method.

With those additions, clicking a thumbnail photo at localhost:8888/phones/motorola-xoom-with-wi-fi will now replace the main image.

We’ve reached the end of our tutorial. The Angular tutorial contains two more steps – integration with a REST API in step-11 and animations in step-12. We’ve already covered integration with a REST API in step-5 of this tutorial by fetching data through Backbone models and collections. Angular’s animations seem like a great feature, and something that might be fun to do with Backbone, but animations fall outside Backbone’s core area of applicability.

All in all, coding the PhoneCat tutorial in Backbone took 171 lines of JavaScript, whereas the original PhoneCat app coded in Angular only took 48 lines. The extra lines of code in Backbone mostly came from creating a bunch of Backbone views. Backbone requires these extra views because having a well defined view structure greatly facilitates data binding. Angular takes care of most of this behind the scenes with Angular scopes.

I hope you find this tutorial helpful when trying to decide between Angular and Backbone for your next app. Angular seems like a great JavaScript framework and I’m impressed by the brevity of the code. For my next app I’m going to consider using Angular, but I’m really fond of the less restrictive nature of Backbone, so I’ll probably end up sticking with Backbone.

About Aaron O'Connell

Founder 42floors, lover of technology

Chrome Remote Desktop goes mobile

16 April 2014 - 7:00pm
Have you ever been out and about, and urgently needed to access a file that’s sitting on your home computer? Since 2011, Chrome Remote Desktop has let you remotely access your machine from another laptop or computer in a free, easy and secure way. And now, with the release of the Chrome Remote Desktop app for Android, we’re making it possible for you to do the same thing from your Android device.

If you haven’t used Chrome Remote Desktop in the past, you can get started by enabling your Windows or Mac machine for remote access through the Chrome Web Store app. Next, simply launch the Android app on your phone or tablet, tap on the computer’s name and start using your remote machine as if you were sitting right in front of it. 

Download the Android app from the Play Store, and stay tuned for the iOS app later this year.

Posted by Husain Bengali, Remotely Controlled Product Manager

Tptacek's Review of "Practical Cryptography With Go"

16 April 2014 - 7:00am

Wow. I've now read the whole book and much of the supporting code. I'm not a fan, and recommend against relying on it. Here's a laundry list of concerns:

* The teaching method the book uses is badly flawed. The book's strategy is to start simple and build to complexity, which makes sense if you're teaching algebra but not if you're teaching heart surgery. The result is that each chapter culminates with the implementation of a system that is grievously insecure. Little warning is given of this, apart from allusions to future chapters improving the system. For instance, Chapter 2 closes with a chat system that uses AES-CBC without an authenticator.

* The book is full of idiosyncratic recommendations. For instance, AES-CBC requires a padding scheme. There is a standard padding scheme. The book purports to present it, but instead of PKCS7, it presents 80h+00h..00h.

* At one point about 1/3rd of the way through the book, it suggests using a SHA256 hash of the plaintext as an authenticator for a message. This recommendation is doubly erroneous: it confuses hash functions with MACs, and reverses the correct order of operations. Once again, the book acknowledges a "better" way of authenticating messages, but (a) doesn't warn the reader that the "worse" way is in fact broken and (b) doesn't spell out the "better" way until later.

* The book pointlessly allocates several paragraphs to "salted hashing", before a chapter-length consideration of adaptive hashing. I'll use this point as a setting for a broader critique: the book is constantly making recommendations without complete context. Presumably the author knows the attack salts guard against, and the ways password hashes are actually attacked. But he's too eager to get the reader writing code, so instead of explanation, there's a blob of Golang code, in this case for a primitive nobody needs.

* Total undue reverence for NIST and FIPS standards; for instance, the book recommends PBKDF2 over bcrypt and scrypt (amusingly: the book actually recommends *against* scrypt, which is too new for it) because it's standardized. If you're explaining crypto to a reader, and you're at the point where you're discussing KDFs, you'd think there'd be cause to explain that PBKDF2 is actually the weakest of those 3 KDFs, and why. Again, there's an opportunity for explanation and context, but instead: get some Golang code into the editor and move on!

* The book suggests that there's a timing attack against password hash comparison. Doing a secure compare on password hashes isn't bad, per se, but the notion that password hash checks leak usable information puts into question whether the author actually understands timing attacks.

* The code for secure compare is silly: the author writes their own loop using ConstantTimeByteEq, leaving the reader to wonder, "is Golang so obtuse that they provide a 'constant time byte equal' library routine, but not a 'constant time array equal'?" The answer is no, Golang is not that obtuse; the author has just missed the library function that safely compares arrays.

* The book spends a huge amount of time on a password authentication protocol using random challenges, and even revisits it in a later chapter. The protocol is: S->C nonce, C->S HASH(pw, nonce), HEAD->DESK smash. "There's an even better way to authenticate users", the book suggests, but nowhere in this book are password-based key exchanges discussed. The book recommends that cryptographically-random nonces be used (believing that the size of the nonce is the big security issue with that protocol), but also that they be stored in a ledger to prevent reuse. It's especially painful that this chapter *follows* the chapter on password hashing, but does not avail itself of password hashing.

* MACs aren't "keyed hashes", or for that matter a "different kind of hash".

* The book, recommending a dubiously justified strategy of "fail as late as possible to avoid leaking information", adds (more than halfway through the book!) a MAC to an AES-CTR message, and in showing how to decrypt the message, checks the MAC and *decrypts the message oblivious to the MAC check*. Argh! It happens that in Golang, presuming your library doesn't use panic(), this mistake won't blow you up. But in most other languages, that pattern is extremely unsafe. Either way: the MAC fails, you chuck the message; you don't soldier on.

* The book recommends (sort of) AES-CTR, but does not explain what a CTR nonce is; instead, it reuses the "GenerateIV" function it defined for CBC without comment. But CBC IVs and CTR nonces are not the same creature! Here, the conflation doesn't generate a security problem, but a reader following the CBC IV advice with their own code could get into extremely serious trouble using that for CTR.

* The book *actively recommends* public key cryptography, because of concerns about key distribution. Again: bad strategy. Cryptographers use public key crypto only when absolutely required. Most settings for cryptography don't need it! Public key cryptography multiplies the number of things that can go wrong with your cryptosystem. You'd never know that to read this book; here, public key is like "cryptography 2.0".

* In considering RSA, the book recommends /dev/random, despite having previously advised readers to avoid /dev/random in favor of /dev/urandom. The book was right the first time. Despite linking to a series of Internet posts about random vs. urandom (including my own), the second half of the book is choc-a-block full of references to "entropy pool depletion", as if that was a real thing (it is not). Later in the book, the author will go on to recommend an actual protocol change to avoid that fictitious problem.

* The book recommends RSA, OAEP, and PSS without explaining prime numbers, or for that matter what OAEP and PSS are. This is especially painful because Golang happens to implement Rogaway's OAEP and PSS, unlike virtually all other languages, and so a reader learning crypto from this book and taking it to (say) Clojure is likely to end up with insecure RSA upon finding out that those modes aren't available. Once again: no context given, and a rush to get unexplained code onto the page.

* A nit: the book spends a whole lot of time and screen space on a custom keychain implementation that is of no importance to security. It's understandable that one would take the time to document the boring supporting code that enables you to actually work with cryptography, but less clear why its details are scattered through the book and outweigh important concerns about security.

* The book recommends ECDSA without explaining what ECDSA or, for that matter, DSA are. The book fails to point out the randomness requirement for DSA, presumably because Golang abstracted this away and the author was unaware of it. The randomness requirement for DSA is such a huge problem for DSA that it forms the centerpiece of Daniel Bernstein's critique of the NIST standards process. This is the bug that broke the PS3. It's discussed nowhere in the book; instead, the author benchmarks RSA against ECDSA, as if the result wasn't a foregone conclusion, and then moves on. Author: it's OK to just say "ECC is way faster than finite field crypto" and get on with important stuff.

* "There is some concern over the NIST curves in cutting-edge research; however, these curves are still recommended for secure systems." Well, no: there is concern that the NIST curves are backdoored and should be disfavored and replaced with Curve25519 and curves of similar construction. Curve25519 was available to the author's Golang code, but isn't used. "There is also a growing movement of cryptographers positing a “cryptocalypse”. There is some research into factorisation of discrete log problems (which RSA and DH are based on)". Well, no: RSA isn't based on the DLP, for one thing; for another, my name is on that "growing movement", and it's been discredited, as the author goes on to reference but not acknowledge: "a good primer on developments in cryptography, including the cryptocalypse, can be found in the 30C3 talk" in which DJB demolishes the notion that Joux's index calculus improvements will have an impact on RSA.

* The book writes its own Diffie-Hellman implementation and recommends it to readers.

* The book gets the parameter checking in its Diffie-Hellman implementation wrong.

* This book, I am not making this up, contains the string: "We can use ASN.1 to make the format easier to parse".

* The book contains a completely confused discussion of forward-secrecy. "If the long-term key is compromised, messages are still secure". An attacker, the book goes on, can use the compromised long-term key as an opportunity to perhaps get a client to leak information to it. No. The compromise of a long-term key in a forward-secret system is a hair-on-fire problem. Forward secrecy isn't magic pixie dust. The contract forward secrecy provides a design is that if the long-term key is compromised at point T, messages at points t < T can't retroactively be decrypted. That's all.

* The book appears to actively recommend a protocol based on unauthenticated Diffie-Hellman, out of concern for performance. Here's a confusing snippet from the book: "This is why DH (including ECDH) is typically used in systems that require forward secrecy; it is computationally cost-prohibitive to generate new RSA keys". Huh? Systems that use RSA don't generate ephemeral RSA keys. RSA key generation isn't the problem with RSA; the bignum operations involved in actually using an RSA key are.

* The book has a chapter that tries to explain TLS, but rather than explaining the TLS handshake in detail (this is a book that is trying to teach readers how to design entire cryptographic transport protocols!), it spends several paragraphs on the X.509 metadata fields like "Country" and "Locality". Also, this is an author who has come into contact with ASN.1/DER and gone on to recommend the format to others.

* The book closes with a short section on recommended primitives, such as AES-CTR over AES-CBC. But the book hasn't actually explained why it makes those recommendations. Instead, the benefits of the recommendations are implied by the structure of the book: if the author was talking about something early on, he presumes you understand that recommendation to have been bad. Schneier, in "Cryptography Engineering", closes his chapters with carefully written recommendations, most of which are correct. But more importantly, those recommendations come with restated and condensed justifications, and cohere with the earlier text on the primitives he's recommending.

* Finally, the author of this book, having surveyed the crypto code he wrote, has decided that his abstractions over Golang pkg/crypto are superior to those of Golang pkg/crypto. So he's created and published his own high level library, using the constructions he's documented in this book. And he's called it... wait for it... CryptoBox. It's worth mentioning at this point that the book *at no point* discusses NaCl (which revolves around an abstraction named "crypto_box"), which is the single most important recommendation a book on crypto could have made.
