Hacker News from Y Combinator

Links for the intellectually curious, ranked by readers.
Updated: 6 hours 22 min ago

EFF Wins Battle Over Secret Legal Opinions on Government Spying

6 hours 22 min ago

Department of Justice to Release Analysis of Law Enforcement and Intelligence Agency Access to Census Records

San Francisco - The Electronic Frontier Foundation (EFF) has won its four-year Freedom of Information Act lawsuit over secret legal interpretations of a controversial section of the Patriot Act, including legal analysis of law enforcement and intelligence agency access to census records.

The U.S. Department of Justice today filed a motion to dismiss its appeal of a ruling over legal opinions about Section 215 of the Patriot Act, the controversial provision of law relied on by the NSA to collect the call records of millions of Americans. As a result of the dismissal, the Justice Department will be forced to release a previously undisclosed opinion from the Office of Legal Counsel (OLC) concerning access by law enforcement and intelligence agencies to census data under Section 215.

"The public trusts that information disclosed for the census won't wind up in the hands of law enforcement or intelligence agencies," Staff Attorney Mark Rumold said. "The public has a right to know what the Office of Legal Counsel's conclusions were on this topic, and we're happy to have vindicated that important right."

In October 2011—the 10th anniversary of the signing of the USA Patriot Act—EFF sued the Justice Department to gain access to all "secret interpretations" of Section 215. At earlier stages in the litigation, the Justice Department had refused to publicly disclose even the number of documents that were at issue in the case, claiming the information was classified.

In June 2013, the lawsuit took a dramatic turn after The Guardian published an order from the Foreign Intelligence Surveillance Court authorizing the bulk collection of call records data of Verizon customers. That disclosure helped EFF secure the release of hundreds of pages of legal opinions, including multiple opinions of the Foreign Intelligence Surveillance Court excoriating the NSA for disregarding the court's orders.

However, the Justice Department continued to fight for secrecy for the legal opinion over access to census data under Section 215. Last August, a federal district court judge ordered the government to disclose the OLC opinion.

"The Justice Department has made a wise decision in dismissing the appeal," Rumold said. "We filed this suit nearly four years ago to inform the public about the way the government was using Section 215. We're well overdue to have a fully informed, public debate about this provision of law, and hopefully the disclosure of this opinion will help move the public debate forward."

Although the motion for dismissal was filed today, the government has not provided EFF with the opinion. After receiving the document, EFF will also make it available through its website.

For more information on the case visit: https://www.eff.org/foia/section-215-usa-patriot-act

Contact:

Mark Rumold
   Staff Attorney
   Electronic Frontier Foundation
   mark@eff.org

Replace CoffeeScript with ES6

6 hours 22 min ago

I’ve been looking into ES6, the next version of JavaScript, and finally got a chance to use it on a project. In the brief amount of time I was able to use it I’ve found that it solves a lot of the problems that CoffeeScript is trying to solve without drastic syntax changes.

Using ES6 Today

We can start using ES6 today through the 6to5 project which transpiles our ES6 code into ES5. 6to5 supports a plethora of build tools including Broccoli, Grunt, Gulp, and Sprockets. I’ve had a lot of success using sprockets-es6, and Sprockets 4.x will have out-of-the-box support for 6to5.

If you’re using Vim you’ll want to associate the .es6 file extension with JavaScript by putting the following code into your .vimrc.

autocmd BufRead,BufNewFile *.es6 setfiletype javascript

You can also use the 6to5 REPL to try out ES6 in your browser.

Classes

Both CoffeeScript and ES6 have class support. Let’s look at a CoffeeScript class compared to the ES6 equivalent.

CoffeeScript allows us to take advantage of setting instance variables from the parameters, string interpolation, and calling functions without parentheses:

class Person
  constructor: (@firstName, @lastName) ->

  name: ->
    "#{@firstName} #{@lastName}"

  setName: (name) ->
    names = name.split " "
    @firstName = names[0]
    @lastName = names[1]

blake = new Person "Blake", "Williams"
blake.setName("Blake Anderson")
console.log blake.name()

With ES6 we can take advantage of classes, getters, and setters:

class Person {
  constructor(firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
  }

  get name() {
    return this.firstName + " " + this.lastName;
  }

  set name(name) {
    var names = name.split(" ");
    this.firstName = names[0];
    this.lastName = names[1];
  }
}

var blake = new Person("Blake", "Williams");
blake.name = "Blake Anderson";
console.log(blake.name);

If you’ve used any library or framework that provides classes in JavaScript you’ll notice the ES6 syntax has some minor differences:

  • There is no semicolon after the function name
  • The function keyword is omitted
  • There are no commas after each definition

We’re also taking advantage of getters and setters which allow us to treat the name function like a property.

Interpolation

I’ve often wished for a more powerful string syntax in JavaScript. Fortunately ES6 introduces template strings. Let’s compare CoffeeScript strings, JavaScript strings, and template strings to see what each is capable of.

CoffeeScript:

"CoffeeScript allows multi-line strings with interpolation like 1 + 1 = #{1 + 1} "

JavaScript strings:

"JavaScript strings can only span a single line " + "and interpolation isn't possible"

ES6 template strings:

`Template strings allow strings to span
multiple lines and allow interpolation
like 1 + 1 = ${1 + 1}
`

We can take advantage of template strings in our previous example by changing our name getter to the following:

get name() {
  return `${this.firstName} ${this.lastName}`;
}

This feels much cleaner than the string concatenation we were doing before and gets us closer to the CoffeeScript example.

Fat Arrows

Another feature that made CoffeeScript so appealing also makes an appearance in ES6: fat arrows. Fat arrows allow us to bind a function to the current value of this. First, let’s take a look at how we can handle this without a fat arrow.

With ES5 we have to keep a reference to the current value of this when defining the function:

var self = this;

$("button").on("click", function() {
  // do something with self
});

CoffeeScript fat arrows can omit arguments and parentheses entirely:

$("button").on "click", => # do something with this

ES6 fat arrows need parentheses around the argument list when there are zero or multiple arguments (only a single argument can stand alone):

$("button").on("click", () => { // do something with this }); Other features

ES6 has a few other features worth noting in passing.

Default arguments

CoffeeScript:

hello = (name = "guest") -> alert(name)

ES6:

var hello = function(name = "guest") {
  alert(name);
}

Splats

Variadic functions, which CoffeeScript calls splats, allow you to collect additional arguments passed to your function as an array. ES6 refers to them as rest parameters.

CoffeeScript:

awards = (first, second, others...) ->
  gold = first
  silver = second
  honorable_mention = others

ES6:

var awards = function(first, second, ...others) {
  var gold = first;
  var silver = second;
  var honorableMention = others;
}

Destructuring

Destructuring allows you to pattern match against arrays and objects to extract specific values.

CoffeeScript:

[first, _, last] = [1, 2, 3]

ES6:

var [first, , last] = [1, 2, 3]

We can use destructuring in the name setter we defined earlier to make our code more concise:

set name(name) {
  [this.firstName, this.lastName] = name.split(" ");
}

Wrapping Up

ES6 transpilers are actively being developed and are catching up to CoffeeScript in functionality. This post only covered a handful of the features that ES6 is bringing to JavaScript but you can find out more about what’s been covered and the other features here.

On your next project set CoffeeScript aside and give ES6 a shot!

Microsoft to invest in Cyanogen

6 hours 22 min ago

According to a report from The Wall Street Journal, Microsoft will be investing $70 million in Cyanogen, Inc., the Android ROM builder. The report says $70 million would make Microsoft a "minority investor" in a round of financing that values Cyanogen in the "high hundreds of millions."

Cyanogen takes the Android source code and modifies it, adding more features and porting it to other devices. It has also started supplying Android builds directly to OEMs (like the OnePlus One), which ship the software on devices instead of stock Android. Last week during a talk in San Francisco, Cyanogen's CEO said the company's goal was to "take Android away from Google." It wants to replace the Google Play ecosystem with apps of its own, the same way Amazon uses AOSP for its Kindle Fire products but adds its own app and content stores.

A Microsoft investment in the company would be the latest in Redmond's ironic ties to Android. Microsoft is thought to make more from Android patent licensing fees than it does from Windows Phone, and through its purchase of Nokia, the company even briefly sold Android-based handsets. Now, according to WSJ, Microsoft will become an investor in a company that sells an Android distribution.

Cyanogen says 50 million people are using its modified version of Android, which would put it at five percent of the one billion active users that Google touts. Nearly all of those Cyanogen users are still using Google Play though. The real challenge for the company will be convincing its hardcore Android user base to dump Google Play and use the Cyanogen app store.

How, and Why, Apple Overtook Microsoft

6 hours 22 min ago

When Microsoft stock was at a record high in 1999, and its market capitalization was nearly $620 billion, the notion that Apple Computer would ever be bigger — let alone twice as big — was laughable. Apple was teetering on bankruptcy. And Microsoft’s operating system was so dominant in personal computers, then the center of the technology universe, that the government deemed the company an unlawful monopoly.

This week, both Microsoft and Apple unveiled their latest earnings, and the once unthinkable became reality: Apple’s market capitalization hit $683 billion, more than double Microsoft’s current value of $338 billion.

At Apple’s earnings conference call on Tuesday, its chief executive, Timothy D. Cook, called the quarter “historic” and the earnings “amazing.” Noting that Apple sold more than 34,000 iPhone 6s every hour, 24 hours a day, during the quarter, he said the sheer volume of sales was “hard to comprehend.”


Apple earned $18 billion in the quarter — more than any company ever in a single quarter — on revenue of $75 billion. Its free cash flow of $30 billion in one quarter was more than double what IBM, another once-dominant tech company, generates in a full year, noted a senior Bernstein analyst, Toni Sacconaghi. The stock jumped more than 5 percent, even as the broader market was down.

Photo: At Apple’s earnings conference call on Tuesday, its chief executive, Timothy D. Cook, called the quarter “historic” and the earnings “amazing.” Credit: Marcio Jose Sanchez/Associated Press

A far more subdued Satya Nadella, Microsoft’s chief executive, who is trying to transform the company and reduce its dependence on the Windows operating system, referred to “challenges.” Microsoft’s revenue was barely one-third of Apple’s, and operating income of $7.8 billion was less than a quarter of Apple’s. Microsoft shares dropped over 9 percent as investors worried about its aging personal computer software market.

Robert X. Cringely, the pen name of the technology journalist Mark Stephens, told me this week that when he interviewed Microsoft’s co-founder, Bill Gates, in 1998 for Vanity Fair, Mr. Gates “couldn’t imagine a situation in which Apple would ever be bigger and more profitable than Microsoft.”

“He knows he can’t win,” Mr. Gates said then of the Apple co-founder Steve Jobs.

But less than two decades later, Apple has won. How this happened contains some important lessons — including for Apple itself, if it wants to avoid Microsoft’s fate. Apple, after all, is now as dependent on the success of one product line — the iPhone accounted for 69 percent of its revenue — as Microsoft once was with Windows.

The most successful companies need a vision, and both Apple and Microsoft have one. But Apple’s was more radical and, as it turns out, more farsighted. Microsoft foresaw a computer on every person’s desk, a radical idea when IBM mainframes took up entire rooms. But Apple went a big step further: Its vision was a computer in every pocket. That computer also just happened to be a phone, the most ubiquitous consumer device in the world. Apple ended up disrupting two huge markets.

“Apple has been very visionary in creating and expanding significant new consumer electronics categories,” Mr. Sacconaghi said. “Unique, disruptive innovation is really hard to do. Doing it multiple times, as Apple has, is extremely difficult. It’s the equivalent of Pixar producing one hit after another. You have to give kudos to Apple.”

Walter Isaacson, who interviewed Mr. Jobs for his biography of the Apple co-founder and chief executive, said: “Steve believed the world was going mobile, and he was right. And he believed that beauty matters. He was deeply moved by beautiful design. Objects of great functionality also had to be objects of desire.”

Chart: iPhone Domination. The iPhone now accounts for so much of Apple’s revenue and profits that some worry the company is too dependent on one product line. (Fiscal years ended in September; four quarters through Dec. ’14.)

Like many successful companies, Microsoft nurtured its dominant position, but at the risk of missing potentially disruptive innovations. “You have to acknowledge that Microsoft has been successful and it still is,” said Robert Cihra, a senior managing director and technology analyst at Evercore. “But clearly, they’ve struggled over how to protect the Windows franchise while not having that hold them back in other areas. I think even Microsoft would agree that they’ve been too concerned with protecting Windows over the years, to their detriment.”

By contrast, “Steve ingrained in the DNA of Apple not to be afraid to cannibalize itself,” Mr. Isaacson said. “When the iPod was printing money, he said that someday the people making phones will figure out they can put music on phones. We have to do that first. Now, what you’re seeing is that the bigger iPhone may be hurting sales of iPads, but it was the right thing to do.”

Mr. Cihra agreed: “Apple laid waste to its iPod business. They’re happier selling 74.5 million iPhones than they would be even if they still were selling that many iPods, which they wouldn’t be anyway because someone else would have cannibalized them.”

Microsoft has repeatedly tried to diversify, and continues to do so under Mr. Nadella. But “it’s been more of a follower whereas Apple has been more of a trendsetter, trying to reinvent an industry,” Mr. Sacconaghi said.

In belatedly buying Nokia, Microsoft is offering its own smartphone, the Windows phone, in head-to-head competition with Apple. While the device has garnered some critical praise, “I’m not sure consumers need a third option” to the Android and iOS platforms, Mr. Cihra said. Microsoft’s already tiny share of the smartphone market has been dropping.

Perhaps more surprising, the Apple model of integrating all aspects of the design and manufacture of a product, long abandoned by other manufacturers, has been vindicated. Microsoft was once content to stick to software, ceding processors to companies like Intel and the PCs themselves to an array of other manufacturers.

“Microsoft seemed to have the better business model for a very long time,” Mr. Isaacson said. “But in the end, it didn’t create products of ethereal beauty. Steve believed you had to control every brush stroke from beginning to end. Not because he was a control freak, but because he had a passion for perfection.”

Photo: Satya Nadella, Microsoft’s chief executive, is trying to transform the company and reduce its dependence on the Windows operating system. Credit: Elaine Thompson/Associated Press

Apple “proved that you want to own the hardware and not just the platform,” Mr. Cihra said. “With the advent of PCs, everyone gave up on that model except Apple. But if you get that model right, the upside leverage is huge. If you want an Android device, you can go anywhere. But if you want an iPhone, you have to go to Apple.

“If you can do that, you get pricing power, and the profitability is unbelievable.” Apple reported profit margins this quarter of just under 40 percent.

And then there’s Apple’s successful leadership transition to Mr. Cook, who took over as chief executive in 2011, shortly before Mr. Jobs died. It’s not that Steve Ballmer, Bill Gates’s immediate successor, and now Mr. Nadella haven’t done a decent job at the helm of Microsoft. Until this week’s dip, Microsoft shares were close to a record high. But Mr. Gates is still very much alive, and remains engaged with the company.

Mr. Jobs “told me that Tim Cook would be an inspiring leader,” Mr. Isaacson said. “He knew Tim wouldn’t wake up every morning trying to figure out what Steve Jobs would do. Steve would never have made a bigger iPhone. He didn’t believe in it. But Tim did it, and it was the right thing to do.”

Some investors worry that Apple could become the prisoner of its own success. As Mr. Sacconaghi noted, 69 percent of the company’s revenue and 100 percent of its revenue growth for the quarter came from the iPhone, which makes Apple highly dependent on one product line. “There’s always the risk of another paradigm shift,” he said. “Who knows what that might be, but Apple is living and dying by the iPhone. It’s a great franchise until it isn’t.”

Apple is also running into “the challenge of large numbers,” Mr. Cihra said. With a market capitalization approaching $700 billion, the number “scares people,” he said. “How can it get much bigger? How is that possible?” Apple is already the world’s largest company, by a significant margin.

But he noted that by many measures, Apple shares appeared to be a bargain. “The valuation is still inexpensive,” he said. “It’s less than 13 times next year’s earnings and less than 10 times cash flow,” both below the market average. “Those are very low multiples. They have $140 billion in cash on the balance sheet and they’re generating $60 billion in cash a year. All the numbers are just enormous, which is hard for people to get their heads around.”

Mr. Cihra noted that Microsoft already dominates its core businesses, leaving little room for growth. But, he said, “Apple still doesn’t have massive market share in any of its core markets. Even in smartphones, its share is only in the midteens. Apple’s strategy has been to carve out a small share of a massive market. It’s pretty much a unique model that leaves plenty of room for growth.”

Can Apple continue to live by Mr. Jobs’s disruptive creed now that the company is as successful as Microsoft once was? Mr. Cihra noted that it was one thing for Apple to cannibalize its iPod or Mac businesses, but quite another to risk its iPhone juggernaut.

“It’s getting tougher for Apple,” Mr. Cihra said. “The question investors have is, what’s the next iPhone? There’s no obvious answer. It’s almost impossible to think of anything that will create a $140 billion business out of nothing.”

A version of this article appears in print on January 30, 2015, on page B1 of the New York edition with the headline: “Overtaking a Behemoth.”

Left wing politics and “I don't know what to do you guys”

6 hours 22 min ago

So, to state the obvious: Jon Chait is a jerk who somehow manages to be both condescending and wounded in his piece on political correctness. He gets the basic nature of language policing wrong, and his solutions are wrong, and he’s a centrist Democrat scold who is just as eager to shut people out of the debate as the people he criticizes. That’s true.

Here are some things that are also true.

I have seen, with my own two eyes, a 19 year old white woman — smart, well-meaning, passionate — literally run crying from a classroom because she was so ruthlessly brow-beaten for using the word “disabled.” Not repeatedly. Not with malice. Not because of privilege. She used the word once and was excoriated for it. She never came back. I watched that happen.

I have seen, with my own two eyes, a 20 year old black man, a track athlete who tried to fit organizing meetings around classes and his ridiculous practice schedule (for which he received a scholarship worth a quarter of tuition), be told not to return to those meetings because he said he thought there were such a thing as innate gender differences. He wasn’t a homophobe, or transphobic, or a misogynist. It turns out that 20 year olds from rural South Carolina aren’t born with an innate understanding of the intersectionality playbook. But those were the terms deployed against him, those and worse. So that was it; he was gone.

I have seen, with my own two eyes, a 33 year old Hispanic man, an Iraq war veteran who had served three tours and had become an outspoken critic of our presence there, be lectured about patriarchy by an affluent 22 year old white liberal arts college student, because he had said that other vets have to “man up” and speak out about the war. Because apparently we have to pretend that we don’t know how metaphorical language works or else we’re bad people. I watched his eyes glaze over as this woman with $300 shoes berated him. I saw that. Myself.

These things aren’t hypothetical. This isn’t some thought experiment. This is where I live, where I have lived. These and many, many more depressing stories of good people pushed out and marginalized in left-wing circles because they didn’t use the proper set of social and class signals to satisfy the world of intersectional politics. So you’ll forgive me when I roll my eyes at the army of media liberals, stuffed into their narrow enclaves, responding to Chait by insisting that there is no problem here and that anyone who says there is should be considered the enemy.

By the way: in these incidents, and dozens and dozens more like them, which I have witnessed as a 30-hour-a-week antiwar activist for three years and as a blogger for the last seven and as a grad student for the past six, the culprits overwhelmingly were not women of color. That’s always how this conversation goes down: if you say, hey, we appear to have a real problem with how we talk to other people, we are losing potential allies left and right, then the response is always “stop lecturing women of color.” But in the overwhelming majority of cases, these codes aren’t enforced by women of color. They’re enforced by the children of privilege. I know. I live here. I am on campus. I have been in the activist meetings and the lefty coffee houses. My perspective goes beyond the same 200 people who write the entire Cool Kid Progressive Media.

Amanda Taub says political correctness “doesn’t exist.” To which I can only ask, how would you know? I don’t understand where she gets that certainty. Is Taub under the impression that the Vox offices represent the breadth of left-wing culture? I read dozens of tweets and hot take after hot take, insisting that there’s no problem here, and it’s coming overwhelmingly from people who have no idea what they’re talking about.

Well, listen, you guys: I don’t know what to do. I am out of ideas. I am willing to listen to suggestions. What do I do, when I see so many good, impressionable young people run screaming from left-wing politics because they are excoriated the first second they step mildly out of line? Megan Garber, you have any suggestions for me, when I meet some 20 year old who got caught in a Twitter storm and determined that she never wanted to set foot in that culture again? I’m all ears. If I’m not allowed to ever say, hey, you know, there are more productive, more inclusive ways to argue here, then I don’t know what the fuck I am supposed to do or say. Hey, Alex Pareene. I get it. You can write this kind of piece in your sleep. You will always find work writing pieces like that. It’s easy and it’s fun and you can tell jokes and those same 200 media jerks will give you a thousand pats on the back for it. Do you have any advice for me, here, on campus? Do you know what I’m supposed to say to some shellshocked 19 year old from Terre Haute who, I’m very sorry to say, hasn’t had a decade to absorb bell hooks? Can you maybe do me a favor, and instead of writing a piece designed to get you yet-more retweets from Weird Twitter, tell me how to reach these potential allies when I know that they’re going to get burned terribly for just being typical clumsy kids? Since you’re telling me that if I say a word against people who go nuclear at the slightest provocation, I’m just one of the Jon Chaits, please inform me how I can act as an educator and an ally and a friend. Because I am out of fucking ideas.

I know, writing these words, exactly how this will go down. I know Weird Twitter will hoot and the same pack of self-absorbed media liberals will herp de derp about it. I know I’ll get read the intersectionality riot act, even though everyone I’m criticizing here is white, educated, and privileged. I know nobody will bother to say, boy, maybe I don’t actually understand the entire world of left-wing politics because I went to Sarah Lawrence. I know that. But Christ, I wish people would think outside of their social circle for 5 minutes.

Jon Chait is an asshole. He’s wrong. I don’t want these kids to be more like Jon Chait. I sure as hell don’t want them to be less left-wing. I want them to be more left-wing. I want a left that can win, and there’s no way I can have that when the actually-existing left sheds potential allies at an impossible rate. But the prohibition against ever telling anyone to be friendlier and more forgiving is so powerful and calcified it’s a permanent feature of today’s progressivism. And I’m left as this sad old 33 year old teacher who no longer has the slightest fucking idea what to say to the many brilliant, passionate young people whose only crime is not already being perfect.

Nike+ FuelBand SE BLE Protocol Reversed

6 hours 22 min ago

29 Jan 2015 on reversing, nike, nike+ fuelband se, fuelband, nike fuelband, hacking, BLE, bluetooth low energy, protocol, authentication, bluetooth, nikeband

During the last two weeks I had fun playing with the BLE protocol of the Nike+ FuelBand SE, a device to track daily steps, calories, time, etc.

I've completely reversed its protocol and found out the following key points:

  • The authentication system is vulnerable, anyone could connect to your device.
  • The protocol supports direct reading and writing of the device memory, up to 65K of contents.
  • The protocol supports commands that are not supposed to be implemented in a production release ( bootloader mode, device self test, etc ).

I've published a proof of concept Android application on github, don't expect it to be production ready code of course, but it works :)

Because I had fun reversing it, I hate closed source hardware protocols, and as far as I know I'm the first one to actually manage to do it, though many have been trying since the first version with no luck.

The question is never why, the question is always how.

Bluetooth Low Energy is a wireless personal area network technology designed and marketed by the Bluetooth Special Interest Group, aimed at novel applications in the healthcare, fitness, beacon, security ( LOL, more on this later ), and home entertainment industries. Compared to Classic Bluetooth, Bluetooth Smart is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range.

Basically it's something that works on the Bluetooth frequencies but has very little in common with classic Bluetooth, mostly because the device protocol must be implemented by each vendor, since there isn't really a standard (yet?).

Each device exposes its characteristics, which are basically read/write channels ( think of them as sockets ). While there's only one way to write, there are two ways to read data: either you perform an active read, or you wait for the onCharacteristicChanged event and get the available data from the read channel.

The annoying part of this technology is synchronization: read and write operations can not be performed simultaneously, so each one needs the previous operation to be completed before being scheduled ... event programming, dudes!

That's why you will find an event queue and a lot of synchronization code in my PoC, not my fault :P
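
To make the queueing idea concrete, here's a minimal sketch of mine ( not code from the PoC, class and method names are made up ): every read or write gets queued, and the next one is dispatched only when the GATT callback reports that the previous one completed.

import java.util.ArrayDeque;
import java.util.Queue;

class GattOperationQueue {
    private final Queue<Runnable> pending = new ArrayDeque<Runnable>();
    private boolean busy = false;

    // Application code queues a read or write here.
    synchronized void enqueue(Runnable operation) {
        pending.add(operation);
        next();
    }

    // Call this from the GATT callback ( e.g. onCharacteristicRead /
    // onCharacteristicWrite ) when the in-flight operation completes.
    synchronized void operationCompleted() {
        busy = false;
        next();
    }

    private void next() {
        if (!busy && !pending.isEmpty()) {
            busy = true;
            pending.poll().run(); // dispatch exactly one BLE operation
        }
    }
}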

Fortunately there's an official Nike Android application that I managed to reverse. Since I didn't know smali at the time ( more on this later ), I used the lame method of converting the APK to a JAR package using the great dex2jar tool and then reading the Java source code in JD-GUI.

First things first: the device is detected and recognized by its COMPANYCODE in the advertisement data ( byte[] NIKE_COMPANY_CODE = { 0, 120 } ), then a GATT service discovery is launched.

The main command service UUID is 83cdc410-31dd-11e2-81c1-0800200c9a66 and it has two characteristics:

  • Command Channel (where you write commands) : c7d25540-31dd-11e2-81c1-0800200c9a66
  • Response Channel (where you wait for responses) : d36f33f0-31dd-11e2-81c1-0800200c9a66

Once the client device attaches to these two channels, it enables notifications on the response one and the authentication procedure starts.
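
On Android, attaching to the channels and enabling notifications looks roughly like the sketch below ( mine, not the official app's code ). The service and characteristic UUIDs are the ones listed above; 0x2902 is the standard Client Characteristic Configuration descriptor, and gatt is assumed to be an already-connected BluetoothGatt after service discovery.

import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCharacteristic;
import android.bluetooth.BluetoothGattDescriptor;
import android.bluetooth.BluetoothGattService;
import java.util.UUID;

class FuelBandChannels {
    static final UUID SERVICE     = UUID.fromString("83cdc410-31dd-11e2-81c1-0800200c9a66");
    static final UUID CMD_CHANNEL = UUID.fromString("c7d25540-31dd-11e2-81c1-0800200c9a66");
    static final UUID RSP_CHANNEL = UUID.fromString("d36f33f0-31dd-11e2-81c1-0800200c9a66");
    static final UUID CCC         = UUID.fromString("00002902-0000-1000-8000-00805f9b34fb");

    // Grab the two characteristics and subscribe to the response channel.
    static void attach(BluetoothGatt gatt) {
        BluetoothGattService service = gatt.getService(SERVICE);
        BluetoothGattCharacteristic cmd = service.getCharacteristic(CMD_CHANNEL); // commands get written here later
        BluetoothGattCharacteristic rsp = service.getCharacteristic(RSP_CHANNEL);

        // Notifications must be enabled both locally and on the device,
        // the latter by writing the standard CCC descriptor.
        gatt.setCharacteristicNotification(rsp, true);
        BluetoothGattDescriptor ccc = rsp.getDescriptor(CCC);
        ccc.setValue(BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE);
        gatt.writeDescriptor(ccc);
    }
}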

That, at least, is how some parts of the application suggest it should theoretically work; in practice I've found that most of the authentication code is bypassed and some pretty funny constants are used :)

Everything starts with a PIN, a string that "someone" ( probably the Nike web API ) sends you during the first login/setup with the device. This string is stored inside the XML file /data/data/com.nike.fb/shared_prefs/profilePreferences.xml; in my case its node is:

... <string name="pin">69AB8DA2-F7D6-497C-869D-493CCF8FE8BC</string> ...

The PIN is then hashed with MD5 and the first 6 bytes of the resulting hash are converted to hexadecimal; those 6 bytes become the discovery_token, stored in the same file:

... <string name="discovery_token">5E5E6F7A7FE2</string> ...

Every time the app finds the device and wants to connect with it, it sends the following START AUTHENTICATION command:

0x90 0x0101 0x00 0x00 0x00 ....

0x90 indicates that this is a SESSION command; its bits contain the sequence number, the total number of packets in the transaction, and the packet index ( this is the encoder ).

0x0101 is the two-byte START AUTH command, and the 0x00s are zero padding up to 19 bytes.
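
Building that packet is trivial; as a sketch, with the byte values taken verbatim from the capture above:

// 19-byte START AUTHENTICATION packet:
// SESSION protocol byte, two-byte START AUTH opcode, zero padding.
byte[] startAuth = new byte[19];
startAuth[0] = (byte) 0x90; // SESSION + sequence / packet-index bits
startAuth[1] = (byte) 0x01; // START AUTH opcode, first byte
startAuth[2] = (byte) 0x01; // START AUTH opcode, second byte
// bytes 3..18 are already zero-initialized: the padding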

Once the app sends this packet, the device replies with a challenge response containing a 16-byte nonce buffer.

0xC0 0x11 0x41 0xF495C98693075322225EB8B8A4D79B39
  • 0xC0 Reply opcode ( SESSION protocol, 0 following packets, packet index 0, sequence number 4 ).
  • 0x11 Following data size ( 16 of the nonce + 1 of 0x41 ).
  • 0x41 Auth opcode OPCODE_AUTH_CHALLENGE ( namely: "Hey dude, I'm sending you the nonce!" )
  • 0xF495C98693075322225EB8B8A4D79B39 : The nonce itself.

To successfully authenticate to the device, you take this nonce and the previously discussed discovery_token, compute a CRC32 over them, truncate it to two bytes, and send it back to the device, so the resulting packet would be something like:

0xB0 0x0302 XX XX 0x00 0x00 ........
  • 0xB0 : SESSION protocol, 0 following packets, packet index 0, sequence number 5.
  • 0x0302 : Authentication request opcode.
  • XX XX : The two bytes of the truncated CRC32.
  • 0x00 ... : Zero padding up to 19 bytes.

Sounds quite simple yet robust, doesn't it? Since you need both the PIN ( which is probably linked to the user account ) and the nonce sent by the device, there's no way to remotely connect to a FuelBand unless you have physical access to the owner's device, or you've hacked their account and used it on your own device to force the web API to send you back their PIN.

Right? ..... WRONG ! :D

NOTE
Besides what I'm about to write, the device broadcasts the user's discovery_token within its advertisement data ( the MANUDATA field ), so you could sniff it anyway ... LOL!

I was stuck on this for a couple of days ... I had implemented everything the right way: I was using my own discovery_token, successfully initiated the connection to the device, got the nonce, CRC32'ed them together ... and then I got an InvalidParameterException from the class computing the CRC32 checksum ( which I copied from the JD-GUI decompilation ) with the message:

Length of data must be a multiple of 4

WTF DUDE?! How could the discovery_token, which is only 6 bytes long, have a size divisible by 4?!
So I tried to truncate it to 4 bytes, pad it, hash it ... you name it!
Nothing worked.

So I decided it was time for me to learn to read and write smali ( it took me a couple of hours, quite simple actually ).

I decompiled the APK again, this time using apktool to get the smali code, injected some code of mine to make the application log the actual token it was using, recompiled it with apktool, signed it with signapk, and reinstalled it on my device.

Guess what?

Fuck it, who fucking cares about that token anyway? Let's just use 0xff 0xff 0xff 0xff 0xff 0xff .... !

Yeah ... although the code is there and the whole mechanism described in the previous section could be robust ... they are just using a hard-coded token of 0xff 0xff 0xff 0xff 0xff 0xff ...., meaning that anyone who's able to get the nonce from the device ( so anyone with a BLE-capable Android smartphone, since the device itself is sending it ) will be able to authenticate against your device and send any command ... let me facepalm again ....

So basically, here's the code to create an authentication packet:

CopperheadCRC32 crc = new CopperheadCRC32();
byte[] auth_token = Utils.hexToBytes("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF");

/*
 * Create the response packet: 0xb0 0x03 0x02 [2 BYTES OF CRC] 0x00 ...
 */
Packet resp_packet = new Packet(19);

resp_packet.setProtocolLayer( CommandResponseOperation.ProtocolLayer.SESSION );
resp_packet.setPacketCount(0);
resp_packet.setPacketIndex(0);
resp_packet.setSequenceNumber( challenge_packet.getSequenceNumber() + 1 );

ByteBuffer response = ByteBuffer.allocate(18);

response.put( (byte)0x03 );
response.put( (byte)0x02 );

crc.update(nonce);
crc.update(auth_token);

short sum = (short)((0xFFFF & crc.getValue()) ^ (0xFFFF & crc.getValue() >>> 16));

response.putShort(sum);

resp_packet.setPayload( response.array() );

And finally the device will reply with:

0xE0 0x01 0x42 0x00000000000000000000000000000000
  • 0xE0: SESSION layer reply, bla bla bla.
  • 0x01: 1 byte of reply.
  • 0x42: Successfully authenticated ( FUCK YEAH! )
  • 0x00..: Padding.

Once you're successfully authenticated, you can start sending commands. Each command has its own encoding, but the first three bytes are always:

  • protocol byte: SESSION or COMMAND constants + some bit hacking to set sequence number etc.
  • length byte: Size of the following data.
  • opcode : Code of the command ( see the framing sketch below ).
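
As a rough illustration of that framing, here's a little helper of mine ( not the app's code ); it assumes the fixed 19-byte packets seen throughout this post, and counts the opcode in the length byte as the challenge reply above did:

// protocol byte | length byte | opcode | command-specific payload
static byte[] frame(byte protocol, byte opcode, byte[] payload) {
    byte[] packet = new byte[19];
    packet[0] = protocol;
    packet[1] = (byte) (1 + payload.length); // opcode + payload size
    packet[2] = opcode;
    System.arraycopy(payload, 0, packet, 3, payload.length);
    return packet; // remaining bytes stay zero as padding
}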

Each command ( and its encoder ) is implemented inside the class com.nike.nikerf.protocol.impl.NikeProtocolCoder_Copperhead; for instance, here's the redacted implementation of Cmd_GenericMemoryBlock ( yeah -.- ):

private abstract class Cmd_GenericMemoryBlock {
    private static final int MAX_ADDRESS = 65536;

    private static final String MSG_ERR1 = "Request packet does not contain all required fields";
    private static final String MSG_ERR2 = "Request fields contain invalid values";
    private static final String MSG_ERR3 = "Transaction already in progress";
    private static final String MSG_ERR4 = "Request does not belong to a transaction";
    private static final String MSG_ERR5 = "Failed to open a transaction";
    private static final String MSG_ERR6 = "Failed to close a transaction";
    private static final String MSG_ERR7 = "I/O failed";

    private static final byte SUBCMD_END_TRANSACTION = 3;
    private static final byte SUBCMD_READ_CHUNK = 0;
    private static final byte SUBCMD_START_READ = 4;
    private static final byte SUBCMD_START_WRITE = 2;
    private static final byte SUBCMD_WRITE_CHUNK = 1;

    ...

    public NikeMessage decode(final NikeTransaction nikeTransaction) throws ProtocolCoderException {
        ... decode a response ...
    }

    public void encode(final NikeTransaction nikeTransaction) throws ProtocolCoderException {
        ... encode this command ...
    }

    byte getOpCode() { ... }
}

In my proof of concept application you will find the code to create and send CmdSettingsGet commands, retrieving some sample data such as BAND_COLOR, FUEL level, owner FIRST_NAME and device SERIAL_NUMBER.

  • Cmd_BatteryState: Retrieve battery state.
  • Cmd_Bootloader: Set the device to bootloader mode ( basically it locks down the device, the official app won't work either ... only resetting it with the usb cable will unlock it ).
  • Cmd_DesktopData: ???
  • Cmd_EventLog: Get device event log.
  • Cmd_GenericMemoryBlock: Read or Write a memory address from 0 to 0xFFFF.
  • Cmd_MetricNotificationIntervalUpdate: Set interval time to receive metrics update notifications.
  • Cmd_Notification_Subscribe: Subscribe to the notification of a specific metric.
  • Cmd_ProtocolVersion: Get device protocol version.
  • Cmd_RTC: Configure the device real time clock.
  • Cmd_Reset: Reset the device.
  • Cmd_ResetStatus: Reset the user data.
  • Cmd_SampleStore: Use the device memory to store a custom object (!!!).
  • Cmd_SampleStoreAsync: Same, but async.
  • Cmd_SelfTest: Perform a hardware self test and get the results.
  • Cmd_Session_Ctrl: Login/Logout/Ping
  • Cmd_Settings_Get: Get a setting value by its code.
  • Cmd_Settings_Get_Activity_Stats: Get user activity statistics.
  • Cmd_Settings_Get_Boolean: Get a boolean setting.
  • Cmd_Settings_Get_Int: Get an integer setting.
  • Cmd_Settings_Get_MoveReminder: Get a "move reminder" type setting.
  • Cmd_Settings_Set: Set the value of a setting by its code.
  • Cmd_Settings_Set_MoveReminder: Set a "move reminder" setting.
  • Cmd_UploadGraphic: Upload a bitmap to show on the device led screen ( a subclass of Cmd_GenericMemoryBlock ).
  • Cmd_UploadGraphicsPack: ???
  • Cmd_Version: Get device firmware version.

Although the device does not contain sensitive data about the user, this is a good proof of concept of how a badly implemented custom BLE protocol can let an attacker compromise a device ( such as the BLE proximity sensor of an alarm :) ) without any kind of authentication or expensive hardware.

Simone Margaritelli

Security researcher, hardcore C/C++ developer, wannabe reverser and coffee addict from Rome, Italy.



Why Every Movie Looks Sort of Orange and Blue

29 January 2015 - 2:00pm

Still from Jupiter Ascending, an upcoming sci-fi thriller

Maybe you haven’t noticed, but in the past 20-or-so years there’s been a real catchy trend in major Hollywood movies to constrain the palette to orange and blue. The color scheme, also known as “orange and teal” or “amber and teal,” is the scourge of film critics – one of whom calls this era of cinema a “dark age.”

You’re probably skeptical, so check out the following. A warning: once you know what to look for, it will be very difficult for you not to notice this color scheme every time you look at a screen, at least for a little while:

Still from The Imitation Game (2014) a historical biopic about Alan Turing

 

Still from Into the Woods (2014) a fantasy musical

 

Still from The Wolf of Wall Street (2013)

This still from the Mad Max (2015) trailer looks a little yellower than the preceding three. But it’s also a much more intense scene. And still undeniably orange and blue. 

And then, of course, there’s every movie poster ever. Because they need to be flashy, they’re a lot brighter and more saturated. But they’re still on the whole very orange and blue:

 

Orange and blue contrast movie posters from TV Tropes

It isn’t every scene, in every movie. Some films, and some filmmakers, tend towards novel color schemes. But the rest tend towards orange and blue. The trend was already in full force a few years ago, when a blogger sampled the colors in a bunch of film trailers. This is what he came up with: 

Edmund Helmer’s 2013 analysis of film trailers

It’s like the Emerald City, except instead of making us wear green-tinted glasses, the current Hollywood wizard mutes green…along with every other color on the spectrum that isn’t orange or blue. 

Digital Colorization

The Wizard of Oz seems to predate this trend.

What the hell is going on? Well, back in the day, the colors projected on the silver screen depended first on how you shot and developed the actual, physical film, and then whether or not you had somebody go through and painstakingly, expensively apply more colors to every frame.

Now, most movies are shot digitally and it’s a lot easier to go back and rebalance things to achieve whatever effect you want. But someone still needs to actually do it. And if it doesn’t look good, that person gets in trouble.

O Brother, Where Art Thou? (2000) gets referenced a lot as an early movie that was heavily digitally color graded. The Coens reportedly wanted it to look retrograde at the expense of realism, which is why it was graded so heavily: the entire movie is a nice warm sepia. The cinematographer on the film has said, “They wanted it to look like an old hand-tinted picture, with the intensity of colors dictated by the scene and natural skin tones that were all shades of the rainbow.”

But how did we get from “all the shades of the rainbow” to “orange”?


Adobe video editing software 

The big change digitization brought was making it much easier to apply a single color scheme to a bunch of different scenes at once. The more of a movie you can make look good with a single scheme, the less work you have to do. Also, as filmmakers bring many different film formats together in a single movie, applying a uniform color scheme helps tie them together.

One way to figure out what will look good is to find the common denominator in the majority of your scenes. And it turns out that actors are in most scenes. And actors are usually human. And humans are orange, at least sort of!

Most skin tones fall somewhere between pale peach and dark, dark brown, leaving them squarely in the orange segment of any color wheel. Blue and cyan are squarely on the opposite side of the wheel.

 

You may remember from preschool that “opposite” color pairs like this are also known as “complementary” colors. That means that, side-by-side, they produce greater contrast than either would with any other color. And when we’re talking about color, contrast is generally a desirable thing.
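
If you want to see that opposition in actual numbers, here is a tiny Java illustration (the sample color is my own pick, not from the article): a typical skin tone sits around 34° on the hue wheel, squarely in orange, and rotating it 180° lands squarely in the blues.

import java.awt.Color;

public class Complement {
    public static void main(String[] args) {
        Color skin = new Color(224, 172, 105); // a typical skin tone
        float[] hsb = Color.RGBtoHSB(skin.getRed(), skin.getGreen(), skin.getBlue(), null);
        float complementHue = (hsb[0] + 0.5f) % 1.0f; // opposite side of the color wheel
        Color complement = Color.getHSBColor(complementHue, hsb[1], hsb[2]);
        System.out.printf("skin hue: %.0f deg, complement: %.0f deg (%s)%n",
                hsb[0] * 360, complementHue * 360, complement);
    }
}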

One theory is that the orange-and-blue trend is driven by this affinity for contrast. If you make your actors as warm and orange as possible while still keeping them looking human, and make the shadows and the background as blue as possible, you’ll have a vibrant screen and a pretty darn complementary palette. As Dan Seitz wrote, in an analysis of generic color grading:

"It's not necessarily laziness per se. Your average colorist has to grade about two hours of movie, frame by frame sometimes, in the space of a couple of weeks. It doesn't take that many glances at the deadline bearing down on the calendar before you throw up your hands and say, 'Fuck it. Everybody likes teal and orange!'"

 

Every Film Has its Own Look

Blade Runner: Orange and blue before it got so cool it wasn’t cool anymore

Ultimately this is, of course, only a theory. Though they have gotten a lot more popular in the last 15 years, orange and blue light motifs definitely predate widespread digital color grading. TV Tropes’ entry on orange-and-blue color schemes pointed out that, while it might not be naturalistic, the color combination packs a semantic punch:

"Unlike other pairs of complementary colors, fiery orange and cool blue are strongly associated with opposing concepts — fire and ice, earth and sky, land and sea, day and night, invested humanism vs. elegant indifference, good old fashioned explosions vs. futuristic science stuff. It's a trope because it's used on purpose, and it does something."

It seems plausible that, regardless of whether or not it has its origins in color theory, orange-and-blue has now reached the level of “convention.” For better or for worse, coloring your movie this way makes it really look like a movie.

But as colorist Stefan Sonnenfeld told The Guardian, "There's no specific colour decision-making process where we sit in a room and say, 'We're only going to use complementary colours to try and move the audience in a particular direction – and only use those combinations.' Every film has its own look.”

Sonnenfeld, it turns out, worked on some of the most visually spectacular and some of the most orange and blue films of the past 15 years: the Transformers series.

Stills from the Transformers films, many via Todd Miro. To be fair, action movies are especially well-suited to orange and blue. After all, explosions are usually orange. 

Transformers is so orange and teal that a team of researchers, building an algorithm to make color grading more automatic, used it as one of their example color grades. Their method, which they published in 2013, fits an input video to the characteristic visual style of any film. 

Amelie, color graded in orange and blue like Transformers (top), and Transformers, color graded in green and gold like Amelie (bottom); Bonneel, Sunkavalli et al.

The method isn’t fully automatic -- for one, it requires the user to identify which parts of the frame are foreground and background -- but it was certainly an improvement on the state of the art. As color grading technology continues to improve, we might see more filmmakers branch out into more novel palettes. Until then, keep an eye out for more orange and more blue.
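If you’re curious what the simplest version of “grade this shot like that one” looks like in code, here is a minimal sketch in Python with NumPy. To be clear, this is not the Bonneel/Sunkavalli method; it’s a toy Reinhard-style transfer that just matches each color channel’s mean and spread to a reference frame:

import numpy as np

def transfer_color(source, reference):
    # source, reference: float arrays of shape (H, W, 3), values in [0, 1].
    # For each channel, rescale the source so its mean and standard
    # deviation match the reference (real graders would work in a
    # perceptual color space like Lab rather than raw RGB).
    result = np.empty_like(source)
    for c in range(3):
        s, r = source[..., c], reference[..., c]
        result[..., c] = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
    return np.clip(result, 0.0, 1.0)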

This post was written by Rosie Cima; you can follow her on Twitter here. To get occasional notifications when we write blog posts, please sign up for our email list.

LibreOffice 4.4, the Most Beautiful LibreOffice Ever

29 January 2015 - 2:00pm
  • The user interface has been improved in a significant way
  • Interoperability with OOXML file formats has been extended
  • Improved source code quality based on Coverity Scan analysis

Berlin, January 29, 2015 – The Document Foundation is pleased to announce LibreOffice 4.4, the ninth major release of the free office suite, with a significant number of design and user experience improvements.

“LibreOffice 4.4 has got a lot of UX and design love, and in my opinion is the most beautiful ever,” says Jan “Kendy” Holesovsky, a member of the Membership Committee and the leader of the design team. “We have completed the dialog conversion, redesigned menu bars, context menus, toolbars, status bars and rulers to make them much more useful. The Sifr monochrome icon theme is extended and now the default on OS X. We also developed a new Color Selector, improved the Sidebar to integrate more smoothly with menus, and reworked many user interface details to follow today’s UX trends.”

LibreOffice 4.4 offers several significant improvements in other areas, too:

  • Support of OpenGL transitions in Windows, and improved implementation based on the new OpenGL framework;
  • Digital signing of PDF files during the export process;
  • Installation of free fonts Carlito and Caladea to replace proprietary Microsoft C-Fonts Calibri and Cambria, to get rid of font related issues while opening OOXML files;
  • Addition of several new default templates, designed by volunteers;
  • Visual editing of Impress master pages, to remove unwanted elements, add or hide a level in the outline numbering, and toggle bullets on or off;
  • Better Track Changes – with new buttons in the Track Changes toolbar – and AutoCorrect features in Writer;
  • Improved import filters for Microsoft Visio, Microsoft Publisher and AbiWord files, and Microsoft Works spreadsheets;
  • New import filters for Adobe Pagemaker, MacDraw, MacDraw II and RagTime for Mac;
  • Greatly expanded support for media capabilities on each platform.

A rather comprehensive description of all LibreOffice 4.4 new features, including developers’ names, is available on the release notes page at the following address: https://wiki.documentfoundation.org/ReleaseNotes/4.4. In addition, a summary of the most significant development related details has been published by Michael Meeks: https://people.gnome.org/~michael/.

People interested in technical details can find change logs here: https://wiki.documentfoundation.org/Releases/4.4.0/Beta1 (fixed in Beta 1), https://wiki.documentfoundation.org/Releases/4.4.0/Beta2 (fixed in Beta 2), https://wiki.documentfoundation.org/Releases/4.4.0/RC1 (fixed in RC1), https://wiki.documentfoundation.org/Releases/4.4.0/RC2 (fixed in RC2) and https://wiki.documentfoundation.org/Releases/4.4.0/RC3 (fixed in RC3).

Download LibreOffice

LibreOffice 4.4 is immediately available for download from the following link: http://www.libreoffice.org/download/. LibreOffice users, free software advocates and community members can support The Document Foundation with a donation at http://donate.libreoffice.org.

About The Document Foundation

The Document Foundation is an independent, self-governing and meritocratic organization, based on the Free Software ethos and incorporated in Germany as a not-for-profit entity. TDF is focused on the development of LibreOffice – the best free office suite ever – chosen by the global community as the legitimate heir of OOo, and as such adopted by a growing number of public administrations, enterprises and SMBs for desktop productivity.

TDF is accessible to individuals and organizations who agree with its core values and contribute to its activities. At the end of December 2014, the foundation had 205 members and over 3,000 volunteer contributors worldwide.

The infographic is also available as a PDF.


Capital One Fraud Researchers May Also Have Done Some Fraud

29 January 2015 - 2:00pm

Part of why people don't like insider trading is that it seems too easy. Some people spend their days slaving over a hot spreadsheet, trying to figure out if a company will make money or not, and then you just waltz in with a tip from your buddy at the golf club and buy some call options on the company just before it announces a merger. It's just unfair. 

Say what you will about Bonan Huang and Nan Huang, but they (allegedly) worked hard for their hot tips. You don't see a lot of this on the golf course:

That's a heavily redacted list of search queries that they allegedly ran in Capital One's database of credit card sales, looking to see how many people were using their Capital One cards at Chipotle.   Bonan Huang and Nan Huang worked at Capital One "as data analysts tasked with investigating fraudulent credit card activity," but, I mean, the database was just sitting there, how could they resist taking a peek? They could not.

Their queries seem to have revealed that a lot of people were putting burritos on their Capital One cards, because on July 21, 2014, the day after querying the database, Bonan Huang and Nan Huang between them apparently bought call options on 5,500 Chipotle shares for a total of just less than $100,000. Chipotle released earnings after the market closed that afternoon. Earnings were good; in particular, revenue was up 28.6 percent quarter-on-quarter. 

The next day Bonan Huang and Nan Huang allegedly started selling their options, making a profit of about $278,000. For three days' work. But at least they wrote the queries. Usually when I say "for three days' work," the work was golf. These guys did work.

If you believe the Securities and Exchange Commission, actually, they did a ton of work:

Defendants worked for a large credit card issuer as data analysts tasked with investigating fraudulent credit card activity. While employed there, Defendants searched their employer's nonpublic database that recorded the credit card activity for millions of customers at numerous, predominantly consumer retail corporations. The Defendants conducted hundreds, if not thousands, of keyword searches of this database. These searches, which were not done in furtherance of their employment duties, allowed the Defendants to view and analyze aggregated sales data for the companies they searched.

Isn't that sort of sweet? I mean, these guys appear to have done fundamental research on a bunch of companies, and then bought stock  in the companies whose fundamental performance was better than market expectations, while selling stock in the companies whose performance was worse than expected. The SEC singles out Chipotle, as well as Cabela's and Coach, where they bought put options because sales were decreasing, though there seem to have been quite a few other trades as well. 

Apparently -- unsurprisingly -- there was a pretty strong relationship between people buying burritos or guns or purses with Capital One cards, and people buying burritos or guns or purses with cash and other credit cards, so their research proved profitable. Ridiculously profitable. From the SEC complaint:

From January 2012 to January 2015, defendants Bonan Huang and Nan Huang deposited a total of $147,300 into their six OptionsHouse accounts. During this time period they transferred approximately $1,763,500 out of these six accounts. As of January 15, 2015, the total balance in the six accounts was approximately $1,063,000. Accordingly, Bonan Huang and Nan Huang made approximately $2,826,500 trading options during this period in their OptionsHouse account. This represents a three-year return of approximately 1,819%.

That's amazing! These two, like, customer-support guys at Capital One were seemingly running an incredibly successful fundamental research-driven long/short equity hedge fund. A small fund, but still. The average equity hedge fund returned 25 percent -- total, not annual -- during that period. You sometimes see insider-trading cases where someone makes like a thousand-percent return in a few days by buying call options just before a merger. Every so often a network of tippers will yield multiple big scores like that. But to do hundreds of searches and trade multiple stocks over three years based entirely on raw consumer spending signals, and to make 1,819 percent doing it, is just phenomenal. Even if the consumer spending signals were, you know, stolen.

People have asked me if this is insider trading and, you know, sure it is? (If the allegations are true, I mean.) This is not "classical" insider trading -- trading or tipping by an insider at Chipotle or whatever -- but rather "misappropriation" insider trading:

The "misappropriation theory" holds that a person commits fraud "in connection with" a securities transaction, and thereby violates § 10(b) and Rule 10b-5, when he misappropriates confidential information for securities trading purposes, in breach of a duty owed to the source of the information. ... Under this theory, a fiduciary's undisclosed, self-serving use of a principal's information to purchase or sell securities, in breach of a duty of loyalty and confidentiality, defrauds the principal of the exclusive use of that information. In lieu of premising liability on a fiduciary relationship between company insider and purchaser or seller of the company's stock, the misappropriation theory premises liability on a fiduciary-turned-trader's deception of those who entrusted him with access to confidential information.

Here, Bonan Huang and Nan Huang allegedly got the information from their employer, Capital One, which was supposed to have exclusive use of the -- hey, wait a minute, does that mean that Capital One was allowed to trade on this data for its own profit? Wouldn't that be amazing? Surely the answer is no: I assume that Capital One signed agreements with retailers (or rather, with Visa and MasterCard, which signed agreements with retailers) in which it promised not to disclose transaction data, or use it for nefarious purposes. Really anyone who used this data would be misappropriating it from, ultimately, Chipotle. Which gets to keep its sales data to itself. Except once a quarter when it releases that data and the stock jumps.

Henry Manne, the pioneering scholar of law and economics who died last week, famously argued that insider trading should be legal, in part because it makes markets more efficient, and this case is a good example. Chipotle's and Coach's and Cabela's stocks were mispriced, the day before their earnings announcements, because those companies had earnings information that the market didn't have, and didn't tell anyone. (Until the next day.) People bought and sold those stocks at the wrong price all day long. Bonan Huang and Nan Huang seem to have done their research to figure out the right price. Illegal research, sure, but they were right -- spectacularly, and over and over again. 

That's how markets work: People do research to try to figure out the right price, and then if the price is wrong they trade, and eventually prices get to be right. And so there are tons of legal, yet somehow unfair-seeming, ways in which smart traders try to figure out the right price. There are helicopters with heat-sensitive cameras flying over oil tanks to help hedge funds get non-public oil supply information. There's a former Google engineer "selling analysis of obscure data sets" -- like "satellite images of construction sites in 30 Chinese cities" -- "to traders in search of even the smallest edges." Or there are like a billion people trying to use Twitter to predict stock prices. That is the business. You take the data that is out there, or find new ways to get new data, and then you analyze the heck out of it to find out if it tells you anything about companies that you didn't already know.

And usually the answer is that it tells you a teeny little bit, and you add a few basis points to your returns. The returns to discoverers of new data sets tend to dissipate quickly -- in part because others discover them too, but in part because, you know, the market is smart, lots of people have incentives to figure this out, how much more information could one more piece of information really give you, etc. Markets are basically efficient. (Right?) If you had asked me two days ago if raw Capital One credit-card usage data would be helpful in making excess returns in the stock market, I'd have said, sure, of course. If you'd asked me if you could use it to make consistent excess returns of 1,800 percent over three years, though, I would have been skeptical. Surely lots of Wall Street firms -- Chipotle is followed by 31 analysts -- and asset managers are doing tons of research to try to estimate Chipotle's sales. They're visiting branches and calling investor relations and talking to pork suppliers and surveying consumers and generally getting paid a lot of money to build a robust estimate of how many burritos Chipotle is selling. One more piece of data -- one credit card company's charges at Chipotle -- would be helpful, but come on, not that helpful.

Nope: Super helpful! I don't know what to tell you. It seems a shame that Bonan Huang and Nan Huang's research was apparently illegal. Because it was really good.

To contact the author on this story:
Matt Levine at mlevine51@bloomberg.net

To contact the editor on this story:
Zara Kessler at zkessler@bloomberg.net

Show HN: PolyGen App, turn gradients and photos into pretty low poly patterns

29 January 2015 - 2:00pm

PolyGen is the app for Low Poly art. It lets you create abstract wallpapers and photo-based crystal patterns. Automatically or by hand.

PolyGen patterns are ready to be used as mobile or desktop wallpapers, avatars, or social media backgrounds. What pattern are you going to create?

Inkscape 0.91 release

29 January 2015 - 2:00pm

Inkscape 0.91
Released on 2015-01-27

Release highlights
------------------
• Cairo rendering for display and PNG export
• OpenMP multithreading for all filters
• C++ code conversion
• Major improvements in the Text tool
• Measure tool
• Type design features [1],[2]
• Symbol library and support for Visio stencils
• Cross platform WMF and EMF import and export
• Improved support for Corel DRAW documents, Visio importer
• Support for real world document and page size units, e.g. millimeters
• Numerous usability improvements
• Native Windows 64-bit build
• See Notable bug fixes

-------------------------
Rendering and performance
-------------------------

Inkscape 0.91 includes a new renderer based on the Cairo library. This work was
done mainly during Google Summer of Code 2010 and 2011 projects.

• Improved performance. The new renderer is significantly faster on most
drawings. Renderings of the most complex objects are automatically cached
to improve responsiveness during editing.
• OpenMP multithreading for filters. Filters use all available processor
cores for computation. This results in substantial speedups when editing
drawings containing large filtered objects on multi-core systems.
• Substantial memory savings. Inkscape now uses less memory when opening
complex drawings, in some cases using only 25% of the memory used by
Inkscape 0.48. Larger files can now be opened.
• Responsiveness improvements. The rendering of the SVG drawing is now
cached. This results in massive improvements in responsiveness of path
highlights, object selection / deselection, and path editing in delayed
update mode.
• Rendering bug fixes. Most of the rendering glitches in our bug tracker are
no longer present in Inkscape 0.91. The following things now render
correctly:
- Pattern fills (no more gaps between tiles, regardless of
transformation)
- Stroke of transformed objects in patterns
- Patterns containing clipped objects
- Nested clipping paths
- Masked and clipped objects with large masks / clipping paths in Outline
view
- Paths with wide strokes and long miters
- Fonts

Color display mode
------------------
A grayscale display color mode has been added that shows a preview of your
drawing in grayscale. Shift+Numpad5 toggles the color display mode between
normal and grayscale.

-----
Tools
-----

Node tool
---------
The tool control bar for the Node tool features a new dropdown to insert new
nodes on the selected segments' extreme values. For example (as demonstrated in
the image below), it is possible to add a new node at the highest point in a
curve using Insert Node at Max Y.

Measurement tool
----------------
The Measurement tool is a new feature for the artist to measure the elements in
their drawing. To use the measurement tool, simply choose the tool, click
anywhere on the drawing and drag the ruler out. The measurement tool will
live-update with measurements of length and angles as you pass over objects in
your drawing.

Text tool
---------
• Text size default unit is now points (pt) and is customizable
(px,pt,pc,mm,cm,in,em)
• Text toolbar shows full list of font style variants for that font
• Files with text in em units read correctly
• Font substitution warning dialog

-------
Dialogs
-------

Gradients
---------
• Gradient toolbar enhanced to select and modify gradient stops, invert,
repeat, and link gradients
• On-canvas gradient editing fixes: double clicking to create stops, correct
focus on select
• Gradients sortable by color, name and usage in Fill/Stroke
• Gradients can be renamed in Fill/Stroke

Arrange (was rows and columns)
-------
• NEW: renamed to 'Arrange'
• NEW: polar arrangement (separate tab)
<http://issuu.com/ddeclara/docs/inkscape_radial_arrangement>

Align and Distribute
--------------------
• Inkscape 0.91 features a new set of buttons in the Align and
Distribute dialog called Exchange position of selected objects. It adds the
ability to exchange the positions of the objects that the artist has
selected.

- In the following example, three objects were selected, and their
positions were exchanged with each other (using this new feature)
according to their selection order.
- There are also two other new buttons in the dialog that allow the
artist to exchange the selected objects based on the stacking (z-index)
order, or just exchange them clockwise based on the object's position
on the page.

• Keyboard shortcuts (Ctrl+Alt+Keypad numbers) for align operations

Document Properties
-------------------
Optionally disable antialiasing (bug #170356, interface partially implemented)

Find/Select
-----------
• It is now easier to select items which are not at the top of the Z-order:
use Alt+mouse wheel scroll to cycle through all items that are stacked on
top of each other at the location of the mouse pointer (use Shift+Alt+mouse
wheel scroll to add to the existing selection). At present, groups are not
honoured, i.e., only individual items within groups are considered.
• New Find/Replace dialog can operate on text or any attribute
• "Select Same" is a new feature that allows an artist to select objects that
have the same properties as the currently selected object. For example, you
could select an object that has a fill of blue. Then, using the new feature
select all other objects in the drawing with a fill set to that same shade
of blue.

The new feature is available as a menu choice under Edit ▶︎ Select Same or from
the context menu when you right-click on a selected object. There are also
other Select Same choices available, including matching both Fill and Stroke,
matching just stroke, matching stroke style, or matching on object type.

Fill and Stroke
---------------
• The Gradient view in the fill and stroke dialog now displays a list of all
the gradients in the document. The list displays the gradient, the gradient
name, and number of uses of that gradient in the document.
• More compact Markers selectors

Layers
------
• Drag and drop to reorder layers and create sublayers
• Show/Hide All layers options in context menu

Symbols
-------
Inkscape has a new Symbols dialog. The dialog displays symbols from a symbol
library. Inkscape 0.91 includes five example libraries: logic symbols, AIGA/DOT
transportation symbols, map symbols, flow chart shapes and word balloons. The
dialog will also create a pseudo-library of all existing symbols in the current
Inkscape drawing. (A symbol is defined by an SVG <symbol> element.) Symbols can
be dragged from the dialog onto the Inkscape canvas.

Any document with symbols can serve as a source for a symbol library. Simply
copy it to the symbols directory in your configuration directory (typically
share/inkscape). If proper care is taken, symbols can be provided with default
fill and stroke colors that later can be overridden by the user.

Visio Stencil files (.vss) can also be used by dropping them in the same
symbols directory. Results may not be as satisfactory as using SVG symbol
libraries.

See the Symbols Dialog Wiki page for more details.

Text and Font
-------------
• NEW: lists fonts used in the current document at the top
• NEW: select all text objects with same font as current selection
• NEW (to be verified): support list with fallback fonts (CSS2)

Transform
---------
• Rotation of objects clockwise or counterclockwise

Markers
-------
• Markers now take the object's color

Trace Bitmap
------------
• Trace bitmap preview updates live and is resizeable

Live Path Effects
-----------------
An object's Live Path Effects are now forked upon object duplication.

PowerStroke
~~~~~~~~~~~
Here is a list of the current state. Note that this is very much work in
progress and anything can change. I think this is the most efficient place to
keep track of how the PowerStroke LPE works.

• Stroke knots are purple diamonds
• When first applied, 3 stroke knots are added: start, end, and somewhere in
the middle along the path
• Add nodes: Ctrl+click purple knot
• Delete nodes: Ctrl+Alt+click purple knot
• "sort points" reorders the stroke knots according to where they lie along
the path (where they are closest to the path), instead of keeping them in
original order.
• Start and end caps can be specified. The SVG cap types are available, as
well as an extra type, "Zero width", that is identical to adding a width
control knot at the start/end of the path with zero width.
• Join type can be specified. In addition to the SVG join types, there are
two new types:
- Extrapolated: this extrapolates the contour of the stroked path to
obtain a more natural looking miter join.
- Extrapolated arc: Mathematical explanation.
- Spiro: rounds the join using a spiro curve (the rounded type rounds the
curve using an elliptical arc).

Clone Original
~~~~~~~~~~~~~~
The Clone original LPE ignores the path data of the path it has been applied
to; instead, it copies the original-d path data, i.e. the path data before LPE
calculation, from the path linked to by the Linked path parameter.

The Clone original LPE is made to be used in conjunction with powerstroke.
Powerstroke creates a path with a variable stroke, but this path can then not
be filled (because the fill is used as the stroke). To fill a powerstroked
path, one must create a second path (dummy path), apply the Clone original LPE
and link it to the powerstroked path. Because this second path clones the
original path data before the Powerstroke LPE, it can be used to fill the
powerstroked path.

To quickly create a dummy path and apply this effect, one can select the path
to 'clone' and click the menu item Edit ▶︎ Clone ▶︎ Clone original path (LPE).

Like for normal clones, pressing Shift+D, when the selected path has the Clone
original LPE applied, selects the path referred to by the LPE.

Another very useful ability of the Clone original LPE is to create a clone with
a different style than its referred path. To facilitate this, the LPE dialog
will add the Clone original LPE when a clone is selected and the "+" button is
pressed.

Filters
-------
The new Custom predefined filters allow users to create predefined filters with
custom parameters. See SpecCustomPredefinedFilters.

Trace Pixel Art (libdepixelize)
---------------
A new library, developed for Inkscape to automatically vectorize raster images
specialized in Pixel Art, was integrated in the form of the "Trace Pixel Art"
dialog (menu item Path ▶︎ Trace Pixel Art...). The good old general "Trace
Bitmap" is still there. Check the supplementary material from the algorithm's
authors to see a preview of how the algorithm behaves.

--------------------
Other User Interface
--------------------

General
-------
• Canvas background color can be set without exporting it (background
transparency is only used for export but not the canvas).
• Panning the canvas with the Space bar is now always turned on and doesn't
require an additional mouse button press to grab the canvas: just press the
Space bar and move the mouse pointer to pan the canvas.

Guides
------
• Guides visibility can be toggled by clicking the ruler
• Guides can now have labels, and the colour of individual guides can also be
set by the user. To label or colour a guide, double click on the guideline
to bring up the guide properties dialog.

Menu/Access
-----------
• The interface elements are accessible through the keyboard with ALT+key in
many more dialogs
• "Text and Font", "Fill and Stroke", and "Check Spelling" dialogs are added
to the text object context menu (right click)
• Menu items renamed:
□ Edit ▶︎ Preferences
□ Edit ▶︎ Input Devices
□ File ▶︎ Cleanup Document
• Checkboxes to indicate status of View ▶︎ Grid/Guides/Snap/Color Management
• Group/Ungroup from the context menu

Preferences
-----------
• New keyboard shortcut editor
• Prefs ▶︎ Interface -- New option for dockbar and switcher style (icons,
text, icons & text) (bug #1098416)
• Prefs ▶︎ Interface ▶︎ Windows -- optionally don't save & restore documents
viewport (bug #928205)
• Prefs ▶︎ Behavior ▶︎ Steps -- unit selector for steps (move, scale, inset/
outset) (bug #170293)
• Prefs ▶︎ Behavior ▶︎ Steps -- option for relative snapping of guideline
angles (rev 10307)
• Prefs ▶︎ Behavior ▶︎ Clones -- optionally relink linked offsets on
duplication (bug #686193)
• Prefs ▶︎ Input/Output ▶︎ SVG output -- NEW: optionally enforce relative or
absolute coordinates (bug #1002230)

Dialogs
-------
• Dialog status and position is remembered between sessions
• Most dialogs now dockable (including "Object properties", "Object
attributes", "Text and Font", "Check spelling", "Export PNG image", "XML
editor", "Find/Replace", and "Tiled clones")
• New preference to allow Windows users to choose between native and Gtk Open
/Save dialog
• Preferences dialog cleanup
• Document Metadata dialog merged into Document Properties

Simple calculations in spinboxes
--------------------------------
In most spinboxes (a spinbox is an entry field with up and down "spinbuttons"
next to it) you can now write simple calculations. Some examples:

• 2 * 3
• 50 + 100, or
• ((12 + 34) * (5 + 5) - 2) / 2

Moreover, you can use units when entering values, like 2 + 2 cm. The result
will be converted to the selected unit for the particular entry.

Configurable Control Handles
----------------------------
New preferences have been added to allow for the size of the on-canvas controls
to be increased or decreased. The "Input Devices" section has been updated to
control this.

------------
Translations
------------

• The Keyboard and mouse reference (inkscape-docs project) and the labels of
color palettes are now translatable.
• New UI translation in Latvian.
• New tutorial translations in Galician and Greek.
• New Keyboard and mouse reference translation in Belarusian.
• New man pages in Chinese (zh_TW), Greek (el), Japanese (ja) and Slovak (sk),
and updated French translation. [Galician (gl) and Polish (pl) in progress]
• Man pages now use PO files for translation (inkscape-docs project).
• The tutorial generation system now fully supports RTL languages.

-------------------
File format support
-------------------

• New Flash XML Graphics (FXG) export format.
• New Synfig Animation Studio (SIF) export format.
• New HTML5 Canvas export format
• New Visio (VSD) import format, based on libvisio.
• New internal CorelDraw (CDR) import format, based on libcdr.
• XAML export improvements (including a new Silverlight compatible mode).
• Compressed SVG and media export extension improvements. New options:
□ set an image directory in the zip file
□ add a text file that lists the fonts used in the SVG document.
• New preference to allow users to always link, embed or ask when importing
bitmaps.
• New preferences that allow checking SVG on input and/or export for
invalid or not useful elements, attributes, and properties. Options control
whether such items generate warnings (when Inkscape is run from the command
line) or are removed.
• The --export-text-to-path option now works with Plain SVG export.

EMF/WMF
-------
EMF and WMF input and output filters have been completely rewritten and are now
cross-platform. It is now possible to copy and paste EMF files between Windows
applications running in Wine and a native Linux version of Inkscape.

Gimp XCF
--------
• The Save Background option allows users to choose if the page background is
saved with each GIMP layer.
• The exported layers now use the label attribute or, if not set, the id
attribute
• New Resolution option
• New Help tab
• Some bugs and warnings fixed

PDF
---
• Bleed/margin: Added an option to specify an extra margin by which the
bounding box to be exported is expanded. This may be helpful to export a
PDF with a small white margin around the drawing, or for exporting a bleed
region a few mm outside the area of the page.

PDF/EPS/PS + LaTeX
------------------
• Added the possibility of scaling the image. The calc package must be
included in the preamble. Then the image can be scaled by defining
\svgscale instead of \svgwidth.
• The font shape is now also exported. \textit{} for italic text, \textbf{}
for bold text, and \textsl{} (slanted) for oblique text. It is important to
note that Arial has an oblique font shape, not italic. Thus, the result in
LaTeX will be slanted, instead of italic. It is better to use another font
in Inkscape when you want true italics.

----------
Extensions
----------

Units: Breaking change
----------------------
Due to the implementation of proper document units, the functions
inkex.unittouu and inkex.uutounit had to be modified and moved to the
inkex.Effect class.

Unit conversion calls should be replaced with inkex.Effect.unittouu and
inkex.Effect.uutounit calls (usually self.unittouu and self.uutounit).

See also: Notes On Units Handling in Extensions in 0.91
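A minimal migration sketch in Python (hypothetical extension code; the release
notes only specify that the helpers moved onto inkex.Effect):

import inkex

class MyEffect(inkex.Effect):
    def effect(self):
        # 0.48 extensions called the module-level inkex.unittouu(...);
        # in 0.91 the conversion helpers are methods on inkex.Effect:
        width = self.unittouu(self.document.getroot().get('width'))
        width_mm = self.uutounit(width, 'mm')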

New
---
• The new guillotine extension is used for exporting PNG slices from a
drawing. The slice rectangles are defined by adding horizontal and vertical
guides within the canvas boundary; the canvas boundary serves as the
outside of the sliced area.
• The new G-code tools extension converts paths to G-code (using circular
interpolation), makes offset paths and engraves sharp corners using cone
cutters.
• New QR code generator.
• New isometric grid generator.
• New bitmap crop extension.
• New Extract text extension. Outputs a document’s text elements in a chosen
order.
• New Merge text extension.
• New HSL adjust extension.
• New Replace font extension.
• New N-Up layout extension.
• New Voronoï diagram extension (creates Voronoï diagrams and Delaunay
triangulations based on the selected objects' barycenter).
• New Interpolate Attribute in a group extension.
• New Typography extensions menu.
• New Hershey Text extension.

Improvements
------------
• Number nodes. New parameters allowing users to choose the starting dot
number and the numbering step between two nodes.
• Color Markers to Match Stroke extension improvements. The markers can now
inherit the fill and stroke colors and alpha channels from the object, or
be customized with color selectors in a separate tab.
• Optional sliders added on float and int extension parameters (full and
minimal modes).
• Extension parameters values (except attributes!) can now be contextualized
for translation (with msgctxt).
• New sub-menus in the Render menu, grouping the bar-codes, grids and layout
extensions.

-----------
SVG Support
-----------

Rendering of the following properties is now supported (without UI except via
XML editor):

• clip-rule
• color-interpolation-filters: Non-Inkscape filters that specify linearRGB
color interpolation will render properly. Filters created inside Inkscape
will still use sRGB color interpolation by default.
• text-decoration: Underline, strike-through, overline.
• text-decoration-line, text-decoration-style: Preliminary support (CSS 3).
• paint-order: Allows stroke to be painted under fill; useful for text.

--------
Snapping
--------

• The snapping preferences and the snap toolbar have been reworked (in the
underlying code and in the GUI) to make the snapping preferences
easier to understand and maintain, and to make it easier to find and fix
any remaining snapping bugs
• Inkscape now also snaps perpendicularly and tangentially to paths, when
creating paths in the pen tool, when dragging nodes, or when manipulating
guides. Newly created guides (dragged off the ruler) will snap
perpendicularly or tangentially to any curve that is being snapped to. Two
checkboxes have been added to the document properties dialog (on the
snapping tab). Please note that snapping perpendicularly or tangentially
will not work in the selector tool when transforming an object or a
selection of objects.
• Intersections of paths and guides can now be snapped to too
• Snapping has been implemented fully for transforming selections of multiple
nodes in the node tool
• Snapping to text anchors and baselines has been implemented properly
• If one has chosen to snap only the snap source closest to the mouse
pointer, then the Tab key can be used to cycle to the next closest snap
source

-----------------
Notable bug fixes
-----------------

Notable bug fixes since last bug fix release (0.48.4)
-----------------------------------------------------
• Images are no longer recompressed when embedding or exporting them. [3]
• Relative image paths are no longer stored as absolute (regression
introduced with 0.47).
• Many rendering glitches were fixed.
• The rendering of the stroke on transformed objects now matches the SVG
specification.
• Values entered in the numeric input boxes for the selector tool (X, Y,
width, height) are much more accurately applied.
• Inkscape launches faster due to new icon cache (on disk) and improved font
loading. (Bug #488247)

------------
Known issues
------------

• On MS Windows when the desktop colordepth is set to 16-bit, Inkscape is
unusable because of exploding memory usage. Please set the colordepth to
32-bit.
• The Cairo library used in the new renderer does not implement downscaling,
which causes large bitmaps to be pixelated on export. (bug #804162)
The issue can be fixed by upgrading to Cairo 1.14.0.
(https://bugs.freedesktop.org/show_bug.cgi?id=41745)
• On OS X, the conflict with X11/XQuartz's pasteboard syncing has not been
solved yet: turning off "Update Pasteboard when CLIPBOARD changes" in X11
Preferences prevents vector data copied or cut to the clipboard from being
rasterized on paste. (bug #307005)
• On OS X 10.9 or later, embedding bitmap images on import or paste from
clipboard may crash Inkscape. (bug #1398521, #1410793)
• On OS X 10.9 or later, turning off "Displays have separate spaces" in
Mission Control helps when using X11 across multiple displays. (bug #
1244397)
• The reworked Import Clip Art feature is not available with current OS X
packages. (bug #943148)
• On MS Windows, the icons for Preferences, Undo, Redo and Revert are
missing. (bug #1269698)

For information on prior releases, please see:
http://wiki.inkscape.org/wiki/index.php/Inkscape

San Diego Hacker News Meetup 58 Tomorrow (1/30)

29 January 2015 - 2:00pm

Our 58th monthly meetup.  New members are always welcome!

We'll be at the Pangea Bakery Cafe at 7:30pm. They've got coffees, teas, pastries, cookies, smoothies, and sandwiches.  We'll hop across the street to O'Brien's Pub around 9:30pm.  O'Brien's has great food and doesn't ID at the door so all ages are welcome.

We have a group of tables reserved at Pangea. In exchange for setting aside space for us on a Friday night, the manager at Pangea asks that we:

- do not park in the Pangea parking lot (see the Logistics note below)

- try to keep a $6 minimum purchase per person

Pangea has been very supportive of our group in the past, so let's support their establishment too.

Please RSVP! We've asked for space for 15-20, but we are often close to full.

More information about our group is on the SDHN website: http://sdhn.org/

Logistics note: Street parking one block back from O'Brien's at Opportunity Rd or Ruffner St is highly recommended.  Some parking is also available on Brinell St near Pangea.  Please do not park in the Pangea parking lot.  They have very limited parking in their lot and prefer to reserve it for shorter-term parking.

Friday, January 30 at 7:30 PM (PST)

Pangea Bakery Cafe (Kearny Mesa)

4689 Convoy St
Ste 100
(between Opportunity Rd & Engineer Rd)
San Diego, CA 92111



Introducing React Native [video]

29 January 2015 - 2:00am


Published on Jan 28, 2015

Tom Occhino reviews the past and present of React in 2015, and teases where it's going next.


Firefox Hello

29 January 2015 - 2:00am

Want to reconnect with friends around the world? Celebrate a birthday when you can’t be there in person? Learn more about Firefox Hello and see for yourself how easy it is to have a free video conversation with anyone, anywhere, right from your browser.

Questions? Visit Mozilla Support

Who Owns Los Angeles?

29 January 2015 - 2:00am

“What is this you call property?”, asked Massasoit, the leader of the Native American Wampanoag tribe. “It cannot be the earth, for the land is our mother, nourishing all her children, beasts, birds, fish and all men. The woods, the streams, everything on it belongs to everybody and is for the use of all. How can one man say it belongs only to him?”

Good question, Massasoit. Yet, due to a tragic combination of the pathogenic bacteria Leptospira and aggressive colonists, the answer became irrelevant, and the concept of land ownership proliferated through the majestic lands of the new world like a virus.

Today, the dust has settled and the iron horse has carried the white man to the west coast, where I currently reside. As I explored the wonderful city of Los Angeles I began to wonder: to whom, exactly, do I owe the pleasure of my environment? Who “owns” the dirt I stand on? So I did some research.

The “United States” is divided into 3,144 counties and county equivalents. Of these, Los Angeles County is the most populous, with over 10 million residents. The least populous, Loving County, Texas, has only 82. Funny story: in 2006 a group of Libertarians attempted to buy up land and seize power in Loving County with the goal of establishing their ideals, but were thwarted by the local sheriff. The group is currently featured on a “Wanted” poster in the county’s sole courthouse.

LA County has an area of 4,751 mi², divided across 88 cities and 2,379,680 parcels. However, much of the land is “unincorporated”, meaning it does not fall within the jurisdiction of an established city. If you would like to establish your own city in LA County you can apply to the LAFCO for as little as $2,500 [1]. The information regarding parcel owner, location, and “assessed value” for collecting property taxes is maintained by the Assessor’s Office [2].

Formats and Tools

Most, if not all, counties use GIS (geographic information systems) to maintain this data [3]. The LA office uses Microsoft Access for ownership and assessed value information, and the popular Shapefile format for geometry and mapping. ESRI (Environmental Systems Research Institute), founded in 1969, dominates land-use consulting with their popular ArcGIS software and the Shapefile format developed in the early 1990s. A Shapefile consists of several different files, 3 of which are mandatory:

.shp – feature geometry as a set of either WKT (well known text) or WKB (well known binary) coordinates. Each of these entries can be one of several different simple datatypes, such as the following (a short parsing sketch appears after this file list):

POINT (30 10)

LINESTRING (30 10, 10 30, 40 40)

POLYGON ((30 10, 40 40, 20 40, 10 20, 30 10))

MULTIPOLYGON (((30 20, 45 40, 10 40, 30 20)), ((15 5, 40 10, 10 20, 5 10, 15 5)))

.shx – index of positional geometry to allow quickly stepping forward and backward

.dbf – old school simple database format popular in the 1990s, here stores attributes for each shape

There are several optional files, the most important of which though is

.prj – represents the projection information of the coordinates in the shapes. More on this soon.
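Those WKT snippets are easy to poke at programmatically. Here is a small sketch in Python using the shapely library (my choice for illustration; any WKT parser would do):

from shapely import wkt

# Parse one of the WKT examples above into a geometry object.
poly = wkt.loads("POLYGON ((30 10, 40 40, 20 40, 10 20, 30 10))")
print(poly.area)      # 550.0, in squared coordinate units
print(poly.centroid)  # roughly POINT (25.45 26.97)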

I performed extensive cleaning and simplification on the assessor’s office data as part of this analysis, the bulk of which was done with PostgreSQL and the fantastic PostGIS extension [4].

If you want to follow along, say with an EC2 instance, first grab some dependencies.

sudo apt-get -y install postgresql postgresql-contrib postgis postgresql-9.3-postgis-2.1-scripts

Now let’s create a database for our geospatial data

createdb gis
psql -d gis -c 'create extension postgis'

Projections

The earth is not a perfect sphere. We represent it instead as a “geoid”, a mathematical object that, ideally, represents the precise shape of the earth if it were only under the influence of gravitation and rotation. While imperfect, the geoid, combined with satellite data, provides a fairly close approximation to the actual shape of the earth. The geoid works in tandem with different “datums”, coordinate systems that regions define to be consistent with the geoid. Today, improvements to the model and coordinate systems have led to the possibility of a single global standard, WGS84, which is gaining in popularity. Still, datums are typically more precise when defined only for a single region.

The datum used by the LA Assessor’s office is NAD1983. The naming convention originated with the first North American survey in 1901, based on an ellipsoid geoid model developed in 1866. The system was updated in 1927 based on surveys of the entire continent but using the same geoid, and updated again in 1983 using satellite and remote sensing data using GRS 80 as the geoid, the same model originally used by the popular global standard WGS84. If it sounds simple, it is not, but if you’re interested it’s a great reason to learn spherical harmonics.

Just remember a datum is a coordinate system defined on a geoid, which is a model of the earth. Geoids and datums exist for other planets too, like Mars.

I re-projected the assessor’s office data from NAD1983 to WGS84 using QGIS. All projections have a corresponding SRID (spatial reference system identifier). Let’s load the shapefile into a PostGIS table, making sure to tell it the projection WGS84, which has an SRID of 4326 [5]. You can download it from me.
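If you would rather skip QGIS, the same re-projection can be sketched in Python with pyproj (an illustrative alternative, not what I used; EPSG:4269 is NAD83 and EPSG:4326 is WGS84):

from pyproj import Transformer

# Build a NAD83 -> WGS84 transformer; always_xy keeps (lon, lat) order.
nad83_to_wgs84 = Transformer.from_crs("EPSG:4269", "EPSG:4326", always_xy=True)
lon, lat = nad83_to_wgs84.transform(-118.2955, 33.8092)
print(lon, lat)  # nearly unchanged: the two datums differ by tiny offsets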

wget http://dwur9qzdkvp67.cloudfront.net/la_parcels.tar.xz
tar -xvf la_parcels.tar.xz
shp2pgsql -I -s 4326 -g geom la_parcels.shp la_parcels | psql -d gis > import.log

Basic Queries

Next let’s get an SQL prompt

psql -d gis

And run a simple query. This should improve performance a bit.

vacuum analyze;

Now let’s have some fun. What are the most expensive pieces of land in LA County?

select land_value, owner_name, address_number, street_name
from la_parcels order by land_value desc limit 10;

land_value   owner_name                        address_number  street_name
252,479,578  CATALINA MEDIA DEVELOPMENT II     3000            ALAMEDA AVE
183,110,154  MOBIL OIL CORP                    3700            190TH ST
154,291,320  BH WILSHIRE INTERNATIONAL LLC     9900            WILSHIRE BLVD
137,295,210  BP WEST COAST PRODUCTS LLC        1801            SEPULVEDA BLVD
136,879,311  WARNER BROS ENTERTAINMENT INC     4000            WARNER BLVD
135,354,000  NEXT CENTURY ASSOCIATES LLC       2025            AVENUE OF THE STARS
132,214,693  UNIVERSAL STUDIOS LLC             3900            LANKERSHIM BLVD
129,077,263  TWENTIETH CENTURY FOX FILM CORP   10201           PICO BLVD
126,062,515  UNIVERSAL STUDIOS LLC             3900            LANKERSHIM BLVD
122,666,461  TISHMAN SPEYER ARCHSTONE SMITH    3600            BARHAM BLVD

Entertainment and oil companies dominate here. That is land only though. I wonder which of these has the most expensive “improvement”, or building? Let’s use a nested query.

select improvement_value, owner_name, address_number, street_name
from (select land_value, improvement_value, owner_name, address_number, street_name
      from la_parcels order by land_value desc limit 10) as lands
order by improvement_value desc;

20th Century Fox wins with $265 million. Exxon Mobil’s sprawling refinery is assessed at only $19 million. Someone must be trying hard to keep property taxes low. Okay what about the most expensive properties overall? Combining land and building value?

select (land_value+improvement_value) as total_value, owner_name, address_number, street_name
from la_parcels order by total_value desc limit 20;

total_value  owner_name                        address_number  street_name
628,970,661  CHILDREN HOSPITAL OF LOS ANGELES  4550            SUNSET BLVD
550,125,487  KAISER FOUNDATION HOSPITALS       9343            IMPERIAL HWY
523,472,588  CEDARS SINAI MEDICAL CENTER       8720            ALDEN DR
521,206,676  TRS OF THE J PAUL GETTY TRUST     199             CHURCH LANE
515,115,546  TRS OF THE J PAUL GETTY TRUST     1200            GETTY CENTER DR
511,156,285  TRS OF THE J PAUL GETTY TRUST     0
511,156,285  TRS OF THE J PAUL GETTY TRUST     0
478,152,504  CENTURY CITY MALL LLC             10250           SANTA MONICA BLVD
477,297,278  HAY,DOROTHY L DECD EST OF AND     121             LA CIENEGA BLVD
475,058,030  ANHEUSER BUSCH INC                15800           ROSCOE BLVD
466,751,222  TRIZEC 333 LA LLC                 333             HOPE ST
439,548,987  MARANGI,LEONARD M ETAL TRS LESSR  100             CONGRESS ST
439,000,000  COMMUNITY REDEVELOPMENT AGENCY    350             GRAND AVE
420,500,000  WILSHIRE COURTYARD LP             5700            WILSHIRE BLVD
397,617,220  DISNEY,WALT PRODUCTIONS INC       500             BUENA VISTA ST
396,907,059  CEDARS SINAI MEDICAL CENTER       127             SAN VICENTE BLVD
394,172,461  TWENTIETH CENTURY FOX FILM CORP   10201           PICO BLVD
376,000,000  2121 AVENUE OF THE STARS LLC      2121            AVENUE OF THE STARS
364,457,522  1999 STARS LLC                    1999            AVENUE OF THE STARS
361,003,213  UNIVERSAL STUDIOS LLC             3900            LANKERSHIM BLVD

Hospitals monopolize the top spots. Healthcare is expensive. No surprise to see the magnificent Getty Center either. I wonder if the multiple entries are redundant or it’s worth $2 billion. Wouldn’t be surprised either way. Did you know it’s free? Free! Unlike the hospital.

Let’s find another landmark. How about Dodger Stadium? We’ll use a forgiving string compare to make sure we match the street.

select (land_value+improvement_value) as total_value, owner_name, address_number, street_name
from la_parcels where address_number = 1000 and street_name ilike 'elysian park%';

total_value  owner_name                address_number  street_name
84,409,139   REALCO INTERMEDIARY LLC   1000            ELYSIAN PARK AVE

Dodger Stadium must be worth more than $84 million. How do the assessed values compare to real-world values? Let’s use One Wilshire as an example. It sold in 2013 for $437.5 million and its assessed value is $297.5 million. Not too far off.

Now let’s use the aggregation function sum() and group by to find the most expensive cities by area in LA County. Since the city column is still a bit messy we’ll use having to eliminate the outliers.

select city, price_per_area
from (select sum(area) as area, sum(land_value) as land_value, count(ain) as parcels, city,
             (sum(land_value) / sum(area)) as price_per_area
      from la_parcels group by city having count(ain) > 1000
      order by price_per_area desc) as cities;

city              price_per_area  total_value
MANHATTAN BEACH   110.6230954     12516203385
BEVERLY HILLS     82.13910857     22623901990
HERMOSA BEACH     60.45970815     5267669536
PALOS VERDES EST  47.59576314     4671096274
W HOLLYWOOD       44.57732453     2080465255
SANTA MONICA      43.00346533     26202224215
SAN MARINO        42.27213826     4967172261
LA CANADA FLT     31.25619651     3794539133
EL SEGUNDO        21.44013791     5831487943
BURBANK           21.05596602     17013420158
SOUTH PASADENA    20.63481846     3601284180
MONTROSE          20.56928866     560769462

I still want an answer to my original question. Who owns the most land?

select owner_name, count(ain), sum(land_value)+sum(improvement_value) as holding, sum(area) as lands
from la_parcels group by owner_name order by lands desc limit 10;

owner_name                        parcel_count  holding_value  holding_area
U S GOVT                          2695          406477416      33837253322
STATE OF CALIF                    1387          271712949      2551166380
L A CITY                          5498          1123805464     1586934789
SANTA CATALINA ISLAND             79            22222591       1504028479
L A COUNTY                        1768          603224059      870795789
TEJON RANCH CO                    59            5411765        703066562
L A CITY DEPT OF WATER AND POWER  2340          212220963      627091955
NEWHALL LAND AND FARMING CO       415           199329086      618742553
MOUNTAINS RECREATION AND          622           70849023       487140373
L A CO FLOOD CONTROL DIST         3916          54042362       448175685

By area, huge swaths of the county are controlled by the federal and state governments, with a few agriculture companies such as Tejon Ranch and the Newhall Land and Farming Company sprinkled in.

More Advanced Queries

I wonder what percentage of LA County is not held by one of the 10 entities above or is unincorporated? Here we will use a view, which allows you to treat a query like its own table.

create view top_owners as
select owner_name
from (select owner_name, count(ain), sum(land_value)+sum(improvement_value) as holding, sum(area) as lands
      from la_parcels group by owner_name order by lands desc limit 10) as owners;

Now in order to select the lands these guys own we’ll use a join statement, specifying where the view and table intersect. Join by default is inner, meaning we’ll only get rows where the field matches. Now we can succinctly find the parcels owned by these entities.

select count(*) from la_parcels join top_owners on la_parcels.owner_name = top_owners.owner_name

And finally use a union operation, which can combine multiple select statements into a single column.

select sum(area) from la_parcels join top_owners on la_parcels.owner_name = top_owners.owner_name
union
select sum(area) from la_parcels;

About 30%. And finally what % of the land in the city of LA is devoted to public space? I think this is an important metric for any city.

select sum(area) from la_parcels where owner_name ilike 'l a city' or owner_name ilike 'l a city park'
union
select sum(area) from la_parcels where city ilike 'los angeles';

8%. Not bad. Griffith Park, Elysian Park, MacArthur Park, Runyon Canyon, Grand Park, Vista Hermosa: LA has some fantastic public spaces. The largest green areas are owned by the federal or state government and are outside the city, though not terribly far. Due to a lack of Zoning Code standardization it is difficult to get a good picture of what the lands are used for.

Spatial Queries

At last, we unleash PostGIS. Note that since we are using WGS84 our results will be in latitude and longitude rather than meters as above.

First it’s good to know what we’re working with.

select distinct GeometryType(geom) from la_parcels

The geom field is exclusively MULTIPOLYGON. If we want to work with simpler shapes we can unroll them with ST_Dump() into the POLYGON type.

Alright, I wonder: where is the geographic center of LA County? We’ll use ST_Extent() to roll up all of our geometries into a bounding box, and find the X and Y coordinates of its center with ST_Centroid().

select ST_Y(ST_Centroid(ST_Extent(geom))), ST_X(ST_Centroid(ST_Extent(geom))) from la_parcels;

latitude         longitude
33.80924996626   -118.29553231211

This result is from a simple box around our shapes. That is not very rigorous. Instead we should roll up all of our shapes together and form a “convex hull”, the minimum convex geometry that encloses them, and find the centroid of that. Let’s find how far the center of Malibu is from that point. Here we use a geometry constructor. (Passing true as a third argument to ST_Distance(), on geography types, would measure across the geoid rather than straight through; with plain geometries like these the answer comes back in degrees.)

select ST_Distance(malibu_center, ST_MakePoint(-118.29553231211, 33.80924996626, 1))
from (select ST_Centroid(ST_ConvexHull(ST_Collect(geom))) as malibu_center
      from la_parcels where city ilike 'malibu') as malibu;

0.505358312522112 degrees is our answer. That’s quite a drive. Especially in traffic.

Last but not least, I took the liberty of creating a web interface to this dataset. Moving around the whole dataset would be inefficient, so I use Google Maps and JavaScript to ask a minimal Flask application, which in turn asks the database which parcels are within the bounding box of the map’s current view. The @ operator finds the geometries inside the envelope I build [6]. The core query is simply:

select distinct ain, land_value + improvement_value as total_value, land_value, improvement_value,
       owner_name, year_built, address_number, street_name, city, state, zip_code, zoning_code,
       area, perimeter, ST_AsGeoJSON(ST_MakeValid(geom)), ST_AsGeoJSON(ST_Centroid(ST_MakeValid(geom)))
from la_parcels where geom @ ST_MakeEnvelope(%s,%s,%s,%s)

The bounding box of the map is passed to the %s parameters. A few of the geometries are invalid, so ST_MakeValid() helps us out there. The query is amazingly fast, around 12ms. Drawing on the map is the slow part. Google Maps has gotten too complicated. But it still works pretty well as long as you don’t zoom out too far and avoid residential areas. The American dream of individual home ownership is slowing down my app. Hopefully I can speed it up, but for now you can play around with it here. Click on a geometry to see the value. Some buildings are broken up into many individual parcels in three dimensions. That is the difference between an apartment and a condominium. In a condo, you own the parcel from the county.
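For the curious, the serving side can be sketched in a few lines of Flask plus psycopg2. This is an illustrative reconstruction, not the app’s actual source: the route, trimmed column list, and parameter names are my own, and I pass 4326 as a fifth argument to ST_MakeEnvelope() so the envelope’s SRID matches the table’s.

from flask import Flask, jsonify, request
import psycopg2

app = Flask(__name__)
conn = psycopg2.connect(dbname="gis")

QUERY = """
select ain, land_value + improvement_value as total_value, owner_name,
       ST_AsGeoJSON(ST_MakeValid(geom))
from la_parcels
where geom @ ST_MakeEnvelope(%s, %s, %s, %s, 4326)
"""

@app.route("/parcels")
def parcels():
    # Bounding box of the map's current view, e.g. ?w=-118.3&s=33.8&e=-118.2&n=33.9
    box = [float(request.args[k]) for k in ("w", "s", "e", "n")]
    with conn.cursor() as cur:
        cur.execute(QUERY, box)  # psycopg2 fills the %s placeholders safely
        return jsonify(parcels=cur.fetchall())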

Conclusion

I was disappointed with how difficult it was to obtain this data initially and the poor quality it came in. Governments and constituencies of all sizes stand to benefit enormously from investment in modern software tools and stronger commitments to transparency.

While building I began to dream of having all the parcels of the United States in a single database. That would be a fascinating study, but the data is horribly spread about and fragmented. If you are interested in obtaining the data for your county, or another, and structuring it in to the same schema I would be very grateful, and promise to share the collected information. You can track the completeness of what has been gathered for California here. Please contact me if interested in contributing.

What if we had the data for other nations too? Could we put the entire world in a computer?

References

[1] http://lalafco.org/Forms/Application%20Form12-11%27.pdf
[2] http://assessor.lacounty.gov/extranet/default.aspx
[3] to the pedants that still insist on using “these data” give it a rest, it’s confusing to most people
[4] http://postgis.net/
[5] http://spatialreference.org/ref/epsg/wgs-84/
[6] http://postgis.net/docs/manual-1.3/ch06.html

Do you love databases? Soylent is hiring a Chief Database Architect.

Warning: Wi-Fi Blocking Is Prohibited

29 January 2015 - 2:00am

PUBLIC NOTICE

Federal Communications Commission
445 12th St., S.W.
Washington, D.C. 20554

News Media Information: 202-418-0500
Internet: http://www.fcc.gov
TTY: 1-888-835-5322

DA 15-113
January 27, 2015
Enforcement Advisory No. 2015-01

FCC ENFORCEMENT ADVISORY

WARNING: Wi-Fi Blocking is Prohibited

Persons or Businesses Causing Intentional Interference to Wi-Fi Hot Spots Are Subject to Enforcement Action

In the 21st Century, Wi-Fi represents an essential on-ramp to the Internet. Personal Wi-Fi networks, or “hot spots,” are an important way that consumers connect to the Internet. Willful or malicious interference with Wi-Fi hot spots is illegal. Wi-Fi blocking violates Section 333 of the Communications Act, as amended.[1] The Enforcement Bureau has seen a disturbing trend in which hotels and other commercial establishments block wireless consumers from using their own personal Wi-Fi hot spots on the commercial establishment’s premises. As a result, the Bureau is protecting consumers by aggressively investigating and acting against such unlawful intentional interference.

In 2014, the Enforcement Bureau conducted an investigation, culminating with a Consent Decree, into this kind of unlawful activity by the operator of a resort hotel and convention center.[2] In that case, Marriott International, Inc. deployed a Wi-Fi deauthentication protocol to deliberately block consumers who sought to connect to the Internet using their own personal Wi-Fi hot spots. Marriott admitted that the customers it blocked did not pose a security threat to the Marriott network and agreed to settle the investigation by paying a civil penalty of $600,000.

Following the settlement, the Enforcement Bureau has received several complaints that other commercial Wi-Fi network operators may be disrupting the legitimate operation of personal Wi-Fi hot spots. The Bureau is investigating such complaints and will take appropriate action against violators.

What is Prohibited? No hotel, convention center, or other commercial establishment or the network operator providing services at such establishments may intentionally block or disrupt personal Wi-Fi hot spots on such premises, including as part of an effort to force consumers to purchase access to the property owner’s Wi-Fi network. Such action is illegal and violations could lead to the assessment of substantial monetary penalties.[3]

In addition, we reiterate that Federal law prohibits the operation, marketing, or sale of any type of jamming equipment, including devices that interfere with Wi-Fi, cellular, or public safety communications. Detailed information about the prohibition against jamming is available on the Commission’s website at http://www.fcc.gov/encyclopedia/jammer-enforcement.

What Should You Do if You Suspect Wi-Fi Blocking? If you have reason to believe your personal Wi-Fi hot spot has been blocked, you can file a complaint with the FCC. To do so, you can visit www.fcc.gov/complaints or call 1-888-CALL-FCC. If you contact the FCC, you are encouraged to provide as much detail as possible regarding the potential Wi-Fi blocking, including the date, time, location, and possible source.

Need More Information? Media inquiries should be directed to Neil Grace at 202-418-0506 or neil.grace@fcc.gov. For general information on the FCC, you may contact the FCC at 1-888-CALL-FCC (1-888-225-5322) or visit our website at www.fcc.gov. To request materials in accessible formats for people with disabilities (Braille, large print, electronic files, audio format), send an e-mail to fcc504@fcc.gov or call the Consumer & Governmental Affairs Bureau at 202-418-0530 (voice), 202-418-0432 (TTY).

Issued by: Chief, Enforcement Bureau

[1] 47 U.S.C. § 333.

[2] Marriott Int’l, Inc.; Marriott Hotel Servs., Inc., Order and Consent Decree, 29 FCC Rcd 11760 (Enf. Bur. 2014). Marriott and other members of the hotel and lodging industry filed a petition requesting guidance on this issue. See Petition of Am. Hotel & Lodging Ass’n, Marriott Int’l, Inc., and Ryman Hospitality Props. for a Declaratory Ruling to Interpret 47 U.S.C. § 333, or, in the Alternative, for Rulemaking, RM-11737 (filed Aug. 25, 2014) (Petition). Comment was sought on the Petition. Consumer & Gov’t Affairs Bureau Reference Information Center Petition for Rulemaking Filed, Public Notice, RM 11737 (Nov. 19, 2014). While the Enforcement Bureau recognizes that the Petition questions our position, the Bureau will continue to enforce the law as it understands it unless and until the Commission determines otherwise.

[3] All operators, including operators of Part 15 devices, must comply with the Communications Act, including Section 333, and the Commission’s rules.


HackerSurfing: Free Housing and Food for Engineers Visiting SF

29 January 2015 - 2:00am

Hacker Surfing by Wefunder

Stay for up to a week. Meet startups that are hiring. No strings.

Hi! We're the guys behind Wefunder. We've helped fund over 50 startups... and almost all of them - including ourselves - are desperately seeking talented engineers and designers to hire. So.... Hacker Surfing!

We have a couple of spare bedrooms in our "hacker palace". It's pretty swank. If you're visiting SF, stay with us for free and eat our foods... and we'll introduce you to a bunch of legit, funded startups, most of which are Y Combinator alumni. They offer market-rate salaries and cover relocation costs.

There is no obligation whatsoever! You don't need to do any spec work or other such nonsense. We enjoy meeting awesome people, so if you don't take a job at any of the startups, that's still cool. We just hope you had a good time.

Am I under any obligations?

No! None! Zero! You are not obligated to join any startup or do any work for a startup. We only ask that you - in good faith - be open to a new job in San Francisco.

Seriously? This seems too good to be true.

We like being around really interesting and motivated people, and if it's a good fit, we're confident things will just work out. We don't like obligating anyone to do anything.

What's the motivation? Are you paid to do this?

No one pays us. We like meeting and helping cool people. We ourselves - as founders of Wefunder - are looking to hire. And we've funded 55+ startups, almost all of which are desperately seeking engineers and designers. We'd feel good being matchmakers.

Who are you? Who lives in the house?

Nick Tommarello and Mike Norman - co-founders of Wefunder - live in the house. During the day, a few employees and friends of Wefunder hack in the house.

Can I work in the house?

Of course! We have rock-solid wi-fi, adjustable standing desks, and spare Apple Cinema displays lying around.

How many hackers or designers do you invite?

Typically, no more than two at any one time. We only have two additional bedrooms.

What is covered? How about my airfare?

Your lodging and food is 100% covered. Your flight, however, is not.

Are there any social events?

Every Wednesday, we cook some juicy rib-eye steak and feed 20+ startup founders, a large number of them Y Combinator alumni. It's a great way to bond with some super-legit founders!

Where will I sleep?

In a bed in your own private room. We have a queen and a smaller bunk bed available.

What kind of food is included?

Whatever you can find in the kitchen - we're generally well stocked with snacks, eggs, and various veggies and meats.

What is the application process like?

If we have room available, we'll talk on the phone for about 15 minutes, and then decide. That's it - no technical tests.

What kind of jobs are there?

Wefunder is hiring a visual designer, a front-end engineer, and a Rails engineer: here's their jobs page. We'll add more jobs from other startups when we know who is attending.

Can I do consulting work?

If you'd like, there will be some small projects (10-20 hours) that you can work on during the program, paid at your normal rate. This could easily pay for your plane ticket. You are, however, under no obligation to consult.

You'll hear back within 48 hours.

Sponsored and managed by Wefunder.

Gamma-ray bursts are a threat to life

29 January 2015 - 2:00am

A new study confirms the potential hazard of nearby gamma-ray bursts (GRBs), and quantifies the probability of an event on Earth and more generally in the Milky Way and other galaxies. The authors find a 50% chance that a nearby GRB powerful enough to cause a major life extinction on the planet took place during the past 500 million years (Myr). They further estimate that GRBs prevent complex life like that on Earth in 90% of the galaxies.

GRBs occur about once a day from random directions in the sky. Their origin remained a mystery until about a decade ago, when it became clear that at least some long GRBs are associated with supernova explosions (CERN Courier September 2003 p15). When nuclear fuel is exhausted at the centre of a massive star, thermal pressure can no longer sustain gravity and the core collapses on itself. If this process leads to the formation of a rapidly spinning black hole, accreted matter can be funnelled into a pair of powerful relativistic jets that drill their way through the outer layers of the dying star. If such a jet is pointing towards Earth, its high-energy emission appears as a GRB.

The luminosity of long GRBs – the most powerful ones – is so intense that they are observed throughout the universe (CERN Courier April 2009 p12). If one were to happen nearby, the intense flash of gamma rays illuminating the Earth for tens of seconds could severely damage the thin ozone layer that absorbs ultraviolet radiation from the Sun. Calculations suggest that a fluence of 100 kJ/m² would create a depletion of 91% of this life-protecting layer on a timescale of a month, via a chain of chemical reactions in the atmosphere. This would be enough to cause a massive life-extinction event. Some scientists have proposed that a GRB could have been at the origin of the Ordovician extinction some 450 Myr ago, which wiped out 80% of the species on Earth.

With increasing statistics on GRBs, a new study now confirms a 50% likelihood of a devastating GRB event on Earth in the past 500 Myr. The authors, Tsvi Piran from the Hebrew University of Jerusalem and Raul Jimenez from the University of Barcelona in Spain, further show that the risk of life extinction on extra-solar planets increases towards the denser central regions of the Milky Way. Their estimate is based on the rate of GRBs of different luminosity and the properties of their host galaxies. Indeed, the authors found previously that GRBs are more frequent in low-mass galaxies such as the Small Magellanic Cloud with a small fraction of elements heavier than hydrogen and helium. This reduces the GRB hazard in the Milky Way by a factor of 10 compared with the overall rate.

The Milky Way would therefore be among only 10% of all galaxies in the universe – the larger ones – that can sustain complex life in the long-term. The two theoretical astrophysicists also claim that GRBs prevent evolved life as it exists on Earth in almost every galaxy that formed earlier than about five-thousand-million years after the Big Bang (at a redshift z > 0.5). Despite obvious, necessary approximations in the analysis, these results show the severe limitations set by GRBs on the location and cosmic epoch when complex life like that on Earth could arise and evolve across thousands of millions of years. This could help explain Enrico Fermi’s paradox on the absence of evidence for an extraterrestrial civilization.

Deploying Tor Relays

28 January 2015 - 2:00pm

On November 11, 2014 Mozilla announced the Polaris Privacy Initiative. One key part of the initiative is our support of the Tor network by deploying Tor middle relay nodes. On January 15, 2015 our first proof of concept (POC) went live.

TL;DR: here are our Tor relays: https://globe.torproject.org/#/search/query=mozilla

When we started this POC, the requirements we had were:

  • the Tor nodes should run on dedicated hardware
  • the nodes should be logically and physically separated from our production infrastructure
  • use low-cost, commoditized hardware
  • nodes should be operational within 3 weeks
Hardware and Infrastructure
  • We chose to make use of our spare and decommissioned hardware. That included a pair of Juniper EX4200 switches and three HP SL170zG6 servers (48 GB RAM, 2x Xeon L5640, 2x 1 Gbps NIC).
  • We dedicated one of our existing IP transit providers to the project (2 x 10 Gbps).

The current design is fully redundant. This allows us to perform maintenance or lose a node without impacting 100% of traffic; the worst-case scenario is a 50% loss of capacity.

The design also allows us to easily add more servers in the event we need more capacity, with no anticipated impact.

Building and Learning

There is a large body of knowledge available on building Tor nodes. I read mailing list archives, blog posts, and tutorials, and had exchanges with people already running large relays. There are still data points Mozilla needs to understand before our experiment is complete. This section is a quick rundown of some of those data points.

  • A single organization shouldn’t be running more than 10Gbps of traffic for a middle relay (and 5Gbps for an exit node).

This seems to be more of a gut feeling among existing operators than a proven value (let me know if I'm wrong), but it makes sense. We do have available transit and capacity; understanding throughput and resource utilization is a key criterion for us.

Important note: an operator running several relays must use the “MyFamily” option in torrc (see the sketch below). This ensures a user doesn’t bounce through several of your servers.

A new Tor instance (identified by its private/public key pair) will take time (up to 2 months) to use all its available bandwidth. This is explained in this blog post: The lifecycle of a new relay. We will be updating our blog posts and are curious how closely our nodes mirror the lifecycle.

  • A Tor process (instance) can only push about 400Mbps.

This is based on mailing list discussions, as we haven’t reached that bandwidth yet. We run several instances per physical server.

  • A single public IP can only be shared by 2 Tor instances

This is a security feature that prevents a single person from running a large number of fake nodes, as explained in this research paper. The limit is documented in the Tor protocol specification.

  • Listen on well known ports like 80 or 443

This helps people behind strict firewalls access Tor. Don't worry about starting the process as root (needed to listen on ports below 1024): as long as you set the “User” option in torrc, Tor will drop privileges after binding to the ports.
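Putting the torrc options mentioned above together, here is a minimal, illustrative sketch for one middle-relay instance (the nickname, user account, and family fingerprints are placeholders, not our real values):

  # illustrative torrc for one middle relay; all values are placeholders
  Nickname myrelay01
  ORPort 443
  User toruser
  MyFamily $FINGERPRINT_RELAY_A,$FINGERPRINT_RELAY_B
  ExitPolicy reject *:*

With “User” set, Tor binds port 443 while starting as root and then drops to the unprivileged account; “ExitPolicy reject *:*” keeps the node a middle relay; “MyFamily” lists the fingerprints of every relay the operator runs so clients never build a circuit through two of them.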

Automation

We decided to use Ansible for configuration management.  A few things motivated us to make that choice.

  • There was an existing ansible-tor role very close to what we needed to accomplish (and here is our pull request with our fixes and additions).
  • Some of our teams are using Ansible in production and we (Network Engineering) are considering it.
  • Ansible does not require a heavy client/server infrastructure which should make it more accessible to other operators.

And look! Mozilla’s Ansible configuration is available on GitHub!

Security

The security team helped us a lot throughout this project. Together we drew up a list of requirements, such as:

  • strict firewall filtering
  • hardening the operating system (disable unneeded services, good SSH configuration, automatic updates)
  • hardening the network devices management plane
  • implementing edge filtering to make sure only authorized systems can connect to the “network management plane”

The only place from which the infrastructure is administered is the jumphost; the systems don't accept management connections from anywhere else.

It is important to note that many of the security requirements align nicely with what is considered good practice in general system and network administration. Take enabling NTP or centralized syslog, for example: both are equally important for keeping services running smoothly, for troubleshooting, and for incident response. The same thinking applies to the principle “make sure the network devices' security is at least as good as the systems'.”

We've also implemented periodic security checks for these systems. All of them are scanned from the inside for missing security updates and from the outside for open ports.

Metrics

One of the open questions is how to figure out whether we're running an efficient relay (in terms of cost, participation in the Tor network, hardware efficiency, etc.). Which metrics should we use, and how?

Looking around, it seems there is no definitive answer. We're graphing everything we can about bandwidth and server utilization using Observium. The Tor network already has a project to collect relay statistics, called Tor metrics; thanks to it, tools like Globe and others can exist.

Future

Note that we have just started the relays, and they are far from running at their maximum bandwidth (for the reasons listed above). We will share more information down the road about performance and scaling.

Depending on the results of the POC, we may move the nodes to a managed part of our infrastructure. As long as their private keys stay the same, their reputation will follow them wherever they go, with no new ramp-up period.

On the technical side there are many possible next steps, such as adding IPv6 connectivity. We're also reviewing opportunities to automate more parts of the deployment (iptables, logs, and so on).

Links

Here are a few links that you might find interesting:

[blog] IPredator – building a Tor server
[mailing list] [tor-dev] Scaling tor for a global population

[mailing list] How to Run High Capacity Tor Relays
[wiki] tor – archwiki
[blog] Run A Tor-Relay On Ubuntu Trusty
[mailing list] [tor-relays] Someone broke the tor-relay speed record?
[tor website] Configuring a Tor relay on Debian/Ubuntu
[wiki] tor exit full setup

Thanks

Of course, none of that would have been possible without the help of Van, Michal (who wrote the part about security) and Opsec, Javaun, James, Moritz and the people of #tor!

Slack Acquires Screenhero (YC W13)

28 January 2015 - 2:00pm

Slack, the enterprise collaboration service that has raised $180 million and proven to be a runaway success with 365,000 daily active users, has made another acquisition to add more functionality to its platform and position itself as a sharper competitor against the likes of Microsoft. It has bought Screenhero, a Y-Combinator alum that competes against the likes of GoToMeeting or Webex, letting users of the service speak to each other and access each other’s screens for editing and more.

Stewart Butterfield, the co-founder and CEO of Slack, tells me that Screenhero will continue to operate for the time being as a separate product for new and existing customers, including its current pricing tiers, which start at $11 per user per month, and scale up to $444 per month for 50 users. Over time, Screenhero’s functionality will be integrated into Slack and Screenhero itself will shut down.

All six employees of Screenhero will join Slack, bringing the total number of employees to just over 100. The financial terms of the deal are not being disclosed, beyond the fact that it is a cash-and-stock deal.

When I first heard about Screenhero, it was by way of an email from a YC partner who was raving about how great the company was, following on from other screensharing and paired computing services but doing it so much better. And as someone who has used a number of these services, Screenhero definitely stands out for its simplicity and power, with so little lag that it's easy to forget that the person controlling your computer is potentially thousands of miles away.

It was this functionality that attracted Slack and Butterfield as well. “The cursor control that Screenhero offers,” he says, “we hadn’t seen anything like that before.”

Butterfield tells me that in fact he had made an offer to acquire Screenhero last year, but CEO and co-founder Jahanzeb Sherwani declined, preferring instead to integrate with Slack. In the meantime the company had picked up customers that included SendGrid, New Relic, GitHub, Living Social, and Automattic.

Over time, the idea of growing within a bigger platform started to appeal more. “We were under no pressure to sell from anyone, but as we were using Slack we were spending more time in it. The product is great, but so is the team. It seemed like a natural fit.”

Prior to this, Slack bought collaboration tool Spaces in September 2014. Butterfield says that a new product will be launching as a result of that acquisition “in the next few months.”