Hacker News from Y Combinator

Links for the intellectually curious, ranked by readers.

Kotlin, the Swift of Android

1 September 2014 - 1:00pm

Well, not quite but let's start from the beginning.

Now that Apple replaced Objective-C with Swift for iOS, the lack of a less archaic language for Android development has become more apparent.

For the desperate and adventurous there are JVM alternatives like Scala and Groovy, but using them with Android is expensive: importing a language means importing its whole runtime, which is a nightmare for package size and method count. That's fine for small applications, but small applications aren't the problem you are trying to solve with a better language.

Kotlin

Introducing Kotlin, a JVM-based language made by JetBrains (the folks behind IntelliJ IDEA and, by extension, Android Studio) and named after an island near Saint Petersburg, home of the development office behind the project. First announced in 2011, it has been around for a few years now, and Android support arrived in the second milestone release (M2).

I'm expecting the standard reaction: yes, another JVM alternative. But this one is specifically designed to be light, excluding domain-specific heft and keeping only the core features missing from Java. If that sounds good, read on.

Features

What are those features, you ask? There are plenty, but I will focus on the ones I have missed most in Java.

Named and optional arguments

Named arguments are a simple language feature that makes the code more readable, especially with longer method signatures.

Let's look at an example method:

void circle(int x, int y, int rad, int stroke) { ... }

Calling it in Java looks something like this:

circle(15, 40, 20, 1);

It takes multiple glances at the method signature to digest what it actually does. In Kotlin, the definition looks similar:

fun circle(x: Int, y: Int, rad: Int, stroke: Int) { ... }

But the call is a lot better:

circle(15, 40, rad = 20, stroke = 1);

Suppose we now want to make the stroke argument optional. In Java, you would overload a second method with one less argument and call the first one from it. Here is what you can do in Kotlin:

fun circle(x: Int, y: Int, rad: Int, stroke: Int = 1) { ... }

Not mind-blowing by any measure but still missing from Java.
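To make the behavior concrete, here is a small sketch of the call site (the string-returning body is a stand-in for real drawing code, purely for illustration):

```kotlin
// Stand-in for the drawing method above; returns a description
// string instead of drawing, purely for illustration.
fun circle(x: Int, y: Int, rad: Int, stroke: Int = 1): String =
    "circle(x=$x, y=$y, rad=$rad, stroke=$stroke)"

fun main() {
    // stroke falls back to its default of 1
    println(circle(15, 40, rad = 20))
    // named arguments can also be reordered freely
    println(circle(y = 40, x = 15, rad = 20, stroke = 2))
}
```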

Lambdas

Functional programming is all the rage these days and Kotlin supports lambdas too.

Let's start with a simple one. Suppose you have a list of integers and you want to keep only the odd elements:

list.filter {it % 2 == 1}

The predicate here takes the element type (in this case Int) and returns a Boolean. If we break the filter predicate out into an explicit variable, it may be easier to understand:

val predicate: (Int) -> Boolean = { it -> it % 2 == 1 }
list.filter(predicate)

The syntax is similar to most other languages with lambda support so let's leave it there.

Null and type safety

Always having to check for nulls is another significant annoyance in Java, which is partially solved in Kotlin.

If you have an ordinary variable definition, the compiler does not allow it to be null.

var text1: String = "something" // All good
var text2: String = null // Does not compile!

For cases where you do allow nulls, you have to declare an optional type, with a question mark following the type name.

var text2: String? = null // All good

Calling methods on an optional-typed variable then requires optional calls.

text2?.replace(" ", "_")

This means that if text2 is null, the replace() method call is ignored and no NullPointerException is thrown.
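A minimal sketch of what the safe call actually evaluates to (the variable names are just for illustration):

```kotlin
fun main() {
    var text2: String? = null
    // safe call: the whole expression evaluates to null, no exception thrown
    println(text2?.replace(" ", "_")) // prints "null"

    text2 = "a b"
    println(text2?.replace(" ", "_")) // prints "a_b"
}
```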

If you're handed an optional-typed variable that you are absolutely sure is not null (this happens a lot with Android APIs, where you can safely assume a method never returns null but an optional is used for consistency), you can force the call.

text2!!.replace(" ", "_")

This does produce a NullPointerException if text2 happens to be null, so be careful!

Another point of safety is testing for types. Let's say you have an instance of Context and you want to test whether it's an Activity (which, as you know, extends Context) and if so, do something that can only be done to an activity.

val context = getContext()
if (context is Activity) {
    context.finish()
}

Notice how, after the type check, you can just start using the context as an activity without having to cast it. Also, like instanceof in Java, the is check is null-safe, so even if getContext() returns null, the above code will not crash.

Data objects

When writing data objects, there are things you always have to implement manually: toString(), hashCode() and equals(). Even though most IDEs can make this task less tedious, they don't help with updating these methods when a new field is added.

In Kotlin, there is no need to bother with any of that! All you do is add the prefix "data" before the class definition and all three methods will be implicitly generated.

data class Island(val name: String? = null)

So now if you were to instantiate the above class, it would produce a human-readable toString() automatically.

val island = Island("Kotlin")
print(island.toString()) // Output: Island(name=Kotlin)

Similarly, if we create another instance with the same name, it will be equal to the original object and their hashCode() values will match.

val island2 = Island("Kotlin")
assertTrue(island.equals(island2))
assertTrue(island.hashCode() == island2.hashCode())

Obviously, if you need those methods customised, you are still free to implement them manually the way you would in Java.

Singletons

Singletons are used often enough that a simpler way of creating them deserves to exist. Instead of the usual static instance, getInstance() method and private constructor, Kotlin uses object notation.

object ApiConfig {
    val baseUrl: String = "https://github.com"
}

For consistency, object notation is also used to define static methods.

open class MyFragment() : Fragment() {
    class object {
        fun newInstance(): MyFragment {
            return MyFragment()
        }
    }
}

The above static method can be invoked as MyFragment.newInstance(), just like a static method in Java.

Traits

Although Kotlin doesn't support multiple inheritance, it comes close by supporting traits, which are essentially interfaces with default implementations.

trait SessionCloseable {
    fun closeSession() {
        Log.d(TAG, "Closing...")
    }
}

The above trait defines an object that can close a session, so here is how we implement it in an activity.

open class MyActivity : Activity(), SessionCloseable {
    override fun onStop() {
        super.onStop()
        closeSession()
    }
}

The notable thing about the syntax is that Kotlin makes no distinction between extending a class and implementing an interface: everything goes in the list after the colon.

Extension functions

Finally, let's look at a case where you want to add functionality to an existing class from another API.

For example, suppose you want a method that takes a string and replaces all the spaces with underscores. In Java, you would have to create a utility method that takes the original string:

public class StringUtils {
    public static String encodeString(String str) {
        return str.replaceAll(" ", "_");
    }
}

In Kotlin, you can just create an extension method (even if the original class is final):

fun String.encodeSpaces(): String {
    return this.replaceAll(" ", "_")
}

For those familiar with Objective-C, this is similar to categories.
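Assuming an extension like the one above (written here with the stdlib replace(), in case replaceAll() isn't available in your Kotlin version), usage then reads like a regular method call on String:

```kotlin
// Hypothetical extension mirroring the example above,
// using the stdlib replace() for a literal substitution.
fun String.encodeSpaces(): String = this.replace(" ", "_")

fun main() {
    println("Kotlin island".encodeSpaces()) // prints "Kotlin_island"
}
```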

How-to

To set your Android project up to use Kotlin, you need the following three things:

  1. Import the plugin
  2. Import the runtime
  3. Create and link source directories

The plugin is imported in the buildscript clause (use the latest version available):

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
    }
}

apply plugin: 'kotlin-android'

The runtime is imported similarly, in the dependencies clause:

dependencies {
    compile "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
}

Finally, the directory structure would look something like this:

- app
  - src
    - debug
      - java
      - kotlin
    - main
      - java
      - kotlin
    - release
      - java
      - kotlin

For such a structure, the android clause needs the following additions:

android {
    sourceSets {
        main.java.srcDirs += 'src/main/kotlin'
        debug.java.srcDirs += 'src/debug/kotlin'
        release.java.srcDirs += 'src/release/kotlin'
    }
}

That's it! Now you can just create packages and *.kt files in the "kotlin" source directories and they will appear in your classpath.

For IDE support, install the Kotlin plugin in Android Studio or IntelliJ IDEA (available from the plugin repository) and set up Kotlin annotations for the project when prompted.

For more details, check out my sample application in the Source section, which demonstrates most of the language features.

Cost

There are two parts to the cost that Android developers worry about: size and method count, both of which can be evaluated for a simple test application.

The last section deals with the trade-offs that you will be faced with when switching to Kotlin.

Test application

No test application can be indicative of the costs for every other application, largely because the deciding factor is how many of the underlying Kotlin methods you touch, which varies wildly with the Android APIs and language features you use.

That said, the cost only varies if you use ProGuard to strip unused methods; otherwise you are stuck with the worst case, as the data below shows.

So here is the approach: one blank activity as generated by Android Studio (sample layout and menu, nothing else), first in its original Java form, then rewritten in Kotlin. The goal is to establish the smallest running application: the best case with ProGuard enabled, and the worst case with it disabled.

Size     Original   ProGuard
Java     55 KB      54 KB
Kotlin   396 KB     57 KB

Size difference is quite small -- at most 341 KB, no more than a medium-sized library.

Method count   Original   ProGuard
Java           24         11
Kotlin         6,063      53

Depending on how much of Kotlin your application uses, the overhead is at most 6,063 methods. Not insignificant but, to put it in perspective, Guava (18.0-rc1) has 14,835 and Google Play Services (5.0.77) has 20,298, so it could be worse.

For the curious, here is a dump of the methods by package (as reported by dex-method-counts):

Read in 6063 method IDs.
<root>: 6063
    : 5
    android: 3
        app: 2
        view: 1
    java: 441
        io: 68
        lang: 204
            annotation: 1
            ref: 2
            reflect: 14
        math: 11
        net: 1
        nio: 3
            charset: 3
        text: 4
        util: 150
            concurrent: 14
                locks: 10
            regex: 7
    javax: 14
        xml: 14
            parsers: 7
            transform: 7
                dom: 1
                stream: 1
    jet: 3
        runtime: 3
            typeinfo: 3
    kotlin: 5519
        browser: 10
        concurrent: 120
        dom: 309
        internal: 13
        io: 210
        jvm: 154
            internal: 153
        math: 23
        modules: 44
        platform: 1
        properties: 129
        reflect: 319
            jvm: 222
                internal: 193
        pcollections: 54
        support: 20
        template: 57
        test: 69
        util: 5
    net: 17
        gouline: 17
            kotlindemo: 23
    org: 61
        jetbrains: 2
            annotations: 2
        w3c: 59
            dom: 59
                events: 19

Trade-offs

Finally, let's touch on some of the trade-offs. Here is a major one: although most Java libraries work just fine with Kotlin, annotation-based injection libraries, such as Butter Knife, are not supported. At least for now.

Having said that, if you can't live without your Butter Knife, one workaround is performing the injection in a Java base class, then extending it in Kotlin and doing the rest there. Whether or not the benefit outweighs the cost is for you to decide but hey, it works.

The only other limitation is that instrumentation tests written in Kotlin seem to be ignored by the Android SDK. This, however, might be caused by something that I'm doing wrong, so as soon as I work out a solution, this section will be updated. As a workaround for now, the instrumentation tests can still be written in Java.

Conclusion

To be fair to Java, most of the features listed above were added in Java 8. However, there is no telling when that will become available on Android (in fact, this was asked at the Android fireside chat at Google I/O 2014 and, according to Xavier Ducrohet, there are no immediate plans), while Kotlin is here now.

Besides, the language is quite mature and definitely not to be dismissed as a novelty. The only difficulty I have found so far is the documentation: often lacking and outdated. However, since the syntax is so similar to its JVM siblings, knowing how things work in Scala usually produces accurate enough guesses, so it's a minor hiccup.

In conclusion, Kotlin is what I intend to use for most of my Android projects from now on. Obviously, given an existing codebase, it may not be wise to convert everything straight away; however, since Kotlin interoperates with Java classes, the barriers to entry are minimal and it's definitely worth a try.

Source Resources

© 2014 Gouline Labs. All Rights Reserved.

Fourier Image Filtering

1 September 2014 - 1:00pm

Any image can be decomposed into the sum of many sinusoids at many different frequencies.
At the top is the image's frequency spectrum which shows the amplitudes of these sinusoids.
Below is the frequency response curve which scales the sinusoid amplitudes. Edit it to filter the image.
For example: Gaussian Blur, Sharpen, Edge Detection

Our Use of Little Words Can, Uh, Reveal Hidden Interests

1 September 2014 - 1:00pm

Katherine Streeter for NPR

One Friday night, 30 men and 30 women gathered at a hotel restaurant in Washington, D.C. Their goal was love, or maybe sex, or maybe some combination of the two. They were there for speed dating.

The women sat at separate numbered tables while the men moved down the line, and for two solid hours they did a rotation, making small talk with people they did not know, one after another, in three-minute increments.

I had gone to record the night, which was put on by a company called Professionals in the City, and what struck me was the noise in the room. The sound of words, of people talking over people talking over people talking. It was a roar.

What were these people saying?

And what can we learn from what they are saying?

That is why I called James Pennebaker, a psychologist interested in the secret life of pronouns.

About 20 years ago Pennebaker, who's at the University of Texas, Austin, got interested in looking more closely at the words that we use. Or rather, he got interested in looking more closely at a certain subset of the words that we use: Pennebaker was interested in function words.

For those of you like me — the grammatically challenged — function words are the smallish words that tie our sentences together.

The. This. Though. I. And. An. There. That.

"Function words are essentially the filler words," Pennebaker says. "These are the words that we don't pay attention to, and they're the ones that are so interesting."

According to the way that Pennebaker organizes language, the words that we more often focus on in conversation are content words, words like "school," "family," "live," "friends" — words that conjure up a specific image and relay more of the substance of what is being discussed.

"I speak bad Spanish," Pennebaker explains, "and if I'm in a conversation where I'm listening to the other person speak, I am just trying to find out what they are talking about. I am listening to 'what, where, when' — those big content-heavy words. All those little words in between, I don't listen to those because they're too complex to listen to."

In fact, says Pennebaker, even in our native language, these function words are basically invisible to us.

"You can't hear them," Pennebaker says. "Humans just aren't able to do it."

But computers can, which is why two decades ago Pennebaker and his graduate students sat down to build themselves a computer program.

The Linguistic Inquiry and Word Count program that Pennebaker and his students built in the early 1990s has, like any computer program, an ability to peer into massive data sets and discern patterns that no human could ever hope to match.

And so after Pennebaker and his crew built the program, they used it to ask all kinds of questions that had previously been too complicated or difficult for humans to ask.

Some of those questions included:

  • Could you tell if someone was lying by carefully analyzing the way they used function words?
  • Looking only at a transcript, could you tell from function words whether someone was male or female, rich or poor?
  • What could you tell about relationships by looking at the way two people spoke to each other?

Which brings us back to speed dating.

One of the things that Pennebaker did was record and transcribe conversations that took place between people on speed dates. He fed these conversations into his program along with information about how the people themselves were perceiving the dates. What he found surprised him.

"We can predict by analyzing their language, who will go on a date — who will match — at rates better than the people themselves," he says.

Specifically, what Pennebaker found was that when the language style of two people matched, when they used pronouns, prepositions, articles and so forth in similar ways at similar rates, they were much more likely to end up on a date.

"The more similar [they were] across all of these function words, the higher the probability that [they] would go on a date in a speed dating context," Pennebaker says. "And this is even cooler: We can even look at ... a young dating couple... [and] the more similar [they] are ... using this language style matching metric, the more likely [they] will still be dating three months from now."

This is not because similar people are attracted to each other, Pennebaker says; people can be very different. It's that when we are around people that we have a genuine interest in, our language subtly shifts.

"When two people are paying close attention, they use language in the same way," he says. "And it's one of these things that humans do automatically."

They aren't aware of it, but if you look closely at their language, count up their use of "I," and "the," and "and," you can see it. It's right there.

Pennebaker has counted words to better understand lots of things. He's looked at lying, at leadership, at who will recover from trauma.

But some of his most interesting work has to do with power dynamics. He says that by analyzing language you can easily tell who among two people has power in a relationship, and their relative social status.

"It's amazingly simple," Pennebaker says. "Listen to the relative use of the word 'I.'"

What you find is completely different from what most people would think. The person with the higher status uses the word "I" less.

To demonstrate this, Pennebaker pointed to some of his own email, a batch written long before he began studying status.

First he shares an email written by one of his undergraduate students, a woman named Pam:

Dear Dr. Pennebaker:

I was part of your Introductory Psychology class last semester. I have enjoyed your lectures and I've learned so much. I received an email from you about doing some research with you. Would there be a time for me to come by and talk about this?

Pam

Now consider Pennebaker's response:

Dear Pam -

This would be great. This week isn't good because of a trip. How about next Tuesday between 9 and 10:30. It will be good to see you.

Jamie Pennebaker

Pam, the lowly undergraduate, used "I" many times, while Pennebaker didn't use it at all.

Now consider this email Pennebaker wrote to a famous professor.

Dear Famous Professor:

The reason I'm writing is that I'm helping to put together a conference on [a particular topic]. I have been contacting a large group of people and many have specifically asked if you were attending. I would absolutely love it if you could come... I really hope you can make it.

Jamie Pennebaker

And the return email from Famous Professor:

Dear Jamie -

Good to hear from you. Congratulations on the conference. The idea of a reunion is a nice one ... and the conference idea will provide us with a semiformal way of catching up with one another's current research.... Isn't there any way to get the university to dig up a few thousand dollars to defray travel expenses for the conference?

With all best regards,

Famous Professor

Pennebaker says that when he encountered these emails he was shocked to find that he himself obeyed this rule. He says he thought of himself as a very egalitarian person, and assumed he would never talk to people differently because of their status.

But in retrospect he says it makes sense. We use "I" more when we talk to someone with power because we're more self-conscious. We are focused on ourselves — how we're coming across — and our language reflects that.

So could we use these insights to change ourselves? Like Eliza Doolittle in My Fair Lady, could we bend our personalities by bending the words we use? Could we become stronger? More powerful? Healthier?

After 20 years of looking at this stuff, Pennebaker doubts it.

"The words reflect who we are more than [they] drive who we are," he says.

You can't, he believes, change who you are by changing your language; you can only change your language by changing who you are. He says that's what his research indicates.

Pennebaker has collected some of this research in a book called The Secret Life of Pronouns, but he says he feels the practice of using computers to count and categorize language is really just a beginning.

It's like we just invented the telescope, he tells me, and there are a million new places to look.

In fact, since this article first ran, Pennebaker has used his big data computer analysis to look at a wide range of new questions.

He's become a kind of literary detective, using the program to determine if a lost play was written by Shakespeare. (Results of that search should be published soon.)

He's also trying to figure out if function words can predict students' performance in college through an analysis of 25,000 admissions essays.

And he published an entire paper on the use of the filler words — um, like, uh, I mean and you know. One of the things that he found was that the use of these words — in addition to their function of annoying older people — was associated with conscientiousness.

Pennebaker has several other projects underway as well — using our simplest words as a window into our souls.

An earlier version of this story ran on NPR in 2012.

The Advanced Cave Culling Algorithm

1 September 2014 - 1:00pm

31 August 2014

The Advanced Cave Culling Algorithm™ is an algorithm I happened to come up with during the development of MCPE 0.9, after we enabled cave generation in the worlds and were hit by how much slower the game became - it would hardly reach 40 fps on this year's devices!
Fortunately, it turns out this culling approach works pretty great, with culling ratios going from 50% to 99% (yes, 99%!) of the geometry, so it allowed us to generate caves on all phones instead of only the most powerful ones.
On top of that, it gave a nice speed boost to Minecraft PC after it was backported :)
I think it might be an interesting read, and perhaps useful to the countless voxel games being developed, so here's how it works!

Yeah, caves are kind of slow

Minecraft’s caves are really fun to explore and get lost in thanks to their generated sponge-like structure and huge walkable area, and they have always been a part of Minecraft we wanted to bring over to MCPE.
However, while they are pretty cool for the gameplay, they are the ultimate overdraw nightmare:

  • rendering the caves by tessellating their surfaces requires a massive amount of triangles
  • they are really chaotic and twisted
  • visible from potentially everywhere
  • they form a lot of overlapping surfaces (ie. polygons one in front of another)

the overdraw here is insane!

While all of this mess of overlapping polygons wastes a lot of rendering time on desktop PCs too, the issue is even worse on tile-deferred rendering architectures such as those in mobile phones due to how they process the fragments.

Tile-deferred GPUs like the PowerVR family found in Apple's devices can perform very efficient Hidden Surface Removal, but at the cost of keeping a sorted list of fragments per screen pixel. This works very well in simple scenes, but the cost grows with the number of fragments per pixel, which in a typical Minecraft cave scene is far too high (peaks of hundreds of triangles rendered to the same pixel), with an obvious impact on performance. In benchmarks with caves on, even the latest iPad Mini Retina couldn't manage to render above 40 fps, while slightly older devices such as the iPad Mini/iPad 2 struggled to keep a playable framerate.

To make caves doable at all, we definitely needed a way to hide them when they are not visible, thus reducing the most evil overdraw… but it had to be a new approach, as we had already explored a couple that didn't really cut it:

Things people tried before

Minecraft PC’s Advanced OpenGL
Notch originally tackled the problem of overdraw on PC using the then-advanced OpenGL feature called Hardware Occlusion Queries: draw a cubic "hull" around each 16x16x16 cube of blocks, then query the result to check whether any pixels of the hull were visible.
If so, the whole chunk was deemed visible, and rendered.
This works for some GPUs (desktop Nvidia variants, primarily) but unfortunately it isn't half as good as it sounds: apart from the fact that rendering a lot more cubes is even slower, GPUs are inherently asynchronous.
That is, your GPU, at any time, lags 1 to 3 frames behind what the CPU is doing right now.
So, without a lot of careful fiddling, rendering those hulls and reading back the result in the same frame can stall the pipeline, forcing the CPU to stop and wait for the GPU.
Without the driver optimizations that Nvidia probably does, this is actually very slow.
And anyway, HOQs are only available on OpenGL ES 3.0 devices, which are already the fastest around.

Checking which chunk faces are all opaque
Some people (myself included) thought of an algorithm that could run on the CPU, simply checking which sides of the 16x16x16 chunks were completely filled with opaque blocks, thus forming walls we could check against.
If a chunk was completely surrounded by such walls, it would be safe to hide it.
This too was a disappointment, though. It turns out that caves are so spread out that these walls of opaque blocks are quite rare, and the chance of one chunk having 6 opaque sides is very low: only about 1 in 100 chunks could actually be culled this way.

Thinking quadrimensionally

It turns out that the previous algorithm wasn't so bad even if it was unworkable - in fact, it's the basis of what we ended up implementing in MCPE 0.9 and Minecraft PC.
It just needed a small breakthrough to become workable!

The idea is actually very simple: what if instead of checking the walls separating the chunks, we check how those chunks connect together through the walls?
After all, we know which direction we are looking from, and that's information we can put to use by asking a more specific question of our graph:

Coming from my direction and entering the chunk through face A, is it possible to exit the chunk through face B?

Answering this question is actually quite fast, and requires storing just 15 bits per chunk, one for each possible pair of faces - however those 15 bits have to be updated every time an opaque block changes in the chunk.
This is a somewhat expensive operation (~0.1-0.2 ms on most devices I tried) that would have made the stutter worse if done on the main thread. In fact, both MCPE and PC (props to @Dinnerbone) now do this in the background!
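As a sketch of how those 15 bits could be stored and queried - the face numbering (0..5 for the six sides of a chunk) and all names here are my own, not taken from the MCPE source:

```kotlin
// Sketch of the 15-bit face-pair visibility set described above.
// Faces are numbered 0..5 (e.g. -X, +X, -Y, +Y, -Z, +Z).
class VisibilityGraph {
    private var bits = 0 // 15 bits, one per unordered pair of distinct faces

    private fun index(a: Int, b: Int): Int {
        val lo = minOf(a, b)
        val hi = maxOf(a, b)
        // unique index in 0..14 for each unordered pair of distinct faces
        return hi * (hi - 1) / 2 + lo
    }

    fun connect(a: Int, b: Int) {
        bits = bits or (1 shl index(a, b))
    }

    fun canSeeThrough(a: Int, b: Int): Boolean =
        (bits and (1 shl index(a, b))) != 0
}

fun main() {
    val g = VisibilityGraph()
    g.connect(0, 5)
    println(g.canSeeThrough(5, 0)) // pairs are unordered, prints "true"
    println(g.canSeeThrough(1, 2)) // prints "false"
}
```

A single Int comfortably holds all 15 bits, so updating a chunk's connectivity is just a handful of bitwise operations.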

Rebuilding the graph

It’s rather straightforward to build the connectivity graph for a chunk when an opaque block changes, it follows a simple algorithm:

  • for each block that’s not opaque,
  • start a 3D flood fill, with an empty set of faces
  • every time the flood fill tries to exit the boundaries of the chunk through a face, add the face to the set
  • when the flood fill is done, connect together all the faces that were added to the set.
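The steps above can be sketched in code - here as a 2D version for readability (a real chunk would be a 16x16x16 grid with 6 faces; the face numbering and function name are mine, not MCPE's):

```kotlin
// 2D sketch of the rebuild step: grid[y][x] == true means the block is
// opaque; the grid is assumed square. Faces: 0=left, 1=right, 2=top,
// 3=bottom. Returns the set of face pairs that can see each other.
fun connectedFacePairs(grid: Array<BooleanArray>): Set<Pair<Int, Int>> {
    val n = grid.size
    val seen = Array(n) { BooleanArray(n) }
    val pairs = mutableSetOf<Pair<Int, Int>>()
    for (sy in 0 until n) for (sx in 0 until n) {
        if (grid[sy][sx] || seen[sy][sx]) continue
        // flood fill from this transparent cell, collecting exit faces
        val faces = mutableSetOf<Int>()
        val stack = ArrayDeque(listOf(sy to sx))
        seen[sy][sx] = true
        while (stack.isNotEmpty()) {
            val (y, x) = stack.removeLast()
            if (x == 0) faces.add(0)
            if (x == n - 1) faces.add(1)
            if (y == 0) faces.add(2)
            if (y == n - 1) faces.add(3)
            for ((dy, dx) in listOf(0 to -1, 0 to 1, -1 to 0, 1 to 0)) {
                val ny = y + dy
                val nx = x + dx
                if (ny in 0 until n && nx in 0 until n &&
                    !grid[ny][nx] && !seen[ny][nx]) {
                    seen[ny][nx] = true
                    stack.add(ny to nx)
                }
            }
        }
        // every face this fill reached can see every other one
        for (a in faces) for (b in faces) if (a < b) pairs.add(a to b)
    }
    return pairs
}
```

On a fully transparent grid every face connects to every other; on a fully opaque grid no faces connect at all, and the chunk can be skipped entirely when the camera looks through it.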

Try to place/remove opaque blocks in this javascript thing here to see how it would work in practice in a 2D chunk:

each color represents a different flood fill, dark tiles don’t lead anywhere;
the green lines tell which face can see which other.

After the chunks are all connected together through their visible faces, it’s time to start thinking of how to use this to decide what we’re going to show on screen, and this is where things start to be more interesting!
I’ll try to explain how the visibility graph is used in Part 2, for all of those not already too bored :)


Mosh: A replacement for SSH

1 September 2014 - 1:00pm
Q: Who wrote Mosh?

Mosh was written by Keith Winstein, along with Anders Kaseorg, Quentin Smith, Richard Tibbetts and Keegan McAllister.

Q: Why another remote-terminal protocol?

Practical latency on the Internet is on the increase, with the rise of bufferbloat and sophisticated wireless links that optimize for throughput over delay. And roaming is more common than ever, now that laptops and handheld devices have largely displaced desktops. SSH is great, but frustrating to use when you want to change IP addresses or have a long-delay link or a dodgy connection.

Moreover, TELNET had some good things going for it — a local-echo mode and a well-defined network virtual terminal. Even today, SSH doesn't properly support UTF-8 end-to-end on a POSIX system.

Q: Are the mosh principles relevant to other network applications?

We think so. The design principles that Mosh stands for are conservative: warning the user if the state being displayed is out of date, serializing and checkpointing all transactions so that if there are no warnings, the user knows every prior transaction has succeeded, and handling expected events (like roaming from one WiFi network to another) gracefully.

Those don't seem too controversial, but fancy apps like Gmail-in-Chromium or on Android still behave atrociously on dodgy connections or after switching IP addresses. (Have you ever had Gmail leave an e-mail message in "Sending..." for ten hours while merrily retrieving new mail and not indicating any kind of error? Us too.) We think there may be considerable room for improvement in many network user interfaces from the application of these values.

Q: I'm using gnome-terminal or xfce4-terminal and seeing glitches in the last line of the terminal. Sometimes they go away when I select the text.

This is a bug in some versions of VTE, the terminal emulation library that powers gnome-terminal, xfce4-terminal, and some other terminal emulators. The VTE maintainers have fixed this bug; please see the below referenced bugzillas and other links. Another option is to switch to a non-VTE-based terminal, such as rxvt-unicode or xterm.

After installing a fixed package, for the fix to become effective, please make sure to restart all instances of the terminal.


Q: I'm getting "mosh requires a UTF-8 locale." How can I fix this?

To diagnose the problem, run locale on the local terminal, and ssh remotehost locale. To use Mosh, both sides of the connection will need to show a UTF-8 locale, like LC_CTYPE="en_US.UTF-8".

On many systems, SSH will transfer the locale-related environment variables, which are then inherited by mosh-server. If this mechanism fails, Mosh (as of version 1.2) will pass the variables itself. If neither mechanism is successful, you can do something like

mosh remotehost --server="LANG=en_US.UTF-8 mosh-server"

If en_US.UTF-8 does not exist on the remote server, you can replace this with a UTF-8 locale that does exist. You may also need to set LANG locally for the benefit of mosh-client. It is possible that the local and remote machines will need different locale names. See also this GitHub ticket.

Q: What does the message "Nothing received from the server on UDP port 60003" mean?

This means that mosh was able to start mosh-server successfully on the remote machine, but the client is not able to communicate with the server. This generally means that some type of firewall is blocking the UDP packets between the client and the server. If you had to forward TCP port 22 on a NAT for SSH, then you will have to forward UDP ports as well. Mosh will use the first available UDP port, starting at 60001 and stopping at 60999. If you are only going to have a small handful of concurrent sessions on a server, then you can forward a smaller range of ports (e.g., 60000 to 60010).

Tools like netstat, netcat, socat, and tcpdump can be useful for debugging networking and firewall problems.

Q: Why do you insist on UTF-8 everywhere?

We're really not UTF-8 zealots. But it's a lot easier to correctly implement one terminal emulator than to try to do the right thing in a variety of difficult edge cases. (This is what GNU screen tries to do, and in our experience it leads to some very tricky-to-debug situations.) So mosh just won't start up until the user has everything configured for a UTF-8-clean pathway. It may be annoying, but it also probably reduces frustration down the road. (Unfortunately an 8-bit vt220 and a UTF-8 vt220 are different and incompatible terminal types; the UTF-8 goes in underneath the vt220 state machine.)

Q: How do I use a different SSH port (not 22)?

As of Mosh 1.2, you can pass arguments to ssh like so:

mosh remotehost --ssh="ssh -p 2222"

Or configure a host alias in ~/.ssh/config with a Port directive. Mosh will respect that too.
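The host-alias route might look like this in ~/.ssh/config (the host name and address here are placeholders; because Mosh bootstraps its connection over SSH, any ssh_config directive that affects the initial connection applies to Mosh as well):

```
Host devbox
    HostName remotehost.example.com
    Port 2222
```

After that, mosh devbox connects over port 2222 with no extra flags.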

Q: I'm getting 'mosh-server not found'.

Please make sure that mosh is installed on the client, and mosh (or at least mosh-server) is installed on the server you are trying to connect to. If you install mosh-server in your home directory, please see the "Server binary outside path" instructions in the Usage section, above.

Q: SSH authenticates using Kerberos tickets, but Mosh asks me for a password.

In some configurations, SSH canonicalizes the hostname before passing it to the Kerberos GSSAPI plugin. This breaks for Mosh, because the initial forward DNS lookup is done by the Mosh wrapper script. To work around this, invoke Mosh as

mosh remotehost --ssh="ssh -o GSSAPITrustDns=no"

This will often fail on a round-robin DNS setup. In that case it is probably best to pick a specific host from the round-robin pool.

Q: Why is my terminal's scrollback buffer incomplete?

Mosh 1.2 synchronizes only the visible state of the terminal. Mosh 1.3 will have complete scrollback support; see this issue and the others which are linked from there. For now, the workaround is to use screen or tmux on the remote side.

Q: How do I get 256 colors?

Make sure you are running mosh in a terminal that advertises itself as 256-color capable. (This generally means TERM will be xterm-256color or screen-256color-bce.)
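If you want to test this from a script, a naive check of the TERM name can be sketched as below. Note the "-256color" suffix is only a naming convention; the terminfo database (e.g., tput colors, where ncurses is installed) is the authoritative answer:

```shell
# has_256color TERM-NAME: naive check based on the conventional
# "-256color" naming; terminfo (`tput colors`) is authoritative.
has_256color() {
  case "$1" in
    *-256color*) echo yes ;;
    *) echo no ;;
  esac
}

has_256color "$TERM"              # check the current terminal
has_256color screen-256color-bce  # yes
```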

Q: Has your secure datagram protocol been audited by experts?

No. Mosh is actively used and has been read over by security-minded crypto nerds who think its design is reasonable, but any novel datagram protocol is going to have to prove itself, and SSP is no exception. We use the reference implementations of AES-128 and OCB, and we welcome your eyes on the code. We think the radical simplicity of the design is an advantage, but of course others have thought that and have been wrong. We don't doubt it will (properly!) take time for the security community to get comfortable with mosh.

Q: Does mosh work with Amazon EC2?

Yes, it works great, but please remember to open up UDP ports 60000–61000 on the EC2 firewall.

Q: How do I tell if mosh is working correctly?

After you run mosh user@server, if successful you will be dropped into your login shell on the remote machine. If you want to check that mosh is being used instead of ssh, try typing Ctrl-^ Ctrl-Z to suspend the session (with mosh 1.2.4 or later on the client). Running fg will then return you to the session.

Q: What's the difference between mosh, mosh-client, and mosh-server? Which one do I use?

The mosh command is a wrapper script that is designed to be the primary way that you use mosh. In most cases, you can simply replace "ssh" with "mosh" in your command line. Behind the scenes, the mosh wrapper script will SSH to the server, start up mosh-server, and then close the SSH connection. Then it will start up the mosh-client executable on the client, passing it the necessary information for it to connect to the newly spawned mosh-server instance.

In normal usage, mosh-client and mosh-server don't need to be run directly.

Q: How do I run the mosh client and server separately?

If the mosh wrapper script isn't working for you, you can try running the mosh-client and mosh-server programs separately to form a connection. This can be a useful debugging technique.

1. Log in to the remote host, and run mosh-server.

It will give output like:

$ mosh-server

MOSH CONNECT 60004 4NeCCgvZFe2RnPgrcU1PQw

mosh-server (mosh 1.1.3)
Copyright 2012 Keith Winstein <mosh-devel@mit.edu>
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

[mosh-server detached, pid = 30261]

2. On the local host, run:

$ MOSH_KEY=key mosh-client remote-IP remote-PORT

where "key" is the 22-byte string printed by mosh-server (in this example, "4NeCCgvZFe2RnPgrcU1PQw"), "remote-PORT" is the port number given by the server (60004 in this case), and "remote-IP" is the IP address of the server. You can look up the server's IP address with "host remotehost".

3. If all goes well, you should have a working Mosh connection. Information about where the process fails can help us debug why Mosh isn't working for you.
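If you find yourself doing this often, steps 1 and 2 are easy to script; the only parsing required is the "MOSH CONNECT" line. A sketch (the sample line below is taken from the transcript above; in a real script it would be captured from the ssh invocation of mosh-server):

```shell
# Extract the port and key from the "MOSH CONNECT <port> <key>" line
# that mosh-server prints. $OUT is hard-coded here for illustration;
# a real script would capture it, e.g. OUT=$(ssh remotehost -- mosh-server new).
OUT="MOSH CONNECT 60004 4NeCCgvZFe2RnPgrcU1PQw"
PORT=$(echo "$OUT" | awk '/^MOSH CONNECT/ { print $3 }')
KEY=$(echo "$OUT" | awk '/^MOSH CONNECT/ { print $4 }')
echo "$PORT $KEY"    # 60004 4NeCCgvZFe2RnPgrcU1PQw
# Then connect with:  MOSH_KEY="$KEY" mosh-client <remote-IP> "$PORT"
```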

Q: With the mosh-server on FreeBSD or OS X, I sometimes get weird color problems. What's wrong?

This bug is fixed in Mosh 1.2. Thanks to Ed Schouten and Peter Jeremy for tracking this down.

Q: How do I contribute to mosh?

We welcome your contribution! Please join us in the #mosh channel on Freenode IRC, visit us on GitHub, or email mosh-devel@mit.edu.

Q: Who helped with mosh?

We're very grateful for assistance and support from:

  • Hari Balakrishnan, who advised this work and came up with the name.
  • Paul Williams, whose reverse-engineered vt500 state diagram is the basis for the Mosh parser.
  • The anonymous users who contributed session logs for tuning and measuring Mosh's predictive echo.
  • Nickolai Zeldovich for helpful comments on the Mosh research paper.
  • Richard Stallman for helpful discussion about the capabilities of the SUPDUP Local Editing Protocol.
  • Nelson Elhage
  • Christine Spang
  • Stefie Tellex
  • Joseph Sokol-Margolis
  • Waseem Daher
  • Bill McCloskey
  • Austin Roach
  • Greg Hudson
  • Karl Ramm
  • Alexander Chernyakhovsky
  • Peter Iannucci
  • Evan Broder
  • Neha Narula
  • Katrina LaCurts
  • Ramesh Chandra

AppleID password brute force proof-of-concept

1 September 2014 - 1:00pm
README.md

The fun is over: Apple has just patched this.

Here is an AppleID password brute-force PoC. It's only a PoC, so there is no

  • MultiThreading feature
  • Save-State-On-Exception feature

do it yourself

It uses the Find My iPhone service API, where brute-force protection was not implemented. The password list was generated from the top 500 RockYou leaked passwords that satisfy the AppleID password policy. Before you start, make sure it's not illegal in your country.

Be good :)

Follow us on twitter @hackappcom

My year with a distraction-free iPhone

1 September 2014 - 1:00am
My year with a distraction-free iPhone (and how to start your own experiment)

In 2012, I realized I had a problem.

My iPhone made me twitchy. I could feel it in my pocket, calling me, like the Ring called Bilbo Baggins. It distracted me from my kids. It distracted me from my wife. It distracted me anytime, anywhere. I just didn’t have the willpower to ignore email and Twitter and Instagram and the whole world wide web. Infinity in my pocket was too much.

I wanted to get control, but I didn’t want to give up my iPhone altogether. I loved having Google Maps and Uber and Find Friends and an amazing camera.

So I decided to try an experiment. I disabled Safari. I deleted my mail account. I uninstalled every app I couldn’t handle. I thought I’d try it for a week.

A month went by, then two, and I was loving my newfound freedom. I wrote up a post about my experience on Medium, called The distraction-free iPhone.

Then a lot of people read the post. It got over 80,000 views on Medium. Lifehacker ran it, and it got 70,000 more. Gizmodo ran it, and it got another 150,000. Obviously, other people were interested in the topic. (It’s not because I’m an interesting writer. For comparison, the next thing I wrote, about zombies, got less than 500 views.)

Sure, most of those bajillion readers — especially on Gizmodo — wanted to talk about what an idiot I am. “Why doesn’t he just buy a flip phone?!”

But a lot of people were supportive. And a lot of them actually tried it. Even some of my friends gave it a shot.

The biggest victory was when my wife made her own iPhone distraction-free. This, after 6 months of telling me I was nuts. You bet I was stoked! (Only you can’t really gloat in that situation. Not the “enlightened guy who’s too good for his iPhone” thing to do, is it?)

Anyway, I still get a lot of people asking: am I still doing it? Some of those people are probably too impatient to read this long boring intro. So for all you skimmers out there, here’s the answer in big letters:

Yes, I’m still doing it. Over one year later.

Oh great. Here comes the self-righteous part.

Over the last 12 months I’ve learned to enjoy (or at least, be OK with) moments of boredom. I reach for my phone a lot less often. It’s probably just my imagination, but it feels like it’s easier to concentrate when I need to get things done or tackle a big project.

Times on the bus when I would’ve checked email, I listen to music or just look around. I even started meditating on the bus (yes, really! And, uh… please don’t mug me) using an app called Calm. I can’t believe I’m the hippy dippy weirdo meditating on the bus using an app. But I’m actually a lot happier doing that than I was with my tweets.

At home, the phone becomes part of the stereo, and nothing more. At work, I set the thing down a lot. Nearly once a day I forget where it is — something I’d have never been able to imagine in 2012.

The weird part is this: This experiment was supposed to be a hardship. Now? It feels like the easy way out. I not only don’t want to go back; going back sounds really… difficult. Think of all the things I’d have to keep track of. Managing notifications and streams and pings and bleeps can add up to a lot of work.

The 24 hour experiment

If you’re intrigued, I encourage you to try going distraction-free for 24 hours. It’s pretty easy to set up on the iPhone, and most people who’ve done it really enjoy the break.

Now — there’s no pressure here. Some people seem to handle their smartphones just fine. For the rest of us, this is a worthy experiment.

1. Remove Safari

Safari is a big problem for me because it opens a window into a limitless universe of, y’know, everything. Infinity. At any given moment, there’s something super interesting on the Internet I haven’t seen before. Actually, I’m gonna go check real quick. NO! Must… finish… iPhone post.

You can’t delete Safari, but you can do this: Go into Settings, then Restrictions. Turn ‘em on, and then you can turn off Safari. Yes, I know, Restrictions — as if you yourself are a person you don’t trust to use your own phone. Kinda awkward, right?

2. Remove Mail

Email’s another big problem for me. There’s some good psychology behind this: our brains have a glitch that makes random rewards incredibly appealing. It’s a slot machine where the big payout is… a note from my boss, I guess.

I can’t give up email, but luckily with my job I really don’t have to have it on my phone. Over the last year, I’ve encouraged people to text or call me if they need a fast response. As an added bonus, most people have a much higher threshold for texting or calling than they do for firing off an email.

You can’t turn the Mail app all the way off on your iPhone. The easiest thing to do is delete your email account in Settings.

3. Remove “infinity” apps

Instagram, Facebook, Twitter, even the New York Times — all of these have a potentially endless supply of new and interesting stuff that I could check at any time. So none of them belong on my phone.

You can delete these apps the old-fashioned way, of course. Jiggle, jiggle, ✕!

4. Consciously decide what to keep

Having a blank desktop on the phone is surprisingly calming. Once I’d cleared off so much stuff, I wanted to keep it clean. I found it really useful to ask myself why each remaining app was on my phone. Was it a tool that made my life better? Or was it dragging me along for the ride?

So what made the cut? Here’s my list:

  1. Phone
  2. Messages
  3. Camera
  4. Apps that make me feel like I live in the future, kept in a folder inventively called “The Future.” Dropbox, Google Maps, Uber, Rdio, Instacart, and so on. (Even the weather app is pretty cool, when I stop and think about it. I mean, in the 1980s, I had a Walkman. That’s my point of reference: a freakin’ Walkman. It’s totally amazing that you can get a weather report in your pocket. And I would never, ever get addicted to it.)
  5. Useful things I rarely use, like a New York subway map or the compass.
  6. Useless things you can’t delete, like Passbook and Game Center.
I want a sensible phone, not a smart phone

This whole exercise has left me feeling like I took the iPhone into my life without ever really thinking about what it was gonna take from me. Internet, all the time, everywhere? Sign me up. Games, news, photos, popularity? Yes, please, more, please! It’s an all-you-can-eat buffet of excellent gourmet food. The trouble for me? I will always eat more than I should.

Since my line of work is helping companies build software and hardware, I’m trying to take this philosophy to heart. So I’ll leave you with a little preaching.

60 word sermon

When we invest our time and energy in technology — as creators or consumers — we should invest in products that belong in “The Future” and not those that make our lives disappear faster than they already do.

Personally, my life’s already going by at the speed of light. But this past year, it felt just the tiniest bit slower.

Tell me if you try it

Thanks for reading. If you do the experiment, I’d love to hear about it. Drop me a tweet. I’ll check it on my laptop. @jakek

The U.S. Navy Tests Its Ships in This Indoor Ocean

1 September 2014 - 1:00am

Switching testing scenarios used to take 20 minutes. Rolling waters can now be calmed in just 30 seconds.

“There are no freak waves in the world,” says naval architect Jon Etxegoien. “They are all predictable.”

He’s strolling the shores of an indoor ocean—a 12-million-gallon, football-field-size pool at the Naval Surface Warfare Center in the Maryland suburbs of Washington, D.C. A two-star admiral in crisp khaki leans on the railing nearby, watching obedient waves plunge and leap like show dolphins.

The recent installation of 216 state-of-the-art electronically-controlled wave boards has made this the most sophisticated scientific wave-testing basin of its size in the world. Scaled-down fiberglass models, cruisers the size of canoes, ride waves that max out at a few feet high. But it’s the motion of the ocean that matters. The hinged wave boards, each with its own motor synced up to software, can precisely recreate eight ocean conditions (from flat calm to typhoonlike) across all seven seas, pushing the water and moving up and down like giant piano keys whose scales and chords are waves.

The Navy tests models in the basin to be sure that billion-dollar ships will float before it builds them, but also to assess whether sailors can launch missiles and land helicopters in particular circumstances, and how vessels handle with a full tank versus running on fumes. Pitch, roll, sway, heave, acceleration, displacement—the calculations alone are enough to make you queasy.

A relic from the 1960s, the old pneumatic-powered wave system couldn’t replicate complicated open-ocean conditions, which are driven by local winds and far-off hurricanes. The testing team sometimes had to take remote-control models to the actual ocean, scouring weather reports for the perfect chop. Other seafarers have mistaken the models for “Cuban drug-smuggling submarines,” says test director Calvin Krishen. “We hear about it in the bars afterward.”

Salty stories aside, the excursions were not efficient. Simulations in the newly improved freshwater pool (the difference in density between it and saltwater is mathematically accounted for) can cover in six weeks scenarios that took many months of voyaging to recreate. Recently, the Navy tested a missile submarine slated to become operational in 2031. Other tests are classified.

The high-seas realism is unparalleled—unless, of course, the wave makers program the waves to be exactly parallel, which doesn’t happen at sea. Similar technologies have even fashioned waves that look like alphabet letters. “It almost becomes a kind of art,” Etxegoien says. “But our challenge is to do what nature can do, not what it can’t.”

Right now the pool is churned with what’s called a JONSWAP, a spectrum of specific frequencies and wavelengths derived from North Sea conditions. Fluorescent orange deer nets line the concrete beach, should a model destroyer ever run aground—though today there are no vessels under way, so the water troubles only itself.

The control center is a glass box high above the spray. But rather than Captain Nemo at his pipe organ, or Neptune himself, there’s a young man in a backward Orioles cap sitting in front of a computer, cranking out preprogrammed waves. Whether the scientists request ripples or hundred-year swells, says Tony Lopez, an electrical engineering technician, “I just press a button that says start.”

Next Article The Invention of the “Snapshot” Changed the Way We Viewed the World

Common App Rejections

1 September 2014 - 1:00am

Before you develop your app, it’s important to become familiar with the technical, content, and design criteria that we use to review all apps. We’ve highlighted some of the most common issues that cause apps to get rejected to help you better prepare your apps before submitting them for review.

Crashes and Bugs

You should submit your app for review only when it is complete and ready to be published. Make sure to thoroughly test your app on devices and fix all bugs before submitting.

Broken Links

All links in your app must be functional. A link to user support with up-to-date contact information is required for all apps, and if you're offering auto-renewable or free subscriptions or your app is in the Kids Category, you must also provide a link to your privacy policy.

Placeholder Content

Finalize all images and text in your app before sending it in for review. Apps that are still in progress and contain placeholder content are not ready to be distributed and cannot be approved.

Incomplete Information

Enter all of the details needed to review your app in the App Review Information section of iTunes Connect. If some features require signing in, provide a valid demo account username and password. If there are special configurations to set, include the specifics. If features require an environment that is hard to replicate or require specific hardware, be prepared to provide a demo video or the hardware. Also, please make sure your contact information is complete and up-to-date.

Inaccurate Descriptions

Your app description and screenshots should clearly and accurately convey your app's functionality. This helps users understand your app and makes for a positive App Store experience.

Misleading Users

Your app must perform as advertised and should not give users the impression the app is something it is not. If your app appears to promise certain features and functionalities, it needs to deliver.

Substandard User Interface

Apple places a high value on clean, refined, and user-friendly interfaces. Make sure your UI meets these requirements by planning your design carefully and following our design guidelines and UI Design Dos and Don'ts.

Advertisements

When submitting your app for review, you’ll be asked whether your app uses the Advertising Identifier (IDFA) to serve advertisements. If you indicate that your app uses the IDFA, but it does not have ad functionality or does not display ads properly, your app may be rejected. Make sure to test your app on an iOS device to verify that ads work correctly. Similarly, if you indicate that your app does not use the IDFA, but it does, your app will be put into the “Invalid Binary” status.

Web clippings, content aggregators, or collections of links

Your app should be engaging and useful, and make the most of the features unique to iOS. Websites served in an iOS app, web content that is not formatted for iOS, and limited web interactions do not make a quality app.

Repeated Submission of Similar Apps

Submitting several apps that are essentially the same ties up the App Review process and risks the rejection of your apps. Improve your review experience — and the experience of your future users — by thoughtfully combining your apps into one.

Not enough lasting value

If your app doesn’t offer much functionality or content, or only applies to a small niche market, it may not be approved. Before creating your app, take a look at the apps in your category on the App Store and consider how you can provide an even better user experience.

For more resources and a list of guidelines used to review apps submitted to the App Store and Mac App Store, visit the App Review page.

Top 10 reasons for app rejections during the
7‑day period ending August 28, 2014.
  • 14%

    More information needed

  • 8%

    Guideline 2.2: Apps that exhibit bugs will be rejected

  • 6%

    Did not comply with terms in the Developer Program License Agreement

  • 6%

    Guideline 10.6: Apple and our customers place a high value on simple, refined, creative, well thought through interfaces. They take more work but are worth it. Apple sets a high bar. If your user interface is complex or less than very good, it may be rejected

  • 5%

    Guideline 3.3: Apps with names, descriptions, or screenshots not relevant to the App content and functionality will be rejected

  • 5%

    Guideline 22.2: Apps that contain false, fraudulent or misleading representations or use names or icons similar to other Apps will be rejected

  • 4%

    Guideline 3.4: App names in iTunes Connect and as displayed on a device should be similar, so as not to cause confusion

  • 4%

    Guideline 3.2: Apps with placeholder text will be rejected

  • 3%

    Guideline 3.8: Developers are responsible for assigning appropriate ratings to their Apps. Inappropriate ratings may be changed/deleted by Apple

  • 2%

    Guideline 2.9: Apps that are "beta", "demo", "trial", or "test" versions will be rejected

Total Percent of App Rejections
  • 58%
    Top 10 Reasons
  • 42%
    Other Reasons

Boeing Flies on 99% Ada

1 September 2014 - 1:00am

"Working Together" is the project name Boeing chose when it first entertained the idea of producing its 777 jet plane. The then-Seattle-based aerospace company intended for the 10,000 people involved in the jetliner project to accept the company's policy of openness and non-competitiveness among both internal divisions and external suppliers. Management asserted that "working together" was the way to achieve the highest possible quality in every part of the system, from the secondary hydraulic brake to the auto-pilot system.

One challenge to the "Working Together" model was Boeing's insistence that the software be written in the Ada programming language. According to Brian Pflug, engineering avionics software manager at Boeing's Commercial Airplane Group, most companies disliked the idea of a standard language at all, and then seriously objected to Ada as too immature. In addition, one supplier was already six months into the development of their part of the project and had used another language.

Honeywell approached the request by conducting an extensive study into the benefits of Ada versus the C programming language. When the results were in, Honeywell agreed with the decision to use Ada: the study concluded that Ada's built-in safety features would translate into less time, expense, and concern devoted to debugging the software.

Sundstrand, the supplier already in development, agreed to the switch and reported that, after beginning again, the development effort continued without a hitch. "We had to start all over again," Dwayne Teske, Program Manager for the 777's main electrical-generating system, said in a telephone interview. "But the project went really smoothly after that, so Ada had a lot of positives."

Because of their involvement with Ada in the 777, these and other suppliers (including Hydro-Aire, the brake control system supplier) have continued to use the language in other system development projects. In carrying their experience to new systems, the companies have further enjoyed the benefits of Ada's portability and code reuse.

Finding the Tools

Once committed to Ada, each company's first task was to find a compiler of good quality for the specific job at hand.

Honeywell was to develop the cockpit's primary flight controls in two projects, the Boeing 777's Airplane Information Management System and its Air Data/Inertial Reference System. For these projects, Honeywell purchased DDC-I, Inc.'s Ada Compiler System, using it as the front-end source for Honeywell's symbolic debugger. The two companies worked together for a year and a half to build the compiler's final debugger and the entire back-end, targeted to an Advanced Micro Devices (AMD) 29050 microprocessor. According to a recent telephone interview with Jeff Greeson, Honeywell's project leader for the 777 project's engineering, the companies "were able to build into the compiler a lot of optimization features specific to our hardware."

Hydro-Aire selected Alsys' Ada software development tools for the brake control system project. The supplier used AdaWorld cross compilers with the Smart Executive and Certification package to ensure meeting real-time and FAA requirements. The compilers are hosted on Hewlett-Packard HP 9000/300 platforms; they targeted the Motorola 58333 microcontroller, making Hydro-Aire one of the first companies to use the new chip.

Each 777's brake control system includes two Motorola microcontrollers programmed entirely in Ada. Harry Hansen, Hydro-Aire's Manager of Software Engineering, reported that "We find Ada an excellent language for the development of real-time applications." The processors control the built-in test (BIT) and auto-brake functions. The BIT includes both an on-line interface to the central maintenance computer and off-line maintenance capability. The auto-brake applies the correct amount of brake pressure during landings and applies the maximum amount of pressure -- without causing a tire blow-out -- during aborted take-offs. Additionally, the system includes hardware and software to prevent skids, sensors and transducers to external systems, and hydraulic valves.

Sundstrand, too, chose a compiler from Alsys, Inc. (now Thomson Software Products, Inc.). Running on a PC host, it generated code targeted to an Intel 80186 microprocessor. The Certifiable Small Ada Run Time (CSMART) executive code that interfaces with the language resides inside the run-time controller and, therefore, had to be tested and verified. It was a major undertaking, but not a long-term inconvenience. "Ada continues to be our baseline language for future electrical systems," Teske said, "for reasons of cost and efficiency. We are now able to reuse code. We pull out certain chunks of airplane software and put them into new projects."

In a recent telephone interview, senior software engineer Malkit Rai, who led the effort on the Sundstrand 777 electrical power project, agreed on the importance of Ada's support for reuse. Ada has permanently replaced the shop's previous high-level language, PLM, which was developed by Intel and is based on PL/I. "Ten to 15 percent of the 777 Main Channel Electrical Power Generating System is already in reuse," he said. Two new projects, for the Gulfstream V business jet and the Comanche helicopter, were able to integrate Sundstrand's library of common generic packages written in Ada for the 777.

In fact, 10 to 15 percent of the Sundstrand power systems' 80,000 lines of code were themselves reused. The embedded software's small size shows that Ada is well suited to projects under 100,000 lines of code, as well as to large efforts. The 777's Cabin Management System, for example, a communications module mounted in the seat backs that offers passengers a variety of services, is only 70,000 lines.

Putting Together a New Architecture

In comparison, Honeywell's Airplane Information Management System (AIMS) project consists of the largest central computer on the jetliner; it runs 613,000 new lines of code (defined as body semicolons), taking up 15,656 kilobytes (KB) of disk space and 4,854 KB of random-access memory (RAM). With redundancy, the software runs to 46,191 KB and 10,732 KB of RAM. A multiprocessor, rack-mounted system, the AIMS replaced many of the line-replaceable units and reduced hardware and software redundancy.

Two AIMS boxes handle the six primary flight and navigation displays: two sets are located in front of both the captain and copilot so that they can move from one seat to the other, and two central sets of engine parameters are shared by the pilots. The primary flight instruments indicate pitch and roll attitude, direction, air speed, rate of climb, altitude, etc. The AIMS also includes the central maintenance function, which receives reports from the 777's other computers and then gathers the data into a central maintenance report for the mechanic. Its monitoring system gathers data on how other functions are doing, and can determine, for example, that an engine is degrading, before it actually fails. Other AIMS functions include a data-conversion gateway, flight data acquisition, data loading, an Ada conversion gateway, and thrust management.

Honeywell's massive effort on the 777 involved over 550 software developers. The company built the AIMS computer as a custom platform based on the AMD 29050 processor. It was unique among aviation systems for integrating the other computers' functions; in other systems, each function resides in a different box [the central maintenance had its own box with its own input/output (I/O), its own central processing unit (CPU), etc.]. AIMS combines all these functions and shares the CPU and I/O among them: it uses the same signals for flight management and for displays, so that the data comes in only once instead of twice; one input circuit provides data to all of the functions; each of the functions gets a piece of the CPU, as in a mainframe computer, where systems use part of the CPU but not all of it; and every function is guaranteed its time slot. Engineer Jeff Greeson said that "The federated system is obsolete. Putting all the functions in one box is a jump ahead in technology that we've brought to the industry."

Another innovation is that the disk drive can read files formatted for the Microsoft Disk Operating System, which provides maintenance with access to the terminal communications. The mechanics can transfer files for data loading over the airplane bus, because Honeywell built the program to accept new data and to change the software. In fact, most of the equipment on the airplane has that ability; only a few classic systems do not (such as the ground-proximity warning system, which has proven sufficiently trustworthy and not in need of change).

Designing a new architecture simultaneously with a new language was "quite exciting," Greeson said. "The organizational details were difficult to put together." With Ada, managers were able to delegate the seven main functions to groups of 60-100 software engineers. The separate software entities have minimal interface with other parts of the software, and not all of the software is integrated. By working with loosely coupled pieces, the project leaders were able to farm out the functions to other groups. The loose integration, however, does not tie the software to the 777 platform, and will assist in Honeywell's using the code for other targets. "We needed the maximum ability to port it to other places," Greeson said.

The data interfaces that do exist between the software units are fairly uniform, Greeson said, because Ada helped the software engineers to enforce certain rules at compilation time. "Ada forces you to keep it straight there rather than at the lab," he said, "where it helped minimize our difficulties in getting it integrated and running." Because of the high level of accuracy during the compilation, less time was spent on debugging the code. Thus, Honeywell's initial study proved correct. "I'm convinced that, because of Ada, we had a minimal amount of interface problems, with which we would have killed ourselves if we had had C or Pascal," Greeson concluded. "It went much smoother than past programs."

Meeting Deadline

Skeptics might reasonably have predicted higher costs and schedule overruns, given the suppliers' inexperience with Ada and the introduction of a new target. Instead, four and a half years after laying out the program, the 777's electrical power systems were delivered on schedule. Boeing was able to turn on the power a full six months before the maiden flight. Sundstrand's Malkit Rai agreed that the conversion from PLM to Ada did not retard production and the company made a swift transition. "We conducted a pilot program to evaluate the use of Ada in Sundstrand products," he said, "and realized that on-the-job training would be sufficient with our programmers. Within two weeks we were up to speed on Ada."

Passing Tests

The initial flight of the 777 was three hours and 48 minutes, taking Chief Pilot John Cashman from Paine Field in Everett, Washington, to Puget Sound, over the San Juan Islands, then east, crossing the Cascade mountain range, before turning back home. The jetliner was then tested for extremes of temperature, wind conditions, and potential failures.

Ronald Ostrowski, director of Engineering, claims that the Boeing twinjet is already the most tested airplane in history. For more than a year before the flight, Boeing tested the reliability of the 777's avionics and flight-control systems around the clock, in laboratories simulating flight. Design changes were made only after six months of testing the endurance of three engine types (Pratt & Whitney, Rolls Royce, and General Electric).

One compelling reason behind the extensive pre-testing was Boeing's desire to meet the Federal Aviation Administration's (FAA's) Extended Twin Operations (ETOPS) standards ahead of schedule. The original ETOPS rule was drafted in 1953 to protect against the chance of dual, unrelated engine failures. Unless a newly designed and produced aircraft has at least three engines, it usually must wait, sometimes as long as four years, before the FAA and the Joint Airworthiness Authorities (JAA) will allow it to fly more than one hour from an airport; after a time, the new aircraft is deemed a "veteran" and is allowed to fly three hours away. A shortened trial period would drastically increase Boeing's sales.

Increasing Reliability

Granville Fraser, a propulsion engineer at Boeing, said that a company protects itself better from engine failure by preventing in-flight problems outside the engine, such as faulty warning lights, than by concentrating solely on the engine's mechanics. "Over 50 percent of engine shutdown is irrelevant to the core engine," he said. "It has to do with electrical, fire systems, etc." On the 777, those outside systems are programmed in Ada.

Pratt & Whitney laboratories can, therefore, test the engines, but the quality of the software will have an equal role in determining the reliability of the 777's engines and its conformance to the ETOPS standards.

On the maiden flight, with the Boeing Telemetry room in constant contact with the plane, the engines performed better than expected. The 777 proved itself an ETOPS "veteran" on its first flight out, becoming the first twin-engine plane to win FAA approval for "ETOPS out of the box." The trend towards more reliable hardware and software is revolutionizing aviation and can be found in aircraft other than the 777. The systems in the cockpit talk to the other systems through the programming language, and in new airplanes, such as the Beechcraft 400A, the Learjet series, and some English jets, the language of choice is Ada.

Moving Ahead

Sales for the Boeing 777, both nationally and internationally, have been excellent. In addition to high sales in the present, Boeing's financial future is also healthy, in part, because of reusable code. As Brian Pflug has said, the ultimate value of Ada is in rapidly transferring the 777's code into the aircraft and architectures of the next millennium.

For More Information

For those who would like to obtain a copy of the PBS documentary on the 777's first flight, the video is available from PBS, 800/828-4PBS.

Notes on bookmarks from 1997

1 September 2014 - 1:00am


On August 30, 2014, I imported 264 bookmarks into Pinboard. The source was a file named "bookmark.htm" with a last modified date of October 12, 1997.

  • 264 bookmarks according to Pinboard's import status
  • 260 bookmarks according to Pinboard's count of tags

These bookmarks date between January 1995 and October 1997.

  • 2 were from January 1995
  • 14 were from September-December 1996
  • The rest were from 1997

Upon import, Pinboard reported 163 (63%) of the URLs as being unavailable, with 403 Forbidden, 404 Not Found, 410 Gone, or 500 Server Error. Less than 2/3 link rot over ~17 years doesn't sound so bad.

However, despite reporting 200 on the rest, many URLs weren't the original content. As one example, "serve.com" was a web host named DataRealm, and is now an American Express prepaid card. As another, a VRML tutorial is now a video about birth control. Some of these 200s are only so because of repeated 3xx redirections to ultimately unrelated content, or because of domain name hoarders serving ads.

  • 12 bookmarks were for FTP sites, all of which Pinboard reported as 500 Server Error. These were not tested with an FTP client.
  • 22 bookmarks were for local resources, all of which Pinboard reported as 404 Not Found.
  • 226 bookmarks were left for testing

Of the 226:

  • 1 was 410 Gone
  • 2 were 403 Forbidden
  • 49 were 500 Server Error
  • 76 were 404 Not Found

That's 57%, which sounds even better than the original figure. But then I looked at those ninety-eight 200 OK URLs, too.

  • 77 reported 200 OK, but were parked domains, advertising landing pages, or otherwise completely different content. This is link rot, too, just harder for an automated system to detect. I marked these as 210 OK But Gone.

That's 205 failures, an actual link rot figure of 91%, not 57%.

That leaves only 21 URLs as 200 OK and containing effectively the same content.

In an attempt to confirm and/or recover as much of the original content as possible, I checked the Internet Archive's Wayback Machine for every URL.

  • 1 of the two 403 Forbidden URLs had an old enough copy in the Wayback Machine.
  • 23, or 47%, of the forty-nine 500 Server Error URLs had copies.
  • 45, or 59%, of the seventy-six 404 Not Found URLs had copies.
  • 35, or 45%, of the seventy-seven 210 OK But Gone URLs had copies.

That's 104 failures beaten back by the Internet Archive at some level of fidelity, reducing effective link rot over ~17 years to 45%.
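For reference, the percentages above fall out of the raw counts like so (a quick sketch; the counts are the ones reported in this post):

```javascript
// Quick check of the link-rot arithmetic reported above.
const tested = 226;                    // bookmarks left after FTP/local
const hardFailures = 1 + 2 + 49 + 76;  // 410 + 403 + 500 + 404 = 128
const okButGone = 77;                  // 200s that were parked or unrelated
const failures = hardFailures + okButGone;  // 205
const recovered = 1 + 23 + 45 + 35;         // 104 found in the Wayback Machine

console.log(Math.round(100 * hardFailures / tested));           // 57
console.log(Math.round(100 * failures / tested));               // 91
console.log(Math.round(100 * (failures - recovered) / tested)); // 45
```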

In addition, 9 of the twenty-one 200 OK URLs had old enough copies in the Wayback Machine, which I selected simply to provide a more accurate representation of the content.

There are a couple things you can do to help combat link rot for your own bookmarks moving forward.

First, donate to the Internet Archive: http://archive.org/donate/

Second, if you use a bookmark storage service like Pinboard or others, ask them to support adding submitted URLs to the Wayback Machine. Their bookmarklets or plugins could also submit to the Wayback Machine's "Save Page Now" endpoint. Or that could be done by the service on the back-end. For services that provide full page archives, they could capture a full WARC (network headers plus content), so every successfully cached page could be donated to the Internet Archive and integrated into the Wayback Machine. Or all of the above.

Every URL saved in more than one place increases the likelihood that its content will survive as domains change owners.

I've a lot more bookmarks to import, and doing this processing by hand is tedious.

Any 4xx or 5xx URL could be checked against the Wayback Machine, with the option to link to that instead.
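A sketch of automating that check against the Internet Archive's public Wayback Availability API follows. The helper names are my own; the endpoint and response shape are the API's. It assumes Node 18+ for the global fetch.

```javascript
// Build a query against the Wayback Availability API for a bookmarked
// URL, optionally anchored near the bookmark's original save date.
function waybackQuery(url, timestamp) {
  const params = new URLSearchParams({ url });
  if (timestamp) params.set('timestamp', timestamp); // YYYYMMDD, optional
  return 'https://archive.org/wayback/available?' + params;
}

// Resolve to the closest archived snapshot's URL, or null if none exists.
async function closestSnapshot(url, timestamp) {
  const res = await fetch(waybackQuery(url, timestamp));
  const data = await res.json();
  const snap = data.archived_snapshots && data.archived_snapshots.closest;
  return snap && snap.available ? snap.url : null;
}
```

A dead bookmark from the October 1997 file could then be swapped for `await closestSnapshot(bookmarkUrl, '19971012')` whenever a snapshot exists.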

It also seems like some heuristics could be developed to flag URLs as likely being 210 OK But Gone. Parked domains have common content on every URL. Advertising landers have a common format. The Wayback Machine could be checked, and content could be extracted from both and compared. URLs aren't supposed to change, and they're supposed to point to a persistent resource, but companies and domain squatters aren't playing nice. If we want our bookmarks to represent the content we saved as it was when we saved it, we have to be proactive about grooming them.

Vitorio

The Road Ahead

31 August 2014 - 7:00am

Both of my parents were teachers, and for as long as I can remember they both encouraged me to do something in life that would help others. I figured being a doctor would be the most obvious way to do that, but growing up around a pair of teachers must’ve rubbed off on me. My venue wouldn’t be the classroom but rather the Internet. On April 26, 1997, armed with very little actual knowledge, I began to share what I had with the world on a little Geocities site named Anand’s Hardware Tech Page. Most of what I knew was wrong or poorly understood, but I was 14 years old at the time. Little did I know that I had nearly two decades ahead of me to fill in the blanks. I liked the idea of sharing knowledge online and the thought of building a resource where everyone who was interested in tech could find something helpful.

That’s the short story of how I started AnandTech. There’s a lot more to it involving an upgrade to the AMD K6, a PC consulting business I ran for 2 years prior and an appreciation for writing that I didn’t know I had - but that’s the gist.

I’m 32 now. The only things that’ve been more of a constant in my life than AnandTech are my parents. I’ve spent over half of my life learning about, testing, analyzing and covering technology. And I have to say, I’ve enjoyed every minute of it.

But after 17.5 years of digging, testing, analyzing and writing about the most interesting stuff in tech, it’s time for a change. This will be the last thing I write on AnandTech as I am officially retiring from the tech publishing world. Ryan Smith is taking over as Editor in Chief of AnandTech. Ryan has been working with us for nearly 10 years, he has a strong background in Computer Science and he’s been shadowing me quite closely for the past couple of years. I am fully confident in Ryan’s ability to carry the torch and pick up where I left off. We’ve grown the staff over the course of this year in anticipation of the move. With a bunch of new faces around AnandTech, all eager to uphold the high standards and unique approach to covering tech, I firmly believe the site can continue to thrive for years to come.

It’s important for me to stress two things: this isn’t a transition because of health or business issues. I am healthy and hope to be even more so now that I won’t be flying nearly 130,000 miles every year. The website and business are both extremely strong. We’ve expanded our staff this year to include a number of new faces contributing to both mobile and more traditional PC categories. Traffic is solid, and we are looking forward to a bunch of very exciting launches, especially in the final quarters of 2014. On the business side we continue an amazing run of being self sustaining, profitable and growing for every single year since 1997. We don’t talk about business affairs much on the site but we set a number of records in 2013 and expect that to continue. In other words, you don’t have to worry about the ability of the site to continue to operate.

Even though I’ve been doing this for nearly 18 years, we’ve evolved with the industry. AnandTech started as a site that primarily reviewed motherboards, then we added CPUs, video cards, cases, notebooks, Macs, smartphones, tablets and anything else that mattered. The site today is just as strong in coverage of new mobile devices as it is in our traditional PC component coverage and there’s a roadmap in place to continue to support both sides of the business. Our learnings in the PC component space helped us approach mobile the right way, and our learnings in the mobile space have helped us bring the PC enthusiast message to a broader audience than would’ve ever seen it before.

Over the past year I’ve transitioned many of my personal coverage areas to other ATers. Ian took over CPUs not too long ago and Josh has been flying solo with our mobile coverage for a bit now. Even the articles I helped co-author with Josh were 90% his. Kristian has more or less been running our entire SSD review program at AnandTech for a while now and he’s been doing a tremendous job. I remember editing one of his pieces and thinking wow, this kid knows more than me. In fact I’d go as far as to say that about all of our editors at this point. We’ve got a sea of specialists here and each one of them knows more than me about the area in which they cover. I’m beyond proud of them all and honored to have worked with them.

On a personal level I’ve made myself available to all AnandTech editors for advice and guidance, however I have fully removed myself from the editorial process. I can offer a suggestion on how to deal with a situation so long as describing the situation does not reveal any confidential information to me.

To everyone I worked with in the industry - thank you for the support and help over the years. You were my mentors. You showed kindness and support to a kid who just showed up one day. I learned from you and every last one of you influenced me at a very formative period in my life. The chance you all took on me, the opportunities, and education you provided all mean the world to me. You trusted me with your products, your engineers and your knowledge - thank you.

To Larry, Cara, Mike, Howard, Virginia, Hilary and the rest of the LMCD team that has supported (and continues to support) AnandTech for almost its entire life, I thank you for making all of this possible. I learned so much about the business side of this world from you all and it helped give me perspective and knowledge that I could have never gotten on my own. For those who don't know them, the LMCD crew is responsible for the advertising side of AnandTech. They've made sure that the lights remained on and were instrumental in fueling some of our biggest growth spurts. 

To the AnandTech editors and staff, both present and past, you guys are awesome. You are easily some of the hardest working, most talented and passionate enthusiasts I've ever encountered. Your knowledge always humbles me and the effort that you've put into the site puts my own to shame. You've always been asked to do the best job possible under sometimes insane time constraints and you've always delivered. I know each and every one of you will have a bright future ahead of you. This is your ship to steer now and I couldn't be happier with the crew.

To the millions of readers who have visited and supported me and the site over the past 17+ years, I owe you my deepest gratitude. You all enabled me to spend over half of my life learning more than I ever could have in any other position. The education I’ve received doing this job and the ability to serve you all with it is the most amazing gift anyone could ever ask for. You enabled me to get the education of a lifetime and I will never be able to repay you for that. Thank you.

I’ve always said that AnandTech is your site and I continue to believe that today. Your support, criticism and push to make us better is what allowed us to grow and succeed.

In the publishing world I always hear people talk about ignoring the comments to articles as a way of keeping sane. While I understood the premise, it’s not something I ever really followed or believed in. Some of the feedback can be harsh, but I do believe that it’s almost always because you expect more from us and want us to do better. That sort of free education and immediate response you all have provided me and the rest of the AnandTech team for years is invaluable. I’m beyond proud and honored by the AnandTech audience. I believe we have some of the most insightful readers I’ve ever encountered. It’s not just our interactions that I’m proud of, but literally every company that we work with recognizes the quality of the audience and the extreme influence you all exert on the market. You’re paid attention to, respected and sometimes even feared by some of the biggest names in this industry. By being readers and commenters you help keep our industry in check.

I hope you will show Ryan and the rest of the AnandTech team the same respect and courtesy that you’ve shown me over the past 17.5 years. I hope that you’ll continue to push them as you did me, and that you’ll hold the same high standards you have for so long now.

In our About Us page I write about the Cable TV-ification of the web and the trend of media in general towards the lowest common denominator. By reading and supporting AnandTech you’re helping to buck the trend. I don’t believe the world needs to be full of AnandTech-like publications, but if you like what we do I do firmly believe it’s possible to create and sustain these types of sites today. The good news is the market seems to once again value high quality content. I think web publishing has a bright future ahead of it, as long as audiences like AnandTech’s continue to exist and support publishers they value.

As for me, I won’t stay idle forever. There are a bunch of challenges out there :) You can follow me on Twitter or if you want to email me I’ve created a new public gmail account - theshimpi@gmail.com.

Thanks for the memories and the support. I really do owe you all a tremendous debt of gratitude. I hope that my work and the work that continues at AnandTech will serve as a token of my appreciation.

Take care,
Anand

Why Flux is better than an event bus

31 August 2014 - 7:00am
2014-08-29

At my company, we recently finished the fifth iteration over a medium sized, but very interactive and graphical web-app.

We started a first prototype with Backbone and as the project got bigger, expanded to Chaplin with d3.js. Some time ago, we started to use React and liked it so much that we ended up with a pure Flux-React implementation.

The reasons to use React were primarily the performance gains and the simplifying "render everything" approach. Interestingly, after strictly implementing the Flux architecture, our application was a lot easier to understand than the Backbone / Chaplin versions.

This seems strange, since you could argue that React is only a "View" solution, and Flux doesn't really add anything new, besides slapping on a new name to a couple of known concepts.

So what gives...?

Well, Backbone and Chaplin are a fine toolset; however, they give you a lot of freedom. This is, in my opinion, desirable. However, with freedom comes responsibility. And it turns out that it's quite easy to be irresponsible if nobody tells you how things should be done (something that Flux does).

It's the communication, stupid!

We all use frameworks that provide us with a toolbox to help us build SPAs. With those, making TodoMVC-like applications is easy and straightforward.

However, if your app gets more complex, some frameworks will leave you on your own quite quickly (others will carry you a little further by being "opinionated"). This makes sense, since more tools won't help you to cope better with the complexity.

It's like with building a small house vs building a skyscraper. The tools and building materials are roughly the same, but building the skyscraper requires a lot more thinking on how to combine the building materials.

So let's assume you have a SPA with 20 models and 30 views. What makes things tough? It's not really rendering those models with the views. This is still easy to do.

But when somebody uses his mouse to click somewhere and expects something to change, things might get tough. If you're lucky, it's just a local action, like a dropdown menu that needs to open. If you're unlucky, it's some complex filter action on your data-heavy app that causes three ajax requests and changes to seven models.

The ubiquitous tool to solve the "a lot needs to change" problem: Global events. We have an event bus somewhere and every part of the application can publish and listen to these events.

If your app becomes a mess, your events are likely the cause of it. The problem with events is that they are too easy to implement because there are no restrictions.

Let's assume you're implementing a new model A, and if it changes, model B needs to update too. The solution that will come to your mind first is to fire an event from A directly to B (Backbone implements this with the listenTo functionality).

However, always directly passing an event to dependencies will not scale. If your app reaches a certain complexity, you will not have simple X to Y event passing, but complex M to P, Q, R to S, T to U to V, X event chains.

It's important to understand that the problem is twofold: It's not only about the complexity of the event chain (which you might not be able to avoid). It's the fact that every event chain is unique in its blueprint. Every complex interaction has its own specific event chain that ripples through your app. This will make the cognitive load extremely high, since you will have to know a lot of specific things to debug your app.

The worst part about the story is that you naturally fall into the event chain trap. The marginal cognitive cost of a new event increases exponentially, but you will not notice this when adding the event. At that time it seems like an easy solution, because you won't think about all the other events. The full cost of your event chains will materialize when you need to fix elusive bugs due to edge cases after having almost finished your app.

How to avoid event chains

A lot of people probably think that event chains in their app are unavoidable. This might be true technically, but cognitively, they can be reduced or even eliminated by structuring the flow of events.

The Flux Architecture shows one effective way to achieve this. While it is presented in a React.js context, the pattern is essentially framework agnostic (and I'm sure that people have been doing this before Facebook).

Instead of a simple event bus, you implement something that Flux calls a Dispatcher.

A dispatcher is pretty much an event bus, but you can (optionally) enforce the sequence in which the event is "dispatched" to its listeners.

This means that if you receive an event in, say, model B, you can demand that another model, say model A, should process the event first. This will help you to reduce the length of the event chain: model A doesn't need to notify B, because B asks A to process first and can therefore assume that A has already changed.

With event bus:

Event -> Model A updates -> Event -> Model B updates (given data from A)

With dispatcher:

Event -> [Model B asks Model A to update first; Model A updates; Model B updates (given data from A)]

The second option is better than the first:

  1. you save an event
  2. the dependency is explicitly stated in the right spot: when Model B receives the event. Model A doesn't need to care that model B depends on it.
  3. Coupling is minimized, because model B does not generally depend on model A. It only does when this specific event is triggered.

Ordering the sequence in which the event is sent to the listener makes intuitively much more sense, because even if you have a long dependency chain, every model receives the original event. This is how it should be: The user clicks something, it affects two models, they need to change. It's only a technical nuisance that one model needs to wait for the other to update. Conceptually both models should receive the same event.

To give an analogy, imagine a dispatcher as a prism that is hit by a ray of light (the event). It will split up into its different facets (affecting different models differently) and change the state of the app. Preferably, the order in which the different facets are processed does not matter. If it does, the affected facets ask the others to process first.

This methodology minimizes the amount of events in your system. Furthermore, the events that remain will closely model a concrete, real user action. This means that if you implement changes to a model, you will always directly know which user interaction is the origin of the change. Much better than having a more generic event (say: "change:[attribute]") from another model.

Instead of a specific dependency tree, you will always use the same approach for every action that affects the app globally:

  1. Fire an event
  2. Think about which models are affected by the event.
  3. Listen to the event with every model and implement changes
  4. If a model needs data from another one that is affected by the event, force order
  5. After changes, render your app again

In practice, this looks somewhat like this (I adjusted the terminology of Flux to decouple the pattern from its specific implementation):

Fire Event

for example because you requested data from the server:

AppDispatcher.fireEvent({type: GET_FILTERED_DAYS, payload: data});

Listen to the event with both models

You realize that this affects two models. Your models will register a callback with the dispatcher on instantiation, so it can receive events from the dispatcher:

//Model A
this.appDispatch = AppDispatcher.register((event) => {
  var payload = event.payload;
  switch(event.type) {
    case GET_FILTERED_DAYS:
      this.processFilteredDaysPayload(payload);
      //...
    // other events that we listen to
  }
});

//Model B
this.appDispatch = AppDispatcher.register((event) => {
  var payload = event.payload;
  switch(event.type) {
    // other events that we listen to
    case GET_FILTERED_DAYS:
      // important: waiting for model A to update
      AppDispatcher.waitFor([modelA.appDispatch]);
      this.processFilteredDaysPayload(payload);
      //...
  }
});

Implement Changes

The processFilteredDaysPayload method will change the state of each model. The nice thing is that within processFilteredDaysPayload of model B, you can safely get state from the model A instance, since you can be sure that it already processed the event. Both models then use the notification system of your framework to rerender the Views that are affected by the state change.

Check out the practical implementation of a dispatcher in Flux. See my blogpost here that showcases the use of a dispatcher together with async requests.
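For intuition, here is a toy dispatcher sketching the register/dispatch/waitFor contract described above. It is a simplified illustration, not Facebook's actual implementation:

```javascript
// A toy dispatcher, sketched for intuition only -- not Facebook's actual
// implementation. register() returns a token, dispatch() delivers one
// event to every callback exactly once, and waitFor() lets a callback
// demand that other callbacks handle the event first.
class Dispatcher {
  constructor() {
    this.callbacks = {}; // token -> callback
    this.nextId = 0;
    this.handled = {};   // tokens already run for the current event
    this.running = {};   // tokens currently running (cycle detection)
    this.currentEvent = null;
  }

  register(callback) {
    const token = 'cb_' + this.nextId++;
    this.callbacks[token] = callback;
    return token;
  }

  dispatch(event) {
    this.currentEvent = event;
    this.handled = {};
    this.running = {};
    for (const token of Object.keys(this.callbacks)) {
      this.invoke(token);
    }
    this.currentEvent = null;
  }

  // Called from inside a callback: run these tokens' callbacks first.
  waitFor(tokens) {
    for (const token of tokens) {
      if (this.running[token]) {
        throw new Error('circular dependency on ' + token);
      }
      this.invoke(token);
    }
  }

  invoke(token) {
    if (this.handled[token]) return; // each callback runs at most once
    this.running[token] = true;
    this.callbacks[token](this.currentEvent);
    this.running[token] = false;
    this.handled[token] = true;
  }
}

// Model B is registered first, yet still sees model A's updated state.
const d = new Dispatcher();
const order = [];
let tokenA;
const tokenB = d.register((event) => {
  d.waitFor([tokenA]); // B insists that A handle the event first
  order.push('B');
});
tokenA = d.register((event) => order.push('A'));

d.dispatch({ type: 'GET_FILTERED_DAYS' });
// order is now ['A', 'B'] despite the registration order
```

The key property is that both callbacks receive the same original event; waitFor only reorders who processes it first.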

This pattern is a little more code than, say, your simple Backbone listenTo solution. So for small applications it might not be worth implementing such an elaborate pattern. In a big application, it will keep you from going insane.

The dispatcher pattern is not the only way to cognitively eliminate event chains. Generally, you need to enforce rules and paths of communication. There are various strategies to do this, but it is crucial that you do it. More often than not, your framework will not provide a solution for this.

Generally, the important thing is that the communication paths should not be long, generic dependency chains. They should be cognitively short, standardized and predictable (this usually means that, technically, the event chains actually become longer). Optimal strategies will differ from app to app, depending on its characteristics.

If you have a codebase where an event triggers another event that triggers another one, I recommend looking for a solution that reduces the complexity. I can assure you that it helped us a lot.

Bayesian vs. frequentist: squabbling among the ignorant

31 August 2014 - 7:00am

August 30, 2014

Every so often some comparison of Bayesian and frequentist statistics comes to my attention. Today it was on a blog called Pythonic Perambulations. It's the work of amateurs. Their description of noninformative priors is simplified to the point of distortion. They insist on kludging their tools instead of fixing their model when it is clearly misspecified. They use a naive construction for 95% confidence intervals and are surprised when it fails miserably, and even use this as an argument against 95% confidence intervals. Normally I would shrug and move on, but it happened to catch me in a particularly grumpy mood, so here we are.

Essays discussing frequentist versus Bayesian statistics follow a fairly standard form. The author lays out both positions, then argues for the one he (it seems invariably to be a he) likes. Both positions are quite subtle, but each tries to make the concept of a probability correspond to something in the real world. Frequentists define probability as the fraction of elements of an ensemble of hypothetical outcomes of a trial with a certain property. Bayesians define probability as degree of belief. Both have mathematical models which justify this. All the models have limitations which make them useless in practice. Which one is right?

The answer, as usual when faced with a dichotomy, is neither. van Kampen wrote a paper about quantum mechanics that has some dicta which can be translated almost directly to statistics, notably:

The quantum mechanical probability is not observed but merely serves as an intermediate stage in the computation of an observable phenomenon.

and

Whoever endows ψ with more meaning than is needed for computing observable phenomena is responsible for the consequences.

Probability, as a mathematical theory, has no need of an interpretation. Mathematicians studying combinatorics use it quite happily with nothing in sight that a frequentist or Bayesian would recognize. The real battleground is statistics, and the real purpose is to choose an action based on data. The formulation that everyone uses for this, from machine learning to the foundations of Bayesian statistics, is decision theory. A decision theoretic formulation of a situation has the following components:

  • a set Ω of possible states of nature
  • a set X of values that will result from a trial meant to measure some aspect of that state of nature
  • a set M of possible actions to take based on the outcome of that trial
  • a loss function L:Ω×M→ℝ, giving the cost of taking a particular action when one of the possible states of nature is the true one

Given these components, the task is to find a function t from X to M which minimizes the loss. The loss is a function, though, not a single value, and there are many ways we can make this well defined. Each of those ways has different uses.

For example, if we are engaged in a contest against an opponent, we may want to minimize the maximum loss we can have. Thus we choose t to minimize the maximum value L achieves over any combination of (ω,x)∈(Ω,X) which can occur.

Alternately, we can choose to integrate L against some measure μ. Usually we decompose the measure into a measure on X given Ω (the probability of getting a particular value from X given that some element of Ω is the true state of nature) and a measure on Ω. This is a Bayes procedure, with the measure on Ω the prior. We could also integrate over X but not Ω and use some other technique to eliminate that variable.

Almost any of the tricks of defining norms that you can dig out of functional analysis can be used and will have a use, but in the end you have a procedure t. You apply it to the data from your trial, and take the action dictated. Probability does not enter the picture.
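The machinery is easier to see on a toy instance. The coin problem, zero-one loss, and uniform prior below are illustrative choices of mine, not anything from the text above: Ω has two states (a fair coin and a coin of bias 0.8), X is the number of heads in five flips, and M has two actions. A sketch that enumerates every deterministic procedure and picks one under each norm:

```python
from itertools import product
from math import comb

# Toy decision problem (illustrative, not from the essay):
# states of nature Omega: coin bias theta in {0.5, 0.8}
# trial outcome X: number of heads in n = 5 flips
# actions M: declare "fair" or declare "biased"
states = [0.5, 0.8]
n = 5
actions = ["fair", "biased"]

def loss(theta, a):
    # zero-one loss: no cost for the correct declaration, unit cost otherwise
    correct = "fair" if theta == 0.5 else "biased"
    return 0.0 if a == correct else 1.0

def p_x(theta, x):
    # binomial probability of seeing x heads in n flips
    return comb(n, x) * theta**x * (1 - theta) ** (n - x)

def risk(theta, t):
    # expected loss of procedure t when theta is the true state of nature
    return sum(p_x(theta, x) * loss(theta, t[x]) for x in range(n + 1))

# Enumerate every deterministic procedure t : X -> M (2^(n+1) = 64 of them).
procedures = list(product(actions, repeat=n + 1))

# Minimax norm: minimize the worst-case risk over states of nature.
minimax_t = min(procedures, key=lambda t: max(risk(th, t) for th in states))

# Bayes norm: minimize the risk integrated against a uniform prior on Omega.
bayes_t = min(procedures, key=lambda t: sum(0.5 * risk(th, t) for th in states))

print("minimax procedure:", minimax_t)
print("Bayes procedure:  ", bayes_t)
```

Both procedures are built from the same components; only the norm used to turn the risk function into a single number differs, which is the point: probability appears only inside the calculation of the risk.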

We can and should fight over the specification of the states of nature Ω, of the possible decisions M, and over the loss function L. We should discuss the norm we use to choose our optimal procedure t. These are hard questions. There is no reason to make the situation any more difficult by attaching unnecessary ideas to probability, which is a tool for calculation and no more.

HabitRPG – Your Life the Role Playing Game

31 August 2014 - 7:00am

Play

The problem with most productivity apps on the market is that they provide no incentive to continue using them. HabitRPG fixes this by making habit building fun! By rewarding you for your successes and penalizing you for slip-ups, HabitRPG provides external motivation for completing your day-to-day activities.

Instant Gratification

Whenever you reinforce a positive habit, complete a daily task, or take care of an old to-do, HabitRPG immediately rewards you with experience points and gold. As you gain experience, you can level up, increasing your stats and unlocking more features, like classes and pets. Gold can be spent on in-game items that change your experience or personalized rewards you've created for motivation. When even the smallest successes provide you with an immediate reward, you're less likely to procrastinate. 

Consequences

Whenever you indulge in a bad habit or fail to complete one of your daily tasks, you lose health. If your health drops too low, you die and lose some of the progress you've made. By providing immediate consequences, HabitRPG can help break bad habits and procrastination cycles before they cause real-world problems. 
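The reward-and-penalty loop described above can be sketched as a toy model. This is not HabitRPG's actual scoring formula; the XP values, level-up threshold, and damage numbers below are all invented for illustration:

```python
# Toy sketch of the HabitRPG-style mechanic described above. The numbers
# (XP per task, level-up threshold, damage) are assumptions, not the app's.
class Player:
    def __init__(self):
        self.level, self.xp, self.gold, self.health = 1, 0, 0, 50

    def complete_task(self, xp=10, gold=5):
        """Positive habit, daily, or to-do: immediate reward."""
        self.xp += xp
        self.gold += gold
        if self.xp >= 25 * self.level:  # assumed level-up threshold
            self.xp = 0
            self.level += 1

    def slip_up(self, damage=10):
        """Bad habit or missed daily: lose health; dying costs progress."""
        self.health -= damage
        if self.health <= 0:
            self.level = max(1, self.level - 1)  # lose some progress
            self.health = 50                     # and start over

p = Player()
for _ in range(3):
    p.complete_task()   # enough XP to level up once
p.slip_up(60)           # fatal slip: drop a level, health resets
print(p.level, p.gold, p.health)
```

The design point is that every action closes the feedback loop immediately: there is no session where the ledger is settled later.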

Accountability

With an active community, HabitRPG provides the accountability you need to stay on task. With the party system, you can bring in a group of your closest friends to cheer you on. The guild system allows you to find people with similar interests or obstacles, so you can share your goals and swap tips on how to tackle your problems. On HabitRPG, the community means that you have both the support and the accountability you need to succeed.

Man builds 3D printed concrete castle in his own backyard

31 August 2014 - 7:00am

Aug. 26, 2014 | By Alec

In Minnesota, contractor Andrey Rudenko is currently working on a project of gargantuan proportions that seems to be stretching and exploring the limits of 3D printing technology. Using a printer that was substantially modified and expanded, he has printed a concrete castle in his own backyard. At 3 by 5 meters, this structure is the world's first 3D printed concrete castle, and one of the largest objects ever printed with 3D printing technology.


Rather than trying to build a machine that caters to theme parks and history enthusiasts, this project grew out of a desire to construct a 3D printer capable of constructing durable, realistic and inhabitable houses. He's already looking at various locations to realize this: 'last winter in Minnesota, which was long and frigid, showed that it is crucial to have multiple areas in different countries for experimental printing since you can never predict which conditions will arise.'

But Rudenko, who has a background in engineering and architecture, chose to first print his fantastical castle. This allows its creator to search for and experiment with the limits and possibilities the machine offers to construction companies. The castle's unique features and shapes offer many challenging opportunities to do this, and leave room for Rudenko to make minor adjustments to the machine. And of course, it's also a wonderful showpiece for his huge 3D printer.


As Rudenko told 3ders.org, this project follows years of preparation and planning:

'I've been interested in this technology since I was in my teens. My concrete printing experiments started about 20 years ago, but at that point, advanced computers and software were not available for this type of technology. It wasn't until a couple years ago that I came across the RepRap project and started working on this machine again. It took about a year to build and develop special concrete mixes. Additional inspiration came from the naturally-laid layered sandstone I saw on a trip to Arizona a few years back. Ideally, I hope I can obtain the same natural look to my printed walls.'

Furthermore, progress was also hampered because of the finances involved. Rudenko therefore ended up financing the printer independently, which led to many creative engineering solutions.

'When I was starting out, potential sponsors were wary of providing funds since they did not think the technology would go this far. Once the castle structure is built and the capabilities of the printer are evident, I plan to conduct an auction of the ownership to the first house; since this will be the first functional 3D printed house ever built, I'm hoping there will be a lot of takers and this will become a valuable landmark.'


A project of this size obviously needs a printer of corresponding proportions, so Rudenko built his own machine. While he has received lots of very helpful feedback from the RepRap community, the actual construction is of his own design. This massive machine is driven by an Arduino Mega 2560 board and software, which is not too different from some other 3D printers, but it requires special stepper drivers. 'For a big printer, I need special drivers that can handle the heavy weight of the machine as well as be compatible with the software/firmware. The best fit I found was from James Newton's Mass Mind.'

'These drivers ended up being the only ones to work properly with Marlin Firmware (I sampled other drivers, which failed), and were powerful enough to move such a huge printer,' Rudenko added.

This printer is therefore slightly different than the one developed by Behrokh Khoshnevis at USC. 'Design-wise, I'm creating a natural, free-layering of fine concrete and my goal is to have a nice-looking, natural texture, without the need for any additional finish, similar to rammed earth technology.' Rudenko is also seeking to develop a portable machine that even smaller construction companies can afford. 'The final price will be known once we build a few houses, but to the best of my knowledge, I currently see it as being priced at $30,000-50,000, though this will also vary depending on the parts and type of model.'

When that time comes, Rudenko hopes to be able to deliver a number of different kits that individual customers and small companies can put together themselves. 'Obviously I can't ship the whole machine, but I can ship an extruder, control box, some major parts, etc to help individuals put together their own version.' Khoshnevis's printer, on the other hand, at least appears to be heavier and larger, and Rudenko expects that only large-scale construction companies will be able to afford it.


The building process of this 3D printed concrete castle is now complete, but it has also been a learning process for Rudenko's future plans. He's currently printing approximately 50 centimeters of height per day, though the size and width of the layers vary throughout the construction. Regular layers are printed at 30mm width by 10mm height, but Rudenko can print layers of virtually any size. 'For special areas like crown moldings, I am reducing the height to 5 mm; I'm also reducing speed in delicate areas.'
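As a rough sanity check on the quoted figures, the layer dimensions and daily height above imply about 50 layers a day; the 10-meter wall perimeter below is an assumed example value, not from the article:

```python
# Back-of-the-envelope check of the quoted print figures. Layer
# dimensions and daily height are from the article; the wall
# perimeter is an assumed example.
layer_width_mm = 30.0        # standard bead width
layer_height_mm = 10.0       # standard bead height
vertical_per_day_mm = 500.0  # ~50 cm of height per day

layers_per_day = vertical_per_day_mm / layer_height_mm

# Concrete laid down per day for a hypothetical 10 m perimeter wall:
perimeter_m = 10.0
bead_area_m2 = (layer_width_mm / 1000) * (layer_height_mm / 1000)
volume_per_day_m3 = bead_area_m2 * perimeter_m * layers_per_day

print(f"{layers_per_day:.0f} layers/day, {volume_per_day_m3:.3f} m^3 of concrete/day")
```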

Of course, a construction of this size requires the right materials to sustain the sheer size and weight of the concrete.

Rudenko said, 'Layering cement was an extremely difficult task- it required extensive tuning of the printer on a programming level, as well as using exact quantities for the cement mix. While testing the printer, I ran into obstacles (such as the nightmare of the extruder clogging) and discovered even further abilities of the printer, like that it can print much more than 50cm a day as I originally thought.'

Rudenko therefore resorted to including rebars in the bottom and top walls. 'They are needed during the pouring of a variety of cementitious filling materials inside the printed walls.' The cement used, however, is just a regular cement mix with a few additives. 'It is possible to use a special quick-setting concrete to speed up the process, but it will affect the cost, and I don't see much reason to build a house extremely fast at the expense of higher cost and lower quality.'

Instead, Rudenko is after quality and new possibilities. 'The more important advances of this technology lie in its architectural possibilities and energy-efficiency. Architects have waited many years for this technology, and now that it's here, this opens up a whole window of possibilities; soon, we will see new kinds of architecture used to construct new structures.'

This Minnesotan contractor is seeking to be a part of this: 'I plan to concentrate on the development of further 3D printing technology in construction and building a community/network of people worldwide interested in research and development of this technology, with the possibility of providing DIY kits as well as a full line of model construction printers.'

For now, however, printers that can construct homes and are commercially viable as well are still in the distant future. But Rudenko is optimistic about the possibilities for both his device and the industry. 'My current standard is 10 millimeters in height by 30 millimeters in width, but countless other options are available with just the click of a button,' he said.

Rudenko is in the process of redesigning the printer based on the lessons learned. His goal will be to have an upgraded printer that prints 24 hours a day until the project is finished.

'I'm also planning to print the structure in one piece; printing the castle turrets by themselves was a bad idea as they were extremely difficult to lift and place. Additionally, I've figured out how to print a roof; the only thing is that the material I'd print with would have to be used in warmer climates for now.'

Rudenko's next project is a real full-scale livable house. 'The amount of correspondence I am getting proves high demand and interest in this new technology. I want to make sure that for the next project, I have the right team doing the job to fully use all of the benefits of the 3D printing machine.'

'I am open to offers from individuals or companies interested in owning the first house of this type built with the newest 3D printing technology and ready to provide abundant funds to completely cover the project and all its expenses. The interested party needs to own the lot/site and possess a permit for a house built by 3D Printing technology.' Rudenko would also like to collaborate with the interested architects, designers, and software engineers experienced in 3D tools. And you can contact him via this email for further questions.

It has been two years since Rudenko first began toying with the idea of a 3D printer capable of constructing homes. 'I have previously been sure I could print homes, but having finished the castle, I now have proof that the technology is ready,' Rudenko said.

'The current prototype I am working on at the moment is just a small part of the line of printers I am designing. We are talking about the beginning of a new era in construction industry. There is still much to be done.'

Watch a video of Andrey Rudenko's printing process here:

 


alidan wrote at 8/28/2014 11:52:03 AM:

@Julio you could probably do something to get a paint scraper to do the work, but it would need to be an engineered process really what you could do is have an outside edge pre determined, and have something that would hold an outer layer of concrete on it to smooth that crap out. personally, i love the layerd look because i could fairly easily get some ivy to grow up it and make the whole thing covered in green.

Julio wrote at 8/27/2014 9:52:49 PM:

Adam, that's not possible. You would need to orient the scraper along the outer surface and in this design the extruder doesn't rotate. You would need an extra DoF. So not "hell, duct-tape the paint scraper to the print head". Think a little more.

Adam wrote at 8/27/2014 4:15:32 AM:

I think 3d printing concrete gets a bad rap because of the unfinished look of it, the ridges and all, but really you could just go over it with a $2 paint scraper and smooth out the outer walls and it would look completely legit and professional. hell, duct-tape the paint scraper to the printhead ( a few inches down) and the smoothing process is now automated. can't wait to print a castle of my own. one day you'll either have your own concrete printer, or rent it from the hardware store for a weekend like any other large tool.

The Lava Lamp Just Won't Quit

30 August 2014 - 7:00pm

It’s rare that one invention so perfectly embodies an entire era -- evokes, with each kaleidoscopic orb of wax, the trippy mind-state of a generation. It’s rarer yet for that invention to be a lamp filled with viscous, indeterminable sludge.

But for some time in the 1960s, the lava lamp was just that: with its slow-rising, multicolored contents and space-esque profile, it seemed to effortlessly emulate the spirit of psychedelia. In the 1990s, after it had been written off as a bygone fad, the lava lamp rose again, stronger than ever -- this time as the reigning champion of an acid-fueled counterculture rebellion.

A glance into the strange lamp’s past reveals an even stranger history: its inventor, a World War II veteran turned ardent nudist, came up with the idea while drunkenly transfixed by a strange gadget at a pub.

The Enterprising Nudist

In the English county of Dorset, Edward Craven Walker was a curious character. 

Born in 1918, he served as a Royal Air Force pilot in World War II and flew multiple photographic reconnaissance missions over enemy territory in Germany. Post-war, Craven lived in a small trailer behind a pub in London, built a successful travel agency, and sought to bring together people from the far reaches of the world. Throughout his early life, he “maintained the trim fighting figure and brisk demeanor of an R.A.F. officer.” 

Then, following a “life-changing” trip to the southern coast of France, the clean cut ex-squadron leader shed his uniform and embarked on a career as a nudist filmmaker. He became a pioneer in the genre. In 1960, under the pseudonym Michael Kaetering, Craven produced “Traveling Light,” a short film featuring a naked woman performing underwater ballet. 

The film was a box-office success, running for six months in a major London theatre before being distributed around the world. It also secured Craven a small fortune, which he subsequently invested in constructing one of the largest nudist camps in the United Kingdom. His new passion would stir much unrest in his life: he’d re-marry four times and become embroiled in controversy after banning obese people (who he called "fat fogies") from his resort.

But first, Craven would invent one of the defining relics of 1960s psychedelia.

Less-Than-Eggciting Origins

Early lava lamp prototype, using a glass shaker (1960)

On a presumably rainy day in the mid-1950s, Craven paid a visit to Queen’s Head, a small pub southwest of London. When he sat at the bar to order his first pint of Guinness, he noticed something strange perched beside liquor bottles on a shelf.

A glass cocktail shaker full of water and oil blobs sat on a hot plate; upon being heated, the oil would rise to the top of the shaker. When Craven inquired what this strange device was, the barkeep told him it was an egg timer: in just the amount of time it took the oil to rise, an egg could be fully cooked. Years earlier, a regular at the pub, Alfred Dunnett, had built the contraption, Craven was told -- but it was only a one-off, and Dunnett had since passed away. 

Determined to pursue the idea further, Craven contacted Dunnett’s widow and purchased the man’s patent for a sum of less than £20 (about $30 USD). For the next decade, between his nudist philandering and cinematic pursuits, Craven set out to craft this rudimentary egg timer into an interior decoration. 

Using an old empty bottle of Orange Squash (“a revolting drink [Craven] had in England growing up”), he paired two “mutually insoluble liquids” -- water and wax -- with a few secret chemical ingredients (one of which was purportedly carbon tetrachloride, an agent that added weight to the wax). To heat the lamp, Craven enlisted a specialized, high-output bulb and encased it in a protective base.

The physics behind Craven’s invention relied on the Rayleigh-Taylor instability, which arises when a lighter fluid pushes against a heavier one. When the bulb heated the lamp, the wax liquified into a giant, resting blob; as the wax expanded, it became less dense and rose to the top, where it invariably cooled (being further from the heat source) and sank back down. This process would continually repeat itself while the bulb was on.
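The cycle can be sketched as a toy density model. Every number below is an illustrative assumption (the real formula remains secret), but the logic is the one described: thermal expansion lowers the wax's density until it undercuts the water's, and cooling reverses it:

```python
# Toy model of the lava lamp's convection cycle. All densities and the
# expansion coefficient are illustrative assumptions, not Craven's formula.
WATER_DENSITY = 990.0  # kg/m^3, warm water (assumed)

def wax_density(temp_c, rho_20=1000.0, alpha=0.001):
    """Density (kg/m^3) of the weighted wax under a linear expansion model."""
    return rho_20 / (1 + alpha * (temp_c - 20))

def wax_rises(temp_c):
    """The blob lifts off once it is lighter than the surrounding water."""
    return wax_density(temp_c) < WATER_DENSITY

# Near the bulb the wax warms, expands, and rises; at the top it cools,
# contracts, and sinks -- repeating for as long as the bulb is on.
for t in (25, 45, 65):
    print(t, "C:", "rises" if wax_rises(t) else "sinks")
```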

By 1963, Craven had perfected his design. He dubbed his invention the “Astro Lamp,” erected a small factory in his backyard, and set out on a quixotic quest to promote it. "Edward was very focused, driven, full of ideas, and when he had an idea he would see it through to the end," Craven’s wife, Christine Baehr, later told the BBC. “But we didn't have any online technology -- we literally had to go around in a van."

The High Times of the Astro Lamp 

Craven and then-wife Christine Baehr beside the Astro Lamp van (1963)

At first, the couple had a little trouble selling the Astro Lamp to local stores -- particularly those which catered to higher-end customers. "Because it was so completely new we had to convince people it was worth going with, particularly when it came to selling," recalled Baehr. "Some people thought it was absolutely dreadful." Upon seeing the lamp, one buyer for Harrods (the Saks Fifth Avenue of England) called them “disgusting” and ordered they be taken away immediately.

To combat the hatred the lamp provoked, Craven decided he’d re-brand his invention.  In the years following World War II, there had been a rebellion against the dull, boring nature of interior design. People wanted more color, more excitement -- and with the introduction of new printing and dyeing methods, flamboyant household items were coming into vogue. Craven capitalized on this, and set out to cast the Astro Lamp as a high-end, wacky household fixture. 

He created his own company, Crestworth, to market the lamp, and took out full-page spreads in magazines featuring suavely-dressed men touting the Astro Lamp as an item of “sophisticated luxury.”

Original Astro Lamp advertisements, c.1963 (click for higher res image)

Craven offered the original Astro Lamp in 20 color combinations (five options for choice of "fluid color," and four for the color of the wax), and branded it using words like "elegant," "powerful," and "rich." With its new appeal, stores began opening up to the contraption and it soon became a hit -- but not in the way Craven had intended.

By the mid-1960s, LSD and other psychedelic drugs had snaked their way into British culture. A rising hippie counterculture, fueled by bands like Pink Floyd and The Yardbirds, was increasingly on the prowl for mind-bending experiences. With its trippy, globular formations and low-light ambience, the Astro Lamp fit the bill. While the lamp’s “sophisticated” marketing got its foot in the door, it found its eventual customer base in the revolutionaries of psychedelia. Craven responded to his new buyers with measured enthusiasm. “If you buy my lamp,” he stated in one ad, “you won’t need drugs.”

"Everything was getting a little bit psychedelic," Baehr recalled of Craven’s new target audience. "There was Carnaby Street and The Beatles and things launching into space and he thought it was quite funky and might be something to launch into."

The lamps gained steam, and soon enterprising Americans sought to introduce Craven’s product abroad, where psychedelic culture was igniting. At a German trade show in 1965, two businessmen, Adolph Wertheimer and William Rubinstein, bought the North American manufacturing rights for the Astro Lamp, established an office in Chicago, and renamed it “Lava Lite.” Backed by expert marketing and fueled by 1967’s Summer of Love, the lamp began making cameos in major television programs and films. A red model debuted in a 1968 episode of Dr. Who; this was followed by appearances in The Prisoner, The Avengers, and James Bond.

Lava lamps prominently featured in “The Wheel in Space,” a 1968 episode of Dr. Who

For Craven and his wife, there was a defining moment where they knew they’d truly achieved success. “The day a store in Birkenhead phoned to say that Ringo Starr had just been in and bought a lava lamp," recalls Baehr. "Suddenly we thought, 'Wow, we have hit it.’” 

By the end of the 60s, Craven was selling seven million Astro Lamps per year, and had made himself a multi-millionaire. 

Like most novelty items, lava lamps were a fad; as hippie culture faded in the late 1970s and blacklight posters reigned supreme, Craven saw a sharp decline in sales. To no avail, Craven tirelessly rolled out new products, none of which came remotely close to the sales numbers achieved by the Astro Lamp. Despite this, he clung to his company, believing that lava lamps would one day regain the graces of counterculture society.

The Second Coming of the Lava Lamp

For nearly two decades, the lava lamp faded into obscurity. By the late 1980s, Craven’s sales had declined to only 1,000 lamps per year, and he sat on a stockpile of thousands of Astro Lamps. Then, miraculously, the groovy orb came back to life.

Cressida Granger, a 22-year-old who ran a small antiques booth in Camden Market (a hipster hangout in north London), noticed old, “vintage” lava lamps were selling and decided to take action. In early 1989, she contacted Craven and expressed her interest in purchasing his company, Crestworth. At Craven’s behest, the two met up at a nudist camp (at Granger’s behest, both were fully clothed); it was here, amid sun-tanned bottoms, that Craven agreed to let Granger enter a partnership with him.

Granger took over operations as managing director and sales soon increased. In 1988-89, Britain experienced what would later be called the Second Summer of Love. The rise of Ecstasy, acid house music, and MDMA-inspired rave parties ignited an “explosion in youth culture” reminiscent of the 1960s hippie movement. Hedonism, rampant drug use, and chemically-enhanced positive vibes were back in style -- and with them, lava lamps. 

In 1991, Craven’s original patent (approved in 1971) expired, opening the playing field for competitors. Luckily, recalls Granger, "People didn't realize the patents had run out," and she, along with Craven, enjoyed “a lovely period of monopoly in the 90s.”

Edward Craven Walker’s original patent for the lava lamp (1971). While there is a bit of controversy surrounding the original patent holder (read here), there is no doubt that Craven popularized the device.

As per the pair’s initial agreement, Granger slowly bought out Craven’s interest in Crestworth. By 1992, she’d re-named the company Mathmos, moved into their manufacturing facility, and produced lamps using Craven’s staff, machinery, and components. 

By 1998, Granger had gained sole ownership of the company and successfully navigated the resurrection of the lava lamp, bringing sales from 1,000 units per year to 800,000 per year. Sales surged in the late 1990s, largely thanks to the release of Austin Powers: International Man of Mystery (1997), which regenerated interest in psychedelic culture. The decade was so wildly profitable for Mathmos that Granger claims more units were sold the second time around than in the 1960s -- a rare feat for a novelty item. Mathmos has also navigated through some unwanted publicity (in 2004, for instance, a man was killed when his attempt to self-heat a lava lamp on a stovetop resulted in an explosion and a glass shard through the heart).

Though his role in the company diminished, Craven stayed on as a consultant for Mathmos until his death in 2000. Today, the lamps continue to be produced in the original facility in Dorset, using the exact same formula invented by Craven over 50 years ago (it’s still a secret to this day).

In recent years, the company has encountered pressure to shift their operations to China -- a move that would make production much cheaper, but Granger hasn’t acquiesced. Bottles are still filled by hand (one employee is able to get through about 400 per day); as a result, Mathmos lamps start at $80 while cheaper, mass-produced lamps sell for as little as $15. But according to Granger, heritage is more important.

“I think it's special to make a thing in the place it's always been made,” Granger told HuffPost in 2013. “The bottles are made in Yorkshire, the bases are made in Devon, the bottles are filled in Poole and the lamps assembled to order in Poole."

Lasting Impact

Craven’s original lava lamp was relatively plain: a 52-ounce tapered glass vase, a gold base, and red “lava” in yellow liquid. Today, thousands of variations exist, from sparkly Hello Kitty-themed lamps to 6-foot, $4,000 goliaths that take hours to heat up. A formidable collector market has emerged and, according to lava connoisseur Anthony Voz, it’s the old school ones that still generate the most interest -- “the ones that weren’t so commercially successful.” This demand can be attributed to vintage nostalgia, but moreover it’s a testament to Craven’s passion, dedication, and ultimate vision.

As designer Murray Moss notes, Craven never intended the lava lamp to really be a lamp: it doesn’t give off a lot of light, it’s not utilitarian, and it isn’t used for any other purpose than to create a mood. “It’s devoid of function but rich in emotional fulfillment,” he writes, “and it can momentarily free your mind like a warm bath.” Voz adds that “it's the motion within the lamp -- the way that it flows, a mixture of light and chaos blending together” that makes them special.

The lava lamp has proven itself as more than a fading historical relic, more than a cheap gimmick. Both of the lamp’s sales boosts can be attributed to the rise of counterculture movements and the introduction of new drugs. Each time, the wacky invention visualized experimentation. Some, like the lamp’s pioneer, even found symbolism in the rising wax.

''It's like the cycle of life,” Craven told a reporter in 1997, a few years before his death. “It grows, breaks up, falls down and then starts all over again. And besides, the shapes are sexy.''

This post was written by Zachary Crockett.

Psychedelics in problem-solving experiment

30 August 2014 - 7:00pm

Psychedelic agents in creative problem-solving experiment was a study designed to evaluate whether the use of a psychedelic substance with supportive setting can lead to improvement of performance in solving professional problems. The altered performance was measured by subjective reports, questionnaires, the obtained solutions for the professional problems and psychometric data using the Purdue Creativity, the Miller Object Visualization, and the Witkins Embedded Figures tests.[1] This experiment was a pilot that was to be followed by control studies as part of exploratory studies on uses for psychedelic drugs, that were interrupted early in 1966 when the Food and Drug Administration declared a moratorium on research with human subjects, as a strategy in combating the illicit-use problem.[2]

Procedure

Some weeks before the actual experiment, a preliminary experiment was conducted. It consisted of two sessions with four participants in each. The groups worked on two problems chosen by the research personnel. The first group consisted of four people with professional experience in electrical engineering, engineering design, engineering management and psychology. They were given 50 micrograms of LSD. The second group consisted of four research engineers, three with background on electronics and one on mechanics. They were given 100 milligrams of mescaline. Both groups were productive in ideation but, according to Fadiman, the fact that the participants didn't have actual personal stake in the outcome of the session negatively affected the actualization of the ideas. This is why the actual study focused on personal professional problems that the participants were highly motivated to tackle.[3]

The experiment was carried out in 1966 in a facility of International Foundation for Advanced Study, Menlo Park, California, by a team including Willis Harman, Robert H. McKim, Robert E. Mogar, James Fadiman and Myron Stolaroff. The participants of the study consisted of 27 male subjects engaged in a variety of professions: sixteen engineers, one engineer-physicist, two mathematicians, two architects, one psychologist, one furniture designer, one commercial artist, one sales manager, and one personnel manager. Nineteen of the subjects had had no previous experience with psychedelics. Each participant was required to bring a professional problem they had been working on for at least 3 months, and to have a desire to solve it.

Commonly observed characteristics of the psychedelic experience seemed to operate both for and against the hypothesis that the drug session could be used for performance enhancement. The research was therefore planned so as to attempt to provide a setting that would maximize improved functioning, while minimizing effects that might hinder effective functioning.[4] Each group of four subjects met for an evening session several days before the experiment. They received instructions and introduced themselves and their unsolved problems to the group. Approximately one hour of pencil-and-paper tests were also administered. At the beginning of the day of the experiment session, subjects were given 200 milligrams of mescaline sulphate (a moderately light dose compared to the doses used in experiments to induce mystical experiences). After some hours of relaxation, subjects were given tests similar to the ones on the introduction day. After the tests, subjects had four hours to work on their chosen problems. After the working phase, the group would discuss their experiences and review the solutions they had come up with. After this, the participants were driven home. Within a week after the session, each participant wrote a subjective account of his experience. Six weeks further, subjects again filled in questionnaires, this time concentrating on the effects on post-session creative ability and the validity and reception of the solutions conceived during the session. This data was in addition to the psychometric data comparing results of the two testing periods.

Results[edit]

Solutions obtained in the experiment included:

  • a new approach to the design of a vibratory microtome
  • a commercial building design, accepted by the client
  • space probe experiments devised to measure solar properties
  • design of a linear electron accelerator beam-steering device
  • engineering improvement to a magnetic tape recorder
  • a chair design, modeled and accepted by the manufacturer
  • a letterhead design, approved by the customer
  • a mathematical theorem regarding NOR gate circuits
  • completion of a furniture-line design
  • a new conceptual model of a photon, which was found useful
  • design of a private dwelling, approved by the client
  • insights regarding how to use interferometry in medical diagnosis application sensing heat distribution in the human body

From the subjective reports, 11 categories of enhanced functioning were defined: lowered inhibition and anxiety, capacity to restructure the problem in a larger context, enhanced fluency and flexibility of ideation, heightened capacity for visual imagery and fantasy, increased ability to concentrate, heightened empathy with external processes and objects, heightened empathy with people, subconscious data more accessible, association of dissimilar ideas, heightened motivation to obtain closure, and visualizing the completed solution.

The results also suggest that various degrees of increased creative ability may continue for at least some weeks subsequent to a psychedelic problem-solving session.

Several of the participants in this original study were contacted recently, and although long past retirement age, they were self-employed in their chosen fields and extremely successful.[5]

Related research[edit]

In the overview of the experiment, Harman and Fadiman mention that experiments on specific performance enhancement through directed use of psychedelics have gone on in various countries of the world, on both sides of the Iron Curtain.[6]

In the book LSD — The Problem-Solving Psychedelic, Stafford and Golightly write about a man engaged in naval research, who had worked with a team under his direction on the design of an anti-submarine detection device for over five years without success. He contacted a small research foundation studying the use of LSD. After a few sessions of learning to control the fluidity of the LSD state (how to stop it, how to start it, how to turn it around) he directed his attention to the design problem. Within ten minutes he had the solution he had been searching for. Since then, the device has been patented by the U.S. Navy, and Naval personnel working in this area have been trained in its use.[7]

In 1999 Jeremy Narby, an anthropologist specializing in Amazonian shamanism, acted as a translator for three molecular biologists who travelled to the Peruvian Amazon to see whether they could obtain bio-molecular information in the visions they had in sessions orchestrated by an indigenous shaman. Narby recounts this preliminary experiment and the exchange of methods of gaining knowledge between the biologists and indigenous people in his article Shamans and scientists.[8]

In 1991, Denise Caruso, writing a computer column for The San Francisco Examiner, went to SIGGRAPH, the largest gathering of computer graphics professionals in the world. She conducted a survey; by the time she got back to San Francisco, she had talked to 180 professionals in the computer graphics field who admitted to taking psychedelics and said that psychedelics were important to their work, according to mathematician Ralph Abraham.[9][10]

James Fadiman is currently conducting a study on micro-dosing for improving normal functioning.[11] Micro-dosing (or sub-perceptual dosing) means taking a sub-threshold dose, which for LSD is 10-20 micrograms. The purpose of micro-dosing is not intoxication but enhancement of normal functionality (see nootropic). In this study the volunteers self-administer the drug approximately every third day. They then self-report perceived effects on their daily duties and relationships. Volunteers participating in the study come from a wide variety of scientific and artistic professions, and include students. So far the reports suggest that, in general, the subjects experience normal functioning but with increased focus, creativity and emotional clarity, and slightly enhanced physical performance. Albert Hofmann was also aware of micro-dosing and called it the most under-researched area of psychedelics.[12]

Since the 1930s, ibogaine was sold in France in 8 mg tablets in the form of Lambarène, an extract of the Tabernanthe manii plant. 8 mg of ibogaine could be considered a microdose, since doses in ibogatherapy and rituals vary in the range of 10 mg/kg to 30 mg/kg, usually adding up to 1000 mg.[13] Lambarène was advertised as a mental and physical stimulant and was "...indicated in cases of depression, asthenia, in convalescence, infectious disease, [and] greater than normal physical or mental efforts by healthy individuals". The drug enjoyed some popularity among post-World War II athletes, but was eventually removed from the market when the sale of ibogaine-containing products was prohibited in 1966.[14] At the end of the 1960s, the International Olympic Committee banned ibogaine as a potential doping agent.[15] Other psychedelics have also been reported to have been used in a similar way as doping agents.[16]

References[edit]
  1. ^ Harman, W. W.; McKim, R. H.; Mogar, R. E.; Fadiman, J.; Stolaroff, M. J. (1966). "Psychedelic agents in creative problem-solving: A pilot study". Psychological reports 19 (1): 211–227. doi:10.2466/pr0.1966.19.1.211. PMID 5942087.  edit
  2. ^ Tim Doody's article "The heretic" about doctor James Fadiman's experiments on psychedelics and creativity
  3. ^
  4. ^
  5. ^
  6. ^
  7. ^ LSD — The Problem-Solving Psychedelic Chapter III. Creative Problem Solving. P.G. Stafford and B.H. Golightly
  8. ^ Shamans and scientists Jeremy Narby; Shamans through time: 500 years on the path to knowledge p. 301-305.
  9. ^ The San Francisco Examiner, August 4th 1991, Denise Caruso
  10. ^ Mathematics and the Psychedelic Revolution - Ralph Abraham
  11. ^ Psychedelic Horizons Beyond Psychotherapy Workshop - Part 3/4
  12. ^
  13. ^ Manual for Ibogaine Therapy - Screening, Safety, Monitoring & Aftercare Howard S. Lotsof & Boaz Wachtel 2003
  14. ^ Ibogaine: A Novel Anti-Addictive Compound - A Comprehensive Literature Review Jonathan Freedlander, University of Maryland Baltimore County, Journal of Drug Education and Awareness, 2003; 1:79-98.
  15. ^ Ibogaine - Scientific Literature Overview The International Center for Ethnobotanical Education, Research & Service (ICEERS) 2012
  16. ^ Psychedelics and Extreme Sports James Oroc. MAPS Bulletin - volume XXI - number 1 - Spring 2011.

A React.js case study

30 August 2014 - 7:00pm

This post dissects a memory game built with React, focusing on structure and the React way of thinking

The game

The last few days I've been toying with React.js, Facebook's excellent view abstraction library. In order to grok it I built a simple memory game, which we'll dissect in this post.

First off, here's the game running in an iframe (here's a link if you want it in a separate tab). The repo can be found here.

As you can see the game is rather simple, yet includes enough state and composition to force me to actually use React.

The code

This is the full contents of the repo:

The lib folder contains the only 3 dependencies:

  • react.js is the React library itself. We don't need the add-on version, just plain vanilla React.
  • JSXTransformer.js translates the JSX syntax. In production this should of course be part of the build process.
  • lodash.js is used merely to make for some cleaner code in the game logic.

The src folder then contains files for all of our React components. The hierarchy looks like this:

Finally index.html is a super simple bootstrap kicking it all off:

<!DOCTYPE html>
<html>
<head>
  <script type="text/javascript" src="lib/lodash.js"></script>
  <script type="text/javascript" src="lib/react.js"></script>
  <script type="text/javascript" src="lib/JSXTransformer.js"></script>
  <script type="text/jsx" src="src/status.jsx"></script>
  <script type="text/jsx" src="src/board.jsx"></script>
  <script type="text/jsx" src="src/game.jsx"></script>
  <script type="text/jsx" src="src/wordform.jsx"></script>
  <script type="text/jsx" src="src/tile.jsx"></script>
  <link rel="stylesheet" href="styles.css" type="text/css"></link>
</head>
<body>
  <script type="text/jsx">
    React.renderComponent(
      <Game />,
      document.querySelector("body")
    );
  </script>
</body>
</html>

We'll now walk through each of the five React components, and how they map to the fundamental React principle: initial data that won't change should be passed to a component as a property, while changing data should be handled in a component's state. If we need to communicate from a child to a parent, we do so by calling a callback that was passed to the child as a property.
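As a plain-JavaScript illustration of that last rule (no React involved; makeChild and onSubmit are names invented for this sketch): a child never reaches into its parent, it only calls the callback it was handed.

```javascript
// Sketch of the child-to-parent contract: the parent passes a callback
// down as a "prop", and the child communicates upward only by calling it.
function makeChild(props) {
  return {
    submit: function (value) {
      props.onSubmit(value); // the only channel back to the parent
    }
  };
}

// "Parent" side: collect whatever the child reports.
var received = [];
var child = makeChild({ onSubmit: function (v) { received.push(v); } });
child.submit("hello");
// received is now ["hello"]
```

This is exactly the shape of Game handing startGame to Wordform below.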

The Game component

First off is the Game component. It is responsible for switching between the form and the board, and passing data from the form to the board.

var Game = React.createClass({
  getInitialState: function(){
    return {playing: false, tiles: []};
  },
  startGame: function(words){
    this.setState({
      tiles: _.shuffle(words.concat(words)),
      playing: true,
      seed: Math.random()
    });
  },
  endGame: function(){
    this.setState({playing: false});
  },
  render: function() {
    return (
      <div>
        <div className={this.state.playing ? "hidden" : "showing"}>
          <Wordform startGame={this.startGame} />
        </div>
        <div className={this.state.playing ? "showing" : "hidden"}>
          <Board endGame={this.endGame} tiles={this.state.tiles}
                 max={this.state.tiles.length/2} key={this.state.seed}/>
        </div>
      </div>
    );
  }
});

Props: none. State: playing, tiles. Sub components: Wordform, Board. Instance variables: none.

The Game component has two state variables:

  • playing which controls which sub component to show or hide.
  • tiles which contain the words passed to startGame, which will be triggered inside Wordform.

Game has two sub components:

  • Wordform, which it passes the startGame method.
  • Board, which is passed the endGame method and the tiles.
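The tiles passed down to Board are just the entered words doubled and shuffled (the game does this with lodash's _.shuffle inside startGame). A dependency-free sketch of the same idea, with a hand-rolled Fisher-Yates shuffle and a hypothetical makeTiles name:

```javascript
// Build the tile set for a game: each word appears exactly twice,
// in random order. Mirrors _.shuffle(words.concat(words)) in Game.startGame.
function makeTiles(words) {
  var tiles = words.concat(words); // two copies of every word
  // Fisher-Yates shuffle, in place
  for (var i = tiles.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = tiles[i];
    tiles[i] = tiles[j];
    tiles[j] = tmp;
  }
  return tiles;
}
```

Whatever the shuffle order, the result always holds every word exactly twice, which is the invariant the matching game depends on.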

Note that Game always renders both the Board and the Wordform. This has to do with React component lifecycles. I first tried to do this:

return ( <div>{this.state.playing ? <Board endGame={this.endGame} tiles={this.state.tiles}/> : <Wordform startGame={this.startGame} />}</div> );

...which actually worked, but generated a React error message about an unmounted component. The official docs also state that instead of generating different components, we should generate them all and show/hide them as needed.

Also related to the life cycle of a component is the key property of the Board. Changing key ensures we have a new Board instance whenever we enter new words in the form, otherwise React will just repopulate the existing Board with new words. That means that previously flipped tiles will still be flipped, even though they now contain new words. Remove the key property and try it!

The Wordform component

This component displays a form for entering words to be used as tiles.

var Wordform = React.createClass({
  getInitialState: function(){
    return {error: ""};
  },
  setError: function(msg){
    this.setState({error: msg});
    setTimeout((function(){
      this.setState({error: ""});
    }).bind(this), 2000);
  },
  submitWords: function(e){
    var node = this.refs["wordfield"].getDOMNode(),
        words = (node.value || "").trim().replace(/\W+/g," ").split(" ");
    if (words.length <= 2) {
      this.setError("Enter at least 3 words!");
    } else if (words.length !== _.unique(words).length) {
      this.setError("Words should be unique!");
    } else if (_.filter(words, function(w){return w.length > 8}).length) {
      this.setError("Words should not be longer than 8 characters!");
    } else {
      this.props.startGame(words);
      node.value = "";
    }
    return false;
  },
  render: function() {
    return (
      <form onSubmit={this.submitWords}>
        <p>Enter words separated by spaces!</p>
        <input type='text' ref='wordfield' />
        <button type='submit'>Start!</button>
        <p className='error' ref='errormsg'>{this.state.error}</p>
      </form>
    );
  }
});

Props: startGame(). State: error. Sub components: none. Instance variables: none.

The Wordform component validates the input and passes it back up to Game by calling the startGame method which it received as a property.

In order to collect the contents of the input field we use the refs instance property, with the same key (wordfield) as given to the ref property of the corresponding node in the render output.

Note how showing and hiding the error message is done by changing the error state variable, which triggers a rerender. It almost feels like we have two-way data binding!
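The three validation rules in submitWords are easy to lift out and test on their own. Here is a lodash-free sketch (validateWords is my name, and the checks run per word, so the order in which errors are reported can differ slightly from the component):

```javascript
// Returns an error message for a raw input string, or null if the words
// are valid: at least 3 words, all unique, none longer than 8 characters.
function validateWords(raw) {
  var words = (raw || "").trim().replace(/\W+/g, " ").split(" ");
  var seen = Object.create(null); // null prototype so any word is a safe key
  if (words.length <= 2) return "Enter at least 3 words!";
  for (var i = 0; i < words.length; i++) {
    if (seen[words[i]]) return "Words should be unique!";
    seen[words[i]] = true;
    if (words[i].length > 8) return "Words should not be longer than 8 characters!";
  }
  return null; // valid - the game can start
}
```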

The Board component

Here's the code for the Board component, which displays the game board:

var Board = React.createClass({
  getInitialState: function() {
    return {found: 0, message: "choosetile"};
  },
  clickedTile: function(tile){
    if (!this.wait){
      if (!this.flippedtile){
        this.flippedtile = tile;
        tile.reveal();
        this.setState({message: "findmate"});
      } else {
        this.wait = true;
        if (this.flippedtile.props.word === tile.props.word){
          this.setState({found: this.state.found+1, message: "foundmate"});
          tile.succeed();
          this.flippedtile.succeed();
        } else {
          this.setState({message: "wrong"});
          tile.fail();
          this.flippedtile.fail();
        }
        setTimeout((function(){
          this.wait = false;
          this.setState({message: "choosetile"});
          delete this.flippedtile;
        }).bind(this), 2000);
      }
    }
  },
  render: function() {
    var tiles = this.props.tiles.map(function(b, n){
      return <Tile word={b} key={n} clickedTile={this.clickedTile} />;
    }, this);
    return (
      <div>
        <button onClick={this.props.endGame}>End game</button>
        <Status found={this.state.found} max={this.props.tiles.length/2}
                message={this.state.message} />
        {tiles}
      </div>
    );
  }
});

Props: tiles, endGame(). State: found, message. Sub components: Status, Tile. Instance variables: wait, flippedtile.

The Board component was passed a tiles array and an endGame callback from its parent.

It has two state variables:

  • found which counts how many pairs the player has found
  • message which contains the id of the message to display to the player

When rendered it contains two different sub components:

  • Status, which is passed found, max and message. This component deals with the instruction to the player above the tiles.
  • Tile, which represents an individual tile. Each tile is passed a word and the clickedTile callback.

The clickedTile callback will be called from the individual tiles, with the tile instance as parameter. As you can see, this method contains the full logic for the actual game.

Note how this method uses the instance variables this.wait and this.flippedtile. These do NOT need to be state variables, as they don't affect the rendering! Only state which might affect what the component looks like needs to be stored using this.setState.
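Stripped of rendering, timeouts and the wait flag, the pairing logic in clickedTile boils down to a small state machine. This sketch (MatchGame is an invented name) matches on words rather than tile instances, unlike the real component, but shows the same flow:

```javascript
// Core pairing logic of Board.clickedTile, without React or timers:
// the first click remembers a tile, the second click compares words.
function MatchGame() {
  this.found = 0;
  this.flipped = null; // word of the currently revealed tile, if any
}
MatchGame.prototype.click = function (word) {
  if (this.flipped === null) {
    this.flipped = word;      // first tile of the attempt
    return "findmate";
  }
  var matched = this.flipped === word;
  this.flipped = null;        // attempt over either way
  if (matched) {
    this.found++;
    return "foundmate";
  }
  return "wrong";
};
```

The returned strings are the same message ids the real Board feeds to Status.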

The Status component

This component renders the info row above the game board.

var Status = React.createClass({
  render: function() {
    var found = this.props.found,
        max = this.props.max,
        texts = {
          choosetile: "Choose a tile!",
          findmate: "Now try to find the matching tile!",
          wrong: "Sorry, those didn't match!",
          foundmate: "Yey, they matched!",
          foundall: "You've found all "+max+" pairs! Well done!"
        };
    return <p>({found}/{max})&nbsp;&nbsp;{texts[this.props.message === "choosetile" && found === max ? "foundall" : this.props.message]}</p>;
  }
});

Props: found, max, message. State: none. Sub components: none. Instance variables: none.

The Status component was passed found, max and message from its parent. It then bakes this together into a UI info row.

Note how even though the status row is constantly changing while playing, this is a totally static component. It contains no state variables, and all updates are controlled in the parent!
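Since Status is pure, its message selection can be expressed as a plain function. This sketch (statusText is my name, and it uses a normal space where the component emits &nbsp; entities) mirrors the lookup in render:

```javascript
// Pick the status line shown above the board, as Status.render does:
// the "choosetile" message becomes "foundall" once every pair is found.
function statusText(found, max, message) {
  var texts = {
    choosetile: "Choose a tile!",
    findmate: "Now try to find the matching tile!",
    wrong: "Sorry, those didn't match!",
    foundmate: "Yey, they matched!",
    foundall: "You've found all " + max + " pairs! Well done!"
  };
  var key = message === "choosetile" && found === max ? "foundall" : message;
  return "(" + found + "/" + max + ") " + texts[key];
}
```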

The Tile component

This component represents an individual tile.

var Tile = React.createClass({
  getInitialState: function() {
    return {flipped: false};
  },
  catchClick: function(){
    if (!this.state.flipped){
      this.props.clickedTile(this);
    }
  },
  reveal: function(){
    this.setState({flipped: true});
  },
  fail: function(){
    this.setState({flipped: true, wrong: true});
    setTimeout((function(){
      this.setState({flipped: false, wrong: false});
    }).bind(this), 2000);
  },
  succeed: function(){
    this.setState({flipped: true, correct: true});
  },
  render: function() {
    var classes = _.reduce(["flipped","correct","wrong"], function(m, c){
      return m + (this.state[c] ? c + " " : "");
    }, "", this);
    return (
      <div className={'brick ' + (classes || '')} onClick={this.catchClick}>
        <div className="front">?</div>
        <div className="back">{this.props.word}</div>
      </div>
    );
  }
});

Props: word, clickedTile(). State: flipped, wrong, correct. Sub components: none. Instance variables: none.

It was passed two properties from the parent; a word variable and a clickedTile callback.

The component has three state variables:

  • flipped is a flag to show if the tile has been flipped up or not. While flipped it will not receive clicks.
  • wrong is true if the tile was part of a failed match attempt.
  • correct is true if the tile has been matched to a partner.

When clicked the component will call the clickedTile callback passing itself as a parameter. All game logic is in Board, as we saw previously.
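The className computation in Tile.render is a nice small example of deriving presentation from state. A standalone version of that reduce (tileClasses is an invented name, using Array.prototype.reduce instead of lodash):

```javascript
// Build the tile's CSS class string from its state flags, mirroring the
// _.reduce call in Tile.render: each truthy flag contributes its name.
function tileClasses(state) {
  return ["flipped", "correct", "wrong"].reduce(function (m, c) {
    return m + (state[c] ? c + " " : "");
  }, "");
}
```

The component then prepends 'brick ', so a matched, face-up tile ends up with the class string "brick flipped correct ".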

Wrapping up

I'm totally in love with React! It took a while to grasp the thinking, like for example the differentiation between state and props, and how state can belong in props when passed to a child. But when that mentality was in place, putting it all together was a breeze. I really appreciate not having to write any update or cleanup code (I'm looking at you, Backbone), delegating all that headache to React!

Passing callbacks to allow for upstream communication can feel a bit clunky, and I look forward to trying out the Flux approach instead. I also want to integrate a Router, and see how that plays along with it all.


Say hello to x64 Assembly, part 1

30 August 2014 - 7:00pm
Introduction
There are many developers among us. We write tons of code every day, and sometimes it is even not bad code :) Any of us can easily write the simplest code, like this:


Any of us can understand what this C code does. But how does this code work at a low level? I think that not all of us can answer that question, myself included. I thought that I could write code in high-level programming languages like Haskell, Erlang, Go and so on, but I had absolutely no idea how it worked at a low level, after compilation. So I decided to take a few steps down, to assembly, and to describe my learning path along the way. I hope it will be interesting, and not only for me. About 5-6 years ago I already used assembly for writing simple programs; it was at university, where I used Turbo Assembler and the DOS operating system. Now I use a 64-bit Linux operating system, and there must be a big difference between 64-bit Linux and 16-bit DOS. So let's start.

Preparation
Before we start, we must prepare a few things. As I wrote above, I use Ubuntu (Ubuntu 14.04.1 LTS, 64 bit), so my posts will target this operating system and architecture. Different CPUs support different instruction sets; I use an Intel Core i7 870 processor, and all code will be written for this processor. I will also use the NASM assembler. You can install it with:

sudo apt-get install nasm

Its version must be 2.0.0 or greater. I use NASM version 2.10.09, compiled on Dec 29 2013. Last, you will need a text editor in which to write your assembly code. I use Emacs with nasm-mode.el. This is not mandatory; of course you can use your favourite text editor. If you use Emacs like me, you can download nasm-mode.el and configure Emacs like this:


That's all we need for the moment. Other tools will be described in the next posts.

x64 syntax
Here I will not describe the full assembly syntax; we'll mention only those parts of the syntax which we will use in this post. Usually a NASM program is divided into sections. In this post we'll meet the 2 following sections:
  • data section
  • text section
The data section is used for declaring constants. This data does not change at runtime. You can declare various math or other constants, and so on. The syntax for declaring a data section is:

section .data

The text section is for code. This section must begin with the declaration global _start, which tells the kernel where the program execution begins.

section .text
global _start
_start:

Comments start with the ; symbol. Every NASM source code line contains some combination of the following four fields:

[label:] instruction [operands] [; comment]

Fields which are in square brackets are optional. A basic NASM instruction consists of two parts. The first one is the name of the instruction which is to be executed, and the second is the operands of this command. For example:

MOV COUNT, 48 ; Put value 48 in the COUNT variable


Hello world
Let's write our first program in NASM assembly. Of course it will be the traditional Hello world program. Here is the code:

section .data
    msg db "Hello world!"

section .text
global _start
_start:
    mov rax, 1        ; sys_write syscall number
    mov rdi, 1        ; fd 1 - standard output
    mov rsi, msg      ; pointer to the msg string
    mov rdx, 12       ; length of the string
    syscall
    mov rax, 60       ; sys_exit syscall number
    mov rdi, 0        ; exit code
    syscall
Yes, it doesn't look like printf("Hello world"). Let's try to understand what it is and how it works. Take a look at lines 1-2: we define the data section and put there the msg constant with the Hello world value. Now we can use this constant in our code. Next is the declaration of the text section and the program entry point: the program will start executing from line 7. Now starts the most interesting part. We already know what the mov instruction is: it takes 2 operands and puts the value of the second into the first. But what are these rax, rdi and so on? As we can read on Wikipedia:

A central processing unit (CPU) is the hardware within a computer that carries out the instructions of a computer program by performing the basic arithmetical, logical, and input/output operations of the system.

Ok, so the CPU performs some operations, arithmetical and so on. But where does it get the data for these operations? The first answer is memory. However, reading data from and storing data into memory slows down the processor, as it involves the complicated process of sending a data request across the control bus. Thus the CPU has its own internal memory storage locations, called registers. So when we write mov rax, 1, it means: put 1 into the rax register. Now we know what rax, rdi, rbx and so on are, but we also need to know when to use rax and when to use rsi, etc.:
  • rax - temporary register; when we call a syscall, rax must contain the syscall number
  • rdx - used to pass 3rd argument to functions
  • rdi - used to pass 1st argument to functions
  • rsi - pointer used to pass 2nd argument to functions
In other words, we just make a call to the sys_write syscall. Take a look at sys_write:

ssize_t sys_write(unsigned int fd, const char *buf, size_t count);
It has 3 arguments:

  • fd - file descriptor. Can be 0, 1 and 2 for standard input, standard output and standard error
  • buf - points to the character array that holds the data to be written to the file referred to by fd
  • count - specifies the number of bytes to be written from the character array to the file
So we know that the sys_write syscall takes three arguments and has number one in the syscall table. Let's look again at our Hello world implementation. We put 1 into the rax register: it means that we will use the sys_write system call. On the next line we put 1 into the rdi register: it will be the first argument of sys_write, 1 - standard output. Then we store the pointer to msg in the rsi register: it will be the second (buf) argument of sys_write. Then we pass the last (third) parameter, the length of the string, in rdx: it will be the third argument of sys_write. Now we have all the arguments of sys_write and we can call it with the syscall instruction on line 11. Ok, we printed the "Hello world" string; now we need to exit the program correctly. We pass 60 to the rax register: 60 is the number of the exit syscall. We also pass 0 to the rdi register: it will be the exit code, and with 0 our program must exit successfully. That's all for "Hello world". Quite simple :) Now let's build our program. Say we have this code in a hello.asm file. Then we need to execute the following commands:

nasm -f elf64 -o hello.o hello.asm
ld -o hello hello.o

After that we will have an executable hello file, which we can run with ./hello to see the Hello world string in the terminal.

Conclusion
This was the first part, with one very simple example. In the next part we will see some arithmetic. If you have any questions or suggestions, write me a comment.

You can find all the source code here.