# Hacker News from Y Combinator

### The Lava Lamp Just Won't Quit

30 August 2014 - 7:00pm

It’s rare that one invention so perfectly embodies an entire era -- evokes, with each kaleidoscopic orb of wax, the trippy mind-state of a generation. It’s rarer yet for that invention to be a lamp filled with viscous, indeterminable sludge.

But for some time in the 1960s, the lava lamp was just that: with its slow-rising, multicolored contents and space-esque profile, it seemed to effortlessly emulate the spirit of psychedelia. In the 1990s, after it had been written off as a bygone fad, the lava lamp rose again, stronger than ever -- this time as the reigning champion of an acid-fueled counterculture rebellion.

A glance into the strange lamp’s past reveals an even stranger history: its inventor, a World War II veteran turned ardent nudist, came up with the idea while drunkenly transfixed by a strange gadget at a pub.

The Enterprising Nudist

In the English county of Dorset, Edward Craven Walker was a curious character.

Born in 1918, he served as a Royal Air Force pilot in World War II and flew multiple photographic reconnaissance missions over enemy territory in Germany. Post-war, Craven lived in a small trailer behind a pub in London, built a successful travel agency, and sought to bring together people from the far reaches of the world. Throughout his early life, he “maintained the trim fighting figure and brisk demeanor of an R.A.F. officer.”

Then, following a “life-changing” trip to the southern coast of France, the clean cut ex-squadron leader shed his uniform and embarked on a career as a nudist filmmaker. He became a pioneer in the genre. In 1960, under the pseudonym Michael Kaetering, Craven produced “Traveling Light,” a short film featuring a naked woman performing underwater ballet.

The film was a box-office success, running for six months in a major London theatre before being distributed around the world. It also secured Craven a small fortune, which he subsequently invested in constructing one of the largest nudist camps in the United Kingdom. His new passion would stir much unrest in his life: he’d re-marry four times and become embroiled in controversy after banning obese people (whom he called "fat fogies") from his resort.

But first, Craven would invent one of the defining relics of 1960s psychedelia.

Less-Than-Eggciting Origins

Early lava lamp prototype, using a glass shaker (1960)

On a presumably rainy day in the mid-1950s, Craven paid a visit to the Queen’s Head, a small pub southwest of London. When he sat at the bar to order his first pint of Guinness, he noticed something strange perched beside the liquor bottles on a shelf.

A glass cocktail shaker full of water and oil blobs sat on a hot plate; upon being heated, the oil would rise to the top of the shaker. When Craven inquired what this strange device was, the barkeep told him it was an egg timer: in just the amount of time it took the oil to rise, an egg could be fully cooked. Years earlier, a regular at the pub, Alfred Dunnett, had built the contraption, Craven was told -- but it was only a one-off, and Dunnett had since passed away.

Determined to pursue the idea further, Craven contacted Dunnett’s widow and purchased the man’s patent for less than £20 (about $30 USD). For the next decade, between his nudist philandering and cinematic pursuits, Craven worked to craft this rudimentary egg timer into an interior decoration. Using an old empty bottle of Orange Squash (“a revolting drink [Craven] had in England growing up”), he paired two “mutually insoluble liquids” -- water and wax -- with a few secret chemical ingredients (one of which was purportedly carbon tetrachloride, an agent that added weight to the wax). To heat the lamp, Craven enlisted a specialized, high-output bulb and encased it in a protective base.

The physics behind Craven’s invention relied on the Rayleigh-Taylor instability, which arises when a lighter fluid pushes against a heavier one. When the bulb heated the lamp, the wax liquified into a giant, resting blob; as the wax warmed it expanded, became less dense than the surrounding water, and rose to the top, where it cooled (being further from the heat source) and sank back down. This cycle repeated for as long as the bulb was on.

By 1963, Craven had perfected his design. He dubbed his invention the “Astro Lamp,” erected a small factory in his backyard, and set out on a quixotic quest to promote it. "Edward was very focused, driven, full of ideas, and when he had an idea he would see it through to the end," Craven’s wife, Christine Baehr, later told the BBC. “But we didn't have any online technology -- we literally had to go around in a van."

The High Times of the Astro Lamp

Craven and then-wife Christine Baehr beside the Astro Lamp van (1963)

At first, the couple had a little trouble selling the Astro Lamp to local stores -- particularly those which catered to higher-end customers. "Because it was so completely new we had to convince people it was worth going with, particularly when it came to selling," recalled Baehr.
"Some people thought it was absolutely dreadful." Upon seeing the lamp, one buyer for Harrods (the Saks Fifth Avenue of England) called the lamps “disgusting” and ordered they be taken away immediately.

To combat the hatred the lamp provoked, Craven decided to re-brand his invention. In the years following World War II, there had been a rebellion against the dull, boring nature of interior design. People wanted more color, more excitement -- and with the introduction of new printing and dyeing methods, flamboyant household items were coming into vogue. Craven capitalized on this, and set out to cast the Astro Lamp as a high-end, wacky household fixture. He created his own company, Crestworth, to market the lamp, and took out full-page spreads in magazines featuring suavely-dressed men touting the Astro Lamp as an item of “sophisticated luxury.”

Original Astro Lamp advertisements, c. 1963

Craven offered the original Astro Lamp in 20 color combinations (five options for the fluid color, four for the wax), and branded it using words like "elegant," "powerful," and "rich." With its new appeal, stores began opening up to the contraption, and it soon became a hit -- but not in the way Craven had intended.

By the mid-1960s, LSD and other psychedelic drugs had snaked their way into British culture. A rising hippie counterculture, fueled by bands like Pink Floyd and The Yardbirds, was increasingly on the prowl for mind-bending experiences. With its trippy, globular formations and low-light ambience, the Astro Lamp fit the bill. While the lamp’s “sophisticated” marketing got its foot in the door, it found its eventual customer base in the revolutionaries of psychedelia. Craven responded to his new buyers with measured enthusiasm. “If you buy my lamp,” he stated in one ad, “you won’t need drugs.”

"Everything was getting a little bit psychedelic," Baehr recalled of Craven’s new target audience.
"There was Carnaby Street and The Beatles and things launching into space and he thought it was quite funky and might be something to launch into."

The lamps gained steam, and soon enterprising Americans sought to introduce Craven’s product abroad, where psychedelic culture was igniting. At a German trade show in 1965, two businessmen, Adolph Wertheimer and William Rubinstein, bought the North American manufacturing rights for the Astro Lamp, established an office in Chicago, and renamed the product “Lava Lite.” Backed by expert marketing and fueled by 1967’s Summer of Love, the lamp began making cameos in major television programs and films. A red model debuted in a 1968 episode of Doctor Who; this was followed by appearances in The Prisoner, The Avengers, and James Bond.

Lava lamps prominently featured in “The Wheel in Space,” a 1968 episode of Doctor Who

For Craven and his wife, there was a defining moment when they knew they’d truly achieved success. “The day a store in Birkenhead phoned to say that Ringo Starr had just been in and bought a lava lamp," recalls Baehr. "Suddenly we thought, 'Wow, we have hit it.’”

By the end of the ’60s, Craven was selling seven million Astro Lamps per year, and had made himself a multi-millionaire. But like most novelty items, lava lamps were a fad; as hippie culture faded in the late 1970s and blacklight posters reigned supreme, Craven saw a sharp decline in sales. He tirelessly rolled out new products, but to no avail: none came remotely close to the sales numbers achieved by the Astro Lamp. Despite this, he clung to his company, believing that lava lamps would one day regain the graces of counterculture society.

The Second Coming of the Lava Lamp

For nearly two decades, the lava lamp faded into obscurity. By the late 1980s, Craven’s sales had declined to only 1,000 lamps per year, and he sat on a stockpile of thousands of unsold Astro Lamps. Then, miraculously, the groovy orb came back to life.
Cressida Granger, a 22-year-old who ran a small antiques booth in Camden Market (a hipster hangout in north London), noticed that old, “vintage” lava lamps were selling and decided to take action. In early 1989, she contacted Craven and expressed her interest in purchasing his company, Crestworth. At Craven’s behest, the two met up at a nudist camp (at Granger’s behest, both were fully clothed); it was here, amid sun-tanned bottoms, that Craven agreed to let Granger enter a partnership with him. Granger took over operations as managing director, and sales soon increased.

In 1988-89, Britain experienced what would later be called the Second Summer of Love. The rise of Ecstasy, acid house music, and drug-fueled rave parties ignited an “explosion in youth culture” reminiscent of the 1960s hippie movement. Hedonism, rampant drug use, and chemically-enhanced positive vibes were back in style -- and with them, lava lamps.

In 1991, Craven’s original patent (granted in 1971) expired, opening the playing field for competitors. Luckily, recalls Granger, "People didn't realize the patents had run out," and she, along with Craven, enjoyed “a lovely period of monopoly in the 90s.”

Edward Craven Walker’s original patent for the lava lamp (1971). While there is some controversy surrounding the original patent holder, there is no doubt that Craven popularized the device.

As per the pair’s initial agreement, Granger slowly bought out Craven’s interest in Crestworth. By 1992, she’d re-named the company Mathmos, moved into its manufacturing facility, and produced lamps using Craven’s staff, machinery, and components. By 1998, Granger had gained sole ownership of the company and successfully navigated the resurrection of the lava lamp, bringing sales from 1,000 units per year to 800,000 per year. Sales surged in the late 1990s, largely thanks to the release of Austin Powers: International Man of Mystery (1997), which regenerated interest in psychedelic culture.
The decade was so wildly profitable for Mathmos that Granger claims more units were sold the second time around than in the 1960s -- a rare feat for a novelty item. Mathmos has also weathered some unwanted publicity (in 2004, for instance, a man was killed when his attempt to heat a lava lamp on a stovetop resulted in an explosion and a glass shard through his heart). Though his role in the company diminished, Craven stayed on as a consultant for Mathmos until his death in 2000.

Today, the lamps continue to be produced in the original facility in Dorset, using the exact same formula Craven invented over 60 years ago (it remains a secret to this day). In recent years, the company has encountered pressure to shift its operations to China -- a move that would make production much cheaper -- but Granger hasn’t acquiesced. Bottles are still filled by hand (one employee can get through about 400 per day); as a result, Mathmos lamps start at $80, while cheaper, mass-produced lamps sell for as little as $15. But according to Granger, heritage is more important. “I think it's special to make a thing in the place it's always been made,” Granger told HuffPost in 2013. “The bottles are made in Yorkshire, the bases are made in Devon, the bottles are filled in Poole and the lamps assembled to order in Poole."

Lasting Impact

Craven’s original lava lamp was relatively plain: a 52-ounce tapered glass vase, a gold base, and red “lava” in yellow liquid. Today, thousands of variations exist, from sparkly Hello Kitty-themed lamps to 6-foot, $4,000 goliaths that take hours to heat up. A formidable collector market has emerged and, according to lava connoisseur Anthony Voz, it’s the old-school models that still generate the most interest -- “the ones that weren’t so commercially successful.” This demand can be attributed to vintage nostalgia, but more than that, it’s a testament to Craven’s passion, dedication, and ultimate vision.

As designer Murray Moss notes, Craven never intended the lava lamp to really be a lamp: it doesn’t give off a lot of light, it’s not utilitarian, and it isn’t used for any other purpose than to create a mood. “It’s devoid of function but rich in emotional fulfillment,” he writes, “and it can momentarily free your mind like a warm bath.” Voz adds that “it's the motion within the lamp -- the way that it flows, a mixture of light and chaos blending together” that makes them special.

The lava lamp has proven itself as more than a fading historical relic, more than a cheap gimmick. Both of the lamp’s sales boosts can be attributed to the rise of counterculture movements and the introduction of new drugs. Each time, the wacky invention visualized experimentation. Some, like the lamp’s pioneer, even found symbolism in the rising wax.

''It's like the cycle of life,” Craven told a reporter in 1997, a few years before his death. “It grows, breaks up, falls down and then starts all over again. And besides, the shapes are sexy.''

### Psychedelics in problem-solving experiment

30 August 2014 - 7:00pm


Psychedelic agents in creative problem-solving experiment was a study designed to evaluate whether the use of a psychedelic substance in a supportive setting can improve performance in solving professional problems. The altered performance was measured by subjective reports, questionnaires, the solutions obtained for the professional problems, and psychometric data from the Purdue Creativity, Miller Object Visualization, and Witkin Embedded Figures tests.[1] The experiment was a pilot, intended to be followed by controlled studies, as part of exploratory research into uses for psychedelic drugs; that research was interrupted early in 1966 when the Food and Drug Administration declared a moratorium on research with human subjects as a strategy for combating the illicit-use problem.[2]

Procedure

Some weeks before the actual experiment, a preliminary experiment was conducted. It consisted of two sessions with four participants each. The groups worked on two problems chosen by the research personnel. The first group consisted of four people with professional experience in electrical engineering, engineering design, engineering management, and psychology; they were given 50 micrograms of LSD. The second group consisted of four research engineers, three with backgrounds in electronics and one in mechanics; they were given 100 milligrams of mescaline. Both groups were productive in ideation but, according to Fadiman, the fact that the participants had no actual personal stake in the outcome of the session negatively affected the actualization of the ideas. This is why the actual study focused on personal professional problems that the participants were highly motivated to tackle.[3]

The experiment was carried out in 1966 in a facility of International Foundation for Advanced Study, Menlo Park, California, by a team including Willis Harman, Robert H. McKim, Robert E. Mogar, James Fadiman and Myron Stolaroff. The participants of the study consisted of 27 male subjects engaged in a variety of professions: sixteen engineers, one engineer-physicist, two mathematicians, two architects, one psychologist, one furniture designer, one commercial artist, one sales manager, and one personnel manager. Nineteen of the subjects had had no previous experience with psychedelics. Each participant was required to bring a professional problem they had been working on for at least 3 months, and to have a desire to solve it.

Commonly observed characteristics of the psychedelic experience seemed to operate both for and against the hypothesis that the drug session could be used for performance enhancement. The research was therefore planned so as to provide a setting that would maximize improved functioning while minimizing effects that might hinder effective functioning.[4]

Each group of four subjects met for an evening session several days before the experiment. They received instructions and introduced themselves and their unsolved problems to the group. Approximately one hour of pencil-and-paper tests was also administered. At the beginning of the experiment day, subjects were given 200 milligrams of mescaline sulphate (a moderately light dose compared to those used in experiments to induce mystical experiences). After some hours of relaxation, subjects were given tests similar to those from the introduction day. After the tests, subjects had four hours to work on their chosen problems. After the working phase, the group discussed their experiences and reviewed the solutions they had come up with; the participants were then driven home. Within a week after the session, each participant wrote a subjective account of his experience. Six weeks after that, subjects again filled in questionnaires, this time concentrating on the effects on post-session creative ability and on the validity and reception of the solutions conceived during the session. This data supplemented the psychometric data comparing results of the two testing periods.

Results

Solutions obtained in the experiment included:

• a new approach to the design of a vibratory microtome
• a commercial building design, accepted by the client
• space probe experiments devised to measure solar properties
• design of a linear electron accelerator beam-steering device
• engineering improvement to a magnetic tape recorder
• a chair design, modeled and accepted by the manufacturer
• a letterhead design, approved by the customer
• a mathematical theorem regarding NOR gate circuits
• completion of a furniture-line design
• a new conceptual model of a photon, which was found useful
• design of a private dwelling, approved by the client
• insights regarding how to use interferometry in medical diagnosis application sensing heat distribution in the human body

From the subjective reports, 11 categories of enhanced functioning were defined:

• low inhibition and anxiety
• capacity to restructure the problem in a larger context
• enhanced fluency and flexibility of ideation
• heightened capacity for visual imagery and fantasy
• increased ability to concentrate
• heightened empathy with external processes and objects
• heightened empathy with people
• subconscious data more accessible
• association of dissimilar ideas
• heightened motivation to obtain closure
• visualizing the completed solution

The results also suggest that various degrees of increased creative ability may continue for at least some weeks subsequent to a psychedelic problem-solving session.

Several of the participants in this original study were contacted recently, and although long past retirement age, they were self-employed in their chosen fields and extremely successful.[5]

Related research

In the overview of the experiment, Harman and Fadiman mention that experiments on specific performance enhancement through directed use of psychedelics have gone on in various countries of the world, on both sides of the Iron Curtain.[6]

In the book LSD — The Problem-Solving Psychedelic, Stafford and Golightly write about a man engaged in naval research who had been working with a team under his direction on the design of an anti-submarine detection device for over five years without success. He contacted a small research foundation studying the use of LSD. After a few sessions spent learning to control the fluidity of the LSD state (how to stop it, how to start it, how to turn it around), he directed his attention to the design problem. Within ten minutes he had the solution he had been searching for. The device has since been patented by the U.S. Navy, and naval personnel working in this area have been trained in its use.[7]

In 1999 Jeremy Narby, an anthropologist specializing in Amazonian shamanism, acted as a translator for three molecular biologists who travelled to the Peruvian Amazon to see whether they could obtain bio-molecular information from the visions they had in sessions orchestrated by an indigenous shaman. Narby recounts this preliminary experiment, and the exchange of methods of gaining knowledge between the biologists and indigenous people, in his article Shamans and scientists.[8]

In 1991, Denise Caruso, then writing a computer column for The San Francisco Examiner, went to SIGGRAPH, the largest gathering of computer graphics professionals in the world. She conducted a survey; by the time she got back to San Francisco, she had talked to 180 professionals in the computer graphics field who admitted to taking psychedelics and said that psychedelics were important to their work, according to mathematician Ralph Abraham.[9][10]

James Fadiman is currently conducting a study on micro-dosing for improving normal functioning.[11] Micro-dosing (or sub-perceptual dosing) means taking a sub-threshold dose, which for LSD is 10-20 micrograms. The purpose of micro-dosing is not intoxication but enhancement of normal functioning (see nootropic). In this study the volunteers self-administer the drug approximately every third day, then self-report perceived effects on their daily duties and relationships. Volunteers participating in the study come from a wide variety of scientific and artistic professions, and include students. So far the reports suggest that, in general, subjects experience normal functioning but with increased focus, creativity, and emotional clarity, as well as slightly enhanced physical performance. Albert Hofmann was also aware of micro-dosing and called it the most under-researched area of psychedelics.[12]

Since the 1930s, ibogaine was sold in France in 8 mg tablets under the name Lambarène, an extract of the Tabernanthe manii plant. 8 mg of ibogaine can be considered a microdose, since doses in ibogaine therapy and rituals range from 10 mg/kg to 30 mg/kg of body weight, usually adding up to around 1,000 mg.[13] Lambarène was advertised as a mental and physical stimulant, "...indicated in cases of depression, asthenia, in convalescence, infectious disease, [and] greater than normal physical or mental efforts by healthy individuals". The drug enjoyed some popularity among post-World War II athletes, but was removed from the market when the sale of ibogaine-containing products was prohibited in 1966.[14] At the end of the 1960s, the International Olympic Committee banned ibogaine as a potential doping agent.[15] Other psychedelics have also reportedly been used in a similar way as doping.[16]

1. ^ Harman, W. W.; McKim, R. H.; Mogar, R. E.; Fadiman, J.; Stolaroff, M. J. (1966). "Psychedelic agents in creative problem-solving: A pilot study". Psychological Reports 19 (1): 211–227. doi:10.2466/pr0.1966.19.1.211. PMID 5942087.
2. ^ Tim Doody's article "The heretic" about doctor James Fadiman's experiments on psychedelics and creativity
3. ^
4. ^
5. ^
6. ^
7. ^ LSD — The Problem-Solving Psychedelic Chapter III. Creative Problem Solving. P.G. Stafford and B.H. Golightly
8. ^ Shamans and scientists Jeremy Narby; Shamans through time: 500 years on the path to knowledge p. 301-305.
9. ^ The San Francisco Examiner, August 4th 1991, Denise Caruso
10. ^ Mathematics and the Psychedelic Revolution - Ralph Abraham
11. ^ Psychedelic Horizons Beyond Psychotherapy Workshop - Part 3/4
12. ^
13. ^ Manual for Ibogaine Therapy - Screening, Safety, Monitoring & Aftercare Howard S. Lotsof & Boaz Wachtel 2003
14. ^ Ibogaine: A Novel Anti-Addictive Compound - A Comprehensive Literature Review Jonathan Freedlander, University of Maryland Baltimore County, Journal of Drug Education and Awareness, 2003; 1:79-98.
15. ^ Ibogaine - Scientific Literature Overview The International Center for Ethnobotanical Education, Research & Service (ICEERS) 2012
16. ^ Psychedelics and Extreme Sports James Oroc. MAPS Bulletin - volume XXI - number 1 - Spring 2011.

### A React.js case study

30 August 2014 - 7:00pm

This post dissects a memory game built with React, focusing on structure and the React way of thinking.

The game

The last few days I've been toying with React.js, Facebook's excellent view abstraction library. In order to grok it I built a simple memory game, which we'll dissect in this post.

First off, here's the game running in an iframe (here's a link if you want it in a separate tab). The repo can be found here.

As you can see, the game is rather simple, yet includes enough state and composition to force me to actually use React.

The code

This is the full contents of the repo:

The lib folder contains the only 3 dependencies:

• react.js is the React library itself. We don't need the add-on version, just plain vanilla React.
• JSXTransformer.js translates the JSX syntax. In production this should of course be part of the build process.
• lodash.js is used merely to make for some cleaner code in the game logic.

The src folder then contains files for all of our React components. The hierarchy looks like this:

Finally index.html is a super simple bootstrap kicking it all off:

```html
<!DOCTYPE html>
<html>
  <head>
    <script type="text/javascript" src="lib/lodash.js"></script>
    <script type="text/javascript" src="lib/react.js"></script>
    <script type="text/javascript" src="lib/JSXTransformer.js"></script>
    <script type="text/jsx" src="src/status.jsx"></script>
    <script type="text/jsx" src="src/board.jsx"></script>
    <script type="text/jsx" src="src/game.jsx"></script>
    <script type="text/jsx" src="src/wordform.jsx"></script>
    <script type="text/jsx" src="src/tile.jsx"></script>
    <link rel="stylesheet" href="styles.css" type="text/css">
  </head>
  <body>
    <script type="text/jsx">
      React.renderComponent(
        <Game />,
        document.querySelector("body")
      );
    </script>
  </body>
</html>
```

We'll now walk through each of the five React components, and how they map to the fundamental React principle: initial data that won't change should be passed to a component as a property, while changing data should be handled in a component's state. If we need to communicate from a child to a parent, we do this by calling a callback that was passed to the child as a property.
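This "data down, callbacks up" principle can be sketched without React at all. The names below (`makeParent`, `makeChild`, `onPick`) are illustrative only, not from the game:

```javascript
// Plain-JS sketch of the React data flow: the parent owns the state and
// hands the child a callback; the child never mutates the parent directly.
function makeChild(props) {
  return {
    // the child reports events upward through the callback it was given
    pick: function (value) { props.onPick(value); }
  };
}

function makeParent() {
  var state = { picked: null };
  var child = makeChild({
    // the parent decides how its own state changes
    onPick: function (value) { state.picked = value; }
  });
  return { state: state, child: child };
}
```

In React the callback would call `setState` and trigger a re-render, but the ownership structure is the same.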

The Game component

First off is the Game component. It is responsible for switching between the form and the board, and passing data from the form to the board.

```javascript
var Game = React.createClass({
  getInitialState: function() {
    return { playing: false, tiles: [] };
  },
  startGame: function(words) {
    this.setState({
      tiles: _.shuffle(words.concat(words)),
      playing: true,
      seed: Math.random()
    });
  },
  endGame: function() {
    this.setState({ playing: false });
  },
  render: function() {
    return (
      <div>
        <div className={this.state.playing ? "hidden" : "showing"}>
          <Wordform startGame={this.startGame} />
        </div>
        <div className={this.state.playing ? "showing" : "hidden"}>
          <Board endGame={this.endGame} tiles={this.state.tiles}
                 max={this.state.tiles.length / 2} key={this.state.seed} />
        </div>
      </div>
    );
  }
});
```

| Props | State | Sub components | Instance variables |
| --- | --- | --- | --- |
| — | playing, tiles | Wordform, Board | — |

The Game component has two main state variables (plus a seed, which we'll return to below):

• playing which controls which sub component to show or hide.
• tiles which contains the words passed to startGame, which will be triggered inside Wordform.
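The tile list is just each word duplicated once and shuffled -- that's all `_.shuffle(words.concat(words))` does. Without lodash, the same thing can be sketched in plain JavaScript (`makeTiles` is a hypothetical helper, not part of the game's code):

```javascript
// Build the tile array: every word appears exactly twice, in random order.
// Fisher-Yates shuffle replaces lodash's _.shuffle here.
function makeTiles(words) {
  var tiles = words.concat(words);
  for (var i = tiles.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = tiles[i];
    tiles[i] = tiles[j];
    tiles[j] = tmp;
  }
  return tiles;
}
```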

Game has two sub components:

• Wordform, which is passed the startGame method.
• Board, which is passed the endGame method and the tiles.

Note that Game always renders both the Board and the Wordform. This has to do with React component lifecycles. I first tried to do this:

```javascript
return (
  <div>
    {this.state.playing
      ? <Board endGame={this.endGame} tiles={this.state.tiles} />
      : <Wordform startGame={this.startGame} />}
  </div>
);
```

...which actually worked, but generated a React error message about an unmounted component. The official docs also state that instead of generating different components, we should generate them all and show/hide them as needed.

Also related to the life cycle of a component is the key property of the Board. Changing key ensures we have a new Board instance whenever we enter new words in the form, otherwise React will just repopulate the existing Board with new words. That means that previously flipped tiles will still be flipped, even though they now contain new words. Remove the key property and try it!

The Wordform component

This component displays a form for entering words to be used as tiles.

```javascript
var Wordform = React.createClass({
  getInitialState: function() {
    return { error: "" };
  },
  setError: function(msg) {
    this.setState({ error: msg });
    setTimeout((function() {
      this.setState({ error: "" });
    }).bind(this), 2000);
  },
  submitWords: function(e) {
    var node = this.refs["wordfield"].getDOMNode(),
        words = (node.value || "").trim().replace(/\W+/g, " ").split(" ");
    if (words.length <= 2) {
      this.setError("Enter at least 3 words!");
    } else if (words.length !== _.unique(words).length) {
      this.setError("Words should be unique!");
    } else if (_.filter(words, function(w) { return w.length > 8; }).length) {
      this.setError("Words should not be longer than 8 characters!");
    } else {
      this.props.startGame(words);
      node.value = "";
    }
    return false;
  },
  render: function() {
    return (
      <form onSubmit={this.submitWords}>
        <p>Enter words separated by spaces!</p>
        <input type='text' ref='wordfield' />
        <button type='submit'>Start!</button>
        <p className='error' ref='errormsg'>{this.state.error}</p>
      </form>
    );
  }
});
```

| Props | State | Sub components | Instance variables |
| --- | --- | --- | --- |
| startGame() | error | — | — |

The Wordform component validates the input and passes it back up to Game by calling the startGame method which it received as a property.

In order to collect the contents of the input field we use the refs instance property, with the same key (wordfield) as given to the ref property of the corresponding node in the render output.

Note how showing and hiding error messages is done by changing the error state variable, which triggers a rerender. It feels almost like we have a two-way data binding!
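The validation rules in submitWords can also be pulled out into a pure function, which makes them easy to test in isolation. This is a hypothetical refactoring, not the original code; it returns the error string, or null when the words are acceptable:

```javascript
// Same rules as submitWords: at least 3 words, all unique, none longer
// than 8 characters. Pure function, no DOM or lodash involved.
function validateWords(input) {
  var words = (input || "").trim().replace(/\W+/g, " ").split(" ");
  if (words.length <= 2) return "Enter at least 3 words!";
  var seen = {};
  for (var i = 0; i < words.length; i++) {
    if (seen[words[i]]) return "Words should be unique!";
    seen[words[i]] = true;
  }
  for (var j = 0; j < words.length; j++) {
    if (words[j].length > 8) return "Words should not be longer than 8 characters!";
  }
  return null;
}
```

Keeping validation pure like this means the component method would only need to wire the result to setError.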

The Board component

Here's the code for the Board component, which displays the game board:

```javascript
var Board = React.createClass({
  getInitialState: function() {
    return { found: 0, message: "choosetile" };
  },
  clickedTile: function(tile) {
    if (!this.wait) {
      if (!this.flippedtile) {
        // first tile of a potential pair
        this.flippedtile = tile;
        tile.reveal();
        this.setState({ message: "findmate" });
      } else {
        // second tile: compare it against the first
        this.wait = true;
        if (this.flippedtile.props.word === tile.props.word) {
          this.setState({ found: this.state.found + 1, message: "foundmate" });
          tile.succeed();
          this.flippedtile.succeed();
        } else {
          this.setState({ message: "wrong" });
          tile.fail();
          this.flippedtile.fail();
        }
        // pause for 2 seconds, then accept clicks again
        setTimeout((function() {
          this.wait = false;
          this.setState({ message: "choosetile" });
          delete this.flippedtile;
        }).bind(this), 2000);
      }
    }
  },
  render: function() {
    var tiles = this.props.tiles.map(function(b, n) {
      return <Tile word={b} key={n} clickedTile={this.clickedTile} />;
    }, this);
    return (
      <div>
        <button onClick={this.props.endGame}>End game</button>
        <Status found={this.state.found} max={this.props.tiles.length / 2}
                message={this.state.message} />
        {tiles}
      </div>
    );
  }
});
```

| Props | State | Sub components | Instance variables |
| --- | --- | --- | --- |
| tiles, endGame() | found, message | Status, Tile | wait, flippedtile |

The Board component was passed a tiles array and an endGame callback from its parent.

It has two state variables:

• found which counts how many pairs the player has found
• message which contains the id of the message to display to the player

When rendered it contains two different sub components:

• Status, which is passed found, max and message. This component deals with the instruction to the player above the tiles.
• Tile, which represents an individual tile. Each tile is passed a word and the clickedTile callback.

The clickedTile callback will be called from the individual tiles, with the tile instance as parameter. As you can see, this method contains the full logic for the actual game.

Note how this method uses the instance variables this.wait and this.flippedtile. These do NOT need to be state variables, as they don't affect the rendering! Only state which might affect what the component looks like needs to be stored using this.setState.

The Status component

This component renders the info row above the game board.

```javascript
var Status = React.createClass({
  render: function() {
    var found = this.props.found,
        max = this.props.max,
        texts = {
          choosetile: "Choose a tile!",
          findmate: "Now try to find the matching tile!",
          wrong: "Sorry, those didn't match!",
          foundmate: "Yey, they matched!",
          foundall: "You've found all " + max + " pairs! Well done!"
        };
    return <p>({found}/{max})&nbsp;&nbsp;{texts[this.props.message === "choosetile" && found === max ? "foundall" : this.props.message]}</p>;
  }
});
```

• Props: found, max, message
• State: none
• Sub components: none
• Instance variables: none

The Status component was passed found, max and message from its parent. It then bakes this together into a UI info row.

Note how even though the status row is constantly changing while playing, this is a totally static component. It contains no state variables, and all updates are controlled in the parent!

The Tile component

This component represents an individual tile.

```javascript
var Tile = React.createClass({
  getInitialState: function() {
    return {flipped: false};
  },
  catchClick: function() {
    if (!this.state.flipped) {
      this.props.clickedTile(this);
    }
  },
  reveal: function() {
    this.setState({flipped: true});
  },
  fail: function() {
    this.setState({flipped: true, wrong: true});
    setTimeout((function() {
      this.setState({flipped: false, wrong: false});
    }).bind(this), 2000);
  },
  succeed: function() {
    this.setState({flipped: true, correct: true});
  },
  render: function() {
    var classes = _.reduce(["flipped", "correct", "wrong"], function(m, c) {
      return m + (this.state[c] ? c + " " : "");
    }, "", this);
    return (
      <div className={'brick ' + (classes || '')} onClick={this.catchClick}>
        <div className="front">?</div>
        <div className="back">{this.props.word}</div>
      </div>
    );
  }
});
```

• Props: word, clickedTile()
• State: flipped, wrong, correct
• Sub components: none
• Instance variables: none

It was passed two properties from the parent: a word variable and a clickedTile callback.

The component has three state variables:

• flipped is a flag to show if the tile has been flipped up or not. While flipped it will not receive clicks.
• wrong is true if the tile was part of a failed match attempt.
• correct is true if the tile has been matched to a partner.

When clicked the component will call the clickedTile callback passing itself as a parameter. All game logic is in Board, as we saw previously.

Wrapping up

I'm totally in love with React! It took a while to grasp the thinking, for example the differentiation between state and props, and how one component's state can become a child's props. But once that mentality was in place, putting it all together was a breeze. I really appreciate not having to write any update or cleanup code (I'm looking at you, Backbone), delegating all that headache to React!

Passing callbacks to allow for upstream communication can feel a bit clunky, and I look forward to trying out the Flux approach instead. I also want to integrate a Router, and see how that plays along with it all.

### Say hello to x64 Assembly, part 1

30 August 2014 - 7:00pm
Introduction
There are many developers among us, and we write tons of code every day. Sometimes it's even not bad code :) Any of us can easily write the simplest code, something like this:

Any of us can understand what this C code does. But how does this code work at a low level? I think not all of us can answer that question, myself included. I thought I could write code in high-level programming languages like Haskell, Erlang, Go and so on, but I absolutely didn't know how it worked at a low level, after compilation. So I decided to take a few deep steps down, to assembly, and to describe my learning path along the way. I hope it will be interesting, and not only for me. About 5 or 6 years ago I already used assembly for writing simple programs; it was at university, with Turbo Assembler on DOS. Now I use a 64-bit x86 Linux system, and there must be a big difference between 64-bit Linux and 16-bit DOS. So let's start.

Preparation
Before we start, we must prepare a few things. As I wrote above, I use Ubuntu (Ubuntu 14.04.1 LTS, 64 bit), so my posts will target this operating system and architecture. Different CPUs support different instruction sets; I use an Intel Core i7 870 processor, and all code will be written for it. I will also use the NASM assembler. You can install it with:

sudo apt-get install nasm

Its version must be 2.0.0 or greater; I use NASM version 2.10.09, compiled on Dec 29 2013. Finally, you will need a text editor in which to write your assembly code. I use Emacs with nasm-mode.el for this. It is not mandatory; of course you can use your favourite text editor. If you use Emacs like me, you can download nasm-mode.el and configure your Emacs like this:
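The post's actual snippet isn't preserved in this copy; a minimal sketch of such a configuration might be the following (the load path is an assumption, adjust it to wherever you saved the file):

```elisp
;; hypothetical setup; adjust the path to wherever you saved nasm-mode.el
(add-to-list 'load-path "~/.emacs.d/lisp/")
(require 'nasm-mode)
(add-to-list 'auto-mode-alist '("\\.asm\\'" . nasm-mode))
```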

That's all we need for the moment. Other tools will be described in the next posts.

x64 syntax
Here I will not describe the full assembly syntax; we'll mention only those parts of the syntax that we will use in this post. Usually a NASM program is divided into sections. In this post we'll meet the following 2 sections:
• data section
• text section
The data section is used for declaring constants; this data does not change at runtime. You can declare various math or other constants here. The syntax for declaring a data section is:

section .data

The text section is for code. This section must begin with the declaration global _start, which tells the kernel where the program execution begins.

section .text
global _start
_start:

Comments start with the ; symbol. Every NASM source code line contains some combination of the following four fields:

[label:] instruction [operands] [; comment]

Fields which are in square brackets are optional. A basic NASM instruction consists of two parts: the first is the name of the instruction to be executed, and the second is the operands of this command. For example:

MOV COUNT, 48 ; Put value 48 in the COUNT variable

Hello world
Let's write our first program in NASM assembly. Of course it will be the traditional Hello world program. Here is the code:
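The listing itself didn't survive in this copy of the post; here is a reconstruction of a canonical NASM x64 Hello world that matches the line numbers referenced below (the exact string and its byte count are assumptions):

```nasm
section .data
    msg db "hello, world!"

section .text
    global _start
_start:
    mov rax, 1      ; sys_write syscall number
    mov rdi, 1      ; fd 1: standard output
    mov rsi, msg    ; pointer to the string
    mov rdx, 13     ; string length in bytes
    syscall
    mov rax, 60     ; sys_exit syscall number
    mov rdi, 0      ; exit code 0: success
    syscall
```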

Yes, it doesn't look like printf("Hello world"). Let's try to understand what it is and how it works. Take a look at lines 1-2: we define the data section and put the msg constant there, with the value Hello world. Now we can use this constant in our code. Next comes the declaration of the text section and the entry point of the program: the program will start executing from line 7. Now the most interesting part starts. We already know what the mov instruction is: it takes 2 operands and puts the value of the second into the first. But what are these rax, rdi and so on? As we can read on Wikipedia:

A central processing unit (CPU) is the hardware within a computer that carries out the instructions of a computer program by performing the basic arithmetical, logical, and input/output operations of the system.

Ok, so the CPU performs some operations, arithmetical and so on. But where can it get the data for these operations? The first answer is memory. However, reading data from and storing data into memory slows down the processor, as it involves the complicated process of sending a data request across the control bus. Thus the CPU has its own internal memory storage locations, called registers. So when we write mov rax, 1, it means: put 1 into the rax register. Now we know what rax, rdi, rbx and the rest are. But we also need to know when to use rax rather than rsi, and so on:
• rax - temporary register; when we call a syscall, rax must contain the syscall number
• rdx - used to pass the 3rd argument to functions
• rdi - used to pass the 1st argument to functions
• rsi - pointer used to pass the 2nd argument to functions
In other words, we just make a call to the sys_write syscall. Take a look at sys_write:
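The signature the author points at isn't shown in this copy; from the man pages, the userspace prototype of the underlying call is:

```c
/* userspace prototype of the write syscall, per the man pages */
ssize_t write(int fd, const void *buf, size_t count);
```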

It has 3 arguments:

• fd - file descriptor; can be 0, 1 or 2 for standard input, standard output and standard error
• buf - points to the character array holding the data to be written
• count - specifies the number of bytes to be written from the character array to the file
So we know that the sys_write syscall takes three arguments and has number one in the syscall table. Let's look again at our Hello world implementation. We put 1 into the rax register: this means we will use the sys_write system call. On the next line we put 1 into the rdi register: this will be the first argument of sys_write, 1 for standard output. Then we store the pointer to msg in the rsi register: this will be the second (buf) argument of sys_write. Then we pass the last (third) parameter, the length of the string, in rdx: it will be the third argument of sys_write. Now we have all the arguments of sys_write and we can call it with the syscall instruction on line 11. Ok, we printed the "Hello world" string; now we need to exit the program correctly. We pass 60 to the rax register: 60 is the number of the exit syscall. We also pass 0 to the rdi register: this will be the error code, so with 0 our program must exit successfully. That's all for "Hello world". Quite simple :) Now let's build our program. Say we have this code in a hello.asm file. Then we need to execute the following commands:

nasm -f elf64 -o hello.o hello.asm
ld -o hello hello.o

After that we will have an executable hello file, which we can run with ./hello to see the Hello world string in the terminal.

Conclusion
That was the first part, with one very simple example. In the next part we will look at some arithmetic. If you have any questions or suggestions, write me a comment.

You can find all the source code here.

### Frequentism and Bayesianism: A Practical Introduction

30 August 2014 - 7:00pm

One of the first things a scientist hears about statistics is that there are two different approaches: frequentism and Bayesianism. Despite their importance, many scientific researchers never have the opportunity to learn the distinctions between them and the different practical approaches that result. The purpose of this post is to synthesize the philosophical and pragmatic aspects of the frequentist and Bayesian approaches, so that scientists like myself might be better prepared to understand the types of data analysis people do.

I'll start by addressing the philosophical distinctions between the views, and from there move to discussion of how these ideas are applied in practice, with some Python code snippets demonstrating the difference between the approaches.

Frequentism vs. Bayesianism: a Philosophical Debate

Fundamentally, the disagreement between frequentists and Bayesians concerns the definition of probability.

For frequentists, probability only has meaning in terms of a limiting case of repeated measurements. That is, if I measure the photon flux $$F$$ from a given star (we'll assume for now that the star's flux does not vary with time), then measure it again, then again, and so on, each time I will get a slightly different answer due to the statistical error of my measuring device. In the limit of a large number of measurements, the frequency of any given value indicates the probability of measuring that value. For frequentists probabilities are fundamentally related to frequencies of events. This means, for example, that in a strict frequentist view, it is meaningless to talk about the probability of the true flux of the star: the true flux is (by definition) a single fixed value, and to talk about a frequency distribution for a fixed value is nonsense.

For Bayesians, the concept of probability is extended to cover degrees of certainty about statements. Say a Bayesian claims to measure the flux $$F$$ of a star with some probability $$P(F)$$: that probability can certainly be estimated from frequencies in the limit of a large number of repeated experiments, but this is not fundamental. The probability is a statement of my knowledge of what the measurement result will be. For Bayesians, probabilities are fundamentally related to our own knowledge about an event. This means, for example, that in a Bayesian view, we can meaningfully talk about the probability that the true flux of a star lies in a given range. That probability codifies our knowledge of the value based on prior information and/or available data.

The surprising thing is that this arguably subtle difference in philosophy leads, in practice, to vastly different approaches to the statistical analysis of data. Below I will give a few practical examples of the differences in approach, along with associated Python code to demonstrate the practical aspects of the resulting methods.

Frequentist and Bayesian Approaches in Practice: Counting Photons

Here we'll take a look at an extremely simple problem, and compare the frequentist and Bayesian approaches to solving it. There's necessarily a bit of mathematical formalism involved, but I won't go into too much depth or discuss too many of the subtleties. If you want to go deeper, you might consider — please excuse the shameless plug — taking a look at chapters 4-5 of our textbook.

The Problem: Simple Photon Counts

Imagine that we point our telescope to the sky, and observe the light coming from a single star. For the time being, we'll assume that the star's true flux is constant with time, i.e. that it has a fixed value $$F_{\rm true}$$ (we'll also ignore effects like sky noise and other sources of systematic error). We'll assume that we perform a series of $$N$$ measurements with our telescope, where the $$i^{\rm th}$$ measurement reports the observed photon flux $$F_i$$ and error $$e_i$$.

The question is, given this set of measurements $$D = \{F_i,e_i\}$$, what is our best estimate of the true flux $$F_{\rm true}$$?

(Gratuitous aside on measurement errors: We'll make the reasonable assumption that errors are Gaussian. In a Frequentist perspective, $$e_i$$ is the standard deviation of the results of a single measurement event in the limit of repetitions of that event. In the Bayesian perspective, $$e_i$$ is the standard deviation of the (Gaussian) probability distribution describing our knowledge of that particular measurement given its observed value)

Here we'll use Python to generate some toy data to demonstrate the two approaches to the problem. Because the measurements are number counts, a Poisson distribution is a good approximation to the measurement process:

In [1]:

```python
# Generating some simple photon count data
import numpy as np
from scipy import stats
np.random.seed(1)  # for repeatability

F_true = 1000  # true flux, say number of photons measured in 1 second
N = 50  # number of measurements
F = stats.poisson(F_true).rvs(N)  # N measurements of the flux
e = np.sqrt(F)  # errors on Poisson counts estimated via square root
```

Now let's make a simple visualization of the "measured" data:

In [2]:

```python
%matplotlib inline
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.errorbar(F, np.arange(N), xerr=e, fmt='ok', ecolor='gray', alpha=0.5)
ax.vlines([F_true], 0, N, linewidth=5, alpha=0.2)
ax.set_xlabel("Flux")
ax.set_ylabel("measurement number");
```

These measurements each have a different error $$e_i$$ which is estimated from Poisson statistics using the standard square-root rule. In this toy example we already know the true flux $$F_{\rm true}$$, but the question is this: given our measurements and errors, what is our best estimate of the true flux?

Let's take a look at the frequentist and Bayesian approaches to solving this.

Frequentist Approach to Simple Photon Counts

We'll start with the classical frequentist maximum likelihood approach. Given a single observation $$D_i = (F_i, e_i)$$, we can compute the probability distribution of the measurement given the true flux $$F_{\rm true}$$, under our assumption of Gaussian errors:

$P(D_i~|~F_{\rm true}) = \frac{1}{\sqrt{2\pi e_i^2}} \exp{\left[\frac{-(F_i - F_{\rm true})^2}{2 e_i^2}\right]}$

This should be read "the probability of $$D_i$$ given $$F_{\rm true}$$ equals ...". You should recognize this as a normal distribution with mean $$F_{\rm true}$$ and standard deviation $$e_i$$.

We construct the likelihood function by computing the product of the probabilities for each data point:

$\mathcal{L}(D~|~F_{\rm true}) = \prod_{i=1}^N P(D_i~|~F_{\rm true})$

Here $$D = \{D_i\}$$ represents the entire set of measurements. Because the value of the likelihood can become very small, it is often more convenient to instead compute the log-likelihood. Combining the previous two equations and computing the log, we have

$\log\mathcal{L} = -\frac{1}{2} \sum_{i=1}^N \left[ \log(2\pi e_i^2) + \frac{(F_i - F_{\rm true})^2}{e_i^2} \right]$

What we'd like to do is determine $$F_{\rm true}$$ such that the likelihood is maximized. For this simple problem, the maximization can be computed analytically (i.e. by setting $$d\log\mathcal{L}/dF_{\rm true} = 0$$). This results in the following observed estimate of $$F_{\rm true}$$:

$F_{\rm est} = \frac{\sum w_i F_i}{\sum w_i};~~w_i = 1/e_i^2$

Notice that in the special case of all errors $$e_i$$ being equal, this reduces to

$F_{\rm est} = \frac{1}{N}\sum_{i=1}^N F_i$

That is, in agreement with intuition, $$F_{\rm est}$$ is simply the mean of the observed data when errors are equal.

We can go further and ask what the error of our estimate is. In the frequentist approach, this can be accomplished by fitting a Gaussian approximation to the likelihood curve at maximum; in this simple case this can also be solved analytically. It can be shown that the standard deviation of this Gaussian approximation is:

$\sigma_{\rm est} = \left(\sum_{i=1}^N w_i \right)^{-1/2}$

These results are fairly simple calculations; let's evaluate them for our toy dataset:

In [3]:

```python
w = 1. / e ** 2
print("""
F_true = {0}
F_est = {1:.0f} +/- {2:.0f} (based on {3} measurements)
""".format(F_true, (w * F).sum() / w.sum(), w.sum() ** -0.5, N))
```

```
F_true = 1000
F_est = 998 +/- 4 (based on 50 measurements)
```

We find that for 50 measurements of the flux, our estimate has an error of about 0.4% and is consistent with the input value.

Bayesian Approach to Simple Photon Counts

The Bayesian approach, as you might expect, begins and ends with probabilities. It recognizes that what we fundamentally want to compute is our knowledge of the parameters in question, i.e. in this case,

$P(F_{\rm true}~|~D)$

Note that this formulation of the problem is fundamentally contrary to the frequentist philosophy, which says that probabilities have no meaning for model parameters like $$F_{\rm true}$$. Nevertheless, within the Bayesian philosophy this is perfectly acceptable.

To compute this result, Bayesians next apply Bayes' Theorem, a fundamental law of probability:

$P(F_{\rm true}~|~D) = \frac{P(D~|~F_{\rm true})~P(F_{\rm true})}{P(D)}$

Though Bayes' theorem is where Bayesians get their name, it is not this law itself that is controversial, but the Bayesian interpretation of probability implied by the term $$P(F_{\rm true}~|~D)$$.

Let's take a look at each of the terms in this expression:

• $$P(F_{\rm true}~|~D)$$: The posterior, or the probability of the model parameters given the data: this is the result we want to compute.
• $$P(D~|~F_{\rm true})$$: The likelihood, which is proportional to the $$\mathcal{L}(D~|~F_{\rm true})$$ in the frequentist approach, above.
• $$P(F_{\rm true})$$: The model prior, which encodes what we knew about the model prior to the application of the data $$D$$.
• $$P(D)$$: The data probability, which in practice amounts to simply a normalization term.

If we set the prior $$P(F_{\rm true}) \propto 1$$ (a flat prior), we find

$P(F_{\rm true}|D) \propto \mathcal{L}(D|F_{\rm true})$

and the Bayesian probability is maximized at precisely the same value as the frequentist result! So despite the philosophical differences, we see that (for this simple problem at least) the Bayesian and frequentist point estimates are equivalent.
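As a quick sanity check (not from the original post), we can verify this equivalence numerically: with a flat prior, a brute-force grid evaluation of the posterior peaks at the same value as the frequentist weighted mean. The data generation here mirrors the toy setup above.

```python
import numpy as np

np.random.seed(1)
F_true, N = 1000, 50
F = np.random.poisson(F_true, N).astype(float)  # simulated photon counts
e = np.sqrt(F)                                  # square-root errors, as above

# Frequentist: analytic maximum likelihood (inverse-variance weighted mean)
w = 1.0 / e ** 2
F_freq = (w * F).sum() / w.sum()

# Bayesian with a flat prior: evaluate the log-posterior on a dense grid
grid = np.linspace(900, 1100, 20001)  # spacing of 0.01
log_post = -0.5 * ((F[:, None] - grid) ** 2 / e[:, None] ** 2).sum(axis=0)
F_bayes = grid[np.argmax(log_post)]

print(F_freq, F_bayes)  # the two estimates agree to within the grid spacing
```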

You'll notice that I glossed over something here: the prior, $$P(F_{\rm true})$$. The prior allows inclusion of other information into the computation, which becomes very useful in cases where multiple measurement strategies are being combined to constrain a single model (as is the case in, e.g. cosmological parameter estimation). The necessity to specify a prior, however, is one of the more controversial pieces of Bayesian analysis.

A frequentist will point out that the prior is problematic when no true prior information is available. Though it might seem straightforward to use a noninformative prior like the flat prior mentioned above, there are some surprising subtleties involved. It turns out that in many situations, a truly noninformative prior does not exist! Frequentists point out that the subjective choice of a prior, which necessarily biases the result, has no place in statistical data analysis.

A Bayesian would counter that frequentism doesn't solve this problem, but simply skirts the question. Frequentism can often be viewed as simply a special case of the Bayesian approach for some (implicit) choice of the prior: a Bayesian would say that it's better to make this implicit choice explicit, even if the choice might include some subjectivity.

Photon Counts: the Bayesian approach

Leaving these philosophical debates aside for the time being, let's address how Bayesian results are generally computed in practice. For a one parameter problem like the one considered here, it's as simple as computing the posterior probability $$P(F_{\rm true}~|~D)$$ as a function of $$F_{\rm true}$$: this is the distribution reflecting our knowledge of the parameter $$F_{\rm true}$$. But as the dimension of the model grows, this direct approach becomes increasingly intractable. For this reason, Bayesian calculations often depend on sampling methods such as Markov Chain Monte Carlo (MCMC).

I won't go into the details of the theory of MCMC here. Instead I'll show a practical example of applying an MCMC approach using Dan Foreman-Mackey's excellent emcee package. Keep in mind here that the goal is to generate a set of points drawn from the posterior probability distribution, and to use those points to determine the answer we seek.

To perform this MCMC, we start by defining Python functions for the prior $$P(F_{\rm true})$$, the likelihood $$P(D~|~F_{\rm true})$$, and the posterior $$P(F_{\rm true}~|~D)$$, noting that none of these need be properly normalized. Our model here is one-dimensional, but to handle multi-dimensional models we'll define the model in terms of an array of parameters $$\theta$$, which in this case is $$\theta = [F_{\rm true}]$$:

In [4]:

```python
def log_prior(theta):
    return 1  # flat prior

def log_likelihood(theta, F, e):
    return -0.5 * np.sum(np.log(2 * np.pi * e ** 2)
                         + (F - theta[0]) ** 2 / e ** 2)

def log_posterior(theta, F, e):
    return log_prior(theta) + log_likelihood(theta, F, e)
```

Now we set up the problem, including generating some random starting guesses for the multiple chains of points.

In [5]:

```python
ndim = 1  # number of parameters in the model
nwalkers = 50  # number of MCMC walkers
nburn = 1000  # "burn-in" period to let chains stabilize
nsteps = 2000  # number of MCMC steps to take

# we'll start at random locations between 0 and 2000
starting_guesses = 2000 * np.random.rand(nwalkers, ndim)

import emcee
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[F, e])
sampler.run_mcmc(starting_guesses, nsteps)
sample = sampler.chain  # shape = (nwalkers, nsteps, ndim)
sample = sampler.chain[:, nburn:, :].ravel()  # discard burn-in points
```

If this all worked correctly, the array sample should contain a series of 50000 points drawn from the posterior. Let's plot them and check:

In [6]:

```python
# plot a histogram of the sample
plt.hist(sample, bins=50, histtype="stepfilled", alpha=0.3, normed=True)

# plot a best-fit Gaussian
F_fit = np.linspace(975, 1025)
pdf = stats.norm(np.mean(sample), np.std(sample)).pdf(F_fit)
plt.plot(F_fit, pdf, '-k')
plt.xlabel("F")
plt.ylabel("P(F)")
```


We end up with a sample of points drawn from the (normal) posterior distribution. The mean and standard deviation of this posterior correspond to the frequentist maximum likelihood estimate and its error computed above:

In [7]:

print(""" F_true = {0} F_est = {1:.0f} +/- {2:.0f} (based on {3} measurements) """.format(F_true, np.mean(sample), np.std(sample), N)) F_true = 1000 F_est = 998 +/- 4 (based on 50 measurements)

We see that as expected for this simple problem, the Bayesian approach yields the same result as the frequentist approach!

Now, you might come away with the impression that the Bayesian method is unnecessarily complicated, and in this case it certainly is. Using an Affine Invariant Markov Chain Monte Carlo Ensemble sampler to characterize a one-dimensional normal distribution is a bit like using the Death Star to destroy a beach ball, but I did this here because it demonstrates an approach that can scale to complicated posteriors in many, many dimensions, and can provide nice results in more complicated situations where an analytic likelihood approach is not possible.

As a side note, you might also have noticed one little sleight of hand: at the end, we use a frequentist approach to characterize our posterior samples! When we computed the sample mean and standard deviation above, we were employing a distinctly frequentist technique to characterize the posterior distribution. The pure Bayesian result for a problem like this would be to report the posterior distribution itself (i.e. its representative sample), and leave it at that. That is, in pure Bayesianism the answer to a question is not a single number with error bars; the answer is the posterior distribution over the model parameters!

Adding a Dimension: Exploring a more sophisticated model

Let's briefly take a look at a more complicated situation, and compare the frequentist and Bayesian results yet again. Above we assumed that the star was static: now let's assume that we're looking at an object which we suspect has some stochastic variation — that is, it varies with time, but in an unpredictable way (a Quasar is a good example of such an object).

We'll propose a simple 2-parameter Gaussian model for this object: $$\theta = [\mu, \sigma]$$ where $$\mu$$ is the mean value, and $$\sigma$$ is the standard deviation of the variability intrinsic to the object. Thus our model for the probability of the true flux at the time of each observation looks like this:

$F_{\rm true} \sim \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[\frac{-(F - \mu)^2}{2\sigma^2}\right]$

Now, we'll again consider $$N$$ observations each with their own error. We can generate them this way:

In [8]:

```python
np.random.seed(42)  # for reproducibility
N = 100  # we'll use more samples for the more complicated model
mu_true, sigma_true = 1000, 15  # stochastic flux model
F_true = stats.norm(mu_true, sigma_true).rvs(N)  # (unknown) true flux
F = stats.poisson(F_true).rvs()  # observed flux: true flux plus Poisson errors
e = np.sqrt(F)  # root-N error, as above
```

Varying Photon Counts: The Frequentist Approach

The resulting likelihood is the convolution of the intrinsic distribution with the error distribution, so we have

$\mathcal{L}(D~|~\theta) = \prod_{i=1}^N \frac{1}{\sqrt{2\pi(\sigma^2 + e_i^2)}}\exp\left[\frac{-(F_i - \mu)^2}{2(\sigma^2 + e_i^2)}\right]$

Analogously to above, we can analytically maximize this likelihood to find the best estimate for $$\mu$$:

$\mu_{est} = \frac{\sum w_i F_i}{\sum w_i};~~w_i = \frac{1}{\sigma^2 + e_i^2}$

And here we have a problem: the optimal value of $$\mu$$ depends on the optimal value of $$\sigma$$. The results are correlated, so we can no longer use straightforward analytic methods to arrive at the frequentist result.

Nevertheless, we can use numerical optimization techniques to determine the maximum likelihood value. Here we'll use the optimization routines available within Scipy's optimize submodule:

In [9]:

```python
def log_likelihood(theta, F, e):
    return -0.5 * np.sum(np.log(2 * np.pi * (theta[1] ** 2 + e ** 2))
                         + (F - theta[0]) ** 2 / (theta[1] ** 2 + e ** 2))

# maximize likelihood <--> minimize negative likelihood
def neg_log_likelihood(theta, F, e):
    return -log_likelihood(theta, F, e)

from scipy import optimize
theta_guess = [900, 5]
theta_est = optimize.fmin(neg_log_likelihood, theta_guess, args=(F, e))
print("""
Maximum likelihood estimate for {0} data points:
    mu={theta[0]:.0f}, sigma={theta[1]:.0f}
""".format(N, theta=theta_est))
```

```
Optimization terminated successfully.
         Current function value: 502.839505
         Iterations: 58
         Function evaluations: 114

Maximum likelihood estimate for 100 data points:
    mu=999, sigma=19
```

This maximum likelihood value gives our best estimate of the parameters $$\mu$$ and $$\sigma$$ governing our model of the source. But this is only half the answer: we need to determine how confident we are in this answer, that is, we need to compute the error bars on $$\mu$$ and $$\sigma$$.

There are several approaches to determining errors in a frequentist paradigm. We could, as above, fit a normal approximation to the maximum likelihood and report the covariance matrix (here we'd have to do this numerically rather than analytically). Alternatively, we can compute statistics like $$\chi^2$$ and $$\chi^2_{\rm dof}$$ and use standard tests to determine confidence limits, which also depends on strong assumptions about the Gaussianity of the likelihood. We might alternatively use randomized sampling approaches such as Jackknife or Bootstrap, which maximize the likelihood for randomized samples of the input data in order to explore the degree of certainty in the result.

All of these would be valid techniques to use, but each comes with its own assumptions and subtleties. Here, for simplicity, we'll use the basic bootstrap resampler found in the astroML package:

In [10]:

```python
from astroML.resample import bootstrap

def fit_samples(sample):
    # sample is an array of size [n_bootstraps, n_samples]
    # compute the maximum likelihood for each bootstrap.
    return np.array([optimize.fmin(neg_log_likelihood, theta_guess,
                                   args=(F, np.sqrt(F)), disp=0)
                     for F in sample])

samples = bootstrap(F, 1000, fit_samples)  # 1000 bootstrap resamplings
```

Now in a similar manner to what we did above for the MCMC Bayesian posterior, we'll compute the sample mean and standard deviation to determine the errors on the parameters.

In [11]:

```python
mu_samp = samples[:, 0]
sig_samp = abs(samples[:, 1])

print " mu = {0:.0f} +/- {1:.0f}".format(mu_samp.mean(), mu_samp.std())
print " sigma = {0:.0f} +/- {1:.0f}".format(sig_samp.mean(), sig_samp.std())
```

```
 mu = 999 +/- 4
 sigma = 18 +/- 5
```

I should note that there is a huge literature on the details of bootstrap resampling, and there are definitely some subtleties of the approach that I am glossing over here. One obvious piece is that there is potential for errors to be correlated or non-Gaussian, neither of which is reflected by simply finding the mean and standard deviation of each model parameter. Nevertheless, I trust that this gives the basic idea of the frequentist approach to this problem.

Varying Photon Counts: The Bayesian Approach

The Bayesian approach to this problem is almost exactly the same as it was in the previous problem, and we can set it up by slightly modifying the above code.

In [12]:

```python
def log_prior(theta):
    # sigma needs to be positive.
    if theta[1] <= 0:
        return -np.inf
    else:
        return 0

def log_posterior(theta, F, e):
    return log_prior(theta) + log_likelihood(theta, F, e)

# same setup as above:
ndim, nwalkers = 2, 50
nsteps, nburn = 2000, 1000

starting_guesses = np.random.rand(nwalkers, ndim)
starting_guesses[:, 0] *= 2000  # start mu between 0 and 2000
starting_guesses[:, 1] *= 20    # start sigma between 0 and 20

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[F, e])
sampler.run_mcmc(starting_guesses, nsteps)

sample = sampler.chain  # shape = (nwalkers, nsteps, ndim)
sample = sampler.chain[:, nburn:, :].reshape(-1, 2)
```

Now that we have the samples, we'll use a convenience routine from astroML to plot the traces and the contours representing one and two standard deviations:

In [13]:

```python
from astroML.plotting import plot_mcmc

fig = plt.figure()
ax = plot_mcmc(sample.T, fig=fig, labels=[r'$\mu$', r'$\sigma$'], colors='k')
ax[0].plot(sample[:, 0], sample[:, 1], ',k', alpha=0.1)
ax[0].plot([mu_true], [sigma_true], 'o', color='red', ms=10);
```

The red dot indicates ground truth (from our problem setup), and the contours indicate one and two standard deviations (68% and 95% confidence levels). In other words, based on this analysis we are 68% confident that the model lies within the inner contour, and 95% confident that the model lies within the outer contour.

Note here that $$\sigma = 0$$ is consistent with our data within two standard deviations: that is, depending on the certainty threshold you're interested in, our data are not enough to confidently rule out the possibility of a non-varying source!

The other thing to notice is that this posterior is definitely not Gaussian: this can be seen by the lack of symmetry in the vertical direction. That means that the Gaussian approximation used within the frequentist approach may not reflect the true uncertainties in the result. This isn't an issue with frequentism itself (i.e. there are certainly ways to account for non-Gaussianity within the frequentist paradigm), but the vast majority of commonly applied frequentist techniques make the explicit or implicit assumption of Gaussianity of the distribution. Bayesian approaches generally don't require such assumptions.

(Side note on priors: there are good arguments that a flat prior on $$\sigma$$ subtly biases the calculation in this case: i.e. a flat prior is not necessarily non-informative in the case of scale factors like $$\sigma$$. There are interesting arguments to be made that the Jeffreys prior would be more applicable. Here I believe the Jeffreys prior is not suitable, because $$\sigma$$ is not a true scale factor (i.e. the Gaussian has contributions from $$e_i$$ as well). On this question, I'll have to defer to others who have more expertise. Note that subtle (some would say subjective) questions like this are among the features of Bayesian analysis that frequentists take issue with.)
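(For reference, the Jeffreys prior mentioned above has a standard closed form for a pure scale parameter, though, as noted, whether it applies to this particular problem is exactly the debate. For a scale parameter $$\sigma$$ it is

$$p(\sigma) \propto \frac{1}{\sigma},$$

which is equivalent to a flat prior on $$\ln\sigma$$ rather than on $$\sigma$$ itself: rescaling the units of $$\sigma$$ leaves this prior unchanged, which is the sense in which it is "non-informative" for scale factors.)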

I hope I've been able to convey through this post how philosophical differences underlying frequentism and Bayesianism lead to fundamentally different approaches to simple problems, which nonetheless can often yield similar or even identical results.

To summarize the differences:

• Frequentism considers probabilities to be related to frequencies of real or hypothetical events.
• Bayesianism considers probabilities to measure degrees of knowledge.
• Frequentist analyses generally proceed through use of point estimates and maximum likelihood approaches.
• Bayesian analyses generally compute the posterior either directly or through some version of MCMC sampling.

In simple problems, the two approaches can yield similar results. As data and models grow in complexity, however, the two approaches can diverge greatly. In a followup post, I plan to show an example or two of these more complicated situations. Stay tuned!

Update: see the followup post: Frequentism and Bayesianism II: When Results Differ

This post was written entirely in the IPython notebook. You can download this notebook, or see a static view here.

### States with Medical Marijuana Have Fewer Painkiller Deaths

30 August 2014 - 7:00am


smithsonian.com
August 29, 2014 12:10PM

In the U.S., 23 states and the District of Columbia allow their residents to legally use medical marijuana. And, according to a new study, death certificates reveal that states with a medical marijuana law have lower rates of deaths caused by narcotic painkiller overdoses than other states.

Only California, Oregon and Washington had laws effective prior to 1999, the point when the researchers began their analysis. Ten other states put laws on their books between 1999 and 2010. The researchers analyzed each state in the years after a medical cannabis law came into effect.

Overall, the states with these laws had a nearly 25 percent reduction in opioid overdose deaths. The study was published this week in JAMA Internal Medicine.

The findings could help address the nation’s growing problem with opioid overdoses—about 60 percent of deaths are people who have prescriptions for the medication. However, the study authors caution that their analysis doesn’t account for health attitudes in different states that might explain the association. They did explore whether policies addressing painkiller abuse had any effect on the decline in deaths and didn’t find a link.

Previous studies hint at why marijuana use might help reduce reliance on opioid painkillers. Many drugs with abuse potential such as nicotine and opiates, as well as marijuana, pump up the brain’s dopamine levels, which can induce feelings of euphoria. The biological reasons that people might use marijuana instead of opioids aren’t exactly clear, because marijuana doesn’t replace the pain relief of opiates.  However, it does seem to distract from the pain by making it less bothersome.


Marissa Fessenden is a freelance science writer and artist who appreciates small things and wide open spaces.


30 August 2014 - 7:00am

Ever since the dawn of the space age, a quixotic subculture of physicists, engineers, and science-fiction writers have devoted their lunch hours and weekends to drawing up plans for starships, propelled by the imperative for humans to crawl out of our Earthly cradle. For most of that time, they focused on the physics. Can we really fly to the stars? Many initially didn’t think so, but now we know it’s possible. Today, the question is: Will we?

Truth is, we already are flying to the stars, without really meaning to. The twin Voyager space probes launched in 1977 have endured long past their original goal of touring the outer planets and have reached the boundaries of the sun’s realm. Voyager 1 is 124 astronomical units (AU) away from the sun—that is, 124 times farther out than Earth—and clocking 3.6 AU per year. Whether it has already exited the solar system depends on your definition of “solar system,” but it is certainly way beyond the planets. Its instruments have witnessed the energetic particles and magnetic fields of the sun give way to those of interstellar space—finding, among other things, what Ralph McNutt, a Voyager team member and planetary scientist, describes as “weird plasma structures” begging to be explored. The mysteries encountered by the Voyagers compel scientists to embark on follow-up missions that venture even deeper into the cosmic woods—out to 200 AU and beyond. But what kind of spacecraft can get us there?


Going Small: Ion Drives

NASA’s Dawn probe to the asteroid belt has demonstrated one leading propulsion system: the ion drive. An ion drive is like a gun that fires atoms rather than bullets; the ship moves forward on the recoil. The system includes a tank of propellant, typically xenon, and a power source, such as solar panels or plutonium batteries. The engine first strips propellant atoms of their outermost electrons, giving them a positive electric charge. Then, on the principle that opposites attract, a negatively charged grid draws the atoms toward the back of the ship. They overshoot the grid and stream off into space at speeds 10 times faster than chemical rocket exhaust (and 100 times faster than a bullet). For a post-Voyager probe, ion engines would fire for 15 years or so and hurl the craft to several times the Voyagers’ speed, so that it could reach a couple of hundred AU before the people who built it died.
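The reason exhaust speed matters so much is the rocket equation: for a fixed fuel fraction, achievable speed scales directly with exhaust velocity. A quick sketch with illustrative numbers (a ~4.5 km/s chemical exhaust, a tenfold faster ion exhaust, and an arbitrary 5:1 wet-to-dry mass ratio, none of them figures from an actual mission):

```python
import math

def delta_v(exhaust_speed_km_s, mass_ratio):
    # Tsiolkovsky rocket equation: delta-v = v_e * ln(m0 / mf)
    return exhaust_speed_km_s * math.log(mass_ratio)

chemical = delta_v(4.5, 5.0)   # typical chemical-rocket exhaust
ion = delta_v(45.0, 5.0)       # ion exhaust, roughly 10x faster

print("chemical: %.1f km/s, ion: %.1f km/s" % (chemical, ion))
```

Ten times the exhaust speed buys ten times the final speed; the catch, as the 15-year burn suggests, is that ion engines deliver it at minuscule thrust.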

Star flight enthusiasts are also pondering ion drives for a truly interstellar mission, aiming for Alpha Centauri, the nearest star system some 300,000 AU away. Icarus Interstellar, a nonprofit foundation with a mission to achieve interstellar travel by the end of the century, has dreamed up Project Tin Tin—a tiny probe weighing less than 10 kilograms, equipped with a miniaturized high-performance ion drive. The trip would still take tens of thousands of years, but the group sees Tin Tin less as a realistic science mission than as a technology demonstration.


Going Light: Solar Sails

A solar sail, such as the one used by the Japanese IKAROS probe to Venus, does away with propellant and engines altogether. It exploits the physics of light. Like anything else in motion, a light wave has momentum and pushes on whatever surface it strikes. The force is feeble, but becomes noticeable if you have a large enough surface, a low mass, and a lot of time. Sunlight can accelerate a large sheet of lightweight material, such as Kapton, to an impressive speed. To reach the velocity needed to escape the solar system, the craft would first swoop toward the sun, as close as it dared—inside the orbit of Mercury—to fill its sails with lusty sunlight.
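To put the "feeble" force in rough numbers: a perfectly reflective sail intercepting sunlight feels a force $$F = 2IA/c$$. The sketch below assumes the standard solar flux at 1 AU (about 1361 W/m²), a 100 m by 100 m sail, and a made-up 10 kg total craft mass:

```python
c = 2.998e8           # speed of light, m/s
intensity = 1361.0    # solar flux at 1 AU, W/m^2
area = 100.0 * 100.0  # 100 m x 100 m sail, in m^2
mass = 10.0           # assumed total craft mass, kg

# Perfect reflection doubles the momentum transfer: F = 2 * I * A / c
force = 2 * intensity * area / c
accel = force / mass

print("force: %.3f N" % force)  # under a tenth of a newton
print("speed gained after 30 days: %.0f m/s" % (accel * 30 * 86400))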

Such sail craft could conceivably make the crossing to Alpha Centauri in a thousand years. Sails are limited in speed by how close they can get to the sun, which, in turn, is limited by the sail material’s durability. Gregory Matloff, a City University of New York professor and longtime interstellar travel proponent, says the most promising potential material is graphene—ultrathin layers of carbon graphite.

Like anything else in motion, a light wave has momentum and pushes on whatever surface it strikes.

A laser or microwave beam could provide an even more muscular push. In the mid-1980s, the doyen of interstellar travel, Robert Forward, suggested piggybacking on an idea popular at the time: solar-power satellites, which would collect solar energy in orbit and beam it down to Earth by means of microwaves. Before commencing operation, an orbital power station could pivot and beam its power up rather than down. A 10-gigawatt station could accelerate an ultralight sail—a mere 16 grams—to one-fifth the speed of light within a week. Two decades later, we’d start seeing live video from Alpha Centauri.

This “Starwisp” scheme has its dubious features—it would require an enormous lens, and the sail is so fragile that the beam would be as likely to fry it as to push it—but it showed that we could reach the stars within a human lifetime.

Going Big: Nuclear Rockets

Sails may be able to whisk tiny probes to the stars, but they can’t handle a human mission; you’d need a microwave beam consuming thousands of times more power than the entire world currently generates. The best-developed scheme for human space travel is nuclear pulse propulsion, which the government-funded Project Orion worked on during the 1950s and ’60s.

When you first hear about it, the scheme sounds unhinged. Load your starship with 300,000 nuclear bombs, detonate one every three seconds, and ride the blast waves. Though extreme, it works on the same basic principle as any other rocket—namely, recoil. Instead of shooting atoms out the back of the rocket, the nuclear-pulse system shoots blobs of plasma, such as fireballs of tungsten.

You pack a plug of tungsten along with a nuclear weapon into a metal capsule, fire the capsule out the back of the ship, and set it off a short distance away. In the vacuum of space, the explosion does less damage than you might expect. Vaporized tungsten hurtles toward the ship, rebounds off a thick metal plate at the ship’s rear, and shoots into space, while the ship recoils, thereby moving forward. Giant shock absorbers lessen the jolt on the crew quarters. Passengers playing 3-D chess, or doing whatever else interstellar passengers do, would feel rhythmic thuds like kids jumping rope in the apartment upstairs.

Load your starship with 300,000 nuclear bombs, detonate one every three seconds, and ride the blast waves.

The ship might reach a tenth the speed of light. If for some reason—solar explosion, alien invasion—we really had to get off the planet fast and we didn’t care about nuking the launch pad, this would be the way to go. We already have everything we need for it. “Today the closest technology we have would be nuclear pulse,” Matloff says. If anything, most people would be happy to load up all our nukes on a ship and be rid of them.

Ideally, the bomb blasts would be replaced with controlled nuclear fusion reactions. That was the approach suggested by Project Daedalus, a ’70s-era effort to design a fully equipped robotic interstellar vessel. The biggest problem was that for every ton of payload, the ship would have to carry 100 tons of fuel. Such a behemoth would be the size of a battleship, with a length of 200 meters and a mass of 50,000 tons.

“It was just a huge, monstrous machine,” says Kelvin Long, an English aerospace engineer and co-founder of Project Icarus, a modern effort to update the design. “But what’s happened since then, of course, is microelectronics, miniaturization of technology, nanotechnology. All these developments have led to a rethinking. Do you really need these massive structures?” He says Project Icarus plans to unveil the new design in London this October.

Interstellar designers have come up with all sorts of ways to shrink the fuel tank. For instance, the ship could use electric or magnetic fields to scoop up hydrogen gas from interstellar space. The hydrogen would then be fed into a fusion reactor. The faster the ship were to go, the faster it would scoop—a virtuous cycle that, if maintained, would propel the ship to nearly the speed of light. Unfortunately, the scooping system would also produce drag forces, slowing the ship, and the headwind of particles would cook the crew with radiation. Also, pure-hydrogen fusion is inefficient. A fusion-powered ship probably couldn’t avoid hauling some fuel from Earth.

Going Dark: Scavenging Exotic Matter

Instead of scavenging hydrogen gas, Jia Liu, a physics graduate student at New York University, has proposed foraging for dark matter, the invisible exotic material that astronomers think makes up the bulk of the galaxy. Particle physicists hypothesize that dark matter consists of a type of particle called the neutralino, which has a useful property: When two neutralinos collide, they annihilate each other in a blaze of gamma rays. Such reactions could drive a ship forward. Like the hydrogen scooper, a dark-matter ship could approach the speed of light. The problem, though, is that dark matter is dark—meaning it doesn’t respond to electromagnetic forces. Physicists know of no way to collect it, let alone channel it to produce rocket thrust.

If engineers somehow overcame these problems and built a near-light-speed ship, not just Alpha Centauri but the entire galaxy would come within range. In the 1960s astronomer Carl Sagan calculated that, if you could attain a modest rate of acceleration—about the same rate a sports car uses—and maintain it long enough, you’d get so close to the speed of light that you’d cross the galaxy in just a couple of decades of shipboard time. As a bonus, that rate would provide a comfortable level of artificial gravity.
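Sagan's estimate can be checked against the standard relativity formulas for motion under constant proper acceleration. A sketch assuming a steady 1 g (roughly the "sports car" rate), accelerating to the midpoint and decelerating thereafter, across a ~100,000 light-year galaxy:

```python
import math

g = 1.032  # 1 g expressed in ly/yr^2, in units where c = 1

def ship_years(distance_ly):
    # Proper time to cover each half from rest at constant proper
    # acceleration: tau = (c/a) * acosh(1 + a*d/c^2), then double it.
    half = distance_ly / 2.0
    return 2.0 / g * math.acosh(1.0 + g * half)

def earth_years(distance_ly):
    # Coordinate time for each half: t = (c/a) * sinh(a*tau_half/c)
    half = distance_ly / 2.0
    tau_half = math.acosh(1.0 + g * half) / g
    return 2.0 / g * math.sinh(g * tau_half)

print("ship time:  %.1f years" % ship_years(1e5))
print("earth time: %.0f years" % earth_years(1e5))
```

A couple of decades pass on board, while roughly the light-travel time elapses at home; a round trip therefore returns you some 200,000 years later.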

On the downside, hundreds of thousands of years would pass on Earth in the meantime. By the time you got back, your entire civilization might have gone ape. From one perspective, though, this is a good thing. The tricks relativity plays with time would solve the eternal problem of too-slow computers. If you want to do some eons-long calculation, go off and explore some distant star system and the result will be ready for you when you return. The starship crews of the future may not be voyaging for survival, glory, or conquest. They may be solving puzzles.

Going Warp: Bending Time and Space

With a ship moving at a tenth the speed of light, humans could migrate to the nearest stars within a lifetime, but crossing the galaxy would remain a journey of a million years, and each star system would still be mostly isolated. To create a galactic version of the global village, bound together by planes and phones, you’d need to travel faster than light.

Contrary to popular belief, Einstein’s theory of relativity does not rule that out completely. According to the theory, space and time are elastic; what we perceive as the force of gravity is in fact the warping of space and time. In principle, you could warp space so severely that you’d shorten the distance you want to cross, like folding a rug to bring the two sides closer together. If so, you could cross any distance instantaneously. You wouldn’t even notice the acceleration, because the field would zero out g-forces inside the ship. The view from the ship windows would be stunning. Stars would change in color and shift toward the axis of motion.

You could warp space so severely that you’d shorten the distance you want to cross, like folding a rug to bring the two sides closer together.

It seems almost mean-spirited to point out how far beyond our current technology this idea is. Warp drive would require a type of material that exerts a gravitational push rather than a gravitational pull. Such material contains a negative amount of energy—literally less than nothing, as if you had a mass of –50 kilograms. Physicists, inventive types that they are, have imagined ways to create such energy, but even they throw up their hands at the amount of negative energy a starship would need: a few stars’ worth. What is more, the ship would be impossible to steer, since control signals, which are restricted to the speed of light, wouldn’t be fast enough to get from the ship’s bridge to the propulsion system located on the vessel’s perimeter. (Equipment within the ship, however, would function just fine.)

When it comes to starships, it’s best not to get hung up on details. By the time humanity gets to the point it might actually build one, our very notions of travel may well have changed. “Do we need to send full humans?” asks Long. “Maybe we just need to send embryos, or maybe in the future, you could completely download yourself into a computer, and you can remanufacture yourself at the other end through something similar to 3-D printing.” Today, a starship seems like the height of futuristic thinking. Future generations might find it quaint.

George Musser is a writer on physics and cosmology and author of The Complete Idiot’s Guide To String Theory (Alpha, 2008). He was a senior editor at Scientific American for 14 years and has won honors such as the American Institute of Physics Science Writing Award.

### Intel Unleashes Its First 8-Core Desktop Processor

30 August 2014 - 7:00am

Uncompromised Client Processing Power with 16 Threads and DDR4 Memory Support for Content Creation, Gaming and Multitasking

NEWS HIGHLIGHTS

• The eight-core, 16-thread Intel® Core™ processor Extreme Edition is Intel's first eight-core client processor.
• Combined with the new Intel® X99 Chipset, this is the first Intel desktop platform to support DDR4 memory.
• Additional six-core unlocked enthusiast desktop SKUs also announced.

Intel® Core™ i7-5960X processor extreme edition

Intel® Core™ i7-5960X processor extreme edition die

PENNY ARCADE EXPO (PAX), Seattle, Aug. 29, 2014 – Intel Corporation unveiled its first eight-core desktop processor, the Intel® Core™ i7-5960X processor Extreme Edition, formerly code-named "Haswell-E," targeted at power users who demand the most from their PCs.

For enthusiasts, gamers and content creators craving the ultimate in performance, Intel's first client processor supporting 16 computing threads and new DDR4 memory will enable some of the fastest desktop systems ever seen. The new enhanced Intel® X99 Chipset and robust overclocking capabilities will allow enthusiasts to tune their systems for maximum performance.

"We're thrilled to unveil the next phase in our 'reinvention of the desktop' we outlined earlier this year," said Lisa Graff, vice president and general manager, Intel's Desktop Client Platform Group. "This product family is aimed squarely at those enthusiasts who push their systems further than anyone, and we're offering the speed, cores, overclocking and platform capabilities they have asked us for."

At PAX* Intel is showing what these new, high-performance powerhouses can do. Intel's booth features systems from several manufacturers using this new processor family running popular games such as Lucky's Tale*, Superhot* with the Oculus Rift*, Dark Souls 2* and Titan Fall*. Additionally, Intel will demonstrate gaming while streaming via Twitch* on a new Intel Core processor Extreme Edition-based system. Live game streamers will enjoy having high-quality high-definition streams with eight cores for lag-less gameplay.

Intel has been working with industry partners to take advantage of this new platform. Key OEMs, memory vendors, motherboard vendors and graphics partners will help grow this enthusiast ecosystem.

"Alienware always prioritizes performance first and foremost in every system that we develop and we are constantly looking for cutting-edge technologies that can give our customers a competitive edge," said Frank Azor, Alienware* general manager. "It was an easy decision to work with Intel to bring its new eight-core extreme processor to our new flagship gaming desktop, the Alienware Area-51. Using new overclocking and monitoring features in Alienware Command Center 4.0, we've been able to really push the processors to the fullest extent and are seeing impressive overclocking headroom. This new Intel processor lineup is the perfect choice for gamers who demand the absolute best performance from their systems."

"I'm ridiculously excited about Intel's new platform," said Kelt Reeves, president of Falcon Northwest*. "This is the biggest bundle of amazing new technologies all hitting at once that I can ever remember seeing. Enthusiasts will be amazed at what they can do with DDR4 memory, 10 SATA 6GB ports, 40 PCI Express lanes and eight-core CPUs."

"Now is an exciting time for enthusiasts," said Harjit Chana, chief brand officer of Digital Storm*. "Intel's new line of enthusiast processors are breaking benchmark records. Featuring up to eight CPU cores and the new X99 chipset with DDR4 memory, our customers can now customize and buy the ultimate high-performance PC."

"The new Intel Core i7 processors are a PC enthusiast's dream come true with up to 16 threads, faster speeds and cooler temperatures," said Kevin Wasielewski ORIGIN PC* CEO and co-founder. "ORIGIN PC's line of record-breaking desktops just got even faster."

"Intel's new platform delivers the maximum processing power with eight cores of unstoppable processing power and supports DDR4 memory giving extreme gamers and demanding enthusiasts exactly what they need," said Wallace Santos, CEO and founder of MAINGEAR*.

Many of these new platforms based on the Intel X99 Chipset are also Thunderbolt™ Ready. When paired with a Thunderbolt 2 add-in card, this enables a blazing-fast connection to your PC at 20 Gbps. Data intensive tasks such as 4K video editing, 3-D rendering and game development all strongly benefit from the performance of Thunderbolt 2. Check with your PC manufacturer or motherboard maker for compatibility.

Three new SKUs will be available next week, ranging from six to eight cores and from $389 to $999. These new processors are also conflict-free1. Check out more about our "Extreme" lineup here.


Intel (NASDAQ: INTC) is a world leader in computing innovation. The company designs and builds the essential technologies that serve as the foundation for the world's computing devices. As a leader in corporate responsibility and sustainability, Intel also manufactures the world's first commercially available "conflict-free" microprocessors. Additional information about Intel is available at newsroom.intel.com and blogs.intel.com and about Intel's conflict-free efforts at conflictfree.intel.com.

Intel, Intel Core, Thunderbolt and the Intel logo are trademarks of Intel Corporation in the United States and other countries.

*Other names and brands may be claimed as the property of others.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

1 "Conflict-free" means "DRC conflict free," which is defined by the U.S. Securities and Exchange Commission rules to mean products that do not contain conflict minerals (tin, tantalum, tungsten and/or gold) that directly or indirectly finance or benefit armed groups in the Democratic Republic of the Congo (DRC) or adjoining countries.

### Vikram Chandra Is a Novelist Who’s Obsessed with Writing Computer Code

30 August 2014 - 7:00am



08.29.14

Acclaimed novelist Vikram Chandra is equally obsessed with the tech world of computer coding and the realm of imagination. He talks about the two realities.

When Vikram Chandra mentioned he was working on a nonfiction book about computer coding at a literary party in San Francisco last fall, I was startled. Chandra is a gifted and original novelist, author most recently of Sacred Games (2006), a sprawling, densely layered noirish detective story set in Mumbai. His first novel, Red Earth and Pouring Rain (1995), won the Commonwealth Writers Prize for best first book.  Love and Longing in Bombay (1997) was shortlisted for the Guardian fiction award, and he also has written for Bollywood  (he cowrote Mission Kashmir [2000]). Chandra is a writer, not a geek. What does he know about coding?

His new nonfiction book, Geek Sublime, shows that, like few in today’s literary world, the 53-year-old Chandra understands the esoteric scientific discipline. He is as conversant with HTML and Git as with metaphor and the twists and turns of plotting. “My writing life and my life with computers … seem mirrored, computer twinned,” he writes. “Both are explorations of process, of the unfolding of connections.”

And that’s what makes Geek Sublime so winning. Chandra, who lives in Berkeley and teaches at the University of California, has written a brilliantly comprehensible syllabus for anyone curious about the inner workings of computers or the Internet, profound in its implications and conclusions. It is a surprising and passionate book, encompassing a primer on terminology for non-mathematicians, an explanation of “logic gates,” a meditation on Sanskrit as an algorithm, and a section on what makes Steve Wozniak “hardcore.” (“Woz—and only Woz, by himself—designed the hardware, the circuit boards, and the operating system for the Apple I, which he hooked up to a standard TV, producing the first personal computer to output to a television,” Chandra explains. “And while Steve Jobs marketed this product, Woz created the Apple II, and thus—in a very large sense—set off the personal computer revolution.”).

How did Chandra’s obsession with code begin?

“I worked as a coder to put myself through college,” Chandra explains. “I first encountered computers as an undergraduate in the U.S. in the early ’80s. They seemed magical to me.  At the time, the only computers in India, where I grew up, were at elite graduate schools and government institutions, and I had never even seen one. So I enrolled right away for programming classes, and found them boringly abstracted. A couple of years later, while I was in graduate school, I worked at an off-campus job that gave me access to personal computers—and that was the beginning of the addiction.  There’s a great feeling of power, and instant feedback—if your code works, you get this rush; and if it doesn’t work, you hack at it until it does, and then the rush is even stronger.”

‘Writers, like other kinds of artists, begin with assumptions of probable obscurity and relative poverty, so there’s less external pressure.’

Chandra began to make money as a consultant and programmer when he was in the graduate writing program at the University of Houston, working on his formally complex first novel, Red Earth and Pouring Rain, which draws from the idea of interconnectedness in Indian traditions of narrative and philosophy. “My coding was and is very journeyman work,” he says. “I wrote a lot of database CRUD applications—they let you Create, Retrieve, Update, and Delete records of whatever widget or resource you were producing and selling. This was before easy, transparent accessibility to the Internet.” He managed to get through graduate school without taking any loans.

Chandra’s introduction to DOS was seductive, he says, because it was “a complete world, with systems and rules.” At what point did he realize code could aspire to elegance?

“I think the first time you have to change code you’ve written previously, to add features or remove a bug, you realize that you could have done it better in the first place, that you could have found an architecture that would make it easier to transform and grow the code. And this is terribly seductive—you’re not just building a solution to a problem, you’re potentially building a beautiful solution, with ‘beautiful’ here being defined by an aesthetics of present and future functionality. This can be a trap.”

He describes two approaches to programming. “I think every programmer knows of a project or two that never quite got off the ground because everyone spent too much time thinking about what would be the best way to do it, what the best tools are, what the best process is. There has, in fact, been a reaction against this tendency. Now, often the advice is to get the quickest version of your code running and slap up a website; you’ll improve the code after you find users. Which of course means that the refinement often never gets done because there’s too much technical debt and it’ll cost too much and the venture capitalists may bail at any time.”

As a fiction writer, he says, he tends towards the first impulse: “I’m so used to laboring over my language at leisure, and I always know that a sentence can be made more resonant, that there’s always one more comma that can be moved.”

Why are programmers so prone to burnout?

“The pace of change in the computer industry is so frenetic that any programming skills you have today are likely to be outdated next year. So there is a very powerful imperative to keep learning, and an all-pervasive anxiety that you’re already obsolete. In Silicon Valley, you have the added fetishization of youth—the belief is that the next billion-dollar app will come from a 22-year-old whose mind is still unshaped by too much knowledge, and who is therefore—supposedly—able to innovate freely. If you’re 45, you may be seen as too set in your ways to be useful to a startup, and also less likely to deliver 80-hour work weeks to the company, which is what ‘passionate’ programmers devoted to ‘disruption’ are supposed to do. So all this, plus the magical possibility of becoming a famous tech czar overnight, drives a lot of people into over-exertion on the corporate treadmill. Some finally collapse.”

Is there an equivalent risk for fiction writers (who usually work obsessively without the promise of that dazzling income)?

### SeatGeek Raises $35M

28 August 2014 - 1:00pm

We started SeatGeek nearly five years ago with the goal of helping people enjoy more live entertainment by building great software. Our goal hasn’t changed, but its scope has. We’ve gone from a team of two to a team of forty. From the desktop web to iOS, Android and mobile web. And from a handful of active users (hi Mom!) to millions. We think we’re onto something big. And we’ve decided to partner with some exceptional folks to get SeatGeek moving even faster. This past week we closed a $35M Series B round led by Accel Partners, alongside Causeway Media Partners, Mousse Partners, and a number of other great investors (full list here).

From going hoarse screaming for your favorite team, to dancing along with your favorite band, live entertainment is a deeply personal, aesthetic experience. We think the software that enables those moments should be too. We are a technology company. Everyone at SeatGeek is driven to create something elegant, intuitive and useful. This financing gives us one of the tools we need to do that more quickly and for more people than ever before.

The last five years have been a blast. The next five will be even better. We’re going to remain focused on building amazing software that helps people have fun. And we’re excited to partner with Accel and others to help us make it happen.

### An Overview of the Dylan Type System

28 August 2014 - 1:00pm

In the near future, we are likely to see this list of types expand. For example, function types are already being discussed.

Classes

Classes are described in detail in the DRM. The important parts for now are:

• Classes are used to define the inheritance, structure, and initialization of objects.
• Every object is a direct instance of exactly one class, and a general instance of the superclasses of that class.
• A class determines which slots its instances have. Slots are the local storage available within instances. They are used to store the state of objects.

An interesting tidbit is that new classes can be created at run-time.

Limited Types

Limited types consist of a base type and a set of constraints applied to it.

Currently, integers and collections can be limited. Limited integers have minimum and maximum bounds. Limited collections can constrain the type of the elements stored in the collection as well as the size or dimensions of the collection.

Simple examples:

define constant <byte> = limited(<integer>, min: 0, max: 255);
define constant <float32x4> = limited(<vector>, of: <single-float>, size: 4);

The last example is particularly interesting (to me at least). In Dylan, <single-float> is a 32-bit float, but will usually be stored in a boxed form. By creating a limited vector of <single-float> with a size of 4, the compiler is able to optimize away bounds checks and to store the floating point values unboxed.

For example, storing some floating point values into an instance of the above <float32x4> limited type would look like:

fs[0] := 1.0s0;
fs[1] := 2.0s0;
fs[2] := 3.0s0;
fs[3] := 4.0s0;

That compiles to this in C:

REPEATED_DSFLT_SLOT_VALUE_TAGGED_SETTER(1.0000000, fs_, 1, 1);
REPEATED_DSFLT_SLOT_VALUE_TAGGED_SETTER(2.0000000, fs_, 1, 5);
REPEATED_DSFLT_SLOT_VALUE_TAGGED_SETTER(3.0000000, fs_, 1, 9);
REPEATED_DSFLT_SLOT_VALUE_TAGGED_SETTER(4.0000000, fs_, 1, 13);

REPEATED_DSFLT_SLOT_VALUE_TAGGED_SETTER is a C preprocessor definition that results in a direct memory access without any function call overhead. To confirm that and just for fun, here is the resulting assembler code (x86):

movl $0x3f800000, 0x8(%eax)
movl $0x40000000, 0xc(%eax)
movl $0x40400000, 0x10(%eax)
movl $0x40800000, 0x14(%eax)

Doing the same thing, but with a regular vector would require boxing each value. For literals, this generates a static file-scope value in the C back-end, increasing the memory usage:

static _KLsingle_floatGVKd K3 = {
  &KLsingle_floatGVKdW, // wrapper
  5.0000000
};

Similarly, when we go to fetch and add the values, using the limited vector will be far more efficient as it already knows the type of the value involved and no additional checks need to be done.

We'll use the following snippet to demonstrate this, where fs is a limited vector as above and bfs is a normal vector with boxed values:

let s = fs[0] + fs[1];
let bs = bfs[0] + bfs[1];

This results in the following C code:

// Limited vector
T9 = REPEATED_DSFLT_SLOT_VALUE_TAGGED(fs_, 1, 1);
T10 = REPEATED_DSFLT_SLOT_VALUE_TAGGED(fs_, 1, 5);
T11 = primitive_single_float_add(T9, T10);

// Normal vector with boxed floats
T19 = KelementVKdMM11I(bfs_, (dylan_value) 1, &KPempty_vectorVKi, &Kunsupplied_objectVKi);
T20 = KelementVKdMM11I(bfs_, (dylan_value) 5, &KPempty_vectorVKi, &Kunsupplied_objectVKi);
CONGRUENT_CALL_PROLOG(&KAVKd, 2);
T3 = CONGRUENT_CALL2(T19, T20);

There's a lot going on there. With the limited vector, REPEATED_DSFLT_SLOT_VALUE_TAGGED is a direct memory access, while with the normal vector, it is going through the element method (whose mangled name is KelementVKdMM11I) and doing bounds checks. When performing the addition, the limited vector code is able to directly add the floating point values. However, with the normal vector, it may have gotten any type of object out of the vector, so it has to go through a generic dispatch (CONGRUENT_CALL_PROLOG and CONGRUENT_CALL2) to invoke the method for + which is mangled to be KAVKd in C.

This should demonstrate how limited types let the compiler greatly optimize the resulting code and reduce memory usage.
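Dylan's unboxed limited vectors have a rough analogue in Python's array module, which packs raw machine values contiguously instead of storing references to boxed objects. This is only an illustration of the boxed-versus-unboxed distinction, not of Dylan's semantics:

```python
import sys
from array import array

# A plain Python list stores references to boxed float objects:
# each element is a full heap object with its own header.
boxed = [1.0, 2.0, 3.0, 4.0]

# array('f') stores raw 32-bit IEEE floats contiguously,
# much like limited(<vector>, of: <single-float>, size: 4).
unboxed = array('f', [1.0, 2.0, 3.0, 4.0])

# Each boxed float is its own object (24 bytes on CPython),
# while the unboxed array packs 4 bytes per element.
per_boxed_elem = sys.getsizeof(1.0)
per_unboxed_elem = unboxed.itemsize

print(per_unboxed_elem)
print(per_boxed_elem > per_unboxed_elem)
```

The same trade-off applies as in the Dylan example: the unboxed form is compact and needs no type checks on access, at the cost of only ever holding one element type.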

Union Types

In Dylan, union types represent a way to specify that a given value is one of two or more types. They are frequently used to represent that a value does not exist:

let x :: type-union(singleton(#f), <integer>) = bar(3);

As we can see, union types are created using the type-union function.
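This pattern has a close analogue in Python's typing module, where Optional[int] is shorthand for Union[int, None], with None playing the role of Dylan's #f. The bar function below is a hypothetical lookup used only for illustration:

```python
from typing import Optional, Union

def bar(n: int) -> Optional[int]:
    """Hypothetical lookup: returns an integer, or None when absent
    (the analogue of Dylan's type-union(singleton(#f), <integer>))."""
    return n * 2 if n > 0 else None

x = bar(3)    # an integer
y = bar(-1)   # the "absent" marker

# Like Dylan's false-or, Optional is just shorthand for the union:
print(Optional[int] == Union[int, None])
```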

Dylan provides a shorthand for the above technique, a method false-or, which returns the union of #f and a given type:

let x :: false-or(<integer>) = bar(3);

Singleton Types

What Dylan calls singleton types are a way to create a new type that indicates that an individual object is expected. This is commonly used in method dispatch:

define method factorial (n :: singleton(0)) 1 end;

An alternative syntax makes this a bit more readable to many people:

define method factorial (n == 0) 1 end;

Singletons are described in the DRM in a bit more detail, but the important thing to note is that for a value to match a singleton type, it must be == to the object used to create the singleton. This means that not all objects can be used as singleton types; in particular, strings are a notable exception.

Also important is that a method specializer that is a singleton is considered to be the most specific match. This is because it is directly matching against the value passed in.
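Python has no built-in dispatch on values, but the effect of Dylan's singleton specializer can be sketched with a small registry that tries an exact value match before falling back to the general method. The registry and decorator names here are invented for the sketch:

```python
# Registry mapping singleton values to their specialized methods.
_factorial_singletons = {}

def factorial_on(value):
    """Register a method that applies only when the argument == value,
    mimicking Dylan's (n == 0) specializer."""
    def register(fn):
        _factorial_singletons[value] = fn
        return fn
    return register

@factorial_on(0)
def _factorial_zero(n):
    return 1

def factorial(n):
    # A singleton match is the most specific, so it is tried first,
    # mirroring Dylan's dispatch order.
    if n in _factorial_singletons:
        return _factorial_singletons[n](n)
    return n * factorial(n - 1)

print(factorial(5))  # 120
```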

A common use of singleton types is in defining make methods by using a singleton type for the class argument:

define method make
    (class == <file-stream>, #rest initargs,
     #key locator, element-type = <byte-character>, encoding)
 => (stream :: <file-stream>)
  let type = apply(type-for-file-stream, locator,
                   element-type, encoding, initargs);
  if (type == class)
    next-method()
  else
    apply(make, type, initargs)
  end
end method make;

This example is also interesting as it demonstrates that the type is a first class object by using type-for-file-stream to look up which type should be used to instantiate the file stream. (This way of implementing a make method specialized on an abstract class like <file-stream> is a common way to implement a factory method in Dylan.)
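The same factory shape can be sketched in Python: a classmethod on an abstract base picks a concrete subtype from its arguments and re-dispatches construction to it. All class names here (FileStream, TextFileStream, ByteFileStream) are hypothetical stand-ins, not Dylan's actual stream classes:

```python
class FileStream:
    def __init__(self, locator):
        self.locator = locator

    @classmethod
    def make(cls, locator, element_type=str):
        # Like type-for-file-stream: choose a concrete type for the request.
        concrete = TextFileStream if element_type is str else ByteFileStream
        if concrete is cls:
            # Analogue of next-method(): construct this type directly.
            return cls(locator)
        # Analogue of apply(make, type, ...): re-dispatch on the chosen type.
        return concrete.make(locator, element_type)

class TextFileStream(FileStream):
    pass

class ByteFileStream(FileStream):
    pass

s = FileStream.make("/tmp/log.txt")
print(type(s).__name__)  # TextFileStream
```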

### Show HN: SuperTrip

28 August 2014 - 1:00pm
FoodTrip is the perfect way to inject excitement into the experience of dining out. The app selects a restaurant within a user-defined distance and price point and then you simply follow the waypoint to the surprise destination. FoodTrip is free.

SuperTrip is the traditional game mode allowing for multiple destinations and is a paid feature. With the full game, the world is the limit!

The full game is $4.99. Only one of your friends needs to get it though! Just invite them once you start the trip.

Special thanks to all Kickstarter backers.

### Brian Stevens to Step Down as CTO of Red Hat

28 August 2014 - 1:00pm

RALEIGH, N.C. — August 27, 2014 —

Red Hat, Inc. (NYSE: RHT), the world’s leading provider of open source solutions, today announced that Brian Stevens will step down as CTO.

“We want to thank Brian for his years of service and numerous contributions to Red Hat’s business. We wish him well in his future endeavors,” said Jim Whitehurst, President and CEO of Red Hat.

In the interim, the office of the CTO will be managed by Paul Cormier, President of products and technologies at Red Hat.

About Red Hat

Red Hat is the world’s leading provider of open source software solutions, using a community-powered approach to reliable and high-performing cloud, Linux, middleware, storage and virtualization technologies. Red Hat also offers award-winning support, training, and consulting services. As the connective hub in a global network of enterprises, partners, and open source communities, Red Hat helps create relevant, innovative technologies that liberate resources for growth and prepare customers for the future of IT. Learn more at http://www.redhat.com.

Forward-looking statements

Certain statements contained in this press release may constitute "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements provide current expectations of future events based on certain assumptions and include any statement that does not directly relate to any historical or current fact. Actual results may differ materially from those indicated by such forward-looking statements as a result of various important factors, including: risks related to changes in and a dependence on key personnel; delays or reductions in information technology spending; the effects of industry consolidation; the ability of the Company to compete effectively; the integration of acquisitions and the ability to market successfully acquired technologies and products; uncertainty and adverse results in litigation and related settlements; the inability to adequately protect Company intellectual property and the potential for infringement or breach of license claims of or relating to third party intellectual property; the ability to deliver and stimulate demand for new products and technological innovations on a timely basis; risks related to data and information security vulnerabilities; ineffective management of, and control over, the Company's growth and international operations; and fluctuations in exchange rates, as well as other factors contained in our most recent Quarterly Report on Form 10-Q (copies of which may be accessed through the Securities and Exchange Commission's website at http://www.sec.gov), including those found therein under the captions "Risk Factors" and "Management's Discussion and Analysis of Financial Condition and Results of Operations". 
In addition to these factors, actual future performance, outcomes, and results may differ materially because of more general factors including (without limitation) general industry and market conditions and growth rates, economic and political conditions, governmental and public policy changes and the impact of natural disasters such as earthquakes and floods. The forward-looking statements included in this press release represent the Company's views as of the date of this press release and these views could change. However, while the Company may elect to update these forward-looking statements at some point in the future, the Company specifically disclaims any obligation to do so. These forward-looking statements should not be relied upon as representing the Company's views as of any date subsequent to the date of this press release.

###

Red Hat is a trademark of Red Hat, Inc., registered in the U.S. and other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.