<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" media="screen" href="/~files/atom.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:feedpress="https://feed.press/xmlns">
  <feedpress:locale>en</feedpress:locale>
  <title/>
  <link href="http://feedpress.me/kenhirakawa" rel="self"/>
  <link href="http://kenhirakawa.com"/>
  <updated>2015-02-08T15:18:42-08:00</updated>
  <id>http://kenhirakawa.com</id>
  <author>
    <name>Ken Hirakawa</name>
    <email>k.hirakawa88@gmail.com</email>
  </author>
  <entry>
    <title>Forward 2 Notes - Part 2</title>
    <link href="http://kenhirakawa.com/forward-notes-2"/>
    <updated>2015-02-08T00:00:00-08:00</updated>
    <id>/forward-notes-2</id>
    <content type="html">&lt;h1 id="forward-2-notes---part-2"&gt;Forward 2 Notes - Part 2&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;This post is a continuation of my earlier post, &lt;a href="/forward-notes-1"&gt;Forward 2 Notes - Part 1&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;figure&gt;
  &lt;img src="http://kenhirakawa.com/assets/images/forward-swag.jpg" /&gt;
  &lt;figcaption&gt;Swaaag!&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;Here are the talks I attended:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Keynote by Karolina Szczur and Sarah Groff-Palermo&lt;/li&gt;
  &lt;li&gt;The Better Parts, Douglas Crockford&lt;/li&gt;
  &lt;li&gt;What the… JavaScript?, Kyle Simpson&lt;/li&gt;
  &lt;li&gt;We Will All Be Game Programmers, Hunter Loftis&lt;/li&gt;
  &lt;li&gt;No More Tools, Karolina Szczur&lt;/li&gt;
  &lt;li&gt;Developing High Performance Websites and Modern Apps with JavaScript and HTML5, Doris Chen&lt;/li&gt;
  &lt;li&gt;Love &amp;amp; Node, Sarah Groff-Palermo&lt;/li&gt;
  &lt;li&gt;Developer Led Innovation, James Sacra&lt;/li&gt;
  &lt;li&gt;Choosing a JavaScript Framework, Pam Selle&lt;/li&gt;
  &lt;li&gt;Test-driven client-side apps, Pete Hodgson&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This post will have notes on the remaining talks starting with Hunter Loftis.  Enjoy!&lt;/p&gt;

&lt;h2 id="we-will-all-be-game-developers-hunter-loftis"&gt;We Will All Be Game Developers, Hunter Loftis&lt;/h2&gt;

&lt;p&gt;Hunter had 3 weeks to build an FPS game in HTML5 for an album’s international release.&lt;/p&gt;

&lt;p&gt;Issues&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Budget (cheap), because the marketing people found Hunter really late, and so they had little left&lt;/li&gt;
  &lt;li&gt;3 weeks&lt;/li&gt;
  &lt;li&gt;Technical constraints (must work on iPhone, iPad, Android &amp;amp; Desktop!)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Sure! Sounds like an awesome project.  Why not help these “no-name” artists?
But… it turns out the guy behind the album is a big artist in the UK, and Hunter was the no-name artist.  Pressure!
For the next three weeks, no sleep.&lt;/p&gt;

&lt;p&gt;“Optimism is an occupational hazard of programming” - Kent Beck&lt;/p&gt;

&lt;h3 id="demo"&gt;Demo&lt;/h3&gt;

&lt;figure&gt;
  &lt;iframe width="560" height="315" src="https://www.youtube.com/embed/iBOvh-xlctY" frameborder="0" allowfullscreen=""&gt;&lt;/iframe&gt;
  &lt;figcaption&gt;Demo of game built in 3 weeks&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;ul&gt;
  &lt;li&gt;Looks a lot like Minecraft&lt;/li&gt;
  &lt;li&gt;The cool part is that if you get lost in the maze, it plays the soundtrack “Lost”&lt;/li&gt;
  &lt;li&gt;Got a Kotaku article out of it as well.  Now technically a “game developer”&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="why-should-you-care"&gt;Why Should You Care?&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Application interfaces are becoming more like games&lt;/li&gt;
  &lt;li&gt;JS developers aren’t strong in game development, but the tech industry is moving towards game-like interfaces with holographs, VR, Leap Motion, etc.&lt;/li&gt;
  &lt;li&gt;Building even a simple game like Pong is a lot harder than building most complicated applications&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="ideas-from-game-dev-for-better-user-interfaces"&gt;3 Ideas from Game Dev for Better User Interfaces&lt;/h3&gt;

&lt;p&gt;Let’s steal from the game dev community.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;1. Minimize and isolate state.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;State is the root of all evil&lt;/li&gt;
  &lt;li&gt;JS makes it easy to add state to anything&lt;/li&gt;
  &lt;li&gt;Game devs don’t have that luxury.  Compare a todo list vs. Call of Duty: there’s much more state to manage, and yet they get 60 fps.  Minimizing state is important for complex applications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a simple game of a guy walking around, we might need to keep track of:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;direction&lt;/li&gt;
  &lt;li&gt;XY coordinates&lt;/li&gt;
  &lt;li&gt;velocity&lt;/li&gt;
  &lt;li&gt;If he runs fast, kick off dust…&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can capture all of that with just these:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;Player&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;interval&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;distance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;And a set of pure functions: given an input -&amp;gt; always the same output.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;velocity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getVelocity&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;interval&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;direction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getDirection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;velocity&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;pose&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getPose&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;distance&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getFrame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;direction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;pose&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;No side effects. The point is to figure out where the state is, and then use pure functions to derive the rest.&lt;/p&gt;
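&lt;p&gt;&lt;em&gt;As a sketch (mine, not from the talk), one of the pure functions above could look like this; the name &lt;code&gt;getVelocity&lt;/code&gt; comes from the snippet, but its body and the assumption that &lt;code&gt;x&lt;/code&gt; and &lt;code&gt;y&lt;/code&gt; hold the current and previous positions are illustrative:&lt;/em&gt;&lt;/p&gt;

```javascript
// Hypothetical sketch of a pure function: same inputs, same output,
// no outside state read or written.  Assumes x and y are arrays
// holding the current and previous positions.
function getVelocity(x, y, interval) {
  return {
    vx: (x[0] - x[1]) / interval,
    vy: (y[0] - y[1]) / interval
  };
}

var v = getVelocity([10, 4], [8, 2], 2);
// v.vx and v.vy are both 3
```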

&lt;p&gt;&lt;em&gt;2. Enforce deterministic rendering (frames should be independent)&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;seconds&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;renderer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;seconds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getState&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;It would be a lot easier to do undo / redo if we kept a history of states, instead of modifying a state object directly.  (Again, FP and immutability are a big topic at Forward.)&lt;/p&gt;
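&lt;p&gt;&lt;em&gt;A minimal sketch of that idea (my illustration, not code from the talk): keep a history of snapshots and move an index, instead of mutating one state object.&lt;/em&gt;&lt;/p&gt;

```javascript
// Sketch: undo/redo over a history of immutable state snapshots.
function History(initial) {
  this.states = [initial];
  this.index = 0;
}

History.prototype.push = function (state) {
  // Drop any "redo" branch, then append the new snapshot.
  this.states = this.states.slice(0, this.index + 1);
  this.states.push(state);
  this.index += 1;
};

History.prototype.undo = function () {
  if (this.index > 0) this.index -= 1;
  return this.states[this.index];
};

History.prototype.redo = function () {
  if (this.states.length - 1 > this.index) this.index += 1;
  return this.states[this.index];
};
```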

&lt;p&gt;Keep state independent of rendering.  React is great at this: it works like a game engine and supports isomorphic JS.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;3. Separate rendering and simulation&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Simulation leads to good animation.&lt;/p&gt;

&lt;p&gt;The pursuit of 60 fps.&lt;/p&gt;

&lt;p&gt;Compared to:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;cinema: 24fps&lt;/li&gt;
  &lt;li&gt;with blur: 18fps&lt;/li&gt;
  &lt;li&gt;fighter pilots: 220 fps!!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re delivering any online experience, optimize for 60 fps&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code&gt;setInterval(frame, 0)&lt;/code&gt; is bad&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;setInterval(frame, 17)&lt;/code&gt; is better, but does not guarantee that the frame method runs at the next frame&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;requestAnimationFrame(frame)&lt;/code&gt; is better, but is not fully guaranteed because you can still drop frames.  If that virus scanner runs while your game is running, your game might drop frames.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead, what you want to do is decouple rendering and simulation. If you drop a frame, you track it.  For every frame you drop, you simulate it.  Drop 2 frames, simulate it twice.  This way, you get the same exact behavior, regardless of dropped frames.&lt;/p&gt;
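&lt;p&gt;&lt;em&gt;A sketch of that decoupling (names and structure are mine, not Hunter’s code): accumulate elapsed time and run the simulation in fixed steps, once per dropped frame if needed, so behavior is identical regardless of frame rate.&lt;/em&gt;&lt;/p&gt;

```javascript
// Fixed-timestep loop: rendering happens once per frame, but the
// simulation runs a whole step for every STEP of real time that
// passed.  Drop 2 frames, and update() simply runs 2 extra times.
var STEP = 1 / 60; // fixed simulation step, in seconds

function makeLoop(update, draw) {
  var accumulator = 0;
  return function frame(elapsed) {
    accumulator += elapsed;
    while (accumulator >= STEP) {
      update(STEP);        // simulate one fixed step
      accumulator -= STEP; // consume the time we simulated
    }
    draw(); // render the current state once
  };
}
```

&lt;p&gt;&lt;em&gt;In a browser, &lt;code&gt;frame&lt;/code&gt; would be driven by &lt;code&gt;requestAnimationFrame&lt;/code&gt;, passing the time elapsed since the previous frame.&lt;/em&gt;&lt;/p&gt;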

&lt;h3 id="my-favorite-bug"&gt;My favorite bug&lt;/h3&gt;

&lt;figure&gt;
  &lt;iframe width="560" height="315" src="https://www.youtube.com/embed/eJlifh_N1ec" frameborder="0" allowfullscreen=""&gt;&lt;/iframe&gt;
  &lt;figcaption&gt;Created a git tag just for finding this bug.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h3 id="kens-thoughts"&gt;Ken’s Thoughts&lt;/h3&gt;

&lt;p&gt;What he said about TODOMVC makes a lot of sense… We fuss over which framework to use to build such a simple TODOMVC, yet game developers are building complex worlds and game mechanics all in 60 fps.&lt;/p&gt;

&lt;h2 id="no-more-tools-karolina-szczur"&gt;No More Tools, Karolina Szczur&lt;/h2&gt;

&lt;p&gt;Have we reached a tipping point with too many tools? A look at up-to-date front-end tooling and better ideas for teamwork&lt;/p&gt;

&lt;h3 id="tools"&gt;Tools&lt;/h3&gt;

&lt;p&gt;We have a plethora of tools meant to make our lives easier, but is that always the case?&lt;/p&gt;

&lt;p&gt;“Creatives aren’t good at their art because of their tools; their talent stems from the skills and knowledge they’ve acquired while using their tools” - Paul Jarvis, &lt;em&gt;The Good Creative&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;But tools are still good to use.  Which frameworks / tools are good and what will eventually break our heart?&lt;/p&gt;

&lt;p&gt;Things to think about&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Simplicity.  There are two types: visual simplicity - does it look simple to use? - and operational simplicity - is it easy to use?&lt;/li&gt;
  &lt;li&gt;Does it allow us to single-thread our attention?  The most crucial skill is not multi-tasking but single-threading our attention.&lt;/li&gt;
  &lt;li&gt;Complexity.  The time it takes to learn something.  Tesler’s Law of Conservation of Complexity: the strive for simplicity can be superficial because, at times, complexity is necessary.  It’s a fact of the world, and simplicity is in the mind.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="automation"&gt;Automation&lt;/h3&gt;

&lt;p&gt;Automation gives us time to work on problems that cannot be delegated to scripts&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tools&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Autoprefixer - worry-free vendor prefixing&lt;/li&gt;
  &lt;li&gt;Linting - check for bad patterns&lt;/li&gt;
  &lt;li&gt;Media Optimisation - strip unnecessary bytes.&lt;/li&gt;
  &lt;li&gt;Minification - strip unnecessary characters&lt;/li&gt;
  &lt;li&gt;Code bloat - may or may not be useful.  Search for and remove unused declarations&lt;/li&gt;
  &lt;li&gt;Testing performance - find specific elements hindering performance&lt;/li&gt;
  &lt;li&gt;Debugging layout issues - find problems with the box model. pesticide.io&lt;/li&gt;
  &lt;li&gt;GUIs for automation - Hammer, CodeKit, LiveReload&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;NPM&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;NPM is a way to run scripts as well.  It’s a native ecosystem for task automation.&lt;/li&gt;
  &lt;li&gt;Some people think npm for front-end packaging isn’t good enough, but “npm loves you, front-enders, and we care about your use cases.”&lt;/li&gt;
  &lt;li&gt;bit.ly/leveldb-example&lt;/li&gt;
  &lt;li&gt;bit.ly/npm-task-automation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Gulp &amp;amp; Grunt&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Alternatives to make&lt;/li&gt;
  &lt;li&gt;Gulp might be better for Node people because it uses streams.&lt;/li&gt;
  &lt;li&gt;Grunt favors configuration over code.&lt;/li&gt;
  &lt;li&gt;Which is better?  A matter of preference.  She seems to prefer Gulp&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Make&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Why not use make? People tend to forget about it, but make is quite powerful&lt;/li&gt;
  &lt;li&gt;bit.ly/buidling-static&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="when-to-do-what"&gt;When to do what?&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;When should we minify code?&lt;/li&gt;
  &lt;li&gt;A case for minifying and committing the minified code -&amp;gt; it allows people without minification tools to use your scripts&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="collaboration"&gt;Collaboration&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;The tools are there to empower ourselves.&lt;/li&gt;
  &lt;li&gt;Collaboration happens when our own set of biases doesn’t cloud judgement.  Tools are one way to get there.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="developing-high-performance-websites-and-modern-apps-with-js-and-html5-doris-chen"&gt;Developing High Performance Websites and Modern Apps with JS and HTML5, Doris Chen&lt;/h2&gt;

&lt;h3 id="game---how-to-make-it-run-faster"&gt;Game - How to make it run faster&lt;/h3&gt;

&lt;p&gt;This talk is about the principles to improving the JS performance of your app&lt;/p&gt;

&lt;p&gt;Today, we’ll improve a game called High Five Yeah.  The more high fives you get, the more points you get.&lt;/p&gt;

&lt;figure&gt;
  &lt;iframe width="560" height="315" src="https://www.youtube.com/embed/JIbXwJ8gt3c" frameborder="0" allowfullscreen=""&gt;&lt;/iframe&gt;
  &lt;figcaption&gt;High Five Yeah!&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;Components and control flow of the game&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Matrix of players&lt;/li&gt;
  &lt;li&gt;Each player has a set of directions&lt;/li&gt;
  &lt;li&gt;On touch select a player&lt;/li&gt;
  &lt;li&gt;Generate a list of neighbors to rotate&lt;/li&gt;
  &lt;li&gt;Rotate the player&lt;/li&gt;
  &lt;li&gt;Repeat from step 4&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The game allocates lots of memory if you increase the size of the grid.&lt;/p&gt;

&lt;p&gt;Always measure&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;profile the game.&lt;/li&gt;
  &lt;li&gt;Define the criteria of what “fast enough” is first. You can use IE, but feel free to use other tools.&lt;/li&gt;
  &lt;li&gt;IE developer tool has a tab called responsiveness.  It gives you a chart of FPS and CPU usage.&lt;/li&gt;
  &lt;li&gt;The three important measurements - CPU (tied to battery life), GPU, and FPS.&lt;/li&gt;
  &lt;li&gt;Similar to chrome, IE dev tools gives you color coded results - yellow for layout, green for css style calculations, etc.&lt;/li&gt;
  &lt;li&gt;The goal is to have a flat 60 fps, no dips in the charts.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="principle-1-memory-usage---stay-lean"&gt;Principle 1: Memory usage - Stay Lean&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Every call to new or implicit memory allocation reserves GC memory - allocations are cheap until the current pool is exhausted.  Every time GC happens, we will see some obvious pauses&lt;/li&gt;
  &lt;li&gt;When pool is exhausted, engines force a collection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Best practices for staying lean&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Avoid unnecessary object creation&lt;/li&gt;
  &lt;li&gt;Look at object creation pattern.  Use object pools when possible.&lt;/li&gt;
  &lt;li&gt;Be aware of allocation patterns, such as Setting closures to event handlers.&lt;/li&gt;
&lt;/ul&gt;
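&lt;p&gt;&lt;em&gt;A minimal object-pool sketch (illustrative, not from the talk): preallocate objects and reuse them instead of allocating each frame, so the GC has less to collect.&lt;/em&gt;&lt;/p&gt;

```javascript
// Sketch of an object pool.  Particle and the pool API are
// illustrative names, not from the talk.
function Particle() {
  this.x = 0;
  this.y = 0;
  this.alive = false;
}

function Pool(size) {
  this.free = [];
  for (var i = 0; size > i; i += 1) {
    this.free.push(new Particle());
  }
}

Pool.prototype.acquire = function () {
  // Reuse a free object when available; allocate only as a fallback.
  var p = this.free.length > 0 ? this.free.pop() : new Particle();
  p.alive = true;
  return p;
};

Pool.prototype.release = function (p) {
  p.alive = false;
  this.free.push(p);
};
```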

&lt;h3 id="principle-2-use-fast-object-types-and-manipulations"&gt;Principle 2: Use Fast Object Types and Manipulations&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;In JS, the order in which properties are added to an object matters.  It is better to keep the “shape” the same.  For example, &lt;code&gt;var a = {}; a.north = 1; a.south = 0;&lt;/code&gt; and &lt;code&gt;var b = {}; b.south = 0; b.north = 1&lt;/code&gt; do not share the same “shape”&lt;/li&gt;
  &lt;li&gt;Avoid creating slower property bags.  Don’t use getters and setters on perf-critical paths.  For example, it’s probably better to use &lt;code&gt;this.n = north&lt;/code&gt; instead of doing&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;defineProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;&amp;quot;n&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;get&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;nVal&lt;/span&gt;&lt;span class="p"&gt;;}&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;ul&gt;
  &lt;li&gt;Use simple objects for perf critical paths&lt;/li&gt;
  &lt;li&gt;Never use &lt;code&gt;delete&lt;/code&gt;, it is sloooow&lt;/li&gt;
&lt;/ul&gt;
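&lt;p&gt;&lt;em&gt;The “shape” point above, sketched (my example): initializing properties in the same order - for instance in a constructor - lets objects share one hidden class.&lt;/em&gt;&lt;/p&gt;

```javascript
// Sketch: a constructor fixes the property order, so every instance
// has the same "shape" (hidden class) regardless of argument values.
function Direction(north, south) {
  this.north = north;
  this.south = south;
}

var a = new Direction(1, 0);
var b = new Direction(0, 1);
// a and b share a shape; ad-hoc literals assigned in different
// orders, as in the bullet above, would not.
```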

&lt;h3 id="principle-3-best-practices-for-fast-arithmetic"&gt;Principle 3: Best Practices for Fast Arithmetic&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Use 31-bit integer math when possible.  A 31-bit integer can stay on the stack, but a 32-bit float cannot; it has to move to slower, long-term storage.&lt;/li&gt;
  &lt;li&gt;Avoid floats if they are not needed&lt;/li&gt;
  &lt;li&gt;Design for type specialized arithmetic.  Try to separate floating arithmetic from integer arithmetic&lt;/li&gt;
&lt;/ul&gt;
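&lt;p&gt;&lt;em&gt;A small sketch of integer-friendly arithmetic (my example, not from the talk): &lt;code&gt;| 0&lt;/code&gt; truncates a number to a 32-bit integer, which helps keep hot math on the engine’s fast integer path.&lt;/em&gt;&lt;/p&gt;

```javascript
// Sketch: keep a hot-path computation in integers.  The "| 0"
// coercions truncate to 32-bit integers.
function manhattan(x1, y1, x2, y2) {
  var dx = Math.abs(x1 - x2) | 0;
  var dy = Math.abs(y1 - y2) | 0;
  return (dx + dy) | 0;
}

// manhattan(0, 0, 3, 4) is 7
```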

&lt;h3 id="principle-4-do-less-work"&gt;Principle 4: Do Less Work&lt;/h3&gt;

&lt;p&gt;Avoid chattiness with the DOM&lt;/p&gt;

&lt;p&gt;BAD&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;id&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;classList&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;remove&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;oldClass&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;id&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;classList&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newClass&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;GOOD&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;el&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;id&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;classList&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;el&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;remove&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;oldClass&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;el&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newClass&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If you know a value is an integer, such as when &lt;code&gt;this.borderSize = domelement.getBorderSize&lt;/code&gt;, it is better to &lt;code&gt;parseInt&lt;/code&gt; it.  25% better performance in marshaling.&lt;/p&gt;
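&lt;p&gt;&lt;em&gt;Sketched (the style object and function name are illustrative; only the &lt;code&gt;parseInt&lt;/code&gt; tip is from the talk):&lt;/em&gt;&lt;/p&gt;

```javascript
// Sketch: coerce a DOM-derived value like "12px" to an integer once,
// up front, instead of marshaling the string through later math.
function borderWidth(style) {
  return parseInt(style.borderWidth, 10);
}

// borderWidth({ borderWidth: "12px" }) is 12
```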

&lt;p&gt;Paint as much as your users can see, which is 60 FPS&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;BAD &lt;code&gt;setTimeout(animate, 0)&lt;/code&gt; &amp;lt;- this is more work because it’s trying to animate too many times.&lt;/li&gt;
  &lt;li&gt;BETTER &lt;code&gt;requestAnimationFrame(animate)&lt;/code&gt; &amp;lt;- let the browser determine when to animate&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="key-take-away"&gt;Key Take Away&lt;/h3&gt;

&lt;p&gt;For super performance-critical code, we can do a lot of micro-optimizations.  Better performance = longer battery life.&lt;/p&gt;

&lt;h2 id="love--node-sarah-groff-palermo"&gt;Love &amp;amp; Node, Sarah Groff-Palermo&lt;/h2&gt;

&lt;p&gt;This is the second part to her keynote about relatables - objects that allow us to explore the relationship between people and objects.&lt;/p&gt;

&lt;h3 id="creating-relatability"&gt;Creating Relatability&lt;/h3&gt;

&lt;p&gt;How do we create a robot that is relatable?&lt;/p&gt;

&lt;p&gt;There are 6 ways&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Give it a face.  From a very young age, humans learn to detect faces.  &lt;a href="https://twitter.com/facepics"&gt;@facepics&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Sense of agency&lt;/li&gt;
  &lt;li&gt;Social behavior - teaching objects to remember.&lt;/li&gt;
  &lt;li&gt;Make the robot unpredictable.  It gives the feeling of personality&lt;/li&gt;
  &lt;li&gt;Creating a sense of helplessness.  Helps people feel less lonely.&lt;/li&gt;
  &lt;li&gt;Objects that mimic us. If I turn my head and it turns its head then we like it.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3 id="process"&gt;Process&lt;/h3&gt;

&lt;p&gt;How does one create a robot? Her process.&lt;/p&gt;

&lt;p&gt;Take input -&amp;gt; Do something with it (the gear) -&amp;gt; Output&lt;/p&gt;

&lt;p&gt;&lt;em&gt;1. Take input&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Information to take in, who is nearby, how are you feeling, which books haven’t I read?&lt;/li&gt;
  &lt;li&gt;Tools to use - proximity (IR and ultrasonic), climate temp and humidity,
ambient (light and sound), bluetooth, gas, accelerometer, motion, touch (capacitive and pressure) sensors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;2. The gear&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Turn inputs into outputs.  Tessel, Arduino, and RPi.&lt;/li&gt;
  &lt;li&gt;Arduino - C-derived Arduino language.  Node via Johnny-Five, DIY / shields&lt;/li&gt;
  &lt;li&gt;Tessel - Node, wifi built in, high-level modules&lt;/li&gt;
  &lt;li&gt;Both have awesome communities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;3. Outputs&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Possible outputs&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;regular LEDs&lt;/li&gt;
  &lt;li&gt;addressable LEDs&lt;/li&gt;
  &lt;li&gt;motors&lt;/li&gt;
  &lt;li&gt;servos&lt;/li&gt;
  &lt;li&gt;speakers&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="projects"&gt;Projects&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;FriendBot - Built at SFPC&lt;/li&gt;
  &lt;li&gt;Neo-Neglect - Books that have emotion.  If it isn’t read for a while, it screams “You’re a monster!”&lt;/li&gt;
  &lt;li&gt;Spiny - Actually a failure, but it’s always a good thing to love all our monsters&lt;/li&gt;
  &lt;li&gt;Deluge - The feelings of being overwhelmed.  Senses BT signals in a room and visualizes it&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="questions-and-demos"&gt;Questions and Demos&lt;/h3&gt;

&lt;figure&gt;
  &lt;iframe width="560" height="315" src="https://www.youtube.com/embed/6hP8DLuN1Ps" frameborder="0" allowfullscreen=""&gt;&lt;/iframe&gt;
  &lt;figcaption&gt;FriendBot being friendly&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2 id="developer-led-innovation-james-sacra"&gt;Developer Led Innovation, James Sacra&lt;/h2&gt;

&lt;h3 id="failure"&gt;Failure&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Developed a new prototype for voice recognition in 3 months and was ready to blow managers’ minds away&lt;/li&gt;
  &lt;li&gt;Got embarrassed on a call with salespeople.  It turns out users didn’t want the prototype he developed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How did he fail?&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;He didn’t understand the key problem&lt;/li&gt;
  &lt;li&gt;He didn’t explain the solution&lt;/li&gt;
  &lt;li&gt;He didn’t gain traction&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="steps-to-getting-your-idea-heard"&gt;Steps to Getting Your Idea Heard&lt;/h3&gt;

&lt;p&gt;Step 0: Find a friend (an ally)&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Someone to support your ideas.  They can help you push your ideas from a unique perspective.&lt;/li&gt;
  &lt;li&gt;Who should you befriend?  Fellow developers, technical managers, business support, etc.  Grab them over lunch and talk to them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Step 1: What’s the problem?&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;In order to innovate, you must understand the problem.  It’s not about “oh, I want to use D3!”&lt;/li&gt;
  &lt;li&gt;Quantify - are 20% of customers negatively impacted?  What is the problem, and how do you quantify it?&lt;/li&gt;
  &lt;li&gt;At the same time, it’s important to keep developers happy and NOT use a technology from 10 years ago.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Step 2: Innovate&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Think tank! Innovate. Hackathons.  This can even be a 3- or 4-hour meeting with key people.&lt;/li&gt;
  &lt;li&gt;Prototyping - gives you the answers you need in the next step, a great demo, and flushes out a lot of problems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Step 3: Define the solution&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Quantify the solution - “We developed a prototype in half the time it would have taken in the existing framework”&lt;/li&gt;
  &lt;li&gt;Keyword is “we”.&lt;/li&gt;
  &lt;li&gt;People usually don’t realize minor performance issues until they see numbers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Step 4: Pitch&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Make a presentation.  Include your assessment of the problem, the impact to the business, and the solution.  It will help you think through questions that will be asked and might even lead you to do more research&lt;/li&gt;
  &lt;li&gt;Technical initiatives - at this point you should have a strong case for why your idea needs to be heard.  And if you’re not being heard, go back to step 0&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="troubleshooting"&gt;Troubleshooting&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;“No matter what I do… I can’t get someone to listen”.  This is just an excuse.  You haven’t put in the effort to be heard.&lt;/li&gt;
  &lt;li&gt;“We don’t have enough available resources to innovate”. No, you do.  During lunch or meetings or whatever, find the time&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="key-take-away-1"&gt;Key Take Away&lt;/h3&gt;

&lt;p&gt;Don’t be afraid to fail; that is the only way to succeed.&lt;/p&gt;

&lt;h2 id="choosing-a-javascript-framework-pam-selle"&gt;Choosing a JavaScript Framework, Pam Selle&lt;/h2&gt;

&lt;p&gt;There’s a &lt;a href="http://shop.oreilly.com/product/9781939902085.do"&gt;book&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Spoiler: I will not tell you what to do.
The hope is for you to make informed decisions for a project.&lt;/p&gt;

&lt;h3 id="what-is-a-js-framework"&gt;What is a JS framework?&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Most frameworks were built in 2009 / 2010 - “Oh, I found that I was building the same thing over and over again in consulting work, so I figured I should build my own!”&lt;/li&gt;
  &lt;li&gt;Frameworks are good because you shouldn’t have to write boilerplate code.&lt;/li&gt;
  &lt;li&gt;Ask yourself: what’s our 15% (the thing that is not boilerplate and makes your app special)?  Then find the framework that makes you go fast&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="backbone"&gt;Backbone&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Backbone comes from a Rails world&lt;/li&gt;
  &lt;li&gt;It’s the “core” of an application&lt;/li&gt;
  &lt;li&gt;It’s un-opinionated and generally more flexible, e.g. when incorporating other libraries.  This is both a strength and a weakness&lt;/li&gt;
  &lt;li&gt;Has models, views, collections, events, and routing&lt;/li&gt;
  &lt;li&gt;It’s quite intuitive to get started&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="angular"&gt;Angular&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Fastest growing JS framework…&lt;/li&gt;
  &lt;li&gt;Directives! and Google!&lt;/li&gt;
  &lt;li&gt;Strongly defined building components (directives, controllers, services)&lt;/li&gt;
  &lt;li&gt;Dependency injection&lt;/li&gt;
  &lt;li&gt;Two way bindings&lt;/li&gt;
  &lt;li&gt;Karma and protractor for testing&lt;/li&gt;
  &lt;li&gt;No dependencies&lt;/li&gt;
  &lt;li&gt;Strengths - once you know what you are doing, there’s less to write.  Long feature list.  Module-friendly.  Google backing&lt;/li&gt;
  &lt;li&gt;Weaknesses - not as battle-tested.  High lock-in from writing behavior in markup.  Skepticism about Google backing.  2.0 drops backwards compatibility&lt;/li&gt;
  &lt;li&gt;Has Modules, Directives, Services, Controllers, and other components like Filters, animations, i18n, l10n&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="ember"&gt;Ember&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Most complete JS framework (not necessarily the best).  Used for really intense client-side applications&lt;/li&gt;
  &lt;li&gt;Built completely on modular open source components&lt;/li&gt;
  &lt;li&gt;Community super-powered&lt;/li&gt;
  &lt;li&gt;Very strongly defined MVC components&lt;/li&gt;
  &lt;li&gt;two way data binding&lt;/li&gt;
  &lt;li&gt;Ember components (like directives or web components)&lt;/li&gt;
  &lt;li&gt;dependency injection&lt;/li&gt;
  &lt;li&gt;Routing&lt;/li&gt;
  &lt;li&gt;Dependencies - ember-data, jQuery, handlebars&lt;/li&gt;
  &lt;li&gt;Strengths - convention driven like rails, the community!&lt;/li&gt;
  &lt;li&gt;Weaknesses - sometimes TOO much information; the “right” way to do things makes it difficult to get up to speed quickly; lots of rules; not as pervasive as the other two frameworks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rising stars!&lt;/p&gt;

&lt;h3 id="react"&gt;React&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;React is like git diffs for browser rendering&lt;/li&gt;
  &lt;li&gt;Super performant&lt;/li&gt;
  &lt;li&gt;The view of MV*&lt;/li&gt;
&lt;/ul&gt;
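
&lt;p&gt;The “git diffs” analogy can be sketched in a few lines of plain JS.  This is not React’s actual diffing algorithm, just the idea: compare the old and new view state and apply only what changed.&lt;/p&gt;

```javascript
// Toy "diff" of two flat prop objects: return only the keys that changed,
// the way React re-renders only the parts of the view that differ.
function diffProps(prev, next) {
  var patches = {};
  Object.keys(next).forEach(function (key) {
    if (prev[key] !== next[key]) {
      patches[key] = next[key];
    }
  });
  return patches;
}

// Only `color` changed, so only `color` would be re-rendered.
var patches = diffProps(
  { text: "Hi", color: "red" },
  { text: "Hi", color: "blue" }
);
```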

&lt;h3 id="polymerjs"&gt;PolymerJS&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Web components!&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="evaluating-frameworks"&gt;Evaluating frameworks&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;TodoMVC&lt;/li&gt;
  &lt;li&gt;bit.ly/lg6kDSS - a spreadsheet for ranking frameworks&lt;/li&gt;
  &lt;li&gt;Rank frameworks according to business, technical, and team criteria&lt;/li&gt;
  &lt;li&gt;DevTap - technological evaluations&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="test-driven-client-side-apps-pete-hogsdon"&gt;Test Driven Client-side Apps, Pete Hogsdon&lt;/h2&gt;

&lt;h3 id="test-driven-development"&gt;Test driven development&lt;/h3&gt;

&lt;ol&gt;
  &lt;li&gt;Write a failing test&lt;/li&gt;
  &lt;li&gt;Write code that makes it pass&lt;/li&gt;
  &lt;li&gt;Refactor code (people miss this part a lot)&lt;/li&gt;
  &lt;li&gt;Go back to 1&lt;/li&gt;
&lt;/ol&gt;
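
&lt;p&gt;A minimal illustration of the cycle, using a hypothetical &lt;code&gt;slugify&lt;/code&gt; helper: the assertion fails first, then we write just enough code to make it pass, then we refactor with the assertion as a safety net.&lt;/p&gt;

```javascript
// Step 1: write the failing "test" first - slugify does not exist yet,
//         so the assertions below would fail.
// Step 2: write just enough code to make it pass.
function slugify(title) {
  return title.toLowerCase().trim().replace(/\s+/g, "-");
}

// Step 3: refactor freely; these assertions are the safety net.
console.assert(slugify("Hello World") === "hello-world");
console.assert(slugify("  Forward 2  ") === "forward-2");
```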

&lt;p&gt;The whole point is to build out a test suite that gives you the safety net to go
and refactor your code to make it better&lt;/p&gt;

&lt;h3 id="whats-the-advantage-of-tdd"&gt;What’s the Advantage of TDD?&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Feedback! TDD gives repeated, consistent direction on a granular level as to
whether your code works as you specified.&lt;/li&gt;
  &lt;li&gt;Feedback on the design of your code.  It makes you write code that is easier to test&lt;/li&gt;
  &lt;li&gt;Easily testable code is easier to work with.  “This code is hard to test” === “This code is hard to work with”&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="live-coding---example-of-tdd"&gt;Live Coding - Example of TDD&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="nxtbrt.com"&gt;nxtbrt.com&lt;/a&gt; - single page app with list of BART departures.&lt;/li&gt;
  &lt;li&gt;LIVE CODING - went through the above four steps to demonstrate power of TDD.&lt;/li&gt;
  &lt;li&gt;We all want to rewrite a code base without any tests.  We don’t make things better every time because we dont have the confidence to do it. TDD gives you that confidence.&lt;/li&gt;
  &lt;li&gt;Feed back is rapid, focussed, and accurate.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="functional-testing"&gt;Functional testing&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;The feedback you get from functional testing is not as helpful because the scope is larger.  Especially if the functional test is testing an app with external dependencies such as HTTP and the DOM.&lt;/li&gt;
  &lt;li&gt;The key principle with unit tests is that we test an isolated unit of code.  We get faster, more accurate feedback.&lt;/li&gt;
  &lt;li&gt;Doesn’t mean functional testing is bad though.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="key-take-away-2"&gt;Key Take Away&lt;/h3&gt;

&lt;p&gt;TDD drives developers toward better design&lt;/p&gt;

&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;And that’s it, folks! Hope you got a glimpse of what Forward 2 was like.  It will be interesting to look back at this post 10 years from now and see how JavaScript has moved forward.  What I’d like to know is: will JavaScript still be the monopoly client-side language it is today?  My gut feeling tells me that it will be, and for a good long time.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Forward 2 Notes - Part 1</title>
    <link href="http://kenhirakawa.com/forward-notes-1"/>
    <updated>2015-02-04T00:00:00-08:00</updated>
    <id>/forward-notes-1</id>
    <content type="html">&lt;h1 id="forward-2-notes---part-1"&gt;Forward 2 Notes - Part 1&lt;/h1&gt;

&lt;figure&gt;
  &lt;img src="http://kenhirakawa.com/assets/images/foward-banner.jpg" /&gt;
  &lt;figcaption&gt;Forward 2 at Hyatt Regency San Francisco&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;I had the opportunity to attend &lt;a href="http://forwardjs.com/"&gt;Forward 2&lt;/a&gt;, a conference dedicated to forward-looking, experimental JS and front-end technologies.&lt;/p&gt;

&lt;p&gt;Overall, I thought it was a well organized conference with awesome speakers.
I’m quite exhausted from trying to distill all the information, but there were a ton of useful and practical tidbits that I look forward to putting into practice.&lt;/p&gt;

&lt;p&gt;Some of my highlights were:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The Better Parts, by Douglas Crockford - Got me re-thinking about JavaScript best practices&lt;/li&gt;
  &lt;li&gt;Love &amp;amp; Node, by Sarah Groff-Palermo - Inspirational talk that got me excited about hardware hacking again.&lt;/li&gt;
  &lt;li&gt;We Will All Be Game Developers, by Hunter Loftis - Was completely blown away by Hunter Loftis’s crazy 3 week endeavor.  Also learned why we should steal best practices from the game dev community.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What I also have here are the &lt;em&gt;raw&lt;/em&gt; notes I took at the conference.  Some of it might not make much sense, but I think I got the gist of most of the talks.&lt;/p&gt;

&lt;p&gt;Here are my notes in order of talks I attended:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Keynote by Karolina Szczur and Sarah Groff-Palermo&lt;/li&gt;
  &lt;li&gt;The Better Parts, Douglas Crockford&lt;/li&gt;
  &lt;li&gt;What the… JavaScript?, Kyle Simpson&lt;/li&gt;
  &lt;li&gt;We Will All Be Game Programmers, Hunter Loftis&lt;/li&gt;
  &lt;li&gt;No More Tools, Karolina Szczur&lt;/li&gt;
  &lt;li&gt;Developing High Performance Websites and Modern Apps with JavaScript and HTML5, Doris Chen&lt;/li&gt;
  &lt;li&gt;Love &amp;amp; Node, Sarah Groff-Palermo&lt;/li&gt;
  &lt;li&gt;Developer Led Innovation, James Sacra&lt;/li&gt;
  &lt;li&gt;Choosing a JavaScript Framework, Pam Selle&lt;/li&gt;
  &lt;li&gt;Test-driven client-side apps, Pete Hodgson&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This post will have notes on the first three.  I will post the rest soon in Part 2.  Enjoy!&lt;/p&gt;

&lt;h2 id="key-note-speaker-1-karolina-szczur"&gt;Key Note Speaker 1: Karolina Szczur&lt;/h2&gt;

&lt;h3 id="how-to-build-better-communities"&gt;How to build better communities&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Building better communities is inseparably intertwined with building a better self&lt;/li&gt;
  &lt;li&gt;Practice empathy and gratitude.&lt;/li&gt;
  &lt;li&gt;Look around.  You don’t need street cred or to be famous to be able to help someone&lt;/li&gt;
  &lt;li&gt;Communicate compassionately&lt;/li&gt;
  &lt;li&gt;Life’s most persistent and urgent question is - What are you doing for others?
This question is important.&lt;/li&gt;
  &lt;li&gt;Community is a group responsibility.  There isn’t one person owning and managing
the community.  Making a better community is about mutual work.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="key-take-away"&gt;Key Take Away&lt;/h3&gt;

&lt;p&gt;Let’s make the community better by taking responsibility and owning it.&lt;/p&gt;

&lt;h2 id="key-note-speaker-2-sarah-groff-palermo"&gt;Key Note Speaker 2: Sarah Groff-Palermo&lt;/h2&gt;

&lt;p&gt;Love + Node @supersgp&lt;/p&gt;

&lt;p&gt;Sarah is a designer and artist who loves tech.  Her talk was about IoT and an imminent doomsday, e.g. forfeiting power and control.&lt;/p&gt;

&lt;h3 id="our-relationship-with-hardware"&gt;Our Relationship with Hardware&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;How and why we engage with hardware and the Internet of Things.  We can make prototypes of lovable, relatable objects&lt;/li&gt;
  &lt;li&gt;If researchers figure out what technology can do, artists can reveal how and ask what it’s for&lt;/li&gt;
  &lt;li&gt;Art gets recognized as art by people standing up and saying, as many times as they have to, “I made this, and it matters.”&lt;/li&gt;
  &lt;li&gt;Sarah showed two gifs: a kill bot and a beach bot.  We have to make good decisions so that we make a world of beach bots (dancing) and not kill bots&lt;/li&gt;
  &lt;li&gt;We need to think about our relationship with objects, how we can love them.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="cocerns-of-iot"&gt;Cocerns of IoT&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;IoT == dystopia&lt;/li&gt;
  &lt;li&gt;$1.4 billion in bio-sensing, 60 million fitness trackers.  Are we giving away our privacy?  Are we giving too much power to the government?&lt;/li&gt;
  &lt;li&gt;The Internet of Things raises substantial concerns about how we can keep control of our lives.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="sarahs-theory"&gt;Sarah’s theory&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;We can make things networked, controllable, and monitored, but what are we going to do it… for?&lt;/li&gt;
  &lt;li&gt;What if we can make objects be loyal, have sentiment?  What if objects were assistive technology (chairs that hug back)?&lt;/li&gt;
  &lt;li&gt;We might actually be saving the world by doing this&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="relatables"&gt;Relatables&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Allow objects to speak, have character, and be lovable&lt;/li&gt;
  &lt;li&gt;Objects can not only present information in a way that makes you act, but also talk to you.  For example, a milk carton that speaks about the cows and how happy they are…&lt;/li&gt;
  &lt;li&gt;Instead of creating monsters like Frankenstein, we should create relatables that we can love&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="anxeities"&gt;Anxeities&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Technophobia - lack of understanding of technology.  Deep concerns. By encouraging people to see objects as lovable, we can get them to understand technology&lt;/li&gt;
  &lt;li&gt;Anthromorphism - Unfulfilling promises.  We might be expecting a level of personal connection with robots, but they might not be able to deliver and so we feel like we are let down.&lt;/li&gt;
  &lt;li&gt;Corporate overreach.  Are we giving too much power away? We now have surveillance tools for everything - from pets, to kids, to even yourself.  &lt;a href="http://pavlok.com/"&gt;Pavlok&lt;/a&gt; shocks you if you are doing something you don’t want to be doing.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="examples"&gt;Examples&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Sarah showed examples of some relatables&lt;/li&gt;
  &lt;li&gt;Neo-Neglect - books that tell you when they feel lonely.  If a book isn’t opened and read for a while, it “dies”.  Made by putting copper wires inside books to detect when a book is opened or closed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="key-take-away-1"&gt;Key Take Away&lt;/h3&gt;

&lt;p&gt;If we make objects to be relatable, lovable, and caring, maybe we won’t end up with doomsday.&lt;/p&gt;

&lt;h2 id="the-better-parts-douglas-crockford"&gt;The Better Parts, Douglas Crockford&lt;/h2&gt;

&lt;p&gt;Author of JavaScript: The Good Parts&lt;/p&gt;

&lt;h3 id="the-good-parts"&gt;The Good Parts&lt;/h3&gt;
&lt;p&gt;Using programming languages more effectively, and using that experience to create and select better programming languages&lt;/p&gt;

&lt;p&gt;“It seems that perfection is attained not when there is no more to add, but when there is nothing more to subtract” - This applies to programming languages.  Removing the bad parts.&lt;/p&gt;

&lt;p&gt;Principles of Good Parts&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If a feature is sometimes useful and sometimes dangerous, and there is a better option, then always use the better option&lt;/li&gt;
  &lt;li&gt;We are not paid to use every feature of the language.  We are paid to write programs that work well and are free of error&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Arguments Against Good Parts&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;“Just a matter of opinion.”  DC - No, it’s not, if it makes you write better code&lt;/li&gt;
  &lt;li&gt;“Every feature is an essential tool.”&lt;/li&gt;
  &lt;li&gt;“I have a right to use every feature.”  DC - “I have the right to write crap” (lol).  We have the responsibility to write good code&lt;/li&gt;
  &lt;li&gt;“I need the freedom to express myself.”&lt;/li&gt;
  &lt;li&gt;“I need to reduce my keystrokes.”  DC - We don’t spend the majority of our time writing code, but rather in moments like this: “Oh, what have I done… why does this not work?”  If we can make a program error free, we’ll have less to type because there’s less to fix.&lt;/li&gt;
  &lt;li&gt;“There was a good reason why that feature is there.”&lt;/li&gt;
  &lt;li&gt;“I would never make a mistake with a dangerous feature.”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The origin of triple equals: the person who built the coercing double equals realized that it was bad and proposed a better solution, but the committee turned it down due to legacy issues, and the fix was instead introduced as ===.&lt;/p&gt;

&lt;h3 id="danger-driven-development"&gt;Danger Driven Development&lt;/h3&gt;

&lt;p&gt;Software development is difficult to schedule because it has two parts:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The time it takes to write the code.&lt;/li&gt;
  &lt;li&gt;The time it takes to make the code work right.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We want to make (2) as small as possible.  We should take the time to write correct code.&lt;/p&gt;

&lt;h3 id="new-good-parts-in-es6"&gt;New Good Parts in ES6&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Proper tail calls - JS becomes a real functional programming language.  DC’s favorite feature&lt;/li&gt;
  &lt;li&gt;The ellipsis (…), a.k.a. the rest / spread operator&lt;/li&gt;
  &lt;li&gt;Modules.  Much better than the crappy require stuff&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;let&lt;/code&gt; is the new &lt;code&gt;var&lt;/code&gt;.  A lot of people come from Java, and the syntaxes of Java and JS are similar.  BUT this causes confusion because JS works differently than Java.  Using &lt;code&gt;let&lt;/code&gt; removes the confusion.&lt;/li&gt;
  &lt;li&gt;Weak maps.  It’s the way objects should always have worked.  It was a mistake to make all keys strings.  With weak maps, keys can be anything.  They introduced weak maps because changing how objects work would break legacy code…&lt;/li&gt;
  &lt;li&gt;DC does not think generators are worth the effort&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="bad-part-in-es6"&gt;Bad Part in ES6&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;CLASSES.  Java people wanted classes, but what ES6 gives them is a “class” that is just syntactic sugar.  The way to think in JS is to think in FP.&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;(name) =&amp;gt; {id: name}&lt;/code&gt;, for those people who hate to write ‘function’… because it’s too much to type.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Languages get better by making them smaller - by removing the imperfections.&lt;/p&gt;

&lt;h3 id="good-parts-reconsidered"&gt;Good Parts Reconsidered&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;I stopped using &lt;code&gt;new&lt;/code&gt; years ago.  Use &lt;code&gt;Object.create&lt;/code&gt; instead.&lt;/li&gt;
  &lt;li&gt;Now I have stopped using &lt;code&gt;Object.create&lt;/code&gt; too.&lt;/li&gt;
  &lt;li&gt;I have stopped using &lt;code&gt;this&lt;/code&gt;.  See &lt;a href="http://adsafe.org"&gt;ADSafe.org&lt;/a&gt;.  Removing &lt;code&gt;this&lt;/code&gt; makes things much easier and safer.  The problem with &lt;code&gt;this&lt;/code&gt; is that it can be bound to the window object, which compromises security.  DC’s solution is to make &lt;code&gt;this&lt;/code&gt; illegal.&lt;/li&gt;
  &lt;li&gt;I stopped using &lt;code&gt;null&lt;/code&gt;.  &lt;code&gt;null&lt;/code&gt; and &lt;code&gt;undefined&lt;/code&gt; are not the same.  &lt;code&gt;undefined&lt;/code&gt; is better because it’s the value the language itself uses.  There’s also the &lt;code&gt;typeof&lt;/code&gt; bug: &lt;code&gt;typeof null&lt;/code&gt; returns &lt;code&gt;"object"&lt;/code&gt;, which is wrong.&lt;/li&gt;
  &lt;li&gt;I stopped using falsiness.  To check if a value is undefined, I write &lt;code&gt;value === undefined&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
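
&lt;p&gt;The falsiness point is easy to see in a sketch: &lt;code&gt;0&lt;/code&gt;, the empty string, and &lt;code&gt;NaN&lt;/code&gt; are all falsy, so a truthy check conflates real values with missing ones.&lt;/p&gt;

```javascript
// A truthy check conflates falsy-but-valid values with missing ones.
function isMissingTruthy(value) {
  return !value;
}

// The explicit check only fires for the one case we mean.
function isMissing(value) {
  return value === undefined;
}

isMissingTruthy(0);   // true - wrong: 0 is a real value
isMissing(0);         // false
isMissing(undefined); // true
```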

&lt;h3 id="loops-reconsidered"&gt;Loops Reconsidered&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;I don’t use &lt;code&gt;for&lt;/code&gt;.  I now use &lt;code&gt;array.forEach&lt;/code&gt; and its many sisters.&lt;/li&gt;
  &lt;li&gt;I don’t use &lt;code&gt;for in&lt;/code&gt;.  I now use &lt;code&gt;Object.keys(object).forEach&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;With ES6, I will stop using &lt;code&gt;while&lt;/code&gt; because we’ll have proper tail calls.  There’s no longer a penalty for recursion.&lt;/li&gt;
&lt;/ul&gt;
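
&lt;p&gt;The same substitutions, side by side (a sketch):&lt;/p&gt;

```javascript
// array.forEach instead of a for loop
var doubled = [];
[1, 2, 3].forEach(function (n) {
  doubled.push(n * 2);
});
// doubled is [2, 4, 6]

// Object.keys(object).forEach instead of for-in
// (own keys only - no inherited properties sneaking in)
var scores = { ken: 3, pam: 5 };
var names = [];
Object.keys(scores).forEach(function (name) {
  names.push(name);
});
// names is ["ken", "pam"]
```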

&lt;h3 id="the-next-language"&gt;The Next Language&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;If JS is the last language, that will be really sad.&lt;/li&gt;
  &lt;li&gt;Programmers are as emotional and irrational as normal people.  It took a generation to agree that high-level languages were a good idea.  It took a generation to agree that goto was a bad idea.  It took a generation to agree that objects were a good idea.  It took two generations to agree that lambdas were a good idea.&lt;/li&gt;
  &lt;li&gt;Once stuff gets in the brain wrong, it rarely gets fixed.  So we need to wait a whole generation for it to die and then get a consensus on the good things.&lt;/li&gt;
  &lt;li&gt;Things get discovered and only 20 years later, we move on.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Systems languages / Application languages&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;System languages - low level stuff.  C.  Go.&lt;/li&gt;
  &lt;li&gt;Application languages - Everything else.  We need more innovation in both sectors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Application languages&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Classical inheritance vs prototypal inheritance.  Most application languages are class-based.  The prototype has a lot of advantages over the class.&lt;/li&gt;
  &lt;li&gt;With classes, you need to build a classification taxonomy.  It is built at the time when we have the least understanding of the system, so we invariably get it wrong.  We then build the application on a broken hierarchy and taxonomy, which creates lots of friction.  Now we have to rip things apart and get things right, and it’s scary.  That doesn’t happen with prototypes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Prototypal inheritance:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Memory conservation - &lt;code&gt;Object.create&lt;/code&gt; (shallow copy) vs &lt;code&gt;Object.copy&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Confusion: own vs inherited properties&lt;/li&gt;
  &lt;li&gt;Retroactive heredity, using &lt;code&gt;__proto__&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Performance inhibiting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;JS is class free OOP, the best thing JS gave to the world.&lt;/p&gt;

&lt;h3 id="how-dc-will-make-objects"&gt;How DC will make objects&lt;/h3&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;constructor&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;member&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;other&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;other_constructor&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;spec&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="c1"&gt;// For inhertiance&lt;/span&gt;
      &lt;span class="nx"&gt;method&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// member, other, mehtod, spec&lt;/span&gt;
      &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;freeze&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="c1"&gt;//Freezing so that the object is immutable.&lt;/span&gt;
      &lt;span class="nx"&gt;method&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;other&lt;/span&gt;
      &lt;span class="c1"&gt;// All the other public methods&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3 id="kens-thoughts"&gt;Ken’s Thoughts&lt;/h3&gt;

&lt;p&gt;Seems like FP and immutability is a big theme across this conference and the JS community.  Are we seeing the next paradigm shift?&lt;/p&gt;

&lt;h2 id="what-the-javascript-kyle-simpson"&gt;What The… Javascript, Kyle Simpson&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;Sartre: “Hell is other people.”&lt;/li&gt;
  &lt;li&gt;Devs: “Hell is other people’s code.”&lt;/li&gt;
  &lt;li&gt;Kyle: “Hell is not understanding my own code.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pulled out a lot of crazy WTFs from JavaScript, like the WAT video.  But WAT is nothing compared to what he found in the dark corners of the JS world. &lt;a href="http://youdontknowjs.com"&gt;youdontknowjs.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Definition of WTF: not just funny, not a bug, not just ugly, not cross-browser quirks, but inconsistent, incoherent, and unreasonable code that is part of the spec.  Most WTFs come from a lack of understanding of the reasoning behind them.&lt;/p&gt;

&lt;p&gt;Kyle is not bashing TC39, and is usually in the unconventional camp of defending JS.  But today he will defect and hate on the language.&lt;/p&gt;

&lt;p&gt;Yes, there was a lot of swearing in this talk.&lt;/p&gt;

&lt;h3 id="warmups"&gt;Warmups&lt;/h3&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MAX_VALUE&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// true;&lt;/span&gt;
&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MIN_VALUE&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// false;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="c1"&gt;// true&lt;/span&gt;
&lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="c1"&gt;// false due to coercion&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;toFixed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// error&lt;/span&gt;
&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;toFixed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// error&lt;/span&gt;
&lt;span class="mi"&gt;42&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;toFixed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// &amp;quot;42.00&amp;quot;&lt;/span&gt;
&lt;span class="mi"&gt;42&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;toFixed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// &amp;quot;42.00&amp;quot;&lt;/span&gt;
&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;..&lt;/span&gt;&lt;span class="nx"&gt;toFixed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// &amp;quot;42.00&amp;quot;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3 id="coercion"&gt;Coercion&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Kyle: Coercion… is awesome…! (He’s defending coercion here, not being sarcastic)&lt;/li&gt;
  &lt;li&gt;If you read the spec, the above is rational because it follows the spec and makes sense.&lt;/li&gt;
  &lt;li&gt;Javascript tries to do a best guess of what you were trying to do&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="c1"&gt;// Not WTF because of coercion&lt;/span&gt;
&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="c1"&gt;// true&lt;/span&gt;
&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;{};&lt;/span&gt; &lt;span class="c1"&gt;// &amp;quot;[object Object]&amp;quot;&lt;/span&gt;
&lt;span class="p"&gt;{}&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt; &lt;span class="c1"&gt;// 0&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="c1"&gt;// Not WTF, but strageness due to historical reasons&lt;/span&gt;
&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// 0&lt;/span&gt;
&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot; &amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// 0&lt;/span&gt;
&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;0&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// 0&lt;/span&gt;
&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;-0&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// -0&lt;/span&gt;

&lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// -0&lt;/span&gt;
&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;- 0&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// error because its not a valid sytax.&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Again, this is according to spec.&lt;/p&gt;

&lt;h3 id="now-for-crazy-coercion---wtfs"&gt;Now for crazy coercion - WTFs&lt;/h3&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;0.&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// 0&lt;/span&gt;
&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;.0&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// 0&lt;/span&gt;
&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;.&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// NaN - WTF!&lt;/span&gt;

&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;0O0&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// 0 - WTF&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Different rules for undefined and null&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// NaN&lt;/span&gt;
&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;// 0&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Strings&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// &amp;quot;null&amp;quot;&lt;/span&gt;
&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt; &lt;span class="c1"&gt;// &amp;quot;&amp;quot; - the null just goes away... Wtf!&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Symbols in ES6&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Symbol&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;that&amp;#39;s cool&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Symbol (that&amp;#39;s cool)&lt;/span&gt;

&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Symbol (that&amp;#39;s cool)&lt;/span&gt;

&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;&amp;quot;&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// TypeError!  Wasn&amp;#39;t JS all about guessing what you wanted to do?&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The real WTF is that JavaScript is inconsistent in how it handles these cases.  Most of the time it tries to guess what you meant, but sometimes it just throws an error.&lt;/p&gt;

&lt;h3 id="switch-default-break"&gt;Switch default break&lt;/h3&gt;

&lt;p&gt;The spec allows &lt;code&gt;default&lt;/code&gt; to appear anywhere in a &lt;code&gt;switch&lt;/code&gt;, but depending on where you put it, the fall-through behavior can be a WTF.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="k"&gt;switch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;default&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;10&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In the above case, no case matches 42, so execution jumps to &lt;code&gt;default&lt;/code&gt;.  But because &lt;code&gt;default&lt;/code&gt; has no &lt;code&gt;break;&lt;/code&gt;, it falls through into &lt;code&gt;case 10&lt;/code&gt; and logs “10” as well.&lt;/p&gt;

&lt;h3 id="finally"&gt;Finally&lt;/h3&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The finally &lt;code&gt;return&lt;/code&gt; overrides the &lt;code&gt;return&lt;/code&gt; in the try.  We lost 2!&lt;/p&gt;
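&lt;p&gt;The snippet above, wrapped in a function (my own framing) so it can actually run:&lt;/p&gt;

```javascript
function f() {
  try {
    return 2;
  } finally {
    // this return wins -- the try's return value is silently discarded
    return 3;
  }
}

console.log(f()); // 3
```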

&lt;h3 id="temporal-dead-zone--tdz"&gt;Temporal Dead Zone.  TDZ&lt;/h3&gt;

&lt;p&gt;TDZ: a variable declared with &lt;code&gt;let&lt;/code&gt; exists from the top of its block, but touching it before its declaration line is an error.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// undefined&lt;/span&gt;
  &lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// TDZ error&lt;/span&gt;

  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;WTF because &lt;code&gt;typeof&lt;/code&gt; used to be safe on any identifier, even an undeclared one.  Again, inconsistencies.&lt;/p&gt;

&lt;h3 id="classes"&gt;“Class”es&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;super&lt;/code&gt; is statically bound, unlike &lt;code&gt;this&lt;/code&gt;, which is dynamically bound.
Using &lt;code&gt;super&lt;/code&gt; doesn’t play well with the dynamism that JS is all about.  So now we’re left in a weird state with both dynamic… and static… binding, and it’s a big WTF.&lt;/p&gt;
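&lt;p&gt;A minimal sketch of the static binding (the class names here are mine, not from the talk):&lt;/p&gt;

```javascript
class A {
  hello() { return "A.hello"; }
}

class B extends A {
  hello() {
    // super.hello is resolved against A when this method is defined,
    // not against the receiver at call time the way `this` would be
    return "via super: " + super.hello();
  }
}

console.log(new B().hello()); // "via super: A.hello"
```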

&lt;h3 id="key-take-away-2"&gt;Key Take Away&lt;/h3&gt;

&lt;p&gt;JavaScript has its good parts, but there are WTFs baked into the spec itself.&lt;/p&gt;

&lt;p&gt;WTF!&lt;/p&gt;

&lt;p&gt;TIL, &lt;code&gt;__proto__&lt;/code&gt; is pronounced dunderproto!&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Nap Time</title>
    <link href="http://kenhirakawa.com/nap-time"/>
    <updated>2014-02-17T00:00:00-08:00</updated>
    <id>/nap-time</id>
    <content type="html">&lt;h1 id="nap-time"&gt;Nap Time&lt;/h1&gt;

&lt;p&gt;Lilypad Arduinos are awesome because they are wearable.  I bought one almost 8 months ago, but never ended up doing anything with it other than &lt;a href="http://kenhirakawa.com/one-way-rf-communication-with-arduino-and-node/"&gt;build an RFDuino&lt;/a&gt;.  To do it justice (and keep it from collecting dust), I decided to build a simple eye mask with a built-in alarm clock for power napping.  After all, research has shown many times over that power naps can &lt;a href="http://io9.com/the-science-behind-power-naps-and-why-theyre-so-damne-1401366016"&gt;boost productivity, creativity, well-being, learning, and much more&lt;/a&gt;.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/nap-time-final.jpg" /&gt;
&lt;figcaption&gt;My poor job at sewing&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;Here’s how it works.  Put on the eye mask, nap for 10 minutes, and let the mask vibrate to wake you up.  It has a tri-color LED that will start pulsing while it’s on. To turn off the alarm, click the button board or simply turn off the Lilypad Arduino altogether.  It’s dead simple, yet super effective.&lt;/p&gt;

&lt;h2 id="i-want-one"&gt;I Want One&lt;/h2&gt;

&lt;p&gt;Here’s what you’ll need.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Lilypad Arduino&lt;/li&gt;
  &lt;li&gt;Lilypad Buzzer&lt;/li&gt;
  &lt;li&gt;Lilypad Tri-color LED&lt;/li&gt;
  &lt;li&gt;Lilypad Button Board&lt;/li&gt;
  &lt;li&gt;LiPo Battery&lt;/li&gt;
  &lt;li&gt;Conductive threads and a needle&lt;/li&gt;
  &lt;li&gt;Nice, comfy eye mask&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can get all items except the eye mask by purchasing a &lt;a href="https://www.sparkfun.com/products/11261"&gt;ProtoSnap Development Kit&lt;/a&gt;.  The eye mask I purchased can be found on &lt;a href="http://www.amazon.com/Dream-Zone--Earth-Therapeutics-Sleep/dp/B000JE2C9Y/ref=sr_1_3?ie=UTF8&amp;amp;qid=1383798238&amp;amp;sr=8-3&amp;amp;keywords=eyemask"&gt;Amazon&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Assuming you have your &lt;a href="http://arduino.cc/en/Guide/HomePage#.UwKTLEJdUzF"&gt;Arduino development environment set&lt;/a&gt;, the first thing you’ll want to do is install &lt;a href="http://playground.arduino.cc/Code/Timer#Installation"&gt;Timer&lt;/a&gt;. It’s a simple timing utility library that our code will use to keep track of time. Installation instructions can be found &lt;a href="http://playground.arduino.cc/Code/Timer#Installation"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Next, grab the .ino source file from my &lt;a href="https://github.com/khirakawa/power-napper/blob/master/src/nap.ino"&gt;github repo&lt;/a&gt; and upload it to your Lilypad.&lt;/p&gt;

&lt;p&gt;Finally, wire your components using the schematic below.  I recommend testing the connections with alligator clips first (don’t sew yet).&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/nap-time-lilypad-schematic.png" /&gt;
&lt;figcaption&gt;Schematic of Lilypad Arduino connected to a buzzer, tri-color LED, and a button&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;When you turn on your Lilypad you should immediately see the LED glow green.  After 10 minutes the buzzer should start to vibrate.&lt;/p&gt;

&lt;p&gt;Once you know that it’s working, sew your components onto your eye mask. Make sure the runs of conductive thread don’t touch each other, or you’ll be left with a short circuit and a big, big frown.&lt;/p&gt;

&lt;p&gt;That’s it. Now go take a nap!&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Selenium - Unable to add cookie to page</title>
    <link href="http://kenhirakawa.com/selenium-unable-to-add-cookie-to-page"/>
    <updated>2013-10-04T00:00:00-07:00</updated>
    <id>/selenium-unable-to-add-cookie-to-page</id>
    <content type="html">&lt;h1 id="selenium---unable-to-add-cookie-to-page"&gt;Selenium - Unable to add cookie to page&lt;/h1&gt;

&lt;p&gt;Selenium can be a pain in the butt sometimes.  If you are having trouble running Selenium tests on IE9 because of an &lt;code&gt;Unable to add cookie to page&lt;/code&gt; error, try the following:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Go open Internet Explorer&lt;/li&gt;
  &lt;li&gt;Open internet options&lt;/li&gt;
  &lt;li&gt;Go to the Programs tab&lt;/li&gt;
  &lt;li&gt;Amass all the courage you have to set IE as default browser&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That should hopefully fix it.&lt;/p&gt;

&lt;p&gt;The root cause is most likely related to this &lt;a href="https://code.google.com/p/selenium/issues/detail?id=4307"&gt;closed bug&lt;/a&gt;. Essentially, Selenium looks for a registry key to determine whether the page is HTML.  If it fails to find the registry key (for some reason), Selenium spits out the error above.&lt;/p&gt;

&lt;p&gt;Note: I’m running Selenium Server 2.32 with IEDriver 2.33.&lt;/p&gt;

&lt;p&gt;FWIW.&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Load Disqus on localhost</title>
    <link href="http://kenhirakawa.com/load-disqus-on-localhost"/>
    <updated>2013-07-14T00:00:00-07:00</updated>
    <id>/load-disqus-on-localhost</id>
    <content type="html">&lt;h1 id="load-disqus-on-localhost"&gt;Load Disqus on localhost&lt;/h1&gt;

&lt;p&gt;Do you get a &lt;code&gt;We were unable to load Disqus&lt;/code&gt; error when loading Disqus on localhost?  The &lt;a href="https://www.google.com/search?q=load+disqus+on+localhost"&gt;top Google results&lt;/a&gt; will tell you that you need to enable the following flag in &lt;code&gt;&amp;lt;head&amp;gt;&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;disqus_developer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// this would set it to developer mode&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Unfortunately, I had no luck with that flag.  I can’t find it in any of Disqus’s documentation either, so my bet is that it has been deprecated.&lt;/p&gt;

&lt;p&gt;You can instead set the &lt;code&gt;disqus_url&lt;/code&gt; config param to point to the correct site URL.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="c1"&gt;// jekyll syntax.&lt;/span&gt;
&lt;span class="c1"&gt;// This evaluates to http://kenhirakawa.com/load-disqus-on-localhost&lt;/span&gt;
&lt;span class="c1"&gt;// for this page&lt;/span&gt;
&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;disqus_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;{{ site.url }}{{ page.url }}&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now when you load your site via localhost, Disqus will match the &lt;code&gt;disqus_url&lt;/code&gt; param with your registered site URL.  If they match, Disqus will load. If you leave the parameter undefined, the script falls back to &lt;code&gt;window.location.href&lt;/code&gt;, which on your dev machine points at localhost.  Obviously, localhost does not match your registered site URL, which is why Disqus didn’t load in the first place.&lt;/p&gt;
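&lt;p&gt;In other words, the loader’s choice can be sketched like this (illustrative only; &lt;code&gt;resolveDisqusUrl&lt;/code&gt; is my own name, not Disqus code):&lt;/p&gt;

```javascript
// Hypothetical sketch of the documented fallback behavior.
function resolveDisqusUrl(configuredUrl, locationHref) {
  // use disqus_url when it is defined, otherwise fall back to the page URL
  return (typeof configuredUrl !== "undefined") ? configuredUrl : locationHref;
}

console.log(resolveDisqusUrl(
  "http://kenhirakawa.com/load-disqus-on-localhost",
  "http://localhost:4000/load-disqus-on-localhost"
));
// the configured URL wins, so Disqus can match it against your registered site
```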

&lt;p&gt;One more thing that tripped me up was the website URL format under the Site Identity settings.  I had it set to &lt;code&gt;http://www.kenhirakawa.com&lt;/code&gt;, when it should have been just &lt;code&gt;http://kenhirakawa.com&lt;/code&gt;.  The syntax is documented &lt;a href="http://help.disqus.com/customer/portal/articles/472007-i-m-receiving-the-message-%22we-were-unable-to-load-disqus-%22"&gt;here&lt;/a&gt;, but I wish they’d just validate the field client-side.&lt;/p&gt;

&lt;p&gt;Hope this helps anyone having trouble loading Disqus on localhost!&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>One Way RF Communication with Arduino and Node</title>
    <link href="http://kenhirakawa.com/one-way-rf-communication-with-arduino-and-node"/>
    <updated>2013-06-29T00:00:00-07:00</updated>
    <id>/one-way-rf-communication-with-arduino-and-node</id>
    <content type="html">&lt;h1 id="one-way-rf-communication-with-arduino-and-node"&gt;One Way RF Communication with Arduino and Node&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Here I will document how I put together a node controlled RF receiver and transmitter using relatively inexpensive RF parts and &lt;a href="https://github.com/khirakawa/duino"&gt;a fork&lt;/a&gt; of &lt;a href="https://github.com/ecto/duino"&gt;ecto’s duino framework&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/rfduino-setup.jpg" /&gt;
&lt;figcaption&gt;Arduino UNO with WRL-10532 RF Link Receiver&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;For a recent side project, I needed to wirelessly send data from one Arduino to another, and have the receiver do some data processing via node.  I ended up using a pair of Wenshing RF receivers / transmitters to do the wireless communication as they were inexpensive and easy to use.  On the software side, I used a forked version of ecto’s duino framework to interface with the Arduinos and support RF communication.  If you’re in need of building something similar, read on!&lt;/p&gt;

&lt;h2 id="lets-build-it"&gt;Let’s Build It&lt;/h2&gt;

&lt;p&gt;First things first, I bought an &lt;a href="https://www.sparkfun.com/products/10532"&gt;RF link receiver&lt;/a&gt; and &lt;a href="https://www.sparkfun.com/products/10534"&gt;transmitter&lt;/a&gt; from SparkFun for a total cost of less than $10.  Here’s the schematic drawing of the receiver I wired up.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/rfduino-receiver-schematic.png" /&gt;
&lt;figcaption&gt;WRL-10532 RF Link Receiver wired to data pin 2 of Arduino&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;Here’s the schematic of the transmitter.  You can optionally attach a wire to the right-most pin to extend the antenna for more reliable transmission.  Same goes for the receiver above.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/rfduino-transmitter-schematic.png" /&gt;
&lt;figcaption&gt;WRL-10534 RF Link Transmitter wired to data pin 3 of Lilypad Arduino&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2 id="rfduino"&gt;[RF]Duino&lt;/h2&gt;

&lt;p&gt;For all my Arduino projects, I’ve been using ecto’s &lt;a href="https://github.com/ecto/duino"&gt;duino framework&lt;/a&gt;.  It’s plain simple and easy to use.  Unfortunately, it doesn’t support RF communication, so I decided to fork it and write my own &lt;a href="https://github.com/khirakawa/duino"&gt;library&lt;/a&gt;.  This library supports both receiving and transmitting data via &lt;code&gt;RFReceiver&lt;/code&gt; and &lt;code&gt;RFTransmitter&lt;/code&gt; classes.&lt;/p&gt;

&lt;p&gt;The first thing you’ll want to do is install the framework as a node module to your project (assuming, you already have a node program created):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm install git://github.com/khirakawa/duino.git&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Note that this flavor of duino replaces the servo module with the RF modules, so if you’re looking to use both, my fork won’t do.&lt;/p&gt;

&lt;p&gt;Next, upload the framework’s Arduino sketch located at &lt;code&gt;node_modules/duino/src/du/du.ino&lt;/code&gt; to your Arduino.  The same sketch should be uploaded to both your receiver board and transmitter board.  This program issues commands to the RF receiver / transmitter on behalf of your node app.&lt;/p&gt;

&lt;p&gt;That’s it! Preparation done.&lt;/p&gt;

&lt;p&gt;You can now transmit data by doing:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;arduino&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;duino&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nx"&gt;board&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;arduino&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Board&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="nx"&gt;board&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;ready&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(){&lt;/span&gt;

  &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;rfTransmitter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;arduino&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;RFTransmitter&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="nx"&gt;board&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;board&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;pin&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;03&amp;#39;&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nx"&gt;rfTransmitter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;hey there delilah&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;And receive it with:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;arduino&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;duino&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nx"&gt;board&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;arduino&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Board&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="nx"&gt;board&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;ready&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(){&lt;/span&gt;

  &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;rfReceiver&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;arduino&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;RFReceiver&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="nx"&gt;board&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;board&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;pin&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;02&amp;#39;&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nx"&gt;rfReceiver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;read&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;data&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The library does impose a hard limit of 26 characters for a single transmission.  If you need to send more, simply call &lt;code&gt;rfTransmitter.send&lt;/code&gt; multiple times, sequentially.
For more information on other available commands, consult ecto’s &lt;a href="https://github.com/ecto/duino#libraries"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
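&lt;p&gt;If you do need to send longer messages, a small helper (my own, not part of the fork) can do the splitting:&lt;/p&gt;

```javascript
// Splits a message into 26-character pieces and sends them in order,
// staying under the library's per-transmission limit.
function sendInChunks(transmitter, message) {
  var rest = message;
  while (rest.length) {
    transmitter.send(rest.slice(0, 26));
    rest = rest.slice(26);
  }
}
```

&lt;p&gt;Inside the &lt;code&gt;ready&lt;/code&gt; handler you’d call it as &lt;code&gt;sendInChunks(rfTransmitter, longMessage)&lt;/code&gt;.&lt;/p&gt;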

&lt;h2 id="next-steps"&gt;Next Steps&lt;/h2&gt;

&lt;p&gt;I had to remove the servo module from duino because the &lt;code&gt;Servo.h&lt;/code&gt; include in the &lt;code&gt;du.ino&lt;/code&gt; sketch conflicted with the &lt;code&gt;VirtualWire.h&lt;/code&gt; library used for RF communication.  Future versions will have this fixed.&lt;/p&gt;

&lt;p&gt;If you need an even cheaper, smaller, and more reliable wireless Arduino board, I would highly suggest purchasing an &lt;a href="http://www.rfduino.com/shop.html"&gt;RFDuino&lt;/a&gt; made by OpenSourceRF.  It’s quite a remarkable piece of technology and opens up a ton of new possibilities.  I can’t wait for my RFDuino to arrive!  Until then though, what I’ve outlined above will suffice =).&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Working Memory</title>
    <link href="http://kenhirakawa.com/working-memory"/>
    <updated>2013-05-07T00:00:00-07:00</updated>
    <id>/working-memory</id>
    <content type="html">&lt;h1 id="working-memory"&gt;Working Memory&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Research and Application of Working Memory and the Effects of Anxiety&lt;/em&gt;&lt;/p&gt;

&lt;h2 id="the-working-memory-model"&gt;The Working Memory Model&lt;/h2&gt;

&lt;p&gt;Working memory failures happen relatively frequently for everyone &lt;a href="#Schacter2001"&gt;[1]&lt;/a&gt;.  For example, people may forget the PIN they created just moments ago for a bank account.  More critically, nuclear plant operators may fail to follow critical procedures under life-threatening, emergency situations.  Such incidents can be attributed to factors such as age, psychological stressors, and emotional anxiety that hinder working memory.  It is crucial to understand how working memory functions to better design interfaces that reduce cognitive load and work around the limitations of working memory.&lt;/p&gt;

&lt;p&gt;Working memory is a limited-capacity system that maintains and stores information in the short term to support complex cognitive tasks such as learning, reasoning, and language comprehension.  In Baddeley’s model, working memory is divided into four subsystems: the central executive, the visuospatial sketchpad, the phonological loop, and the episodic buffer &lt;a href="#Baddeley2003"&gt;[2]&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id="central-executive"&gt;Central Executive&lt;/h3&gt;

&lt;p&gt;Schneider and Detweiler suggest that concurrent storage and processing is only one aspect of working memory; the prime function of working memory is the coordination and allocation of resources by the central executive &lt;a href="#Schneider1988"&gt;[3]&lt;/a&gt;.  The central executive is responsible for focusing, dividing, and switching attention and resources to and from its slave subsystems, along with the need to bridge working memory with long-term memory &lt;a href="#Baddeley2003"&gt;[2]&lt;/a&gt;.  Miyake et al. propose that the central executive fulfills three basic functions &lt;a href="#Miyake2000"&gt;[4]&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Inhibition - “one’s ability to deliberately inhibit dominant, automatic, or prepotent responses when necessary” &lt;a href="#Miyake2000"&gt;[4, P. 57]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Shifting - rapid, seamless shifting between multiple, concurrent tasks, operations or mental sets&lt;/li&gt;
  &lt;li&gt;Updating - refreshing and reconstructing working memory representations&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The two storage systems from which the central executive coordinates information are the visuospatial sketchpad and the phonological loop.&lt;/p&gt;

&lt;h3 id="visuospatial-sketchpad"&gt;Visuospatial Sketchpad&lt;/h3&gt;

&lt;p&gt;The visuospatial sketchpad is important for many fields, such as architecture and engineering, where visuospatial imagery is key to success.  Similarly, the visuospatial components of working memory have played a critical role in scientific discoveries, such as the discovery of the general theory of relativity by Einstein &lt;a href="#Ghisilen1952"&gt;[5]&lt;/a&gt;.  The visuospatial sketchpad holds encoded, spatial information that is captured from the visual sensory system or retrieved from long-term memory to produce a recollection of an image &lt;a href="#Wickens2004"&gt;[6, P. 129]&lt;/a&gt;.  For instance, architects may use their visuospatial sketchpad to create a mental synthesis of their design, and retain information regarding where each structural component is located.&lt;/p&gt;

&lt;h3 id="phonological-loop"&gt;Phonological Loop&lt;/h3&gt;

&lt;p&gt;The phonological loop is likely the simplest of the four subsystems and has been investigated the most extensively &lt;a href="#Baddeley1992"&gt;[7]&lt;/a&gt;.  This system has two components, a phonological store that holds acoustic or speech-based information for a brief period of time (1 to 2 seconds), and an articulatory control process.  Its main functions are twofold: 1) it uses subvocal repetition to maintain information within the phonological store, and 2) it uses subvocalization to register visually presented material in the phonological store.  The phonological store plays a crucial role in long-term phonological learning, especially in acquiring foreign languages &lt;a href="#Baddeley1992"&gt;[7]&lt;/a&gt;.  This subsystem is susceptible to a number of capacity and recall issues, including:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Acoustic similarity effect - recalling ordered items is more difficult if the items are similar in sound.  For instance, recalling “man, cap, can, map, mad” is more difficult than recalling “pit, day, cow, pen, rig” &lt;a href="#Baddeley1992"&gt;[7, P. 558]&lt;/a&gt;.  Recall is not affected by semantic similarity of items.&lt;/li&gt;
  &lt;li&gt;Word-length effect - longer words are more susceptible to recall failure because they take longer to rehearse.  Generally, humans can remember about as many words as they can say in 2 seconds &lt;a href="#Baddeley1992"&gt;[7, P. 558]&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id="episodic-buffer"&gt;Episodic Buffer&lt;/h3&gt;

&lt;p&gt;The fourth subsystem of the working memory model is the episodic buffer, which is responsible for connecting working memory with long-term memory.  The episodic buffer can be thought of as temporary storage that acts as a global workspace for the central executive.  Working memory retrieves information from long-term memory by “downloading” the information into the episodic buffer, and proceeds to manipulate and create new representations of the information, rather than simply activating the nodes in the semantic network &lt;a href="#Baddeley2003"&gt;[2]&lt;/a&gt;.  According to Baddeley, the episodic buffer is assumed to be accessible to conscious awareness, which then provides convenient bindings for different systems to integrate with working memory.&lt;/p&gt;

&lt;h2 id="limitations-of-working-memory---capacity"&gt;Limitations of Working Memory - Capacity&lt;/h2&gt;

&lt;p&gt;In his seminal research, Miller described the magical number of seven plus or minus two chunks as the capacity limit of working memory &lt;a href="#Miller1956"&gt;[8]&lt;/a&gt;.  Furthermore, Miller focused on the ability to effectively increase working memory capacity by intelligently “chunking” items together.  According to Simon, chunks are a collection of items that have strong ties with each other, but weaker ties with other, concurrent chunks in working memory storage &lt;a href="#Simon1974"&gt;[9]&lt;/a&gt;.  For example, the 12-letter sequence “fbicbsibmirs” can be grouped into four chunks: FBI, CBS, IBM, and IRS.  Later research suggests that the limit is substantially smaller, closer to four plus or minus one &lt;a href="#Cowan2001"&gt;[10]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The practical identification of chunks and forming of associations are dependent on the knowledge stored in long-term memory &lt;a href="#Cowan2001"&gt;[10]&lt;/a&gt;.  During the process of chunking, related concepts within long-term memory are evoked and activated.  Activated information is more readily accessible as well, and acts as a supplement to working memory in the form of the phonological buffer or the visuospatial sketchpad &lt;a href="#Cowan2001"&gt;[10]&lt;/a&gt;.  Thus chunks “can be more than just a conglomeration of a few items from the stimulus” &lt;a href="#Cowan2001"&gt;[10, P. 92]&lt;/a&gt;.  In fact, Gobet and Simon found that expert chess players compared to other chess players differ not in the quantity of chunks but in the size of these chunks &lt;a href="#Gobet1996"&gt;[11]&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id="duration"&gt;Duration&lt;/h3&gt;
&lt;p&gt;The capacity limits of working memory are closely tied to its time constraints.  Information in working memory will be lost if the chunks are not periodically reactivated through the process of maintenance rehearsal.  During this process, one thinks of the item over and over, thereby refreshing the information in storage.  For acoustic information, this rehearsal takes the form of a series of subvocal articulations in the phonological loop, and is subject to a number of factors that can impair rehearsal, including word-length effects and acoustic similarity effects.  According to Card, Moran, and Newell, the half-life of chunks is estimated to be approximately 70 seconds for one chunk and 7 seconds for three chunks &lt;a href="#Card1986"&gt;[12]&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id="attention"&gt;Attention&lt;/h3&gt;
&lt;p&gt;Working memory has a limited supply of attentional resources.  Processing and maintaining information rely on the same pool of resources, and no two concurrent tasks can be attended to simultaneously &lt;a href="#Barrouillet2007"&gt;[13]&lt;/a&gt;.  If attention is fully consumed by a concurrent task, the strength of the unattended information will suffer from time-based decay; reactivation is necessary before the memory trace is completely lost.  According to the time-based, resource-sharing model, “sharing of attention is achieved through a rapid and incessant switching of attention from processing to maintenance” &lt;a href="#Barrouillet2004"&gt;[14, P. 571]&lt;/a&gt;.  High cognitive load therefore occurs when tasks impede this switching by continuously demanding attentional resources, preventing the central executive from performing executive functions such as maintenance rehearsal.  On the other hand, cognitively less demanding tasks allow for frequent pauses and switching to other concurrent tasks &lt;a href="#Barrouillet2007"&gt;[13]&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="the-effects-of-anxiety"&gt;The Effects of Anxiety&lt;/h2&gt;

&lt;p&gt;Anxiety can have a disruptive effect on working memory.  Eysenck, Derakshan, and Santos define anxiety as an emotional and motivational state occurring in threatening situations, in which individuals are unable to instigate a clear behavioral pattern to remove the threat &lt;a href="#Eysenck2007"&gt;[15]&lt;/a&gt;.  Individuals often try to develop strategies to reduce anxiety in order to achieve a certain goal, and are oftentimes worried about the threat itself.  Anxiety does not always have adverse effects on cognitive performance; in fact, the right amount of anxiety can boost performance, whereas anything beyond that becomes detrimental &lt;a href="#Eysenck2007"&gt;[15]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Eysenck and Calvo developed the processing efficiency theory to explain the effects of anxiety on the effectiveness and efficiency of cognitive performance &lt;a href="#Eysenck1992"&gt;[16]&lt;/a&gt;.  According to the theory, worrisome thoughts impact the central executive by consuming cognitive resources of working memory, leaving fewer resources available for other concurrent tasks.  Tasks that place substantial demands on working memory storage and processing are the most vulnerable to anxiety-related performance impairments.  Detrimental effects are also expected in the phonological loop, because worry typically involves subvocal speech activity &lt;a href="#Rapee1993"&gt;[17]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Attentional control theory posits that anxiety disrupts the balance between two main attentional systems: a top-down, goal-driven system and a bottom-up, stimulus-driven system &lt;a href="#Eysenck2007"&gt;[15]&lt;/a&gt;.  The top-down system is influenced by metacognitive factors such as expectations and goals.  The bottom-up system controls attention through the detection of sensory events, especially those that are salient or unexpected.  Under stressful conditions when anxiety is high, the bottom-up, stimulus-driven system is given more weight.  Anxiety engages the bottom-up system through automatic processing of threat-related stimuli, decreasing the influence of its counterpart, the goal-directed system &lt;a href="#Eysenck2007"&gt;[15]&lt;/a&gt;.  Attention is then shifted from task-relevant processes to task-irrelevant ones (e.g. threat-related distractors, worrisome thoughts), impairing processing efficiency.&lt;/p&gt;

&lt;p&gt;Interestingly, a number of studies have shown high-anxiety groups outperforming low-anxiety or non-anxious groups &lt;a href="#Byrne1995"&gt;[18]&lt;/a&gt;.  According to attentional control theory, this is because anxiety impairs processing efficiency more than performance effectiveness.  Furthermore, worry can instigate motivation to lessen anxiety, mitigating potential performance impairments &lt;a href="#Eysenck2007"&gt;[15]&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="case-study"&gt;Case Study&lt;/h2&gt;
&lt;p&gt;A website may go offline due to a denial-of-service attack.  It is then the responsibility of system admins to detect the threat, locate the issue, and mitigate the attack.  The first tool admins may use is an IT infrastructure-monitoring tool such as Nagios.  From CPU usage statistics to network host summaries, Nagios provides all the information a system administrator needs.  However, during such incidents, system administrators may perform suboptimally due to anxiety, pressure, and high cognitive load.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/working-memory-nagios.png" /&gt;
&lt;figcaption&gt;Figure 1: Service status page of Nagios with four critical incidents&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;Nagios is designed so that any critical issues or warnings are fully apparent in the interface (figure 1).  When a user first logs on, a popup is shown with the number of critical incidents that need to be attended to (figure 2).  This gives the admin a clear path of action that aligns with the goal of removing the threat and, ultimately, putting the site back online.  The admin can also consult the network status map to view the network topology diagram.  This is helpful because it removes the need to mentally synthesize the network and its intricacies in the visuospatial sketchpad.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/working-memory-notifications.png" /&gt;
&lt;figcaption&gt;Figure 2: Notification of critical problems that popup immediately after login&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;During emergency situations, anxiety and pressure would cause the attentional resources of the central executive to be consumed by task-irrelevant, distracting thoughts, such as worries about losing customers and revenue.  The phonological loop would be occupied by the same thoughts in the form of subvocal speech.  Less time is spent on maintenance rehearsal, so the chunks in working memory, information crucial to diagnosing the problem, become more susceptible to time-based decay and loss.&lt;/p&gt;

&lt;p&gt;A number of improvements can be made to the interface.  During critical events, it is important to reduce the user’s memory load, and any method of offloading information from working memory is of value.  Nagios could provide a textbox column in the service status table in which the admin can take notes per incident.  This removes the need for the admin to retain all diagnostic information in working memory.  Comparing IP addresses under status information also incurs cognitive load because of the need to scan back and forth between strings.  A side-by-side comparison chart, similar to the output of programmatic diff tools, is recommended instead (figure 3).&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/working-memory-ip.png" /&gt;
&lt;figcaption&gt;Figure 3: A side-by-side comparison chart for IP addresses that is less taxing on working memory&lt;/figcaption&gt;
&lt;/figure&gt;
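&lt;p&gt;As a rough sketch of the comparison chart in figure 3, the alignment could be computed octet by octet, flagging only the positions that differ so the admin reads the discrepancy directly instead of holding both strings in working memory.  The code below is purely illustrative and not part of Nagios:&lt;/p&gt;

```python
# Hypothetical sketch of a side-by-side IP comparison: align two
# addresses octet by octet and flag only the octets that differ.
def diff_ips(ip_a, ip_b):
    rows = []
    for octet_a, octet_b in zip(ip_a.split("."), ip_b.split(".")):
        marker = "  " if octet_a == octet_b else "!="
        rows.append(f"{octet_a:>3}  {marker}  {octet_b:>3}")
    return "\n".join(rows)

# The differing octet (10 vs 11) is the only row flagged with "!=".
print(diff_ips("192.168.10.41", "192.168.11.41"))
```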

&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Working memory is the limited-capacity system of the human brain that acts as a cognitive workspace.  It bridges the bottom-up sensory systems with top-down cognitive systems and interfaces with long-term memory.  It is subject to various limitations and recall issues, including limits on capacity, duration, and attentional resources.  Designers should leverage this knowledge to build interfaces that reduce cognitive load and work around the limitations of working memory.&lt;/p&gt;

&lt;h2 id="references"&gt;References&lt;/h2&gt;

&lt;ol class="bibliography"&gt;&lt;li&gt;&lt;span id="Schacter2001"&gt;[1]D. L. Schacter, &lt;i&gt;The Seven Sins of Memory&lt;/i&gt;. 2001.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Baddeley2003"&gt;[2]A. Baddeley, “Working memory: looking back and looking forward.,” &lt;i&gt;Nature reviews. Neuroscience&lt;/i&gt;, vol. 4, no. 10, pp. 829–39, Oct. 2003.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Schneider1988"&gt;[3]W. Schneider and M. Detweiler, “A Connectionist / Control Architecture for Working Memory,” &lt;i&gt;\ldots OF LEARNING&amp;amp;MOTIVATION: V21&lt;/i&gt;, 1988.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Miyake2000"&gt;[4]a Miyake, N. P. Friedman, M. J. Emerson, a H. Witzki, a Howerter, and T. D. Wager, “The unity and diversity of executive functions and their contributions to complex ‘Frontal Lobe’ tasks: a latent variable analysis.,” &lt;i&gt;Cognitive psychology&lt;/i&gt;, vol. 41, no. 1, pp. 49–100, Aug. 2000.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Ghisilen1952"&gt;[5]B. Ghisilen, &lt;i&gt;The Creative Process&lt;/i&gt;. 1952.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Wickens2004"&gt;[6]C. D. Wickens, &lt;i&gt;An Introduction To Human Factors Engineering&lt;/i&gt;, Second Edi. Upper Saddle River, NJ: Pearson Education, Inc., 2004, p. 587.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Baddeley1992"&gt;[7]A. Baddeley, “Working Memory,” &lt;i&gt;Science&lt;/i&gt;, 1992.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Miller1956"&gt;[8]G. Miller, “The magical number seven, plus or minus two: some limits on our capacity for processing information,” &lt;i&gt;The psychological review&lt;/i&gt;, 1956.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Simon1974"&gt;[9]H. A. Simon, “How big is a chunk,” &lt;i&gt;Science&lt;/i&gt;, 1974.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Cowan2001"&gt;[10]N. Cowan, “The magical number 4 in short-term memory: a reconsideration of mental storage capacity.,” &lt;i&gt;The Behavioral and brain sciences&lt;/i&gt;, vol. 24, no. 1, pp. 87–114; discussion 114–85, Feb. 2001.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Gobet1996"&gt;[11]F. Gobet and H. A. Simon, “Templates in Chess Memory: A Mechanism for Recalling Several Boards,” 1996.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Card1986"&gt;[12]S. Card, T. Moran, and A. Newell, “The model human processor,” &lt;i&gt;Ariel&lt;/i&gt;, 1986.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Barrouillet2007"&gt;[13]P. Barrouillet, S. Bernardin, S. Portrat, E. Vergauwe, and V. Camos, “Time and cognitive load in working memory.,” &lt;i&gt;Journal of experimental psychology. Learning, memory, and cognition&lt;/i&gt;, vol. 33, no. 3, pp. 570–85, May 2007.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Barrouillet2004"&gt;[14]P. Barrouillet, S. Bernardin, and V. Camos, “Time constraints and resource sharing in adults’ working memory spans.,” &lt;i&gt;Journal of experimental psychology. General&lt;/i&gt;, vol. 133, no. 1, pp. 83–100, Mar. 2004.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Eysenck2007"&gt;[15]M. W. Eysenck, N. Derakshan, R. Santos, and M. G. Calvo, “Anxiety and cognitive performance: attentional control theory.,” &lt;i&gt;Emotion&lt;/i&gt;, vol. 7, no. 2, pp. 336–353, 2007.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Eysenck1992"&gt;[16]M. W. Eysenck and M. G. Calvo, “Anxiety and performance: The processing efficiency theory,” &lt;i&gt;Cognition &amp;amp; Emotion&lt;/i&gt;, vol. 6, no. 6, pp. 409–434, 1992.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Rapee1993"&gt;[17]R. M. Rapee, “The utilization of working memory by worry,” &lt;i&gt;Behaviour Research and Therapy&lt;/i&gt;, vol. 31, pp. 617–620, 1993.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Byrne1995"&gt;[18]A. Byrne and M. W. Eysenck, “Trait anxiety, anxious mood, and threat detection,” &lt;i&gt;Cognition &amp;amp; Emotion&lt;/i&gt;, vol. 9, no. 6, pp. 549–562, 1995.&lt;/span&gt;&lt;/li&gt;&lt;/ol&gt;
</content>
  </entry>
  <entry>
    <title>Metacognition</title>
    <link href="http://kenhirakawa.com/metacognition"/>
    <updated>2013-04-16T00:00:00-07:00</updated>
    <id>/metacognition</id>
    <content type="html">&lt;h1 id="metacognition"&gt;Metacognition&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Research and Application of Metacognition and Decision Making&lt;/em&gt;&lt;/p&gt;

&lt;h2 id="what-is-metacognition"&gt;What is Metacognition?&lt;/h2&gt;

&lt;p&gt;Metacognition refers to the second-order cognition of the mind; thoughts about thoughts, knowledge about knowledge, and self-reflection of abilities are all within the realm of metacognition.  Flavell defines metacognition as the “knowledge that takes as its object or regulates any aspect of any cognitive endeavor” &lt;a href="#Flavell1978"&gt;[1, P. 8]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Metacognition plays a critical role for humans because of its usefulness and adaptiveness &lt;a href="#Papleontiou-louca2003"&gt;[2]&lt;/a&gt;.  In fact, according to Flavell, humans have characteristics that make metacognition a necessity &lt;a href="#Flavell1987"&gt;[3]&lt;/a&gt;.  For example, people are error-prone and fallible, so careful monitoring, regulation, and assessment are constantly needed.  Survival instincts require humans to plan ahead and make decisions based on critical evaluation of alternative choices; making such decisions requires metacognitive skills.  Finally, human beings are conscious organisms that can think about and communicate psychological events, and the act of thinking about and explaining such events is in itself a metacognitive engagement.&lt;/p&gt;

&lt;p&gt;In his seminal research on the cognitive development of children, Flavell defined the terms metacognitive knowledge and metacognitive experience &lt;a href="#Flavell1979"&gt;[4]&lt;/a&gt;.  Metacognitive knowledge is knowledge stored in long-term memory that refers to “people as cognitive creatures with their diverse cognitive tasks, goals, actions, and experiences” &lt;a href="#Flavell1979"&gt;[4, P. 906]&lt;/a&gt;.  It involves the knowledge of one’s own cognition.  Metacognitive experiences are realizations of momentary experiences, and can vary in duration, level of consciousness, and complexity.  The sudden, anxious feeling one may encounter upon realizing one does not understand an exam’s material is an example of a metacognitive experience.  Such phenomena are most likely to occur when thinking and feeling are highly concentrated &lt;a href="#Flavell1979"&gt;[4]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Metacognitive experiences can have important effects on cognitive goals, tasks, strategies, and metacognitive knowledge.  Experiences can add to, delete, or revise metacognitive knowledge, similar to how new information is assimilated and accommodated into long-term memory &lt;a href="#Piaget1962"&gt;[5]&lt;/a&gt;.  Experiences such as puzzlement and frustration can also lead to the establishment of new goals and the revision of previous plans.  Furthermore, they can activate cognitive or metacognitive strategies.  For instance, testing oneself with questions and noting how well they were answered is a metacognitive strategy aimed at the metacognitive goal of assessing knowledge.&lt;/p&gt;

&lt;p&gt;Efklides added a third phenomenon of metacognition called metacognitive skills &lt;a href="#Efklides2006"&gt;[6]&lt;/a&gt;.  Metacognitive skills are the conscious orchestration of mental processes to plan, monitor progress, allocate attentional resources, and regulate cognition and strategy use.  Researchers such as Flavell, Butler, and Winne argue that individuals who can accurately self-manage and self-regulate are the most successful at learning, because they can utilize the right set of tools to achieve goals and modify strategies based on awareness of their effectiveness &lt;a href="#Flavell1979"&gt;[4], [7]&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="decision-making"&gt;Decision Making&lt;/h2&gt;

&lt;p&gt;Decision making is a complex, dynamic process that requires metacognition.  Zeleny and Cochrane define it as “a complex search for information full of detours, enriched by feedback from casting about in all directions”, in which one must constantly gather and discard information under fluctuating uncertainty &lt;a href="#Zeleny1982"&gt;[8, P. 86]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Utility theory states that under ideal circumstances, the economic man will choose the option that provides the greatest pleasure and utility &lt;a href="#Edwards1954"&gt;[9]&lt;/a&gt;.  The theory assumes three characteristics of the decision maker: the individual 1) has complete information, 2) has infinite sensitivity, and 3) is completely rational.  Decision making then becomes a process of obtaining an adequate measurement of attractiveness for each alternative and choosing the one with the highest merit.&lt;/p&gt;

&lt;p&gt;These assumptions are, of course, unrealistic.  Humans rarely have complete information, so under uncertainty, decision making becomes a problem of heuristics.  Humans are also irrational creatures, easily influenced by biases such as the framing effect.  Furthermore, humans lack the cognitive resources to optimize in a practical manner due to limitations of working memory, information, and computational facilities &lt;a href="#Simon1955"&gt;[10]&lt;/a&gt;.  According to Schwartz et al., people who attempt to maximize utility tend to experience less happiness, optimism, and satisfaction &lt;a href="#Schwartz2002"&gt;[11]&lt;/a&gt;.  Maximizers also tend to rely on external sources for information, increasing dependency and leading to procrastination of choice &lt;a href="#Iyengar2006"&gt;[12]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Instead of maximizing, there is a tendency for humans to satisfice, or select what is “good enough”.  Simon defines satisficing as “decision making that sets an aspiration level, searches until an alternative is found that is satisfactory by the aspiration level criterion, and selects that alternative” &lt;a href="#Simon1956"&gt;[13, P. 168]&lt;/a&gt;.  For example, a group may decide on a suboptimal plan, not because it offers the most utility, but because it was the first plan that was unanimously agreed upon.&lt;/p&gt;
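&lt;p&gt;Simon’s rule is concrete enough to state as a short procedure.  The following Python sketch is purely illustrative (all names are invented for this example): it scans alternatives in the order they are encountered and selects the first one whose estimated utility meets the aspiration level, in contrast to a maximizer, which would evaluate the entire set:&lt;/p&gt;

```python
# A minimal sketch of satisficing: stop at the first alternative that is
# "good enough" rather than scoring every option.  All names illustrative.
def satisfice(alternatives, estimate_utility, aspiration):
    for alt in alternatives:
        if estimate_utility(alt) >= aspiration:
            return alt  # satisfactory by the aspiration-level criterion
    return None  # no alternative met the aspiration level

# Example: accept the first plan scoring at least 7 out of 10.
plans = [("plan A", 5), ("plan B", 8), ("plan C", 9)]
choice = satisfice(plans, lambda plan: plan[1], 7)
# choice is ("plan B", 8); plan C scores higher, but the search stopped early.
```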

&lt;h2 id="the-process-of-decision-making"&gt;The Process of Decision Making&lt;/h2&gt;

&lt;p&gt;Zeleny and Cochrane describe utility theory as an outcome-oriented model of decision making &lt;a href="#Zeleny1982"&gt;[8]&lt;/a&gt;.  Under uncertainty, however, decision making is more complex and is better modeled by a process-oriented approach, in which decision making happens in three stages: pre-decision, decision, and post-decision.&lt;/p&gt;

&lt;h3 id="pre-decision-stage"&gt;Pre-decision Stage&lt;/h3&gt;

&lt;p&gt;A decision making process is initiated by a sense of conflict &lt;a href="#Zeleny1982"&gt;[8]&lt;/a&gt;.  This conflict arises from the lack of suitable alternatives and the infeasibility of the ideal alternative.  The decision maker’s goal is to resolve the pre-decision conflict either by finding the ideal (rare) or by choosing among a set of alternatives.  Information that further separates and distinguishes choices is gathered in an objective fashion.  This stage is also iterative, with continuous reinterpretations and reassessments.  Once alternatives are sufficiently divergent, a decision can be made.&lt;/p&gt;

&lt;h3 id="decision-stage"&gt;Decision Stage&lt;/h3&gt;

&lt;p&gt;As the individual gets closer to finalizing a decision, alternatives that are furthest from the ideal choice are discarded, and partial decisions are made.  When alternatives are discarded, a re-evaluation of the remaining choices ensues.  The priority order may change, displacing the ideal choice with an alternative closer to the feasible set &lt;a href="#Zeleny1982"&gt;[8]&lt;/a&gt;.  For instance, after an exhaustive search for an unrealistic deal on a car priced significantly below MSRP, the individual may discard the ideal for a more feasible choice at standard pricing.  Partial decisions are also important because they reduce the initial conflict.  When the final decision is made, the initial pre-decision conflict is fully resolved.&lt;/p&gt;

&lt;h3 id="post-decision-stage"&gt;Post-decision Stage&lt;/h3&gt;

&lt;p&gt;Even after the final decision is made, a sense of post-decision dissonance may ensue &lt;a href="#Zeleny1982"&gt;[8]&lt;/a&gt;.  Regret is also likely to manifest, especially when the decision was made between two equally attractive, but not identical, alternatives.  This is because the ideal alternative has been displaced by a feasible, but less attractive, option.  To counteract this, a bias arises toward the decision made: humans seek consonant information that conforms to their decision in order to increase confidence &lt;a href="#Zeleny1982"&gt;[8]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It is important to note that during any stage of the decision making process, cognitive load on the individual can be high.  This is especially the case if the decision needs to be made in a short period of time.  Edland and Svenson reported that under overload, input selectivity becomes high, accuracy decreases, and strategies become less diverse &lt;a href="#Edland1993"&gt;[14]&lt;/a&gt;.  It is recommended that designers make information available, interpretable, and salient for the most objectively important tasks.  For instance, visualizing options for different laptop models in a chart format helps lower cognitive load because all the information is externalized and does not need to be stored in working memory.  Even relieving small amounts of cognitive load by automating tasks can be a sufficient remedial measure &lt;a href="#Wickens2004"&gt;[15]&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="irrational-behavior-and-bias"&gt;Irrational Behavior and Bias&lt;/h2&gt;

&lt;p&gt;Individuals do not always make rational, consistent decisions across tasks and situations.  Research has shown that individual differences can affect decision making, including differences in risk aversion, risk judgment, and decision-making competence &lt;a href="#Parker2007"&gt;[16]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Decision making can be influenced by emotions.  Shiv and Fedorikhin found that when consumers do not allocate cognitive resources to decision making, they are more likely to decide based on affect rather than cognition &lt;a href="#Shiv1999"&gt;[17]&lt;/a&gt;.  For instance, given a binary choice of chocolate cake or fruit salad, consumers have a higher chance of selecting chocolate cake when the decision is based mainly on affect.  This also suggests that consumers whose cognitive processes are constrained are more likely to impulse buy.&lt;/p&gt;

&lt;h2 id="framing"&gt;Framing&lt;/h2&gt;

&lt;p&gt;Decision problems can be formulated in multiple ways, and oftentimes the framing of the problem itself can sway preferences.  This is known as the framing effect and occurs when two “logically equivalent (but not transparently equivalent) statements of a problem lead decision makers to choose different options” &lt;a href="#Rabin1998"&gt;[18, P. 36]&lt;/a&gt;.  For example, an economic program that results in 90% employment gains public support, yet if the same program is stated to result in 10% unemployment, opposition rises &lt;a href="#Druckman2001"&gt;[19]&lt;/a&gt;.  Generally, individuals tend to prefer risk aversion when the problem is framed in terms of gains, but shift to risk taking when the alternatives involve loss &lt;a href="#Tversky1981"&gt;[20]&lt;/a&gt;.  Druckman argues, however, that framing effects can be diminished with the availability of credible advice &lt;a href="#Druckman2001"&gt;[19]&lt;/a&gt;.  Framing effects can also happen with outcomes; prospect theory states that a loss is felt more strongly than an equivalent gain.&lt;/p&gt;

&lt;h2 id="expert-decision-makers"&gt;Expert Decision Makers&lt;/h2&gt;

&lt;p&gt;Proficient decision makers are able to consult and exploit past experiences to handle uncertainty and novelty.  According to Cohen’s R/M model, they undergo a process called meta-recognition &lt;a href="#Cohen1998"&gt;[21]&lt;/a&gt;.  During this process, a gap is found during assessment, and the skilled decision maker patches the flaw or weakness.  These weaknesses can be a discovery of incompleteness, conflict, or unreliability &lt;a href="#Cohen1986"&gt;[22]&lt;/a&gt;.  Correction occurs through an external action, attention shifting, and/or assumption revision.  These actions are meant to stimulate retrieval of new, potentially relevant information, either from an external source or from long-term memory.  The decision maker then reevaluates the results and continually tests the current state of comprehension.  Various strategies, such as a quick test, are adopted to fill in gaps in understanding and inconsistencies in knowledge.  This meta-recognitional skill is analogous to the meta-comprehension skills of proficient readers &lt;a href="#Baker1985"&gt;[23]&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="case-study"&gt;Case Study&lt;/h2&gt;

&lt;p&gt;Flight Control is a mobile strategy game in which the objective is to land as many aircraft as possible in their landing zones while avoiding mid-air collisions.  The game has numerous characteristics that differentiate it from a more static strategy game such as chess.  The positions and timings at which flights appear are unknown.  As time progresses, more flights emerge, with variations in speed and size.  Such dynamic characteristics make it impractical to take normative, outcome-oriented approaches.  To deal with such uncertainties, a player must use metacognition to monitor performance and generate winning strategies.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/metacognition-red-circles.png" /&gt;
&lt;figcaption&gt;Figure 1: Screenshots of Flight Control.  White lines signify path of flight.  Red circles indicate potential crashes.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;During the early stages of the game, it is easy to compute the optimum path from a flight’s current position to its landing zone.  The player’s initial strategy may simply be to draw a straight line between the flight and the target.  However, as more flights enter the field, it becomes cognitively taxing to monitor all flights simultaneously.  Not only does the player need to keep track of each flight’s position, but also the direction and speed at which it is flying relative to other flights in the field of vision.  Deciding on a path to draw becomes more complex than drawing a straight line.  It is at this point that the player realizes that, with the current strategy, a crash is inevitable.  Such metacognitive experiences push the player to reassess and switch strategies.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/metacognition-strategy.png" /&gt;
&lt;figcaption&gt;Figure 2: A situation where a simple strategy will not suffice.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;A novice player may simply “stick with” the path that was first drawn because it satisfices, meeting the goal at a satisfactory level.  However, this leaves the player prone to crash threats appearing at unforeseeable times as the game progresses.  Expert players, on the other hand, are able to plan ahead by making decisions based on past experience and heuristics.  For instance, they may have learned to recognize spots on the field where collisions are more likely to happen and avoid such areas.&lt;/p&gt;

&lt;p&gt;As more flights emerge, cognitive load on the player increases along with game difficulty.  This is intentional, to keep the game challenging.  However, if this were an actual flight control program, cognitive load and dissonance would need to be avoided at all costs.  Certain design changes could help the flight controller, such as visually displaying locations where collisions are guaranteed to happen given the current set of flights and paths drawn.  In fact, the entire program could be automated, relieving the user of heavy mental computation.&lt;/p&gt;
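&lt;p&gt;The collision-warning idea has a simple core: for two aircraft moving at constant velocity, the time of closest approach has a closed form, so a program could flag any pair whose minimum separation falls below a safety radius.  The sketch below is illustrative only (not from the game), and real flight paths are curves rather than straight lines:&lt;/p&gt;

```python
# Closest approach of two objects moving at constant 2-D velocity.
# Illustrative sketch for the collision-warning idea described above.
def closest_approach(p1, v1, p2, v2):
    """Return (t_star, d_min): time and distance of closest approach."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]          # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]          # relative velocity
    vv = vx * vx + vy * vy
    if vv == 0.0:                                   # identical velocities:
        return 0.0, (rx * rx + ry * ry) ** 0.5      # separation never changes
    t_star = max(0.0, -(rx * vx + ry * vy) / vv)    # only look forward in time
    dx, dy = rx + vx * t_star, ry + vy * t_star
    return t_star, (dx * dx + dy * dy) ** 0.5

# Two head-on flights on the same line meet after 5 time units:
t, d = closest_approach((0, 0), (1, 0), (10, 0), (-1, 0))
# t == 5.0, d == 0.0: a guaranteed collision the interface could flag
```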

&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Metacognition plays a critical role for humans.  As error-prone organisms with survival instincts, humans must use metacognition to monitor progress, assess and revise strategies, and make decisions based on critical evaluation of alternative choices.  Decision making is a complex process of selecting the option closest to the ideal.  During this process, the individual may undergo heavy cognitive load, and it is important for designers to externalize as much information as possible to prevent overload.&lt;/p&gt;

&lt;h2 id="references"&gt;References&lt;/h2&gt;

&lt;ol class="bibliography"&gt;&lt;li&gt;&lt;span id="Flavell1978"&gt;[1]J. H. Flavell, “Metacognitive Development,” &lt;i&gt;Structural/Process Theories of Complex Human Behavior&lt;/i&gt;, pp. 34–78, 1978.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Papleontiou-louca2003"&gt;[2]E. Papleontiou-louca, “The concept and instruction of metacognition,” &lt;i&gt;Teacher Development&lt;/i&gt;, vol. 7, no. 1, pp. 9–30, 2003.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Flavell1987"&gt;[3]J. H. Flavell, “Speculations about the Nature and Development of Metacognition,” &lt;i&gt;Metacognition, Motivation and Understanding&lt;/i&gt;, 1987.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Flavell1979"&gt;[4]J. H. Flavell, “Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry.,” &lt;i&gt;American Psychologist&lt;/i&gt;, vol. 34, no. 10, pp. 906–911, 1979.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Piaget1962"&gt;[5]J. Piaget, &lt;i&gt;Play, dreams and imitation&lt;/i&gt;. 1962.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Efklides2006"&gt;[6]A. Efklides, “Metacognition and affect: What can metacognitive experiences tell us about the learning process?,” &lt;i&gt;Educational Research Review&lt;/i&gt;, vol. 1, no. 1, pp. 3–14, Jan. 2006.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Butler1995"&gt;[7]D. L. Butler and P. H. Winne, “Feedback and Self-regulated Learning; a theoretical synthesis,” &lt;i&gt;Review of Educational Research&lt;/i&gt;, vol. 65, pp. 245–281, 1995.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Zeleny1982"&gt;[8]M. Zeleny and J. L. Cochrane, &lt;i&gt;Multiple Criteria Decision Making&lt;/i&gt;. 1982.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Edwards1954"&gt;[9]W. Edwards, “THE THEORY OF DECISION MAKING,” &lt;i&gt;Psychological bulletin&lt;/i&gt;, vol. 51, no. 4, pp. 380–417, 1954.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Simon1955"&gt;[10]H. A. Simon, “A Behavioral Model of Rational Choice,” &lt;i&gt;The quarterly journal of economics&lt;/i&gt;, vol. 69, no. 1, pp. 99–118, 1955.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Schwartz2002"&gt;[11]B. Schwartz, A. Ward, J. Monterosso, S. Lyubomirsky, K. White, and D. R. Lehman, “Maximizing versus satisficing. Happiness is a matter of choice.,” &lt;i&gt;Journal of Personality and Social Psychology&lt;/i&gt;, vol. 83, pp. 1178–1197, 2002.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Iyengar2006"&gt;[12]S. S. Iyengar, R. E. Wells, and B. Schwartz, “Doing Better but Feeling Worse Looking for the ‘Best’ Job Undermines Satisfaction,” &lt;i&gt;Psychological Science&lt;/i&gt;, vol. 17, no. 2, pp. 143–150, 2006.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Simon1956"&gt;[13]H. A. Simon, “Rational Choice and the Structure of the Environment,” &lt;i&gt;Psychological review&lt;/i&gt;, vol. 63, no. 2, pp. 129–138, 1956.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Edland1993"&gt;[14]A. Edland and O. Svenson, “Judgment and decision making under time pressure,” &lt;i&gt;Time pressure and stress in human judgment and decision making&lt;/i&gt;, pp. 27–40, 1993.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Wickens2004"&gt;[15]C. D. Wickens, &lt;i&gt;An Introduction To Human Factors Engineering&lt;/i&gt;, Second Edi. Upper Saddle River, NJ: Pearson Education, Inc., 2004, p. 587.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Parker2007"&gt;[16]A. M. Parker, “Maximizers versus satisficers: Decision-making styles, competence, and outcomes,” &lt;i&gt;\ldots and Decision Making&lt;/i&gt;, vol. 2, no. 6, pp. 342–350, 2007.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Shiv1999"&gt;[17]B. Shiv and A. Fedorikhin, “Heart and Mind in Conflict : The Interplay of Affect and Cognition in Consumer Decision making,” &lt;i&gt;Journal of Consumer research&lt;/i&gt;, vol. 26, no. 3, pp. 278–292, 1999.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Rabin1998"&gt;[18]M. Rabin, “Psychology and Economics,” &lt;i&gt;Journal of Economic Literature&lt;/i&gt;, pp. 11–46, 1998.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Druckman2001"&gt;[19]J. N. Druckman, “Using Credible Advice to Overcome Framing Effects,” &lt;i&gt;Journal of Law, Economics, and Organization&lt;/i&gt;, vol. 17, no. 1, pp. 62–82, Apr. 2001.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Tversky1981"&gt;[20]A. Tversky and D. Kahneman, “The Framing of Decision and the Psychology of Choice,” &lt;i&gt;Science&lt;/i&gt;, 1981.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Cohen1998"&gt;[21]M. S. Cohen, J. T. Freeman, and B. Thompson, “Critical Thinking Skills in Tactical Decision Making: A Model and A Training Strategy,” &lt;i&gt;\ldots individual and team training&lt;/i&gt;, 1998.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Cohen1986"&gt;[22]M. S. Cohen, “An expert system framework for non-monotic reasoning about probabilistic assumptions.,” &lt;i&gt;Uncertainty in artificial intelligence&lt;/i&gt;, pp. 279–293, 1986.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Baker1985"&gt;[23]L. Baker, “How do we know when we don’t understand? Standards for evaluating text comprehension,” &lt;i&gt;Metacognition, cognition, and human performance&lt;/i&gt;, vol. 1, pp. 155–205, 1985.&lt;/span&gt;&lt;/li&gt;&lt;/ol&gt;
</content>
  </entry>
  <entry>
    <title>Prior Knowledge</title>
    <link href="http://kenhirakawa.com/prior-knowledge"/>
    <updated>2013-04-02T00:00:00-07:00</updated>
    <id>/prior-knowledge</id>
    <content type="html">&lt;h1 id="prior-knowledge"&gt;Prior Knowledge&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Research and Application of Schemas, Semantic Networks, and Affordances&lt;/em&gt;&lt;/p&gt;

&lt;h2 id="what-is-prior-knowledge"&gt;What is Prior Knowledge&lt;/h2&gt;

&lt;p&gt;Prior knowledge plays a critical role in human cognitive tasks.  When information is perceived from sensory inputs, top-down processing aids the brain in attaching meaning to what is sensed.  Prior knowledge is stored in long-term memory and guides the mind to a course of action.  Past experiences also aid in processing and understanding signals that are too weak or degraded to be efficiently perceived, such as the classic example of a doctor’s handwriting.  Although words may be illegible, expectations of sentence structure and vocabulary allow the top-down cognitive process to guess the word and “fill in the blank” correctly.  These expectations are based on how frequently humans have encountered the event in the past and the context in which the stimulus was perceived &lt;a href="#Wickens2004"&gt;[1, P. 125]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Psychologists have developed numerous theories that model how knowledge is structured and organized in long-term memory.  These include schema theory and semantic networks.  This paper will review the research literature behind the models, discuss the effect of prior knowledge on human factors design, and apply the research to a case study.&lt;/p&gt;

&lt;h2 id="schema-theory"&gt;Schema Theory&lt;/h2&gt;

&lt;p&gt;The information stored in long-term memory is actively organized around central concepts known as schemas.  Schemas are built from interaction with the environment to organize experience and are mental representations of objects, events, or people &lt;a href="#Arbib1992"&gt;[2]&lt;/a&gt;.  When an external stimulus is perceived, humans try to make sense of the input in terms of stock schemas stored in long-term memory.  This process of linking incoming information with prior knowledge is called assimilation &lt;a href="#Piaget1962"&gt;[3]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Assimilation is one aspect of the learning process.  Piaget suggests that learning is guided by the organization of schemas and the adaptation of schemas, which include the “assimilation of new information into existing schemas, or the accommodation of schemas to new information, which may not fit into existing schemas” &lt;a href="#Chalmers2003"&gt;[4]&lt;/a&gt;.  Thus, a schema can change over time as needed.  Knowledge is also linked to an action schema and usually entails the expectation of particular responses.  In Schmidt’s terms, this is called a recall schema, defined as the schema in which motor movements are mapped to actual outcomes &lt;a href="#Schmidt1976"&gt;[5]&lt;/a&gt;.  In a similar sense, the act of recalling a story is done by relating the experience to one’s set of familiar schemas rather than by rote memorization of details &lt;a href="#Arbib1992"&gt;[2]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;What pushes learning forward is what Piaget describes as equilibration, the force that drives cognitive development.  When new information is captured through sensory inputs but cannot be assimilated into the stock schemas in our long-term memory, equilibration forces the learning process to mold current schemas to accommodate the new information and restore balance &lt;a href="#Piaget1978"&gt;[6]&lt;/a&gt;.  When there is a one-to-one mapping between new material and stock schemas, the material is said to be intuitive.  On the other hand, if the effort of accommodation is substantial, rejection can occur.  Designers are therefore advised to avoid design patterns that make accommodation difficult.&lt;/p&gt;

&lt;h2 id="semantic-networks"&gt;Semantic Networks&lt;/h2&gt;

&lt;p&gt;The larger collection of interrelated schemas is called a semantic network &lt;a href="#Rumelhart1976"&gt;[7]&lt;/a&gt;.  Each schema stored in long-term memory can be thought of as a node in a network.  Related nodes are connected by links and a connection between two nodes denotes an association of information.  Collins and Quillian first suggested that a semantic network was structured in a tree-structured hierarchy, with “connections determined by class-inclusion relations” &lt;a href="#Collins1969"&gt;[8]&lt;/a&gt;.  Although economical, this type of structure is severely limited because it can only deal with inheritance-based categorization of typical objects &lt;a href="#Steyvers2005"&gt;[9]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Instead of the tree structure, typical textbooks and later research show a network with an arbitrary set of nodes and connections.  A node, or concept, is defined by what it is connected to.  According to researchers, this seemingly arbitrary network structure exhibits certain characteristics.  Similar to other natural networks, semantic networks possess (1) a small-world structure arising from (2) a scale-free organization.  In other words, nodes are organized into neighborhoods around hubs, and the distance between two seemingly unrelated nodes is short on average &lt;a href="#Steyvers2005"&gt;[9]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;During recall of information, certain nodes are activated.  Activation in one node can trigger the activation of other nodes connected along network links.  Anderson and Pirolli call this spreading activation &lt;a href="#Anderson1984"&gt;[10]&lt;/a&gt;.  Spreading occurs near-instantaneously, but the activation strength decays exponentially with distance.  ACTE and ACT* are among the theories that model spreading activation.&lt;/p&gt;

&lt;p&gt;Certain characteristics determine how well prior knowledge is recalled from long-term memory.  One characteristic is strength.  Strength is determined by how recently information was used, as well as how frequently the node was activated.  Frequency is directly proportional to the number of links connected: the more connections, the greater the chance of activation, and therefore a likely increase in strength.  This is because “activations converging on a node from multiple sources will sum” &lt;a href="#Anderson1984"&gt;[10, P. 794]&lt;/a&gt;.  Conversely, memory retrieval will often fail due to weak strength, weak or few associations with other information, or the presence of interfering associations.&lt;/p&gt;

&lt;h2 id="priming"&gt;Priming&lt;/h2&gt;

&lt;p&gt;The notion of activation is also tied to the effect of priming.  Priming is the “facilitated identification of perceptual objects from reduced cues as a consequence of a specific prior exposure to an object” &lt;a href="#Schacter1992"&gt;[11]&lt;/a&gt;.  In terms of semantic networks, prior activation of nodes enhances the ability to process subsequent stimuli in a fashion related to the activated neighborhood.  Humans tend to see things related to the priming stimulus.  For instance, when humans enter a kitchen, the “kitchen” schema is brought forth and nodes linked to it, such as “furniture” and “sink”, are activated with a high probability.  Thus we expect to find objects such as a dining table or a refrigerator in a kitchen rather than unrelated objects such as a motor vehicle.  Ratcliff and McKoon found that as the network distance between the prime and the target decreased, the facilitative effect of a prime on target recognition increased &lt;a href="#Ratcliff1981"&gt;[12]&lt;/a&gt;.  In fact, the effect of priming decays exponentially as a function of distance &lt;a href="#Anderson1983"&gt;[13]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Priming effects can occur even in individuals with brain damage, which suggests that priming is separate from the brain mechanisms responsible for recollecting past episodic memory.  Such is the case for amnesic patients, who can demonstrate effects of priming after encountering certain words or objects &lt;a href="#Schacter1992"&gt;[11]&lt;/a&gt;.  Priming can also occur without consciousness; it operates at a pre-semantic level of processing that does not involve access to the meanings of words or objects &lt;a href="#Schacter1992"&gt;[11]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Priming is an effective tool for human factors design.  Designers can take advantage of the effect to influence user behavior and thought processes.  For example, a designer may opt to use the word “patients” instead of “customers” in a dental software program to predispose the secretary to think from a caregiving, not a business, perspective.  However, if the incorrect schema is instantiated through priming, misunderstanding and confusion can ensue.  Such is the case with the Stroop effect &lt;a href="#MacLeod1991"&gt;[14]&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="metaphors"&gt;Metaphors&lt;/h2&gt;

&lt;p&gt;Many psychologists have suggested that metaphors play a special role in how humans organize conceptual knowledge.  A metaphor-based schema is “the representational structure that maps knowledge about a conceptual metaphor’s vehicle domain onto its topic domain” &lt;a href="#McKoon1980"&gt;[15, P. 613]&lt;/a&gt;.  It can be thought of as a memory aid for learning new knowledge through the interaction of two different domains.  Gibbs suggests that metaphoric expressions are constructed from available cognitive mappings stored in long-term memory &lt;a href="#Gibbs1992"&gt;[16]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Metaphors are a common tool used in interface design.  A classic example is the metaphor of desktops used in operating system GUIs.  Virtual files on the desktop can be placed into folders to organize and categorize, just as physical paperwork can be organized in file cabinets.  Other metaphorical examples found in software design include magnifying glasses to represent zoom or search, trash icons for delete, and national flag icons for language selection.&lt;/p&gt;

&lt;h2 id="affordance"&gt;Affordance&lt;/h2&gt;

&lt;p&gt;When a metaphor is applied to an object, it is said that the metaphor brings with it a set of affordances.  Affordances are inherent properties of an object that allow an actor to perform an action.  For example, a knob on a door affords ‘twistability’ and the door affords ‘openability’.  According to Gibson, affordances are independent of the user’s ability to perceive the properties &lt;a href="#Gibson1977"&gt;[17]&lt;/a&gt;.  Affordances are also relative to the capabilities of the user.  For instance, a staircase affords ‘climbability’ to an adult, but not to a toddler who is not tall enough to reach the first step.&lt;/p&gt;

&lt;p&gt;In contrast to Gibson’s view, Norman defines affordance in terms of both real and perceived affordances.  According to Norman, an affordance suggests a strong cue to its operation.  Thus, when real and perceived affordances are linked, the affordance emerges and the user knows how to use the object without labels or explanation &lt;a href="#Norman1999"&gt;[18]&lt;/a&gt;.  In Norman’s view, a hidden door would not have an affordance of ‘openability’ because the user does not perceive the existence of the door.  Affordance is therefore dependent on the experience, culture, and prior knowledge of the actor.&lt;/p&gt;

&lt;p&gt;The two different definitions of affordance have caused confusion in the HCI community &lt;a href="#McGrenere2000"&gt;[19]&lt;/a&gt;.  Realizing that the ambiguity of the term would further spread its misuse, McGrenere and Ho solidified the concept of affordance by extending Gibson’s definition.  One such clarification is that affordances do not always map one-to-one to system functions, because affordances are often nested within other properties.  For example, the ‘margin adjustability’ of a text editor is nested within the parent affordance of ‘document editability’.  This aligns with Gaver’s research on nested and sequential affordances &lt;a href="#Gaver1991"&gt;[20]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Gibson states that affordance is binary; the property either exists or it doesn’t.  Warren, however, demonstrated that there is in fact an optimal point at which the user can act upon the affordance with ease &lt;a href="#Warren1984"&gt;[21]&lt;/a&gt;.  His experiment with stair ‘climbability’ showed that people’s visually guided judgments of ‘climbability’ accurately reflected a U-shaped function relating work to the ratio of riser height to leg length (π = R/L).  Affordances are therefore not binary, as Gibson suggests.  Instead, there is a difficulty range for the affordance.  Designers should strive to design elements that achieve the optimal point, π_optimal.&lt;/p&gt;
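
&lt;p&gt;Warren’s ratio can be sketched numerically.  The following is an illustrative sketch only: the critical threshold and the body measurements are placeholder values for illustration, not Warren’s published data.&lt;/p&gt;

```javascript
// Illustrative sketch of Warren's ratio pi = R / L (riser height over leg
// length). CRITICAL_RATIO and the measurements below are placeholder values
// for illustration, not Warren's published figures.
var CRITICAL_RATIO = 0.88;

function climbability(riserHeightM, legLengthM) {
  var pi = riserHeightM / legLengthM;
  // beyond the critical ratio, the stair no longer affords climbing
  return { ratio: pi, affordsClimbing: !(pi > CRITICAL_RATIO) };
}

// An 18 cm riser affords climbing to an adult with 90 cm legs (pi of 0.2)...
var adult = climbability(0.18, 0.90);
// ...but not to a toddler whose legs are shorter than the riser (pi of 1.2).
var toddler = climbability(0.30, 0.25);
```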

&lt;h2 id="application-of-research"&gt;Application of Research&lt;/h2&gt;

&lt;p&gt;Tesla.com’s “Go Electric” is a section that aims to answer frequently asked questions in an interactive manner.  It is analyzed here in terms of McGrenere and Ho’s extended definition of Gibsonian affordance.&lt;/p&gt;

&lt;p&gt;The mileage calculator has four types of design patterns that control mileage-altering variables: sliders (driving type, vehicle), dials (highway speed), scrollers (temperature), and buttons (climate control).  Each pattern provides certain affordances through the use of metaphors.  For instance, the raised-looking knob within the blue, vertical track reminds the user of a physical slide switch, which brings forth the affordance of ‘slideability’.  The highway speed dial in the shape of a speedometer is also an excellent use of metaphor.  The user can easily recognize what the dial controls (speed).  The knob also has a style that affords ‘slideability’.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/prior-knowledge-mileage.png" /&gt;
&lt;figcaption&gt;Figure 1: Four different types of controls used to vary mileage variables&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;The A/C button’s label and the fan icon above it act as a priming mechanism to activate the schema of climate control.  In fact, the entire website, with its images of cars, roads, and dashboards, activates the nodes within the semantic network related to motor vehicles.  Thus when the user views the A/C button, the user understands its reference to the climate control system of a Tesla car, and not to that of a desk fan.&lt;/p&gt;

&lt;p&gt;The aforementioned affordances are all nested under the parent affordance of ‘function invokeability’.  It is also important to distinguish the underlying affordance of the controls from the information that specifies the affordance &lt;a href="#McGrenere2000"&gt;[19]&lt;/a&gt;.  The affordance of mileage adjustability, and nested within it, slideability, pressability, and scrollability, all exist regardless of the user’s perception and knowledge of them.  The shape and style of the controls are the graphical vehicles that transport the affordance information to the user.&lt;/p&gt;

&lt;p&gt;The “How much do you pay per kilowatt hour” number spinner on the charge time and cost calculator is one design that fails to clearly specify the underlying affordance to the user.  The design has two components: buttons that adjust the variable and a box that displays the monetary value.  The design fails because the slightly depressed style of the box is indicative of an editable input box and invites users to click and edit, even though the field is read-only.  There are no actionable affordances, yet the design looks as if there are.  In Gaver’s terms, this is a false affordance and signifies a sub-optimal π value.  An alternative design is shown below.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/prior-knowledge-kilowatt.png" /&gt;
&lt;figcaption&gt;Figure 2: (left) Current design with false affordance.  (right) Recommended design without input field style&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Prior knowledge plays a critical role in human cognitive tasks.  Schemas and semantic networks are two models that map cognitive knowledge in the brain.  Leveraging the facilitative effects of priming and metaphors can make designs intuitive and help the user assimilate or accommodate new information more easily.  However, if the incorrect nodes of the semantic network are activated, designs can become difficult to comprehend and lead to abandonment.  Designers should strive to design interfaces that match users’ expectations, and the affordances of those interfaces should provide actions that help users meet their goals.&lt;/p&gt;

&lt;h2 id="references"&gt;References&lt;/h2&gt;

&lt;ol class="bibliography"&gt;&lt;li&gt;&lt;span id="Wickens2004"&gt;[1]C. D. Wickens, &lt;i&gt;An Introduction To Human Factors Engineering&lt;/i&gt;, Second Edi. Upper Saddle River, NJ: Pearson Education, Inc., 2004, p. 587.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Arbib1992"&gt;[2]M. A. Arbib, “Schema Theory,” &lt;i&gt;Encyclopedia of artificial intelligence&lt;/i&gt;, 1992.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Piaget1962"&gt;[3]J. Piaget, &lt;i&gt;Play, dreams and imitation&lt;/i&gt;. 1962.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Chalmers2003"&gt;[4]P. a. Chalmers, “The role of cognitive theory in human–computer interface,” &lt;i&gt;Computers in Human Behavior&lt;/i&gt;, vol. 19, no. 5, pp. 593–607, Sep. 2003.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Schmidt1976"&gt;[5]R. A. Schmidt, “The Schema as a Solution to Some Persistent Problems in Motor Learning Theory,” &lt;i&gt;Motor Control: Issues and Trends&lt;/i&gt;, pp. 41–65, 1976.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Piaget1978"&gt;[6]J. Piaget, &lt;i&gt;The development of thought: Equilibration of cognitive structures&lt;/i&gt;. B. Blackwell, 1978.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Rumelhart1976"&gt;[7]D. E. Rumelhart and A. Ortony, “The representation of knowledge in memory,” &lt;i&gt;Center for Human Information Processing&lt;/i&gt;, pp. 99–135, 1976.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Collins1969"&gt;[8]A. M. Collins and M. R. Quillian, “Retrieval Time from Semantic Memory,” &lt;i&gt;Journal of verbal learning and verbal behavior&lt;/i&gt;, vol. 2, 1969.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Steyvers2005"&gt;[9]M. Steyvers and J. B. Tenenbaum, “The large-scale structure of semantic networks: statistical analyses and a model of semantic growth.,” &lt;i&gt;Cognitive science&lt;/i&gt;, vol. 29, no. 1, pp. 41–78, Jan. 2005.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Anderson1984"&gt;[10]J. R. Anderson and P. L. Pirolli, “Spread of activation.,” &lt;i&gt;Journal of \ldots&lt;/i&gt;, vol. 10, no. 4, pp. 791–798, 1984.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Schacter1992"&gt;[11]D. L. Schacter, “Priming and Multiple Memory Systems: Perceptual Mechanisms of Implicit Memory,” &lt;i&gt;Journal of Cognitive Neuroscience&lt;/i&gt;, vol. 4, no. 3, pp. 244–256, Jul. 1992.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Ratcliff1981"&gt;[12]R. Ratcliff and G. McKoon, “Does activation really spread?,” &lt;i&gt;Psychological review&lt;/i&gt;, vol. 88, pp. 454–457, 1981.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Anderson1983"&gt;[13]J. R. Anderson, “A Spreading Activation Theory of Memory,” &lt;i&gt;Journal of verbal learning and verbal behavior&lt;/i&gt;, 1983.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="MacLeod1991"&gt;[14]C. M. MacLeod, “Haifa Centuryof Researchon the Stroop Effect: An Integrative Review,” &lt;i&gt;Psychological Bulletin&lt;/i&gt;, vol. 109, no. 2, pp. 163–203, 1991.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="McKoon1980"&gt;[15]G. McKoon and R. Ratcliff, “Priming in Item Recognition: The Organization of Propositions in Memory for Text,” &lt;i&gt;Journal of Verbal Learning and Verbal Behavior&lt;/i&gt;, 1980.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Gibbs1992"&gt;[16]R. W. Gibbs, “Categorization and metaphor understanding,” &lt;i&gt;Psychological review&lt;/i&gt;, pp. 572–577, 1992.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Gibson1977"&gt;[17]J. J. Gibson, “The theory of affordance,” &lt;i&gt;Perceiving, Acting and Knowing&lt;/i&gt;, pp. 67–82, 1977.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Norman1999"&gt;[18]D. A. Norman, “Affordance, Concentions, and Design,” &lt;i&gt;interactions&lt;/i&gt;, pp. 38–42, 1999.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="McGrenere2000"&gt;[19]J. McGrenere and W. Ho, “Affordances: Clarifying and evolving a concept,” &lt;i&gt;Graphics Interface&lt;/i&gt;, no. May, pp. 1–8, 2000.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Gaver1991"&gt;[20]W. W. Gaver, “Technology Affordances,” &lt;i&gt;\ldots in computing systems: Reaching through technology&lt;/i&gt;, 1991.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Warren1984"&gt;[21]W. H. Warren, “Perceiving affordances: Visual guidance of stair climbing,” &lt;i&gt;Journal of Experimental Psychology: Human Perception and Performance&lt;/i&gt;, vol. 10, pp. 683–703, 1984.&lt;/span&gt;&lt;/li&gt;&lt;/ol&gt;
</content>
  </entry>
  <entry>
    <title>Water Me, Please!</title>
    <link href="http://kenhirakawa.com/water-me-please"/>
    <updated>2013-03-10T00:00:00-08:00</updated>
    <id>/water-me-please</id>
    <content type="html">&lt;h1 id="water-me-please"&gt;Water Me, Please!&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;A tale of how I built a water level sensor to monitor my bamboo’s water supply.  When water is depleted, it tweets ‘WATER ME FOOL’.&lt;/em&gt;&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/water-me-please-bamboo.jpg" /&gt;
&lt;figcaption&gt;My overgrown bamboo&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;My bamboo is dying.&lt;/p&gt;

&lt;p&gt;I’ve had my bamboo plant ever since I started working at my current company. It’s a nice addition to my office desk, with it being green and all. The problem is, I always forget to water the plant. The yellowed, coiled up leaves are my only reminder. I have no idea how this plant has made it this far.&lt;/p&gt;

&lt;h2 id="lets-fix-it"&gt;Let’s Fix It&lt;/h2&gt;

&lt;p&gt;There are many ways to “fix” this. One could simply set a recurring alarm on a phone every 3 or so weeks, as a reminder to water the bamboo. It’s the easiest route, but there’s certainly no fun involved.&lt;/p&gt;

&lt;p&gt;What if the plant told me that it needed water? This &lt;a href="http://www.botanicalls.com/"&gt;isn’t a new idea&lt;/a&gt;, but I had an Arduino Uno collecting dust. I think it’s time to make use of it.&lt;/p&gt;

&lt;h2 id="arduino-to-the-rescue"&gt;Arduino to the Rescue&lt;/h2&gt;

&lt;p&gt;The requirements for this new system would be simple.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Accurately notify me when the water level is low&lt;/li&gt;
  &lt;li&gt;Simple to implement&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Diving deeper into (1), the system should publish a tweet to its &lt;a href="https://twitter.com/kensplant"&gt;feed&lt;/a&gt; when it is deprived of water and when it’s been rescued.&lt;/p&gt;

&lt;p&gt;Here’s how I implemented it.&lt;/p&gt;

&lt;h3 id="hardware"&gt;Hardware&lt;/h3&gt;

&lt;p&gt;I read this &lt;a href="http://lifeboatfarm.wordpress.com/2009/12/28/arduino-water-level-gauge/"&gt;blog post&lt;/a&gt; and used its design for the water level sensor. For my design, I’m using one 10kΩ and five 2.2kΩ resistors in series. Here’s the schematic drawing:&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/water-me-please-schematic.png" /&gt;
&lt;figcaption&gt;Design of water sensor&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;The theory behind this is that when the sensor (the five 2.2kΩ resistors) is submerged in water, the sensor gets shorted, resulting in a change in voltage.  By measuring this voltage change, we can determine when the water level is low.&lt;/p&gt;

&lt;p&gt;Sensor measurements are read in through analog pin A0.  Power is programmatically applied via pin 7 only when measurements are needed, to limit electrolysis.&lt;/p&gt;

&lt;p&gt;And here’s the prototype:&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/water-me-please-prototype.jpg" /&gt;
&lt;figcaption&gt;The resistors are wrapped around the GND wire to make it more probe-like&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;It works like a charm.  The raw data readings range from 540 (when it’s dry) to around 350 (when it’s submerged in water).  The values themselves don’t mean anything.  All the software cares about is whether the measurement has surpassed the IM_DYING_GIVE_ME_WATER threshold.  This value is set to 520.&lt;/p&gt;
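
&lt;p&gt;The dry reading lines up with simple voltage divider math.  Here’s a rough sketch; the ADC range and which node A0 taps are my assumptions, not taken from the schematic:&lt;/p&gt;

```javascript
// Back-of-the-envelope divider math for the readings above. Assumptions
// (mine, not from the schematic): a 10-bit ADC (0..1023) and A0 tapped
// between the fixed 10k resistor and the sensor chain, reading the
// sensor side of the divider.
function expectedReading(sensorOhms, fixedOhms) {
  var ADC_MAX = 1023;
  // divider output as a fraction of the supply voltage, scaled to ADC counts
  return Math.round(ADC_MAX * sensorOhms / (fixedOhms + sensorOhms));
}

// Dry: the full 5 x 2.2k = 11k chain is in circuit.
var dry = expectedReading(5 * 2200, 10000); // 536, close to the observed ~540
// A wet reading near 350 implies water pulls the chain down to roughly 5.2k.
var wet = expectedReading(5200, 10000);     // 350
```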

&lt;h3 id="software"&gt;Software&lt;/h3&gt;

&lt;p&gt;I wrote a node.js script that interfaces with the Arduino to collect readings from the water sensor.  I’ll explain the gist of how the script works here.  For the full source code, check out my &lt;a href="https://github.com/khirakawa/water-me-please"&gt;repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The node.js script uses &lt;a href="https://github.com/ecto/duino"&gt;duino&lt;/a&gt;.  It’s a neat, lightweight framework for working with Arduinos in node.js.  As per its instructions, I’ve uploaded the &lt;code&gt;src/du.ino&lt;/code&gt; sketch onto my Arduino.  Now I can operate the Arduino with JavaScript.  Awesome.&lt;/p&gt;

&lt;p&gt;The water sensor code is in &lt;code&gt;lib/sensor.js&lt;/code&gt;.  Inside the module you’ll find the &lt;code&gt;WaterLevelSensor&lt;/code&gt; class.  When instantiated, this object will create a new &lt;code&gt;arduino.Board&lt;/code&gt; object at a baudrate of 9600, set up the analog sensor at pin A0, and set the pin mode of pin 7 to ‘OUT’.  Pin 7 is used to toggle the power to the sensor.  Here’s the code.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="c1"&gt;// our water level sensor&lt;/span&gt;
&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;WaterLevelSensor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;{};&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;debug&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;debug&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pin&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pin&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sensor_pin&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sensor_pin&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;A0&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;baudrate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;baudrate&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;9600&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;// This will be used to determine when to tweet&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;IM_DYING_THRESHOLD&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;520&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;self&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;board&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;arduino&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Board&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="nx"&gt;baudrate&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;baudrate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;debug&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;debug&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sensor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;arduino&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Sensor&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="nx"&gt;board&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;board&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;pin&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sensor_pin&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;throttle&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;board&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;ready&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// set pin to out mode&lt;/span&gt;
    &lt;span class="nx"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;board&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pinMode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pin&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;out&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nx"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;emit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;ready&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In order for the object to emit an event on the ready state, it must inherit &lt;code&gt;events.EventEmitter&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="c1"&gt;// make it emitterable&lt;/span&gt;
&lt;span class="nx"&gt;util&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;inherits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;WaterLevelSensor&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;EventEmitter&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Finally, it defines the measure method.  The code powers the sensor by setting pin 7 to HIGH and then takes the median of five measurements.  The final value is passed to a callback.  Lastly, the power pin (7) is set back to LOW to power off the sensor.  This is important because we don’t want DC current to continuously flow through the resistors in the water and cause corrosion via electrolysis.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="c1"&gt;// measures for 5 reads and calls callback with average value&lt;/span&gt;
&lt;span class="nx"&gt;WaterLevelSensor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;prototype&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;measure&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;self&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;readCount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;measurements&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt; &lt;span class="c1"&gt;// running list of measurements&lt;/span&gt;

  &lt;span class="c1"&gt;// send 5V to pin to enable sensor&lt;/span&gt;
  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;board&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;digitalWrite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pin&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;board&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;HIGH&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// read from sensor&lt;/span&gt;
  &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;readCallback&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;measurements&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;parseInt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;measurements&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;median&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getMedian&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;measurements&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

      &lt;span class="c1"&gt;// reset mode&lt;/span&gt;
      &lt;span class="nx"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;board&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;digitalWrite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pin&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;board&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;LOW&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nx"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sensor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;removeListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;read&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;readCallback&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

      &lt;span class="c1"&gt;// we&amp;#39;re finished, evoke the callback&lt;/span&gt;
      &lt;span class="nx"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;median&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sensor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;read&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;readCallback&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Here’s an example of how the &lt;code&gt;WaterLevelSensor&lt;/code&gt; object is used.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre&gt;&lt;code class="language-javascript" data-lang="javascript"&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;sensor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;WaterLevelSensor&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// on ready state&lt;/span&gt;
&lt;span class="nx"&gt;sensor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;ready&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;setInterval&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(){&lt;/span&gt;
    &lt;span class="nx"&gt;sensor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;measure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// If the value exceeds threshold, tweet.&lt;/span&gt;
      &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;sensor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;IM_DYING_THRESHOLD&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// tweet it&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;300000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// measure every 5 minutes&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;There you have it: a homemade Botanicalls built from a bunch of resistors, jumper wires, an Arduino, and Node.js.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/water-me-please-final.jpg" /&gt;
&lt;figcaption&gt;Water level sensor in action&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2 id="next-steps"&gt;Next Steps&lt;/h2&gt;

&lt;p&gt;Although the code does its best to limit corrosion, the resistors are still exposed to the elements and will naturally corrode.  Unfortunately, the resistors on my sensor have already rusted from extensive testing (it still works though!).  An alternative would be ultrasonic sensing or capacitive sensing.  That’ll be a topic worthy of its own project.  For now, I’m happy with what I’ve made.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;
</content>
  </entry>
  <entry>
    <title>Preattentive Processing</title>
    <link href="http://kenhirakawa.com/preattentive-processing"/>
    <updated>2013-03-03T00:00:00-08:00</updated>
    <id>/preattentive-processing</id>
    <content type="html">&lt;h1 id="preattentive-processing"&gt;Preattentive Processing&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Research and Application of the Gestalt Principles&lt;/em&gt;&lt;/p&gt;

&lt;h2 id="preattentive-processing-1"&gt;Preattentive Processing&lt;/h2&gt;

&lt;p&gt;Preattentive processing plays an important role in human vision.  An understanding of the fundamental mechanism by which preattentive processing occurs allows designers to create displays with increased quality and quantity of information.  When patterns or groups are identified before conscious attention is directed, preattentive processing is said to have occurred.  This suggests that processing occurs in a rapid, parallel fashion across the visual field &lt;a href="#Treisman1985"&gt;[1]&lt;/a&gt;.  Typically, a preattentive task completes within 200 to 250 ms, and this is accomplished with minimal effort &lt;a href="#Healey1993"&gt;[2]&lt;/a&gt;.  Such features are said to have a ‘pop-out’ effect &lt;a href="#Ware2013"&gt;[3, P. 153]&lt;/a&gt;.  Examples of preattentively processed features that exhibit this effect include shape, color, and proximity.&lt;/p&gt;

&lt;h3 id="the-importance-of-preattentive-processing"&gt;The Importance of Preattentive Processing&lt;/h3&gt;

&lt;p&gt;Preattentive processing is important in many regards.  From a neural network perspective, the ability to decompose, group, and compress the physical signals received by the retina is driven by efficiency, because human attention has only a meager capacity &lt;a href="#Zhaoping2006"&gt;[4]&lt;/a&gt;.  From a design perspective, it is often useful to show information ‘at a glance’.  If a mark on a map must be instantaneously identified as belonging to a certain type, it should be differentiated from all other marks in a preattentive manner &lt;a href="#Ware2013"&gt;[3, P. 154]&lt;/a&gt;.  Furthermore, preattentive processing is crucial in organizing the information-dense field of vision as a whole.&lt;/p&gt;

&lt;h2 id="gestalt-laws"&gt;Gestalt Laws&lt;/h2&gt;

&lt;p&gt;Gestalt laws provide a clear description of many basic perceptual phenomena of pattern recognition.  According to Koffka, the laws explain how individual elements may be visually organized into structures &lt;a href="#Koffka1935"&gt;[5]&lt;/a&gt;.  These psychological principles have influenced many research areas since 1924, including visual screen design &lt;a href="#Chang2002"&gt;[6]&lt;/a&gt;.  However, the German psychologists behind the gestalt laws were simply observers; the laws relied heavily on phenomenology and did not sufficiently support their principles with objective data &lt;a href="#Kubovy1998"&gt;[7]&lt;/a&gt;.  Although their neurological explanations for these laws have been debunked, the robust laws themselves have withstood the test of time.&lt;/p&gt;

&lt;h3 id="proximity"&gt;Proximity&lt;/h3&gt;

&lt;p&gt;One of the major gestalt laws of organizing the visual field is the grouping of stimuli on the basis of proximity.  The law simply states that objects that are close together are grouped together.  This is alternatively called the spatial concentration principle &lt;a href="#Slocum1983"&gt;[8]&lt;/a&gt;.  By spatially grouping elements, less time is spent on eye movement and neural processing because information is picked up more readily in foveal vision &lt;a href="#Ware2013"&gt;[3, P. 181]&lt;/a&gt;.  According to this law, three rows of four black circles are seen in figure 1.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/preattentive-dots.png" /&gt;
&lt;figcaption&gt;Figure 1: Three rows of four black circles are seen based on the law of proximity&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;The gestalt psychologists never demonstrated whether it is the physical proximal stimuli on the retina, or the perceived proximity, that governs this principle &lt;a href="#Rock1964"&gt;[9]&lt;/a&gt;.  Later research effectively showed that proximal grouping can in fact be driven purely by a bottom-up, preattentive process, as demonstrated by Compton &amp;amp; Logan’s modified version of van Oeffelen and Vos’s CODE algorithm &lt;a href="#Compton1993"&gt;[10]&lt;/a&gt;.  When proximity is the sole differentiating feature, the gestalt law of proximity stands: objects that are clustered together are grouped.  Design decisions based simply on visual aesthetics or “what looks good” should never supersede this principle.  A common design practice, therefore, is to place symbols representing related information close together.&lt;/p&gt;

&lt;h3 id="similarity"&gt;Similarity&lt;/h3&gt;

&lt;p&gt;The gestalt law of similarity states that objects that are similar in features such as color, orientation, and shape are grouped together.  Koffka demonstrated that grouping by shape is stronger than grouping by color by contrasting the effectiveness of each similarity feature against that of proximity.  Even when color and shape similarity were not in competition with proximity grouping, grouping by shape similarity had a stronger effect &lt;a href="#Koffka1935"&gt;[5]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It would of course be a mistake to conclude that one principle is invariably stronger than the other.  More recent research has demonstrated that grouping by common color is actually more powerful than grouping by common shape.  Examination of similarity and proximity showed persuasive evidence that, under certain conditions, both common color and common shape can override grouping by proximity &lt;a href="#Quinlan1998"&gt;[11]&lt;/a&gt;.  This suggests an intimate relationship between the features of similarity (color, shape) and proximity, and that it is important to consider each feature’s individual effect, as well as their conjoint effect, when making design decisions.  This is especially the case when designing grid layouts of a data set; the use of low-level visual similarity channel properties such as color is recommended &lt;a href="#Ware2013"&gt;[3, P. 182]&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id="similarity-and-proximity"&gt;Similarity and Proximity&lt;/h3&gt;

&lt;p&gt;When the proximity feature is conjoined with other features, there are cases where the principle breaks down.  For example, in figure 2, the proximity principle predicts grouping into columns, yet for some, rows of dots are seen; the row-column perception seems to fluctuate.  This is because the proximity and similarity principles are acting in opposition.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/preattentive-proximity-color.png" /&gt;
&lt;figcaption&gt;Figure 2: Proximity and color similarity acting in opposition&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;For simple dot lattice patterns, Kubovy &amp;amp; Holcombe have shown that the relationship is a decaying exponential function.  Additionally, they have found that the features have an additive property; the overall effect is the sum of each individual effect &lt;a href="#Kubovy2008"&gt;[12]&lt;/a&gt;.  Unfortunately, to generalize their findings, strength functions for more complex feature patterns must be discovered &lt;a href="#Kubovy1998"&gt;[7]&lt;/a&gt;.  This example shows that certain types of redundant coding can lead to a decrease in visual search performance &lt;a href="#Ware2013"&gt;[3, P. 160]&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id="orientation"&gt;Orientation&lt;/h3&gt;

&lt;p&gt;Another similarity feature that is worth discussing is the feature of orientation.  Orientation is one of the basic features processed at the first, parallel processing stage.  When exploring the manner in which orientation controls perceptual grouping, Beck found that “differences in line orientation could be as effective as differences in brightness in segregating two groups of elements” &lt;a href="#Beck1967"&gt;[13]&lt;/a&gt;.  Furthermore, Beck has shown that line orientations are more readily grouped if the direction of the component lines is 45 or 135 degrees, and that vertical and horizontal lines do not readily facilitate grouping.&lt;/p&gt;

&lt;p&gt;This seems to correlate with the visual system’s physical structure; according to the Gabor model of V1 receptive fields, V1 neurons have an orientation tuning of at least 30 degrees &lt;a href="#Ware2013"&gt;[3, P. 204]&lt;/a&gt;.  Thus symbol and glyph element orientations should be separated by at least 30 degrees (and optimally, 45 degrees) for a texture field to be distinct from an adjacent texture field.&lt;/p&gt;

&lt;h3 id="closure-and-common-region"&gt;Closure and Common Region&lt;/h3&gt;

&lt;p&gt;“Common region” is a term used by Palmer to describe the region within an enclosed contour, and it is a stronger organizing principle than proximity &lt;a href="#Palmer1992"&gt;[14]&lt;/a&gt;.  Ware explains, “when a closed contour is seen, there is a strong perceptual tendency to divide regions of space into inside or outside the contour” &lt;a href="#Ware2013"&gt;[3, P. 186]&lt;/a&gt;.  Essentially, elements within a common region or a closure are grouped together.  This occurs regardless of the number of elements contained within it &lt;a href="#Donnelly1991"&gt;[15]&lt;/a&gt; and can be “dominated by the smallest background area” &lt;a href="#Palmer1992"&gt;[14]&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="redundant-coding"&gt;Redundant Coding&lt;/h2&gt;

&lt;p&gt;Redundant coding, that is, adding visual redundancy to elements, can lead to a decrease in visual search performance for certain elements &lt;a href="#Ware2013"&gt;[3, P. 160]&lt;/a&gt;, as seen above in figure 2.  This is because conjunction searches are generally not preattentive, although some combinations of features have been proven to be preattentive, such as luminance and shape &lt;a href="#Ware2013"&gt;[3, P. 161]&lt;/a&gt;.   Feature Integration Theory explains the difference between disjunctive and conjunctive search &lt;a href="#Treisman1980"&gt;[16]&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="feature-integration-theory"&gt;Feature Integration Theory&lt;/h2&gt;

&lt;p&gt;Feature Integration Theory (FIT) seeks to explain the phenomena of preattentive processing &lt;a href="#Treisman1980"&gt;[16]&lt;/a&gt;.  According to the theory, the perceptual system is divided into separate maps, each of which records the presence of different visual features and the location of the features it represents.  Looking for a target that differs from its distractors by a single feature, such as shape, requires consulting a single feature map.  For conjunctive search, however, focal attention must be given to combine and compare multiple feature maps.  This joint search is what causes conjunctive search to be inefficient &lt;a href="#Smith2007"&gt;[17, P. 132]&lt;/a&gt;.  Observers may also report illusory conjunctions when the brain fails to combine and compare maps, perhaps due to attentional overload &lt;a href="#Treisman1982"&gt;[18]&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="application-of-research-rio-2016rio2016com"&gt;Application of Research: &lt;a href="rio2016.com"&gt;Rio 2016&lt;/a&gt;&lt;/h2&gt;

&lt;p&gt;The Rio 2016 Olympic website was analyzed and reviewed based on the principles of proximity, similarity, and common region.  The site is not a time-sensitive site, but it is nevertheless densely packed with information.  The site aims to excite, inspire, and spread knowledge of Rio through a mixture of text and graphics, all organized using vibrant colors, lines, white space, and shapes.  It is critical for the website to display this information in a proper structure.&lt;/p&gt;

&lt;h3 id="proximity-1"&gt;Proximity&lt;/h3&gt;

&lt;p&gt;The sections of the homepage under the main banner are grouped together using proximity.  In general, photo images that are spatially close to corresponding text are clustered as a whole.  An example of this is the “News”, “Interview”, and “Rio de Janeiro” section.  The numbered list under the “Most Viewed” section is also identified as a block due to close proximity and alignment.  In fact, the border between each list item can be removed to reduce redundancy (Figure 3).&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/preattentive-proximity.png" /&gt;
&lt;figcaption&gt;Figure 3: Removing the bottom border of each item list still yields the same grouping effect due to close proximity&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;Each individual section is grouped using white space, albeit not in an optimal fashion.  Take, for example, the goalball player under the “Paralympic Sports” section.  Based on proximity, the player is falsely associated with the “Time Brasil” section, especially considering the similarity in skin tone and the orange color-coding of the “Time Brasil” section.  The “Learn More” call-to-action button is equidistant from the goalball player and the “Rio de Janeiro” section tag, yet the button is related to the latter, again due to similarities in color.  This supports Quinlan &amp;amp; Wilton’s stance that color similarity overrides proximity.  An adjustment in the distance between sections is recommended to clearly separate blocks of elements.  Alternatively, a different color or a common background can be used, although redundant coding should be avoided.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/preattentive-comparison.png" /&gt;
&lt;figcaption&gt;Figure 4: Alternative designs are compared against the original design (A) using different principles B) Increased distance between sections C) alternate color (purple) D) common background (gray)&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h3 id="similarity-1"&gt;Similarity&lt;/h3&gt;

&lt;p&gt;Each individual section on the homepage is color coded in hues of orange, green, or blue.  It is easy to differentiate between the “Olympic Sports” section in green and the “Time Brasil” section in orange.  Shape similarity is also used to group section headings, although its strength is weakened by color similarity and lack of proximity.&lt;/p&gt;

&lt;p&gt;The countdown days can spark confusion.  The law of proximity states that the two countdown numbers go together, yet color similarity yields different groupings; the Paralympic countdown is correctly bucketed with the “Time Brasil” section below, but the Olympic countdown is perhaps grouped with “Interviews” because of matching text color (blue).  This is similar to the fluctuation problem seen in figure 2.  A solution would be to color code both countdowns to be the same (orange).&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/preattentive-countdown.png" /&gt;
&lt;figcaption&gt;Figure 5: By making the countdown colors the same, it is much easier to differentiate the two sections&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h3 id="closure-and-common-region-1"&gt;Closure and Common Region&lt;/h3&gt;

&lt;p&gt;The site makes use of closure and common region to group elements on the page.  The “Photos” and “News” sections are separated from the rest of the sections because they share a common gray background.  Proximity would suggest that the “Photos” header and headline are separate from the group of photos near them, yet they are perceptually labeled as a single group.  This supports the notion that common region is a much more powerful feature than proximity or similarity, as demonstrated by Palmer &lt;a href="#Palmer1992"&gt;[14]&lt;/a&gt;.  The “Rio 2016 and You” footer section at the bottom of the homepage exhibits the same grouping mechanism.  Sections within the footer are split based on closure, although the enclosing line itself is only partly present.&lt;/p&gt;

&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Preattentive processing is an important component of the visual system.  As illustrated with the Rio 2016 Olympic website, the gestalt principles of proximity, similarity, closure, and common region can be used to convey information ‘at a glance’.  Confusion can ensue if the wrong feature, or combination of features, is applied.  Designers should be aware of these principles to help organize the visual field without directed, conscious attention.&lt;/p&gt;

&lt;h2 id="references"&gt;References&lt;/h2&gt;

&lt;ol class="bibliography"&gt;&lt;li&gt;&lt;span id="Treisman1985"&gt;[1]A. Treisman, “Preattentive Processing in Vision,” &lt;i&gt;Computer vision, graphics, and image processing&lt;/i&gt;, 1985.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Healey1993"&gt;[2]C. G. Healey, K. S. Booth, and J. T. Enns, “Harnessing preattentive processes for multivariate data visualization,” &lt;i&gt;Graphics Interface&lt;/i&gt;, 1993.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Ware2013"&gt;[3]C. Ware, &lt;i&gt;Information Visualization&lt;/i&gt;, Third Edit. Waltham, MA: Elsevier, 2013, p. 512.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Zhaoping2006"&gt;[4]L. Zhaoping and P. Dayan, “Pre-attentive visual selection.,” &lt;i&gt;Neural networks : the official journal of the International Neural Network Society&lt;/i&gt;, vol. 19, no. 9, pp. 1437–9, Nov. 2006.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Koffka1935"&gt;[5]K. Koffka, “Principles of Gestalt psychology,” pp. 1–14, 1935.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Chang2002"&gt;[6]D. Chang, L. Dooley, and J. E. Tuovinen, “Gestalt Theory in Visual Screen Design — A New Look at an old subject,” &lt;i&gt;\ldots of the Seventh world conference on \ldots&lt;/i&gt;, 2002.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Kubovy1998"&gt;[7]M. Kubovy, A. O. Holcombe, and J. Wagemans, “On the Lawfulness of Grouping by Proximity,” &lt;i&gt;Cognitive psychology&lt;/i&gt;, vol. 98, no. 35, pp. 71–98, 1998.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Slocum1983"&gt;[8]T. A. Slocum, “Predicting visual clusters on graduated circle maps,” &lt;i&gt;Cartography and Geographic Information Science&lt;/i&gt;, vol. 10, no. 1, pp. 59–72, 1983.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Rock1964"&gt;[9]I. Rock and L. Brosgole, “Grouping Based on Phenomenal Proximity.,” &lt;i&gt;Journal of experimental psychology&lt;/i&gt;, vol. 67, no. 6, pp. 531–8, Jun. 1964.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Compton1993"&gt;[10]B. J. Compton and G. D. Logan, “Evaluating a computational model of perceptual grouping by proximity.,” &lt;i&gt;Perception &amp;amp; psychophysics&lt;/i&gt;, vol. 53, no. 4, pp. 403–21, Apr. 1993.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Quinlan1998"&gt;[11]P. T. Quinlan and R. N. Wilton, “Grouping by proximity or similarity? Competition between the Gestalt principles in vision,” &lt;i&gt;Perception&lt;/i&gt;, vol. 27, no. 4, pp. 417–430, 1998.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Kubovy2008"&gt;[12]M. Kubovy and M. van den Berg, “The whole is equal to the sum of its parts: a probabilistic model of grouping by proximity and similarity in regular patterns.,” &lt;i&gt;Psychological review&lt;/i&gt;, vol. 115, no. 1, pp. 131–54, Jan. 2008.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Beck1967"&gt;[13]J. Beck, “Perceptual grouping produced by line figures,” &lt;i&gt;Perception &amp;amp; Psychophysics&lt;/i&gt;, vol. 2, no. 11, pp. 491–495, Nov. 1967.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Palmer1992"&gt;[14]S. E. Palmer, “Common Region: A new principle of perceptual grouping,” &lt;i&gt;Cognitive psychology&lt;/i&gt;, vol. 24, no. 3, pp. 436–447, 1992.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Donnelly1991"&gt;[15]N. Donnelly, G. W. Humphreys, and M. J. Riddoch, “Parallel computation of primitive shape descriptions,” &lt;i&gt;Journal of Experimental Psychology: Human Perception and Performance&lt;/i&gt;, vol. 17, no. 2, p. 561, 1991.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Treisman1980"&gt;[16]a M. Treisman and G. Gelade, “A feature-integration theory of attention.,” &lt;i&gt;Cognitive psychology&lt;/i&gt;, vol. 12, no. 1, pp. 97–136, Jan. 1980.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Smith2007"&gt;[17]E. E. Smith and S. M. Kosslyn, &lt;i&gt;No Title&lt;/i&gt;. Prentice Hall, Inc, 2007, p. 610.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Treisman1982"&gt;[18]a Treisman, “Perceptual grouping and attention in visual search for features and for objects.,” &lt;i&gt;Journal of experimental psychology. Human perception and performance&lt;/i&gt;, vol. 8, no. 2, pp. 194–214, Apr. 1982.&lt;/span&gt;&lt;/li&gt;&lt;/ol&gt;
</content>
  </entry>
  <entry>
    <title>The Significance of Contrast</title>
    <link href="http://kenhirakawa.com/significance-of-contrast"/>
    <updated>2013-02-19T00:00:00-08:00</updated>
    <id>/significance-of-contrast</id>
    <content type="html">&lt;h1 id="the-significance-of-contrast"&gt;The Significance of Contrast&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Research and Application of the Effects of Contrast&lt;/em&gt;&lt;/p&gt;

&lt;h2 id="signal-detection-theory"&gt;Signal Detection Theory&lt;/h2&gt;

&lt;p&gt;The human visual system evolved to serve survival purposes.  Being able to spot a tiger amidst tall, golden grass is a life-or-death situation that depends heavily on the visual system’s ability to discern the target from its distractors.  The more distractors there are, the less likely the target is detected.  This phenomenon is called the set-size effect.  It results from an increase in the probability that the noise from one or more of the distractors exceeds that of the target, causing the observer to choose a distractor instead of the target &lt;a href="#Cameron2004"&gt;[1]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The set-size effect can be modeled as a signal-noise problem by a broader framework called signal detection theory, which describes how information-bearing signals are discerned from background noise.  The theory holds that the discrimination of a signal depends on the intensity of the signal and the psychophysical state of the individual.  In the case of the tiger, the signal stemming from the well-camouflaged animal may be too weak for the individual to perceive, leaving the tiger undetected.&lt;/p&gt;

&lt;h2 id="how-to-achieve-signal-detection"&gt;How to Achieve Signal Detection&lt;/h2&gt;

&lt;p&gt;The visual system continuously processes signals from the visual field.  During the early stages of visual processing, billions of neurons work in parallel to extract features such as edge orientations and colors.  At this stage, the bottom-up, physical sensory processing proceeds in a fairly automatic fashion &lt;a href="#Wolfe1994"&gt;[2]&lt;/a&gt;.  Unfortunately, there is more information in the world than the human brain can process, so a more selective approach is needed.&lt;/p&gt;

&lt;p&gt;One option is to discard input.  In the periphery of the human eye, receptors are spaced more widely and ganglion cell receptive fields are large, allowing only coarse sampling.  Only at the fovea is the retinal image processed in full detail.  The second option is to process information selectively by conducting a limited-capacity search across a smaller contained space &lt;a href="#Wolfe1994"&gt;[2]&lt;/a&gt;.  In either case, the signal arriving at the retina from the physical world is key: the stronger the signal, the more likely it is to reach the visual cortex quickly and capture attention.  One important component of signal strength is the presence of contrast.&lt;/p&gt;

&lt;h2 id="what-is-contrast"&gt;What is Contrast?&lt;/h2&gt;

&lt;p&gt;Contrast is the difference from the perceptual norm; in this context, the difference in light between the target and the background.  This relative measurement of light is what the eye transmits to the brain; nothing about the absolute amount of light falling onto the retina is channeled to the brain &lt;a href="#Ware2013"&gt;[3, P. 69]&lt;/a&gt;.  Generally, the higher the contrast, the larger the signal strength.  Note, however, that the highest contrast is not necessarily the best contrast, as the split-complementary color scheme suggests.  Ultimately, whether an individual sees something depends heavily on contrast.&lt;/p&gt;

&lt;h3 id="luminance"&gt;Luminance&lt;/h3&gt;
&lt;p&gt;Luminance is the physical measure of the amount of light and plays a key role in determining contrast.  It is one dimension of color space (red-green and blue-yellow being the other two) and carries the most information, making it the most important dimension for human vision.  Without luminance, visual perception would be impossible.&lt;/p&gt;

&lt;p&gt;Manipulating luminance contrast can create various effects.  One example is simultaneous brightness contrast: a gray patch placed on a dark background appears lighter than the same patch on a light background.  This effect can lead to errors of judgment of nearly 20% when reading quantitative values displayed on a gray scale.  Hence, it is advisable not to use a gray scale to represent more than two to four numerical values &lt;a href="#Ware2013"&gt;[3, P. 75]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The loss of luminance contrast affects the reading rates of even normally sighted readers: reading rates decreased by a factor of two for a tenfold reduction in the luminance contrast of text.  Comparing color contrast to reading performance showed a similar result, suggesting that color contrast also affects reading rates &lt;a href="#Legge1990"&gt;[4]&lt;/a&gt;.  Furthermore, text size contributes to readability, especially for people with low vision; reading rates decline rapidly for very small and very large letters &lt;a href="#Legge1987"&gt;[5]&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id="brightness"&gt;Brightness&lt;/h3&gt;
&lt;p&gt;Brightness is the perception of light: the psychological response to the physical stimulus of luminance.  The human visual system is very sensitive to this stimulus; it can detect about a 0.5% change in brightness, making brightness a crucial channel for delivering information &lt;a href="#Ware2013"&gt;[3, P. 89]&lt;/a&gt;.  The relationship between luminance (L) and brightness (B) can be represented by S. S. Stevens’s simple model, B = KL^n, where K is a constant and n is an exponent on the order of 0.33 &lt;a href="#Ngai2000"&gt;[6]&lt;/a&gt;.  This simple equation, however, applies only to lights viewed in relative isolation in the dark, such as bright LED lights on an aircraft control panel at night &lt;a href="#Ware2013"&gt;[3, P. 83]&lt;/a&gt;.  Hue, saturation, and other variables must be taken into consideration when designing for brightness.&lt;/p&gt;
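
As a quick numerical illustration of Stevens’s model (the value of K and the luminance inputs are arbitrary assumptions), a compressive exponent of n = 0.33 means that large changes in physical luminance produce much smaller changes in perceived brightness:

```python
def brightness(luminance, k=1.0, n=0.33):
    """Stevens's power law, B = K * L**n; valid only for isolated
    lights viewed in the dark, per the discussion above."""
    return k * luminance ** n

# Doubling physical luminance raises perceived brightness by only ~26%:
ratio = brightness(2.0) / brightness(1.0)  # 2 ** 0.33, about 1.26
```

This compression is one reason luminance alone is a weak encoding channel for many distinct data values.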

&lt;h3 id="hue"&gt;Hue&lt;/h3&gt;
&lt;p&gt;Hue corresponds to the wavelength of light and is what we perceive as color.  According to the V(λ) function, humans are roughly 100 times more sensitive to yellow than to blue &lt;a href="#Ware2013"&gt;[3, P. 80]&lt;/a&gt;, and the visual system is less receptive to red than to colors such as green, yellow, or orange.  Blue on a black background should be avoided because the luminance difference between the two colors is minimal.  Equiluminant colors can also be stressful to view because the visual system cannot perceive their edges properly.  A minimum luminance contrast ratio of 3:1 is recommended &lt;a href="#Ware2013"&gt;[3, P. 82]&lt;/a&gt;.&lt;/p&gt;
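
A contrast ratio like the 3:1 minimum can be checked programmatically. The sketch below uses the WCAG 2.0 definitions of relative luminance and contrast ratio (an assumption on my part; Ware’s figure is not necessarily computed this way), which range from 1:1 for identical colors up to 21:1 for black on white:

```python
def relative_luminance(rgb):
    """WCAG 2.0 relative luminance of an 8-bit sRGB color."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# White on black reaches the maximum 21:1; the guideline above asks for >= 3:1.
```

A designer can run candidate foreground/background pairs through `contrast_ratio` and reject any pair below the 3:1 floor.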

&lt;p&gt;The human eye contains three distinct types of color receptors, called cones, and light-sensitive receptors called rods.  Foveal vision has a high concentration of green- and red-sensitive cones but no rods, making red, green, and yellow look sharper and crisper.  Peripheral vision, on the other hand, has a high concentration of blue-sensitive cones and luminance-sensitive rods; another reason to avoid using blue for smaller, finer details in a visually competitive arena.&lt;/p&gt;

&lt;p&gt;A characteristic that limits the effectiveness of color is that roughly 7 percent of the male population is color-blind.  Most prevalent is protanopia, a color deficiency in the red-green channel.  Another limitation is that the color vision facility of the human visual system becomes less engaged under low illumination, particularly for red, which is not seen by rods &lt;a href="#Wickens2004"&gt;[7, P. 73]&lt;/a&gt;.  Although color plays a crucial role in data visualization, an important guideline is to design for monochrome first so that elements can still be seen under non-optimal conditions &lt;a href="#Schneiderman2005"&gt;[8]&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id="the-special-properties-of-blue"&gt;The Special Properties of Blue&lt;/h3&gt;

&lt;p&gt;Blue is a special hue: it is the only color known to have a universal meaning, and it is also the hue to which the eye is least sensitive.  This is especially true for the elderly, who have a difficult time perceiving blue, particularly in low-contrast conditions, because of poor visual acuity &lt;a href="#Owsley1981"&gt;[9]&lt;/a&gt;; with age, the eye’s lens and surrounding muscles become rigid, and the eye accumulates damage from years of UV light exposure.  Blue also tends to be out of focus, particularly when a nearby red object is in focus, due to chromatic aberration &lt;a href="#Ware2013"&gt;[3, P. 49]&lt;/a&gt;.  For these reasons, blue is not recommended for small, fine details.&lt;/p&gt;

&lt;h3 id="saturation"&gt;Saturation&lt;/h3&gt;

&lt;p&gt;Saturation is another dimension that affects contrast.  Saturation represents the purity of a color: the less saturated a color is, the more washed out it looks.  An object’s perceived saturation is also determined by its surroundings.  Objects appear more vivid and richly saturated against low-contrast, gray surroundings than against high-contrast, multi-colored backgrounds &lt;a href="#Brown1997"&gt;[10]&lt;/a&gt;.  Large color-coded areas should use less saturation.  For example, maps should have low saturation so that smaller objects with higher saturation, such as dropped pins, can be distinctly seen &lt;a href="#Ware2013"&gt;[3, P. 125]&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id="application-of-research---scroogledcom"&gt;Application of Research - Scroogled.com&lt;/h2&gt;
&lt;p&gt;Scroogled.com is a website that uses high contrast to gain attention and provide a sense of alertness.  With its highly saturated colors and moving arrows, it is a visual arena with almost every element fighting to grab the observer’s attention.  Improvements can be made to the site to reduce fatigue and visual stress.&lt;/p&gt;

&lt;h3 id="red-alert"&gt;Red Alert&lt;/h3&gt;

&lt;p&gt;Take, for example, the alert message on privacy on the Scroogled homepage.  The background is a highly saturated red that consumes a large portion of the page’s real estate.  It has bold white text for the headline and a smaller, finer font for the detailed text.  The section passes the minimum contrast ratio of 3:1 with a ratio of 5.9:1 (measured using the Colour Contrast Analyzer found &lt;a href="http://paciellogroup.com/resources/contrastAnalyser"&gt;here&lt;/a&gt;), and works well for users with protanopia, who are red-green color-blind.  However, this particular red is fatiguing because of its extremely high signal strength, which makes the white text difficult to read.  The area also has a high density of high-contrast edges, which contributes to fatigue.  This is a case of too much contrast, leaving users susceptible to the phenomenon of negative afterimage &lt;a href="#Wickens2004"&gt;[7, P. 73]&lt;/a&gt;.  It is recommended that the background color be lowered in saturation or brightness, or swapped for a different hue altogether.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/contrast-scroogle.png" /&gt;
&lt;figcaption&gt;Figure 1: Left - The current website with high contrast background.  Right - The same background with lowered brightness for lowered signal strength&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h3 id="the-logo"&gt;The Logo&lt;/h3&gt;
&lt;p&gt;The Scroogled logo is the largest text on the page and is positioned prominently.  At a small enough browser width, the logo overlaps the background image of the two individuals.  Every letter overlapping the image except the yellow ‘E’ is stressful to see because each is roughly equiluminant with the background.  The blue ‘L’ next to the red ‘G’ falls out of focus when visual attention turns to the red ‘G’.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/contrast-table.png" /&gt;
&lt;figcaption&gt;Table 1: The luminance contrast measured for letters overlapping the background image.  The background was determined to be the average color around the letter within a 5px range.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;Although its relatively large size helps make the logo visible, a number of improvements can be made.  First, luminance contrast boundaries can be used to help define the large letters &lt;a href="#Ware2013"&gt;[3, P. 113]&lt;/a&gt;.  Second, the opacity of the background can be reduced to increase the luminance contrast as well as the color and brightness differences.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/contrast-scroogled-dark.png" /&gt;
&lt;figcaption&gt;Figure 2: The logo with a darker background&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2 id="data-visualization"&gt;Data Visualization&lt;/h2&gt;

&lt;p&gt;The Carnegie Mellon study described on the “Get the Facts” page displays two charts.  One is a pie chart with each sector color-coded in a different shade of blue.  Those with poor acuity, such as the elderly, will find it difficult to perceive the numbers labeled on each sector, especially the three sectors on the left (5%, 13%, 16%).   Even if the chart had been designed monochrome-first as Schneiderman suggests, the number of gray shades exceeds the recommended two to four numerical values &lt;a href="#Ware2013"&gt;[3, P. 75]&lt;/a&gt;.  An improvement would be to color-code the pie chart with more than one hue and to use bold text for higher contrast.&lt;/p&gt;

&lt;figure&gt;
&lt;img src="http://kenhirakawa.com/assets/images/contrast-monochrome.png" /&gt;
&lt;figcaption&gt;Figure 3: Left - The original pie chart.  Middle - Monochromatic pie chart.  Right - Pie chart with different hues used for color-coding.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;The visual system depends heavily on relative signals from the visual field, so the most basic component of signal strength is the presence of contrast.  Luminance, brightness, hue, and saturation can all be used to create contrast.  As shown with Scroogled.com, either an excess or a lack of contrast makes elements difficult to perceive.  Interfaces should be designed with the right amount of contrast for the human visual system.&lt;/p&gt;

&lt;h2 id="references"&gt;References&lt;/h2&gt;

&lt;ol class="bibliography"&gt;&lt;li&gt;&lt;span id="Cameron2004"&gt;[1]E. L. Cameron, J. C. Tai, M. P. Eckstein, and M. Carrasco, “Signal detection theory applied to three visual search tasks–identification, yes/no detection and localization.,” &lt;i&gt;Spatial vision&lt;/i&gt;, vol. 17, no. 4-5, pp. 295–325, Jan. 2004.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Wolfe1994"&gt;[2]J. M. Wolfe, “Guided search 2.0 A revised model of visual search,” &lt;i&gt;Psychonomic bulletin &amp;amp; review&lt;/i&gt;, vol. 1, no. 2, pp. 202–238, 1994.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Ware2013"&gt;[3]C. Ware, &lt;i&gt;Information Visualization&lt;/i&gt;, Third Edit. Waltham, MA: Elsevier, 2013, p. 512.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Legge1990"&gt;[4]G. E. Legge, D. H. Parish, a Luebker, and L. H. Wurm, “Psychophysics of reading. XI. Comparing color contrast and luminance contrast.,” &lt;i&gt;Journal of the Optical Society of America. A, Optics and image science&lt;/i&gt;, vol. 7, no. 10, pp. 2002–10, Oct. 1990.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Legge1987"&gt;[5]G. E. Legge, G. S. Rubin, and A. Luebker, “PSYCHOPHYSICS OF READING : V . THE ROLE OF CONTRAST IN NORMAL VISION,” &lt;i&gt;Vision research&lt;/i&gt;, vol. 27, no. 7, pp. 1165–1177, 1987.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Ngai2000"&gt;[6]P. Y. Ngai, “The Relationship Between Luminance Uniformity and Brightness Perception,” &lt;i&gt;Journal of the Illuminating Engineering Society&lt;/i&gt;, vol. 29, no. 1, pp. 41–50, 2000.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Wickens2004"&gt;[7]C. D. Wickens, &lt;i&gt;An Introduction To Human Factors Engineering&lt;/i&gt;, Second Edi. Upper Saddle River, NJ: Pearson Education, Inc., 2004, p. 587.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Schneiderman2005"&gt;[8]S. B. Schneiderman and C. Plaisant, &lt;i&gt;Designing The User Interface&lt;/i&gt;, 4Th Editio. Pearson Addison Wesley, USA, 2005.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Owsley1981"&gt;[9]C. Owsley, R. Sekuler, and C. Boldt, “Aging and low-contrast vision: face perception.,” &lt;i&gt;Investigative Ophthalmology &amp;amp; Visual Science&lt;/i&gt;, no. August, pp. 362–365, 1981.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span id="Brown1997"&gt;[10]R. O. Brown and D. I. MacLeod, “Color appearance depends on the variance of surround colors.,” &lt;i&gt;Current biology : CB&lt;/i&gt;, vol. 7, no. 11, pp. 844–9, Nov. 1997.&lt;/span&gt;&lt;/li&gt;&lt;/ol&gt;
</content>
  </entry>
</feed>
