The Creation Engine No. 2 blog is 102018-07-02T20:12:00+01:00Jim,2018-07-02:2018/07/02/this-blog-is-10/<p>Just over ten years ago I set up <a href="">The Creation Engine No. 2</a> after previously blogging on the original <a href="">Linden Lab</a> hosted <a href="">Creation Engine</a> and before that on <a href="">Terra Nova</a>. So, while I&#8217;ve been blogging for almost 14 years, 10 years of The Creation Engine No. 2 seems like a good excuse to look back at the last decade of&nbsp;blogging.</p> <p>Behind the scenes the technology behind The Creation Engine No. 2 has changed a lot. Originally hosted as a <a href="">Django</a> site on <a href="">Byteflow</a>, I converted the blog to a static site generated by <a href="">Pelican</a> and hosted by <a href="">github</a> almost exactly 5 years ago. I wrote a post at the time detailing the move, which I finally <a href="">published on its 5th birthday</a> after rediscovering it today. Despite the big changes behind the scenes, <a href="">github&#8217;s support for custom domains</a> and <a href="">Pelican&#8217;s support for <span class="caps">RSS</span> import</a> meant that the only externally visible changes to The Creation Engine No. 2 were cosmetic. All the <span class="caps">RSS</span> feeds and links to the site were unaffected, which is great because <a href="">Cool URIs don&#8217;t change</a>.</p> <p>While the technology has changed a lot, the content continues to mostly be a <a href="">hack diary</a> describing the various contributions I&#8217;ve made to projects like the <a href=""><span class="caps">EVE</span> <span class="caps">CREST</span> <span class="caps">API</span></a>, <a href="">nailgun</a>, <a href="">buck</a>, <a href="">infer</a> and <a href="">ReactVR</a>. 
I write these posts as documentation in case I have to solve similar problems in future, but it&#8217;s also great when they&#8217;re useful to others, so I&#8217;m very happy to see that my recent posts about using <a href="">ReactVR with Redux</a> have been my most popular to&nbsp;date.</p> <p>I don&#8217;t attend as many conferences now as I did while I was working at <a href="">Linden Lab</a>, but the various conference write-up posts serve a similar purpose: reminding me about the talks and speakers I&#8217;ve seen and hopefully also pointing others to useful information. Social media is now more effective for letting people know about future events, so I rarely use The Creation Engine to announce upcoming events, but still think it&#8217;s a useful place to collect longform thoughts about past events and links to&nbsp;recordings.</p> <p>One post which doesn&#8217;t fit into the above categories is my <a href="">review of Now We Are 40</a> and thoughts on growing up with the web. Part of the reason I posted it here is because I could. I don&#8217;t need someone to speak for me or my generation, I can do it myself. I&#8217;m very grateful that I&#8217;ve been able to share my thoughts with the world for the last decade and very happy that a lot of people have found them&nbsp;useful.</p> <p>Happy birthday to The Creation Engine No. 2. Here&#8217;s to the next&nbsp;10.</p>Replicated Redux: The Movie2018-05-22T11:30:00+01:00Jim,2018-05-22:2018/05/22/replicated-redux-movie/<div class="flex-video"><iframe width="640" height="360" src="" frameborder="0" allowfullscreen></iframe></div> <p>The recording of my recent <a href="">React Europe talk about Replicated Redux is now online</a> and I&#8217;ve written several other posts describing <a href="">designing</a>, <a href="">testing</a> and <a href="">generalising</a> the library if you would like to know more about the details.
If you’d like to play the web version of pairs or see the rest of the code, it’s available on github <a href="">here</a>.</p> <p>People often describe multi-user networked <span class="caps">VR</span> experiences as laggy, but code to hide latency by optimistically predicting the effects of local actions is hard to write, difficult to test and often application specific. It feels great to have helped make this problem a little easier to solve for React developers by building Replicated&nbsp;Redux.</p>Replaying Replicated Redux2017-11-10T22:56:00+00:00Jim,2017-11-10:2017/11/10/replaying-replicated-redux/<p>While <a href="">property based tests proved to be a powerful tool for finding and fixing problems with ReactVR pairs</a>, the limitations of the simplistic <code>clientPredictionConsistency</code> mechanism&nbsp;remained.</p> <p>It&#8217;s easy to think of applications where one order of a sequence of actions is valid, but another order is invalid. Imagine an application which models a door which can be locked: an <code>unlock</code> action followed by an <code>open</code> action should be valid, but an <code>open</code> action followed by <code>unlock</code> should be invalid given a starting state where the door is locked. It&#8217;s a lot more difficult to imagine how every ordering of this simple sequence of actions can be made either valid or&nbsp;invalid.</p> <p>The limitation of <code>clientPredictionConsistency</code> is caused by the master client having to see an invalid action before it notices that clients need resyncing.
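The door example can be made concrete with a small sketch (illustrative code, not from the post&#8217;s gists) using the same isValid/reduce split the library uses:

```javascript
// A door which can be locked: action validity depends on ordering.
const initialState = { locked: true, open: false };

function isValid(state, action) {
  switch (action.type) {
    case 'unlock': return state.locked;
    case 'open': return !state.locked && !state.open;
    default: return false;
  }
}

function reduce(state, action) {
  if (!isValid(state, action)) return state; // invalid actions are ignored
  switch (action.type) {
    case 'unlock': return { ...state, locked: false };
    case 'open': return { ...state, open: true };
    default: return state;
  }
}

// unlock then open succeeds; the reverse order leaves the door shut, and
// clientPredictionConsistency only notices the conflict once the invalid
// ordering reaches the master.
const ordered = [{ type: 'unlock' }, { type: 'open' }].reduce(reduce, initialState);
const reversed = [{ type: 'open' }, { type: 'unlock' }].reduce(reduce, initialState);
```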
An obvious way to avoid this limitation would be to have all other clients let the master know if they have seen an invalid action, but this solution becomes more complicated when you want to avoid the master sending duplicate sync actions if multiple clients report invalid actions&nbsp;simultaneously.</p> <p>At this point, I took a step back: even if clients could report conflicts without duplicate resyncs, the improved <code>clientPredictionConsistency</code> would centralise conflict resolution in the master. Clients receiving a state sync action would have no context on the conflict and so would be unable to do anything more than reset their local state. Reusing the state sync mechanism which allows late joining is simple, but doesn&#8217;t allow anything more than effectively&nbsp;rejoining.</p> <p>One of the nice things about Redux actions is that they are more meaningful than either <span class="caps">UI</span> events or state updates: it would be nice if clients could use the context they have in the actions to resolve conflicts and reconcile optimistic updates with authoritative actions gracefully. This made me think of the <a href="">optimistic update mechanism used by Half-Life</a> which keeps a list of actions which have been predicted locally and reapplies the predictions to new states received from the server. <a href="">Redux was built to easily support this kind of time travel</a> through application history, so I wondered whether something similar could be built for Replicated&nbsp;Redux.</p> <p>Some hacking on these ideas produced <code>replayConsistency</code>: a generalisation of the Half-Life optimistic update ideas applied to arbitrary Redux actions. When a non-master client generates a valid local action it is sent to the master, immediately reduced locally, but also appended to a list of predicted actions.
When the client receives a new action from the master it rewinds the state back to the start of the prediction, applies the new master-validated action and then reapplies the predicted actions if they are still valid. Eventually every predicted action becomes part of the total ordering defined by the series of actions validated by the master and is sent back to the client, or the state which caused the prediction to be invalid on the master is reached on the client. In either case the prediction is discarded. In the case where a prediction becomes invalid, the client has the state before the prediction, the master-validated action and the list of predicted actions available when <code>updatePredictions</code> is called. This context allows the client to do something significantly more sophisticated to fix the local state than simply resetting the entire local state. In fact <code>replayConsistency</code> does not need to send state syncs at all, making it significantly more efficient than <code>clientPredictionConsistency</code>, which I renamed <code>resyncConsistency</code> to make the differences between the two optimistic consistency policies&nbsp;clear.</p> <script src=""></script> <p>Switching out <code>resyncConsistency</code> for <code>replayConsistency</code> and eyeballing several games of ReactVR Pairs suggested that the new consistency mechanism was working as intended, but finding all of the kinks in <code>resyncConsistency</code> had required testing thousands of sequences of actions using property based tests. My existing tests didn&#8217;t apply here: they made sure that an application would work given the limitations of <code>resyncConsistency</code>.
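The rewind-and-replay loop can be sketched roughly as follows (illustrative names like makeClient; the real middleware lives in the embedded gists):

```javascript
// A toy replicated store: `base` is the state built purely from
// master-validated actions, `predictions` are unconfirmed local actions
// and `state` is the optimistic view shown to the UI.
function makeClient(reduce, isValid, initialState) {
  return {
    base: initialState,
    predictions: [],
    state: initialState,

    // A valid local action is reduced optimistically and remembered.
    predict(action) {
      if (!isValid(this.state, action)) return;
      this.predictions.push(action);
      this.state = reduce(this.state, action);
    },

    // A master-validated action rewinds to the base state, applies the
    // authoritative action and then replays any still-valid predictions.
    // (A fuller implementation would also prune invalidated predictions.)
    receive(action) {
      this.predictions = this.predictions.filter((p) => p.id !== action.id);
      this.base = reduce(this.base, action);
      this.state = this.predictions.reduce(
        (s, p) => (isValid(s, p) ? reduce(s, p) : s),
        this.base
      );
    },
  };
}

// A pairs-like domain: only the first score of a pair is valid.
const isValid = (state, action) => !state.scored.includes(action.pair);
const reduce = (state, action) => ({
  scored: [...state.scored, action.pair],
  points: {
    ...state.points,
    [action.player]: (state.points[action.player] || 0) + 1,
  },
});

const client = makeClient(reduce, isValid, { scored: [], points: {} });
// Player A optimistically scores pair 1, but the master validated B first:
client.predict({ type: 'score', pair: 1, player: 'A', id: 'a1' });
client.receive({ type: 'score', pair: 1, player: 'B', id: 'b1' });
```

After the replay, A&#8217;s optimistic point has rolled back, and the client still has the prediction and the authoritative action in hand if it wants to reconcile more gracefully than a full reset.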
The property I really wanted to ensure held for all consistency mechanisms is that regardless of the predictions made at each client, eventually all clients would be&nbsp;consistent.</p> <script src=""></script> <p>This test generates a sequence of pairs actions which might be sent by the master or one of two non-master clients and then checks that all clients are eventually consistent even in the pathological case where each non-master predicts all of its actions before getting any actions from the master. A nice feature of this test is that it is independent of the consistency mechanism and so the same test can be run to ensure that both <code>resyncConsistency</code> and <code>replayConsistency</code> result in all clients being eventually consistent for thousands of sequences of&nbsp;actions.</p> <p>With my tests passing I had high confidence that <code>replayConsistency</code> was working and didn&#8217;t impose any limitations on event ordering, making it a much more general and efficient solution than <code>resyncConsistency</code>, as well as much easier to use as it doesn&#8217;t require complicated reasoning about application event ordering.
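The shape of the eventual-consistency property can be approximated without test-check by hand-rolling a generator (an illustrative sketch; the real test drives the actual middleware through many more scenarios):

```javascript
// Stand-in for the test-check property: for many random action sequences,
// all clients reducing the master-validated stream end up in the same state.
function lcg(seed) {
  let s = seed >>> 0; // deterministic generator keeps failing runs reproducible
  return () => ((s = (s * 1664525 + 1013904223) >>> 0) / 2 ** 32);
}

const isValid = (state, action) => !state.scored.includes(action.pair);
const reduce = (state, action) => ({ scored: [...state.scored, action.pair] });

function runClients(actions, numClients) {
  // The master validates actions in arrival order and broadcasts the result.
  let master = { scored: [] };
  const accepted = [];
  for (const action of actions) {
    if (isValid(master, action)) {
      master = reduce(master, action);
      accepted.push(action);
    }
  }
  // Every client reduces the same validated stream.
  return Array.from({ length: numClients }, () =>
    accepted.reduce(reduce, { scored: [] })
  );
}

const rand = lcg(42);
let allConsistent = true;
for (let i = 0; i < 1000; i++) {
  const actions = Array.from({ length: 8 }, () => ({
    type: 'score',
    pair: Math.floor(rand() * 4),
  }));
  const clients = runClients(actions, 3);
  const snapshot = JSON.stringify(clients[0]);
  allConsistent = allConsistent && clients.every(
    (c) => JSON.stringify(c) === snapshot
  );
}
```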
The potential to perform sophisticated application specific state reconciliation when predictions are invalidated is also interesting and I&#8217;m excited to see what we can do with it in&nbsp;future.</p> <p>If you&#8217;d like to play the ReactVR version of pairs or see the rest of the code, it&#8217;s available on github <a href="">here</a>.</p> <p>All code in this post is made available under the <a href="">ReactVR examples license</a>.</p>Building Safety in to Social VR2017-10-26T23:07:00+01:00Jim,2017-10-26:2017/10/26/building-safety-in-to-social-vr/<div class="flex-video widescreen"><iframe src="" frameborder="0" allowfullscreen=""></iframe></div> <p>Last year I hosted a panel on <a href="">creating a safe environment for people in <span class="caps">VR</span></a> with <a href="">Tony Sheng</a> and <a href="">Darshan Shankar</a> at <span class="caps">OC3</span>. I <a href="">commented at the time</a> that the discussion reminded me of the story of <a href="">LambdaMOO</a> becoming a self-governing community told by <a href="">Julian Dibbell</a> in <a href="">My Tiny Life</a>, so I was very happy to have the opportunity to compare the established and novel approaches to Building Safety in to Social <span class="caps">VR</span> at <a href=""><span class="caps">OC4</span></a> with <a href="">Mike Howard</a>.</p> <p>While technology changes, people stay the same, so it&#8217;s important that we don&#8217;t lose the lessons learned by people like Julian, <a href="">Richard Bartle</a>, <a href="">Raph Koster</a>, <a href="">Damion Schubert</a> and the many others who have been wrestling with and thinking about these problems for&nbsp;decades.</p>Testing Replicated Redux2017-07-31T20:29:00+01:00Jim,2017-07-31:2017/07/31/testing-replicated-redux/<p>Opening a couple of browser windows and clicking around was more than sufficient for testing the initial version of <a href="">ReactVR pairs</a>. 
A simple middleware to log actions took advantage of the <a href="">Redux</a> approach of reifying events: a glance at the console revealed precisely which sequence of actions caused a&nbsp;problem.</p> <p>Adding support for <a href="">optimistic consistency</a> made testing more challenging. In order to test conflict resolution, conflicting actions needed to be generated on multiple clients almost simultaneously. After a couple of sessions testing broken versions of pairs with friends it was clear that a more efficient process was required. Fortunately, Redux actions are independent of the <span class="caps">UI</span> events which generate them. This separation of concerns made it trivial to randomly generate and dispatch actions without driving the <span class="caps">UI</span>. Opening clients which dispatched several randomly generated actions per second made it easy to generate conflicts to test optimistic consistency policies, while watching the games play out made it easy to eyeball the results and check that they were correct. This random action generation mechanism can be enabled by adding <code>?random</code> as the query string when opening the Pairs example in a&nbsp;browser.</p> <div class="flex-video"><iframe width="560" height="315" src=";controls=0&amp;showinfo=0" frameborder="0" allowfullscreen></iframe></div> <p>One of the problems found by this approach was that clients didn&#8217;t always end up eventually consistent. One client would end up with all squares shown and all pairs scored, while another would have some squares hidden. After some digging it turned out that in these cases the master would be reducing a hide action followed by a score action, while the other client would reduce the actions in the reverse order, causing the hide action to be invalid.
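The ?random mechanism boils down to dispatching generated actions directly at the store instead of waiting for UI events; a toy sketch (assumed store and reducer shapes, not the example&#8217;s actual code):

```javascript
// A minimal store plus a driver which dispatches random show actions.
function createStore(reducer, initialState) {
  let state = initialState;
  return {
    getState: () => state,
    dispatch(action) { state = reducer(state, action); },
  };
}

// Showing a square is idempotent: showing it twice changes nothing.
const reducer = (state, action) =>
  action.type === 'show' && !state.shown.includes(action.square)
    ? { shown: [...state.shown, action.square] }
    : state;

// Drive the store directly, bypassing the UI, to surface conflicts quickly.
function dispatchRandomActions(store, steps, rng) {
  for (let i = 0; i < steps; i++) {
    store.dispatch({ type: 'show', square: Math.floor(rng() * 16) });
  }
}

// A tiny deterministic generator keeps failing runs reproducible.
let seed = 1;
const rng = () => ((seed = (seed * 48271) % 2147483647) / 2147483647);

const store = createStore(reducer, { shown: [] });
dispatchRandomActions(store, 50, rng);
```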
Without a way for a non-master client to let the master know about the conflict the master would not send a sync action and the clients would not end up eventually&nbsp;consistent.</p> <p>This problem identified a limitation with the optimistic <code>clientPredictionConsistency</code> policy: if any sequence of actions causes a conflict then every ordering of those actions must also cause a conflict in order for the clients to end up eventually consistent. The fix for the hide-score case seemed clear: if the score action was only valid if the pair was shown then both orderings of those actions would generate a conflict and so the master would generate a sync action regardless of the order in which it reduced the actions. Some more eyeballing seemed to suggest that the problem had been solved, but a better way to test the property that sync action generation is independent of action order was to write a property based&nbsp;test.</p> <p>Some Googling revealed that my Facebook colleague <a href="">Lee Byron</a> had written a JavaScript property based testing framework called <a href="">test-check</a> which was compatible with the <a href="">Jest</a> framework used by <a href="">ReactVR</a> tests, so I started hacking. I soon had a test which could generate an arbitrary sequence of actions, dispatch them and check that if the sequence of actions generated a sync action then dispatching the same sequence of actions in reverse would also generate a sync&nbsp;action.</p> <script src=""></script> <p>I could now test that the property held for thousands of action sequences in a few seconds and so I found the next bug almost immediately. While my change to make any ordering of score then hide generate a sync had fixed one problem, it had introduced another.
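The shape of that property can be sketched without test-check (hand-rolled, illustrative code): a helper reports whether a sequence would force the master to emit a sync, and the property asserts it agrees with the reversed sequence. The lockable door from earlier shows the kind of order-dependent validity the test catches:

```javascript
// Does reducing this sequence force the master to emit a sync action?
function generatesSync(actions, initialState, reduce, isValid) {
  let state = initialState;
  let sync = false;
  for (const action of actions) {
    if (isValid(state, action)) state = reduce(state, action);
    else sync = true; // an invalid action makes the master resync clients
  }
  return sync;
}

// A door with order-dependent validity:
const door = { locked: true, open: false };
const isValid = (s, a) => (a.type === 'unlock' ? s.locked : !s.locked && !s.open);
const reduce = (s, a) =>
  a.type === 'unlock' ? { ...s, locked: false } : { ...s, open: true };

const forward = generatesSync(
  [{ type: 'unlock' }, { type: 'open' }], door, reduce, isValid);
const backward = generatesSync(
  [{ type: 'open' }, { type: 'unlock' }], door, reduce, isValid);
// forward is false but backward is true, so the property "sync generation
// is independent of action order" fails here, which is exactly the kind
// of case the real property based test surfaces.
```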
The validity of score events was now dependent on the preceding show events, so it was possible for show-show-score to be valid but for every other order of those events to cause the score event to be invalid and so not&nbsp;reduced.</p> <p>At this point I took a step back. The only situation that should cause a conflict that needs to be resolved is when more than one player tries to score the same pair. In this situation clients don&#8217;t have enough information to resolve the conflict and a master client needs to pick an ordering and communicate the result to the other clients. In the case of hide and score actions every client can do the right thing. Hide actions can be made to not hide scored squares and score actions can be made to show pairs. With the reducers changed to work in this way hide actions can always be reduced and score actions are only invalid when they conflict with each other. With these changes in place the validation logic becomes dramatically simpler to reason about and the property based tests were unable to find any more cases which would not be eventually consistent even after thousands of&nbsp;runs.</p> <p>Testing distributed systems is hard, but combining replicated Redux with property based tests has proved to be a powerful way to gain a high degree of confidence that applications will work correctly despite limitations in the current simplistic <code>clientPredictionConsistency</code> mechanism. 
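The reworked reducers might look something like this sketch (assumed state and action shapes): hide never touches scored squares, score reveals its own pair, and only duplicate scores are invalid.

```javascript
// Reworked pairs reducers: every client can do the right thing with
// hide and score regardless of order.
const initial = { shown: [], scored: [] };

function isValid(state, action) {
  // Hide is always safe; score only conflicts with an earlier score
  // of the same pair.
  if (action.type === 'hide') return true;
  return !action.pair.some((sq) => state.scored.includes(sq));
}

function reduce(state, action) {
  switch (action.type) {
    case 'hide':
      return state.scored.includes(action.square)
        ? state // never hide a scored square
        : { ...state, shown: state.shown.filter((sq) => sq !== action.square) };
    case 'score':
      return {
        shown: [...new Set([...state.shown, ...action.pair])], // scoring shows the pair
        scored: [...state.scored, ...action.pair],
      };
    default:
      return state;
  }
}

const apply = (actions) =>
  actions.reduce((s, a) => (isValid(s, a) ? reduce(s, a) : s), initial);

// Either ordering of a score and a hide reaches the same state:
const scoreThenHide = apply([{ type: 'score', pair: [2, 7] }, { type: 'hide', square: 2 }]);
const hideThenScore = apply([{ type: 'hide', square: 2 }, { type: 'score', pair: [2, 7] }]);
```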
The same property based tests will enable new optimistic consistency mechanisms without those limitations to be developed far more quickly in&nbsp;future.</p> <p>If you&#8217;d like to play the ReactVR version of pairs or see the rest of the code, it&#8217;s available on github <a href="">here</a>.</p> <p>All code in this post is made available under the <a href="">ReactVR examples license</a>.</p>ReactVR Redux Revisited2017-07-04T20:29:00+01:00Jim,2017-07-04:2017/07/04/react-vr-redux-revisited/<p>There were a couple of aspects of my <a href="">previous experiments</a> building networked <a href="">ReactVR</a> experiences with <a href="">Redux</a> that were unsatisfactory: there wasn&#8217;t a clean separation between the application logic and network code and, while the example exploited idempotency to reduce latency for some actions, actions which could generate conflicts used a conservative consistency mechanism which added at least a network round trip to those actions. So, I did some more&nbsp;hacking.</p> <p>In an attempt to create a clean separation between application and network logic I kept the network code in a redux middleware and moved the application logic to an isValid callback which returns true if an action can be safely&nbsp;reduced:</p> <script src=""></script> <p>With this in place the simple, conservative, dumbTerminalConsistency policy can be implemented in a few lines of&nbsp;code:</p> <script src=""></script> <p>Clients generate actions in response to <span class="caps">UI</span> interactions and send those actions to the master. The master returns valid actions which are then reduced. Conflicts are resolved by serializing actions through the master client: isValid will return true for the first event, which will be distributed to all clients, but false for subsequent conflicting actions which are discarded. 
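A rough in-memory sketch of this round-trip policy (a stand-in for the websocket relay; all names illustrative, not the gist code):

```javascript
// Every action round-trips through the master, which only broadcasts
// valid actions: all clients reduce the same serialised stream.
function createMaster(reduce, isValid, initialState) {
  let state = initialState;
  const clients = [];
  return {
    connect(client) { clients.push(client); },
    submit(action) {
      if (!isValid(state, action)) return; // later conflicting actions dropped
      state = reduce(state, action);
      clients.forEach((client) => client.receive(action));
    },
  };
}

function createClient(reduce, initialState) {
  return {
    state: initialState,
    receive(action) { this.state = reduce(this.state, action); },
  };
}

// Pairs-like domain: only the first score of a pair is valid.
const initial = { scored: [], points: {} };
const isValid = (s, a) => !s.scored.includes(a.pair);
const reduce = (s, a) => ({
  scored: [...s.scored, a.pair],
  points: { ...s.points, [a.player]: (s.points[a.player] || 0) + 1 },
});

const master = createMaster(reduce, isValid, initial);
const alice = createClient(reduce, initial);
const bob = createClient(reduce, initial);
master.connect(alice);
master.connect(bob);

// Both players try to score pair 3 "simultaneously": the first submission
// wins and both clients see the same result, at the cost of a round trip.
master.submit({ pair: 3, player: 'alice' });
master.submit({ pair: 3, player: 'bob' });
```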
All clients see a single, consistent view at the cost of a round-trip to the master for all&nbsp;actions.</p> <p>The same isValid method can be reused to implement an optimistic clientPredictionConsistency policy which treats the clients as decoupled&nbsp;simulations:</p> <script src=""></script> <p>When using this middleware, clients immediately reduce actions which are valid given their local state and distribute local actions to all other clients. If a conflict is detected, the master client uses the same setState mechanism used to allow late joining to reset the decoupled simulations to the master state. The effects of actions are seen immediately at the cost of occasionally seeing the effects of actions roll back when the state is reset. By designing reducers to be idempotent and making the isValid callback as permissive as possible the number of state reset actions can be minimized. In the case of pairs, state resets only occur when two clients try to get the points for the same pair, or if a client tries to hide half of a pair which has been&nbsp;scored.</p> <p>Testing pairs with the two different consistency policies over a high latency ngrok connection gives wildly different experiences. With dumbTerminalConsistency introducing 500ms round-trip latencies between clicking and seeing results the experience feels laggy, slow and clumsy. With clientPredictionConsistency the effects of local actions are seen with 0 latency and the experience is fast, snappy and frantic. Glitches caused by state resets are occasionally jarring, but often go unnoticed as the focus of attention is on the board during the game before switching to the scores at the end once they are eventually consistent. While it may make sense to use conservative consistency for some applications, pairs definitely benefits from an optimistic&nbsp;approach.</p> <p>Being able to independently develop application logic and consistency mechanisms is extremely valuable. 
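A sketch of the optimistic policy (names assumed, with a queued in-memory network standing in for the relay) showing a conflict being resolved by a setState reset from the master:

```javascript
// In-memory network which queues messages so two clients can act
// "simultaneously" before anything is delivered.
function createNetwork() {
  const peers = [];
  const queue = [];
  return {
    join(peer) { peers.push(peer); },
    broadcast(from, message) { queue.push({ from, message }); },
    flush() {
      while (queue.length) {
        const { from, message } = queue.shift();
        peers.filter((p) => p !== from).forEach((p) => p.receive(message));
      }
    },
  };
}

function createPeer(network, reduce, isValid, initialState, isMaster) {
  const peer = {
    state: initialState,
    // Valid local actions are reduced immediately for zero perceived latency.
    dispatch(action) {
      if (!isValid(peer.state, action)) return;
      peer.state = reduce(peer.state, action);
      network.broadcast(peer, { action });
    },
    receive(message) {
      if (message.setState) { peer.state = message.setState; return; }
      if (isValid(peer.state, message.action)) {
        peer.state = reduce(peer.state, message.action);
      } else if (isMaster) {
        // Conflict: reset every decoupled simulation to the master state.
        network.broadcast(peer, { setState: peer.state });
      }
    },
  };
  network.join(peer);
  return peer;
}

// Pairs-like domain: only the first score of a pair is valid.
const isValid = (s, a) => !s.scored.includes(a.pair);
const reduce = (s, a) => ({
  scored: [...s.scored, a.pair],
  points: { ...s.points, [a.player]: (s.points[a.player] || 0) + 1 },
});

const net = createNetwork();
const initial = { scored: [], points: {} };
const master = createPeer(net, reduce, isValid, initial, true);
const a = createPeer(net, reduce, isValid, initial, false);
const b = createPeer(net, reduce, isValid, initial, false);

// Both non-masters optimistically score pair 1 before any delivery:
a.dispatch({ pair: 1, player: 'A' });
b.dispatch({ pair: 1, player: 'B' });
net.flush(); // the master keeps A's action and resets everyone else
```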
While developing the pairs example I was able to get the dumbTerminalConsistency middleware working, then the pairs game logic and then switch between the dumbTerminalConsistency and clientPredictionConsistency policies to determine whether I had a problem with the game logic or middleware while getting optimistic local updates working. I could imagine a similar approach being valuable for other applications. Conservative consistency could be used during development, then optimistic consistency policies could be experimented with to find the right trade-off between latency and consistency without worrying about breaking the application logic by mixing in tightly coupled local update&nbsp;logic.</p> <p>It&#8217;s easy to imagine more sophisticated optimistic consistency mechanisms: middleware which generates anti-events to avoid full state resets when the state becomes too large, approaches which use Redux time travel to rewrite history when conflicts are detected or policies which extrapolate predictions or interpolate corrections to avoid discontinuities, for example. Many of these approaches could be implemented in generic ways, but developers would still have the option to build middleware which exploits application specific knowledge where&nbsp;appropriate.</p> <p>The Redux approach of defining the next state as a function of an action applied to the current state lends itself to building sophisticated decoupled simulations. I hope to see these approaches become standard in networked ReactVR applications in the near future. Modern <span class="caps">VR</span> hardware provides incredibly low motion to photon latency and it would be a shame to see the sense of presence it can create broken by the network round-trips inherent in client-server architectures.
Optimistic updates, client prediction and zero latency should be the&nbsp;default.</p> <p>If you&#8217;d like to play the ReactVR version of pairs or see the rest of the code, it&#8217;s available on github <a href="">here</a>.</p> <p>All code in this post is made available under the <a href="">ReactVR examples license</a>.</p>Generation JPod2017-06-03T00:00:00+01:00Jim,2017-06-03:2017/06/03/generation-jpod/<p><a href=""><img src=""></a></p> <p>I&#8217;ve just got back from <a href="">Kaş</a> where I spent a lovely few days celebrating <a href="">Pinar and Simon&#8217;s wedding</a> and while there spent a few hours reading <a href="">Now We Are 40</a>: a thoughtful and entertaining look at everything from house music to house prices from the perspective of Generation&nbsp;X.</p> <p>While a lot of the book was familiar, I was surprised how different my experience has been. I don&#8217;t feel part of the last generation who grew up without digital technology, but part of the first generation to grow up with&nbsp;it.</p> <p>I learned to program at school, started writing code professionally at 18 and haven&#8217;t stopped. I didn&#8217;t work many <a href="">McJob</a>s, but have worked in many <a href="">JPod</a>s (and seen friends move to Vancouver to work in the games industry there). I may have bought my first smartphone when I got my first job, but my life has felt digital from the&nbsp;start.</p> <p>One focus of &#8220;Now We Are 40&#8221; is the changes in music over the last few decades from the pre-digital rave scene to the current over-saturation of recorded music available from streaming services which has rendered selling recorded music unsustainable for almost everyone. Food and drink followed on as the next big thing, something I&#8217;ve experienced personally as my <a href="">brother-in-law</a> morphed from hip hop pimp to sommelier. 
You may not need to buy recorded music any more, but you still need to eat and&nbsp;drink.</p> <p>In fact raves, clubs, cafes and restaurants are mostly places to hang out. When people were incredulous at the idea of spending real money on virtual goods <a href="">Cory</a> and I used to point out that most of what you were buying in Starbucks was not coffee or service, but an experience just as ephemeral as a <a href="">virtual hat</a>.</p> <p>Virtual worlds like <a href="">Second Life</a> and now <a href="">Social <span class="caps">VR</span></a> showed that you could digitise the hanging out too. It&#8217;s already proving to be an <a href="">invaluable lifeline</a> to people who find it hard to hang out in real life. While festivals, clubs and gigs are where musicians are increasingly making money in real life, they also work in virtual worlds. I first saw the <a href="">Qemists</a>, now one of my favourite bands, on the <a href="">Ninja Tune</a> stage at a festival in Second Life organised by <a href="">Aleks</a>. Later, <a href="">Leon</a> poked me on Facebook to ask me to advise his new tech startup because, after music and food, technology is apparently the new rock and&nbsp;roll.</p> <p>For most of my career I have been building experiences which complement the real world and working at Facebook is the first time that I have felt like I might be working somewhere that has been disrupting existing industries.
The view that Facebook is an existential threat to <a href="">the open web</a> (a prospect that <a href="">Bryan</a> likens to the bug sucking the wildebeest dry) is relatively widespread and I remember a circle forming around me when I told <a href="">Aral</a> and some of the other web developers at a <a href="">Skiff</a> Christmas party that I was going to work&nbsp;there.</p> <p>In fact I&#8217;ve spent much of the last few years working on <a href="">open</a> <a href="">source</a> tools that will benefit the wider web while also being able to support my family more sustainably than I could working in a games industry where redundancies and closures were more common than <a href="">Philip Rosedale</a> being asked how much he would sell Second Life for at&nbsp;Davos.</p> <p>While the software industry seems to be constantly changing with new tools, languages, platforms and frameworks arriving all the time, a deeper disruption is potentially coming to the business of writing code for a living. Increasingly large parts of software systems are <a href="">learned by machines rather than programmed by humans</a>. As Jeff Dean at Google observed: &#8220;If Google were created from scratch today, much of it would be learned, not coded.&#8221; When I started studying Computer Science in Nottingham my dad advised me not to become just a computer caretaker. It&#8217;s very possible that I may end up becoming a computer trainer&nbsp;instead.
If you were connecting early modems to <a href=""><span class="caps">BBS</span></a>es at the start of the 90s it was easy to become a digital native. If you were busy dancing to Charly in a warehouse you may have had to catch up&nbsp;later.</p> <p>One thing that is clear is that we need to work out how our increasingly disrupted and automated society will function. If <a href="">software is eating the world</a> and software is increasingly learned, then we&#8217;re going to have to find a way for people to flourish in that future. Brexit and Trump show what happens when people are worried about their place in the world. I&#8217;d like to see my children grow up in a future which is closer to <a href="">The Culture</a> than <a href="">Mad Max</a>. There&#8217;s a <a href=",_2017">general election in the <span class="caps">UK</span> next week</a>. My next plan is to vote for a more progressive&nbsp;future.</p>2² Decades2017-04-20T00:00:00+01:00Jim,2017-04-20:2017/04/20/2-2-decades/<p>Several years ago when we were in <a href="">100 robots</a> together, <a href="">Max</a> was celebrating his 40th birthday. When I said that mine would be in 2017, it felt like an impossibly far future date, but, after what feels like the blink of an eye, here we&nbsp;are.</p> <p>Along with many other lovely gifts I received this morning was a book with the subtitle <a href="">Whatever happened to Generation X? by Tiffanie Darke</a> complete with a bright yellow acid house cover and a quote from Douglas&nbsp;Coupland.</p> <p>I&#8217;ll read the book when I go to Simon and Pinar&#8217;s wedding next month, but I&#8217;ll share my immediate reaction now. Despite the term being popularised by <a href="">Coupland&#8217;s book</a>, whatever did happen to Generation X, we won&#8217;t read about it in a book.
We&#8217;ll read and write about it on the web we&nbsp;built.</p> <p>While I remember my parents freaking out when I wanted to wear a bright yellow acid house badge to school, at the time I was more into loud guitar music like <a href="">Nirvana</a> and <a href="">Blur</a>. From the perspective of loud guitars it felt like I&#8217;d missed the party: Metallica&#8217;s <a href="">Master of Puppets</a> was already receding into the rear view mirror and <a href="">Led Zeppelin</a> firmly in my parents&#8217; era. While we didn&#8217;t have <a href="">The Beatles</a> though, I did have&nbsp;computers.</p> <p>There are plenty of people who would argue that I missed the boat there too: <a href="">Boolean algebra</a> was developed in 1848; the <a href="">Halting Problem</a> proved to be undecidable over Turing machines in 1936 and <a href="">Quicksort</a> was developed in 1959. While the <a href="">Infer</a> team refused to give up at the halting problem and are now producing amazing real world results using static analysis, a lot of computer science was finished before I was&nbsp;born.</p> <p>My kind of computers weren&#8217;t huge machines crunching numbers and doing maths though, they were small pieces loosely joined. Connecting to things and each other they didn&#8217;t operate on maths, but changed the world or built new ones. They automated my physics experiments so that I could spend more time kissing Ali in the common room, helped <a href="">reverse engineer Grand Theft Auto maps</a> and <a href="">automated synthesiser parameters</a> when I didn&#8217;t have real controls for&nbsp;them.</p> <p>They let me record <a href="">hours</a> of <a href="">music</a> and made writing books, making films and recording music accessible to everyone. While that made lives harder for those trying to make a living from their art it helped many more lives flourish.
Napster may have made Metallica pretty upset, but the French horn player from my school could plunder the past for <a href="">funk loops to accompany his synthesisers</a>.</p> <p>The <span class="caps">DIY</span> explosion gave us hip-hop and a million flavours of dance music and the networks to share it. Eventually it also gave us digital versions of the Beatles and, now I have been able to download and listen to it all, I&#8217;m convinced they have nothing to top the <a href="">Aphex Twin</a>.</p> <p>The same democratization of tools meant that as a software engineer I could scratch an itch and choose to build my own service on top of world class open source software or work for one of the companies that became huge making the web easier to use. I&#8217;ve seen enough of how the startup sausage is made to know that a lot of the glitter is not gold, but owning the means of production means I at least have the choice to strike out on my&nbsp;own.</p> <p>Climate change may mean that our real world horizons are closer and the piles of stuff we collect smaller, but the virtual vistas we can explore are ever&nbsp;growing.</p> <p>When I watch my children grow up with <a href="">YouTube</a> it&#8217;s amazing to think about what they will accomplish in the future. If they want to do something, they watch it, learn it and do it. Nothing is unknown and nothing is impossible. They&#8217;re incredible, which is lucky, as together we&#8217;re going to have to save the&nbsp;world.</p> <p>These thoughts are my own. They don&#8217;t represent my employer. They don&#8217;t attempt to speak for my generation. I write them and share them because I can and because I want to. Someone might read them and comment on them or link to them to build a web. That&#8217;s how my generation works and that&#8217;s what we built.
We may not have had the Beatles, but I&#8217;m <span class="caps">OK</span> with&nbsp;that.</p>VR Redux2017-01-04T20:20:00+00:00Jim,2017-01-04:2017/01/04/vr-redux/<p><a href="">Mike</a> and I have been talking about how to easily build simple networked social applications with <a href="">ReactVR</a> for a while, so I spent some time hacking over the Christmas break to see if I could build a ReactVR version of the pairs game in <a href="">Oculus Rooms</a>. Pairs is simple and fun, but also interesting as it’s real time and has the potential to generate conflicting updates that need to be&nbsp;resolved.</p> <p><a href="">Redux</a> seemed like a promising starting point as it reifies events and allows flexible event processing in a similar way to <a href=""><span class="caps">MASSIVE</span>-3</a>. I used <a href="">websockets</a> as they are already supported by ReactVR along with <a href="">wsrelay</a> to network the&nbsp;clients.</p> <p>With those pieces in place the simplest way to network the clients is to implement a middleware function to send every action generated in one client to all the others. In the case of actions which show a tile this is sufficient as the action is idempotent. If two players click on a square at the same time, the order that the actions are reduced in doesn’t matter: in either case the result is that the element is revealed. We can exploit the idempotency by optimistically processing the action locally before sending it to other clients to minimise network&nbsp;latency.</p> <script src=""></script> <p>Scoring is trickier. While each client can tell when a pair has been revealed, only the first player to reveal the pair should score a point. As the actions to reveal tiles are potentially processed in different orders on each client that could lead to inconsistent scores even if only the first is processed. A simple way to avoid this inconsistency is to nominate one client to be the master and only have that client generate score actions. 
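The broadcast middleware mentioned earlier might look something like this (a minimal sketch, not the gist embedded in this post; `socket` is assumed to be any object with a `send` method, such as a websocket connected to wsrelay, and the `remote` flag is a hypothetical marker used to avoid re-broadcasting received actions):

```javascript
// Sketch of a Redux middleware that optimistically applies each
// locally-generated action and then relays it to the other clients.
// The `remote` flag (hypothetical) marks actions received from the
// network so they are applied but not sent again.
const broadcastMiddleware = (socket) => (store) => (next) => (action) => {
  const result = next(action); // apply locally first to hide network latency
  if (!action.remote) {
    socket.send(JSON.stringify({ ...action, remote: true }));
  }
  return result;
};
```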
This can be implemented as another middleware to avoid generating actions inside a&nbsp;reducer.</p> <script src=""></script> <p>The master client can also be made responsible for sending the current state of the simulation to new clients to support late&nbsp;joining.</p> <script src=""></script> <p>With those parts done the app is usable and makes an interesting example of one possible way to network ReactVR applications. This was the first time I’d used React, ReactVR or Redux and I was very impressed by how easy to use and flexible they are. With the addition of some small pieces of middleware Redux can be used to implement a distributed simulation with flexible consistency mechanisms to trade off latency and consistency. The pairs example shows that even within a simple application applying different consistency mechanisms to different actions and parts of the application state is&nbsp;useful.</p> <p>The next things to experiment with are using <a href="">WebRTC</a> to allow peer-to-peer communication between clients to further reduce latency, adding a server to allow trusted and hidden state and allowing clients to subscribe to a subset of actions to allow heterogeneous clients and <a href=";hl=en&amp;as_sdt=0&amp;as_vis=1&amp;oi=scholart&amp;sa=X&amp;ved=0ahUKEwi3peaslYXOAhVM6iYKHZ_pCd8QgQMIJTAA">interest management</a>.</p> <p>If you&#8217;d like to play the ReactVR version of pairs or see the rest of the code, it&#8217;s available on github <a href="">here</a>. 
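As a sketch, the master-only scoring middleware described earlier might look like this (the action types and state shape here are hypothetical; the gists embedded above contain the real code):

```javascript
// Sketch: only the nominated master client watches for revealed pairs
// and generates SCORE actions, so all clients end up with consistent
// scores regardless of the order they reduce SHOW_TILE actions in.
// The state shape and action names are hypothetical.
const masterScoringMiddleware = (isMaster) => (store) => (next) => (action) => {
  const result = next(action);
  if (isMaster && action.type === 'SHOW_TILE') {
    const shown = store.getState().shownTiles;
    if (shown.length === 2 && shown[0].value === shown[1].value) {
      store.dispatch({ type: 'SCORE', player: action.player });
    }
  }
  return result;
};
```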
</p> <p>All code in this post is made available under the <a href="">ReactVR examples license</a>.</p>Creating A Safe Environment For People In VR2016-10-31T23:28:00+00:00Jim,2016-10-31:2016/10/31/creating-a-safe-environment-for-people-in-vr/<div class="flex-video widescreen"><iframe src="" frameborder="0" allowfullscreen=""></iframe></div> <p>I was very happy that <a href="">Oculus</a> found time at <a href=""><span class="caps">OC3</span></a> to host a panel on creating a safe environment for people in <span class="caps">VR</span>. As social <span class="caps">VR</span> becomes more popular over the next few years it will quickly have to learn how to keep people safe together in shared environments. Some of the challenges are new: tracked hands make it easier to violate other avatars and gesture abusively without resorting to custom animations or scripts; invading personal space is a much bigger issue and increased presence makes all experiences more visceral for better or worse. Some novel solutions have already been developed for these challenges, notably the personal space bubbles developed for <a href="">Bigscreen</a> and widely used in other social <span class="caps">VR</span> experiences. However, a lot of these problems have been experienced in virtual worlds since <a href=""><span class="caps">MUD1</span></a> launched nearly 40 years ago. I was particularly struck by how the experiences with filtering, reporting and governance in <a href="">Altspace <span class="caps">VR</span></a> reminded me of the story of <a href="">LambdaMOO</a> becoming a self-governing community told by <a href="">Julian Dibbell</a> in <a href="">My Tiny Life</a>. People like Julian, <a href="">Richard Bartle</a> and <a href="">Raph Koster</a> have been wrestling with and thinking about these problems for decades. 
The pioneers of this generation of <span class="caps">VR</span> should make sure they learn from those&nbsp;experiences.</p>crestexplorer2016-08-21T21:48:00+01:00Jim,2016-08-21:2016/08/21/crestexplorer/<div class="flex-video widescreen"><iframe src="" frameborder="0" allowfullscreen=""></iframe></div> <p>At the 3rd Party Dev State of the Union at <span class="caps">EVE</span> Fanfest 2016 earlier this year, <span class="caps">CCP</span> FoxFour drew my attention to a limitation of the current approach used by <a href="">crestmatic</a> to generate <span class="caps">CREST</span> documentation: it only discovers resources always reachable from the <span class="caps">API</span> root from the perspective of the authorised character at generation time. As <span class="caps">CREST</span> now includes APIs for transient resources like <a href=";t=475607">fleets</a> the entire <span class="caps">API</span> isn&#8217;t reachable for documentation from a nightly run using the credentials of a character not in a&nbsp;fleet.</p> <p>There are a couple of ways to fix this. At the <span class="caps">API</span> level <span class="caps">OPTIONS</span> responses could refer to linked representations that can exist. An alternative is to extend <a href="">crestexplorerjs</a> with <span class="caps">OPTIONS</span> metadata to make it expose live documentation about reachable&nbsp;resources.</p> <p>At a recent hackathon I took the second approach. crestexplorer now makes <span class="caps">OPTIONS</span> requests to each resource it requests and uses <a href="">crestschema</a> and data urls to generate downloadable <a href=""><span class="caps">JSON</span> schema</a> descriptions for each of the representations available for a resource. 
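That flow might be sketched like this (the crestschema conversion function passed in here is an assumption about its shape, not the library's real API):

```javascript
// Sketch: fetch the OPTIONS metadata for a CREST resource, convert it
// to JSON schemas (the converter's shape is assumed) and build data
// URLs that the browser can offer as downloadable schema descriptions.
const schemaDataUrl = (schema) =>
  'data:application/schema+json;charset=utf-8,' +
  encodeURIComponent(JSON.stringify(schema, null, 2));

async function schemaLinksFor(uri, convertCrestOptionsToJsonSchemas) {
  const response = await fetch(uri, { method: 'OPTIONS' });
  const options = await response.json();
  const schemas = convertCrestOptionsToJsonSchemas(options);
  return Object.keys(schemas).map((representation) => ({
    representation,
    href: schemaDataUrl(schemas[representation]),
  }));
}
```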
It also hyperlinks to the alternative representations, so all representations of all reachable resources are now available in crestexplorer, and the human-readable descriptions of each field are added to the rendered resource as&nbsp;hovertext.</p> <p>Documentation for any valid <span class="caps">CREST</span> resource is now available live by specifying the <span class="caps">URI</span> for the resource as the crestexplorer hash fragment. Even when the resources are transient, documentation is available for them while they&nbsp;exist.</p>Strange Tales From Other Worlds2016-05-10T00:00:00+01:00Jim,2016-05-10:2016/05/10/strange-tales-from-other-worlds/<div class="flex-video widescreen"><iframe src="" frameborder="0" allowfullscreen=""></iframe></div> <p>At the end of last year, <a href="">Michael Brunton-Spall</a> and <a href="">Jon Topper</a> asked me if I would like to give the opening keynote at <a href="">Scale Summit</a> as I had &#8220;lots of experience scaling weird things&#8221;, by which they meant Second Life and <span class="caps">EVE</span> Online. I immediately thought of <a href="">The Corn Field</a>, a place in Second Life that is a reference to an <a href="">episode of The Twilight Zone</a> where naughty avatars were sent to think about what they had done. Several more bizarre anecdotes quickly came to mind and soon I had a talk: Strange Tales From Other Worlds. A few people who couldn&#8217;t make it to Scale Summit mentioned that they wanted to hear the talk, but the recording I made at Scale Summit was somewhat disappointing. Luckily I was invited to give the talk again a few weeks ago to the Oculus team at Facebook London and made a Keynote recording which I&#8217;ve now uploaded to YouTube <a href="">here</a>. 
The strange tales are mostly true, but a few errors snuck into the second telling: firstly <span class="caps">MUD</span> stands for <a href="">Multi-User Dungeon</a> and secondly I forgot to credit <a href="">Brian Bossé</a> for his amazing work on <a href="">TiDi</a> the second time&nbsp;around.</p>Towards A Generic Media Type System2016-04-17T00:00:00+01:00Jim,2016-04-17:2016/04/17/towards-a-generic-media-type-system/<p>The early days of RESTful hypermedia <span class="caps">API</span> design tend to involve lots of homogeneous collections. In the case of <span class="caps">CREST</span>, vnd.ccp.eve.Api-v1 pointed to the logged in vnd.ccp.eve.Capsuleer-v1 which pointed to a vnd.ccp.eve.CharacterCollection-v1 of contacts which pointed to many vnd.ccp.eve.Capsuleer-v1 representations, one for each capsuleer&nbsp;contact.</p> <p>Adding search makes things more complicated. A typical search resource will accept a query and return a collection of hyperlinks to heterogeneous resources. <span class="caps">EVE</span> allows capsuleers to have contacts that might be other capsuleers, agents, corporations or alliances. In the future capsuleers may also be able to have coalitions in their contacts list. In this hypothetical future a contacts management application developed now would end up receiving search results that it wouldn&#8217;t understand. After requesting a resource and receiving an unknown representation the client can ignore the resource, but it would be better to filter out unknown resources as part of the&nbsp;query.</p> <p>Server side filtering is typically added to search via extra query parameters. In our <span class="caps">CREST</span> example we might add q=Jayne&amp;type=capsuleer parameters if we just wanted to find the capsuleer Jayne Fillon. 
An unfortunate consequence of this design is that we end up with application level types used to filter searches as well as Media Types used for <span class="caps">HTTP</span> content&nbsp;negotiation.</p> <p>It would be nice to just use content negotiation, but we want to filter the resources that are referenced by the returned resource, not the returned representation itself. The search request might return a vnd.ccp.eve.Collection-v1 representation, but we want to make sure that the hyperlinks in that collection only point to vnd.ccp.eve.Capsuleer-v1&nbsp;resources.</p> <p>As a client we&#8217;d like to specify that we want search to return a vnd.ccp.eve.CollectionOfCapsuleerReferences. If the client wants to include corporations in the search results it should be able to specify a CollectionOfCapsuleerOrCorporationReferences. We&#8217;d like a richer media type&nbsp;system.</p> <p>While this could be implemented in the backend just by switching on the Accept type, the combinatorial explosion of potential search results quickly makes this impractical. A simpler way to experiment with this approach would be to implement a proxy which can query the <span class="caps">API</span> for Collection-v1 representations and convert them into arbitrary CollectionOfFooAndBarReference&nbsp;representations.</p> <p>The proxy could also be used to inline representations, allowing clients to request a CollectionOfFooAndBars rather than a collection of hyperlinks. If clients only wanted a subset of the full Foo and Bar representations they might ask for CollectionOfJustNameFromFooAndJustNameFromBars. 
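To make the contrast concrete, here is a sketch of how a client might build the two styles of request (the endpoint and the generic media type names are illustrative, not real CREST types):

```javascript
// Sketch: building a search request filtered by an application-level
// `type` query parameter versus a hypothetical generic media type
// requested via ordinary HTTP content negotiation.
function buildSearchRequest(base, query, types, useGenericMediaType) {
  if (!useGenericMediaType) {
    const params = new URLSearchParams({ q: query, type: types.join(',') });
    return {
      url: `${base}?${params}`,
      headers: { Accept: 'application/vnd.ccp.eve.Collection-v1+json' },
    };
  }
  // e.g. ['capsuleer', 'corporation'] -> 'CapsuleerOrCorporation'
  const names = types
    .map((type) => type[0].toUpperCase() + type.slice(1))
    .join('Or');
  return {
    url: `${base}?${new URLSearchParams({ q: query })}`,
    headers: {
      Accept: `application/vnd.ccp.eve.CollectionOf${names}References-v1+json`,
    },
  };
}
```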
These additions would address some of the biggest headaches faced by <span class="caps">API</span> designers, who have to decide which fields to denormalise into collection resources to avoid clients having to make a huge number of requests in order to provide meaningful choices to&nbsp;users.</p> <p>This approach potentially provides a lot of the benefits of GraphQL to RESTful clients just through normal <span class="caps">HTTP</span> content negotiation protocols and preserves the benefit to <span class="caps">API</span> developers of maintaining a small number of versions. If a composite type refers to a type which is deprecated in the underlying <span class="caps">API</span>, it can return a deprecated response as&nbsp;normal.</p> <p>As with programming languages, it seems that if we&#8217;re going to support strong media types it&#8217;s useful to also support generic media&nbsp;types.</p>#recordstoreday2016-04-15T00:00:00+01:00Jim,2016-04-15:2016/04/15/recordstoreday/<p><a data-flickr-embed="true" href="" title="StoryBirdAlbum"><img src="" width="600" height="800" alt="StoryBirdAlbum"></a><script async src="//" charset="utf-8"></script></p> <p>3 weeks ago I spent a few hours with Photoshop working on the <a href="">Story Bird</a> logo that Linda made a while ago to make it suitable for print. 2 weeks ago I spent a few hours researching the best way to convert the 24-bit 48 kHz Story Bird mixes to 16-bit 44.1 kHz audio files that could be burned to <span class="caps">CD</span>. A week ago I went to Maplins in Tottenham Court Road, bought a cake of printable <span class="caps">CD</span>-Rs, dusted off my old MacBook Pro and spent the weekend burning copies of the <a href="">Story Bird</a> album. Tomorrow the CDs will go on sale at <a href="">The Marwood Coffee Shop</a> in Brighton for <a href="">#recordstoreday</a>.</p> <p>The whole process felt fun, quaint, cute, anachronistic, nostalgic, and absurd. 
The last time I burned CDs was 3 years ago, when I made a few copies of the <a href="">100 Robots Sustain</a> album for friends. My current MacBook Pro doesn&#8217;t have an optical drive, let alone a <span class="caps">CD</span> writer. The <span class="caps">CD</span> player in our living room doesn&#8217;t work. I ripped our <span class="caps">CD</span> collection to <span class="caps">MP3</span> a decade ago when we still lived in Nottingham and needed to make space for our new family. Until I tested the Story Bird CDs, the last 2 operational <span class="caps">CD</span> players in our house hadn&#8217;t played a <span class="caps">CD</span> in months. It&#8217;s great to see real, physical copies of the album I&#8217;ve spent the last 6 months working on, the CDs sound great and the unique hand-printed covers look amazing, but as with the piles of vinyl records sold tomorrow on #recordstoreday they will mostly serve as limited edition&nbsp;souvenirs.</p> <p>#recordstoreday promotes collecting records just as the #comicbookday that inspired it promotes collecting comics. Fetishising obsolete formats is harmless fun, but discovering new music and supporting musicians is important. On the 23rd of April, once you&#8217;ve filed away the limited edition vinyl you queued overnight to buy on #recordstoreday, spend some time in the comfort of your own home downloading some amazing music on <a href="">#bandcampday</a>.</p>#bandcampday2016-03-26T18:59:00+00:00Jim,2016-03-26:2016/03/26/bandcampday/<p><img alt="Black barn mixing desk" src="" title="Black barn mixing desk" /></p> <p>I love record shops. Whenever I had pocket money it would go on <a href="">Metallica</a> and <a href="">Nirvana</a> CDs bought from <a href="">Our Price</a> or black t-shirts to match. 
When I lived in Nottingham I bought <a href="">Boards Of Canada</a> CDs from the same <a href="">Selectadisc</a> that my Dad bought a rare <a href="">Fairport Convention</a> single from decades before and the latest <a href="">Ninja Tune</a> or <a href="">Rudy Van Gelder</a> edition <a href="">Blue Note</a> records for £5 from <a href="">Fopp</a>. While working at Linden Lab in San Francisco my brother and I gorged on releases by the <a href="">Quannum</a> collective hungrily snatched up from <a href="">Amoeba Music</a>. While walking home from <a href="">BrewDog</a> with <a href="">Alistair</a> recently, we were both enthusing about the amazing <a href="">Resident</a> in Brighton. I commented that it was a shame that even Resident didn&#8217;t have room for the incredible <a href="">Tera Melos</a> who I had just seen at <a href="">The Green Door Store</a> after finding their music along with <a href="">Cleft</a>, <a href="">100 Onces</a> and <a href="">many others</a> on <a href="">Bandcamp</a>.</p> <p>I also love making music. While I was still at school I recorded Timeless Mind songs on a Fostex X-28 cassette 4-track. When I got to Nottingham I watched <a href="">4hero</a> coax amazing sounds out of an Atari while I projected virtual worlds on the wall behind them, tried to convince Cubase to sync <span class="caps">MIDI</span> parts to audio on an underpowered <span class="caps">PC</span>, shut my brother in my spare room until he recorded saxophone parts for Vanishing Trick and eventually saw some of my music in Selectadisc thanks to <span class="caps">DJ</span> <span class="caps">SS</span> and <a href="">Formation Records</a>. Computers democratized music production and unleashed a Cambrian explosion of <span class="caps">DIY</span> musical creativity that can&#8217;t fit into any record shop, but can fit on the Internet. When Resident runs out of room, Bandcamp keeps going. 
Bandcamp also allows musicians to charge a <a href="">fair price</a> for their music and takes a fair percentage for their services: an important consideration in a world where making money as a musician is becoming increasingly&nbsp;hard.</p> <p>A couple of days ago I was reminded of the conversation with Alistair when I started seeing mentions of this year&#8217;s <a href="">#recordstoreday</a>. #recordstoreday is great for record stores; Bandcamp is great for the rest of music; why isn&#8217;t there a #bandcampday? On the 23rd of April, the week after you spend the afternoon buying exclusive slabs of shiny black vinyl at your local record shop, why not spend an afternoon uncovering some of the musical treasures on Bandcamp? I hope to have the first <a href="">Story Bird</a> album and new <a href="">Point Mass</a> music finished by then; I have some old Vanishing Trick tracks I can release and if I can find a good tape recorder I might even be able to upload some of the Timeless Mind recordings from 22 years ago. If you&#8217;re a musician, do you have any hidden gems you could release on the 23rd of April? If so, join <a href="">this event</a> and let&#8217;s see how much music we can release on the first ever <a href="">#bandcampday</a>.</p>crestmatic2016-01-03T23:13:00+00:00Jim,2016-01-03:2016/01/03/crestmatic/<div class="flex-video widescreen"><iframe src="" frameborder="0" allowfullscreen=""></iframe></div> <p>A year ago I gave a talk at <a href=""><span class="caps">EVE</span> Vegas</a> about building RESTful <span class="caps">CREST</span> applications. 
My <a href="">#1 recommendation</a> was to specify representations in requests, but that&#8217;s hard to do when there is little documentation on which representations are available and what they&nbsp;contain.</p> <p>Fortunately <span class="caps">CREST</span> is <a href=";t=1369">self describing</a>: send an <span class="caps">OPTIONS</span> request to a <span class="caps">CREST</span> <span class="caps">URI</span> and a list of methods and representations that can be used with that <span class="caps">URI</span> is returned. The metadata isn’t in a standard format, so I built the <a href="">crestschema</a> library which can convert the <span class="caps">CREST</span> format into more useful <a href=""><span class="caps">JSON</span> schema</a>s. The library works in browsers, in applications like <a href="">crestexplorer</a>, and from <a href="">node</a> in applications like <a href="">crestschemaspider</a> which can crawl the <span class="caps">CREST</span> <span class="caps">API</span> to find all reachable representations. Converted schemas can then be used with a wide variety of <a href="">software, libraries and languages</a> to validate data from the live <span class="caps">CREST</span> <span class="caps">API</span>, generate parsers or automatically generate&nbsp;documentation.</p> <p>With the schemas converted it&#8217;s easy to build <a href="">crestmatic</a> which uses crestschema and <a href="">matic</a> to generate the documentation. Adding a <a href="">travis</a> step to publish the generated documentation to <a href="">gh-pages</a> and wiring up <a href=""></a> to trigger a build every night ensures that the documentation is automatically kept up to date. 
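The crawl that crestschemaspider performs can be sketched like this (the shapes of the `fetchOptions` and `extractLinks` callbacks are hypothetical, not the library's real API):

```javascript
// Sketch of a breadth-first crawl over a hypermedia API: starting at
// the root, ask each resource for its OPTIONS metadata, record the
// representations it describes and follow its hyperlinks. The shapes
// of `fetchOptions` and `extractLinks` are hypothetical.
async function spiderRepresentations(rootUri, fetchOptions, extractLinks) {
  const visited = new Set();
  const representations = new Set();
  const queue = [rootUri];
  while (queue.length > 0) {
    const uri = queue.shift();
    if (visited.has(uri)) continue;
    visited.add(uri);
    const options = await fetchOptions(uri); // an OPTIONS request
    for (const representation of options.representations) {
      representations.add(representation);
    }
    for (const link of extractLinks(options)) {
      queue.push(link);
    }
  }
  return [...representations].sort();
}
```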
If you&#8217;d like to see some changes, please feel free to submit a pull request or just donate some <span class="caps">ISK</span> to Capt Out if you find the <a href="">documentation</a> or <a href="">code</a>&nbsp;useful.</p>Free Tests For Everyone!2015-06-11T17:00:00+01:00Jim,2015-06-11:2015/06/11/free-tests-for-everyone/<p><img src="" width="400"></p> <p>Modern software development is sometimes colourfully described as being similar to firing tracer bullets at a target. Rather than spending time doing a lot of research, design and specification up front, the smallest, simplest version of the software is built and the feedback gathered from its use is used to improve the next version. Being able to continuously integrate, test and deploy software means that we can make this feedback loop incredibly tight but because the same engineers are developing the software and the tests we constantly have to think carefully about the testing investments we make and their opportunity&nbsp;costs.</p> <p>Software engineers allow themselves to use knowledge of how the software works to pick good white box tests. They rely on code reviewers to catch important test cases that they&#8217;ve missed. They rely on unit tests to exercise pathological cases which are hard to exercise in end-to-end or integration tests. They use bugs discovered in previous releases to inform test cases developed for the next. The goal is to ensure that the next version of the software provides feedback on how close it is to the target rather than just exploding on&nbsp;impact.</p> <p>The decisions around which tests to write are hard because tests have costs. They take time to write and maintain and need to be changed and updated along with the software they test. 
What would happen if some or all of the tests could be&nbsp;free?</p> <p>This question has tantalised computer scientists since Turing’s work on the <a href="">halting problem</a> in the 1930s, and while academic progress has been made, tools based on that work have tended to only work on small codebases, perform trivial analysis, generate a high percentage of false positives, produce difficult-to-understand reports or impose other overheads on the development process that make the resulting testing pretty far from free. As a result, while we theoretically know how to do sophisticated analysis, in practice we tend to rely mostly on linters doing trivial checks alongside automated tests which check a handful of paths through the software written by&nbsp;engineers.</p> <p>So it was wonderful when my mixture of hope, anticipation and scepticism gave way to delight as the Infer static analyser delivered on the promise of useful, non-trivial static analysis at Facebook. While Infer currently only reports on a small subset of the errors that it can detect, for those classes of errors Infer&#8217;s output is generally as useful as seeing a failing test case: Infer&#8217;s 80% fix rate on the non-trivial bugs it finds and reports on is extraordinary in an environment like&nbsp;Facebook&#8217;s.</p> <p>Infer and tools like it won&#8217;t completely replace test cases written by engineers. To paraphrase Cristiano, without application-specific information, no formula will be able to determine whether a piece of software is working as intended. As a companion to test suites though, static analysis will be transformative. It&#8217;s likely that sophisticated static analysers will soon be used by everyone from the smallest software engineering teams to the biggest tech companies. When high quality tests are free, why wouldn&#8217;t you run&nbsp;them?</p> <p>Working with Infer in production at Facebook over the last year has been like living in the future. 
It has changed the way I think about testing and about the testing investments that should be made. I’m incredibly excited to see its public release as an <a href="">open source</a> tool that can be used by everyone and very grateful to have been able to help distribute this future more&nbsp;widely.</p>Investing In Testing2015-06-10T20:22:00+01:00Jim,2015-06-10:2015/06/10/investing-in-testing/<p><img alt="Droidcon London" src="" title="Droidcon London" /></p> <p>Last year I was talking to an engineer at <a href="">Droidcon London</a> who was working on an Android app with 100% test coverage. I immediately asked whether he thought 100% test coverage was worthwhile: many software engineering teams strive to achieve 100% test coverage, but few succeed because it&#8217;s an enormous investment and one that I&#8217;m not sure often pays&nbsp;off.</p> <p>As with <a href="">technical debt</a>, I think it&#8217;s useful to think of tests as technical investments. Time is invested writing and maintaining tests and the expected return is in time saved writing and debugging code or shipping hotfixes. However, in many cases that payoff doesn&#8217;t happen. It&#8217;s easy to write tests which never fail or slow down the software development process they were intended to speed up. Large software systems tend to accumulate lots of connective tissue in the form of methods which simply pass a call along to another object. If the software builds and starts, these tests will always pass and so deliver little value, but they fail when the code is changed, requiring an engineer to investigate the failure and change or remove the test. Even good test investments incur an opportunity cost as time spent writing tests is time not spent improving the software being&nbsp;tested.</p> <p>An <a href="">alpha</a> software engineer, then, is one who can pick the investments that pay off while avoiding spending time writing tests that don&#8217;t. 
There are lots of useful investment strategies that can be employed. In some cases test driven development can save more development time than it costs to write the tests, meaning that the investment can pay off in hours. In other cases the past can be a useful guide to the future: if a bug is discovered then writing a test to ensure that it can&#8217;t occur again is often a good bet. Similarly, if changes to one part of a system cause failures in another then writing tests for those dependencies can avoid similar breakages in the future. In both cases it&#8217;s easier to add the tests later if the code is designed to be testable, which in turn means that it&#8217;s often a good idea to write at least one test for each part of the system, to ensure that more tests can be added when needed. Adding tests to code that you need to change can be a good strategy as it prioritises parts of the system that are changing while allowing parts that just work to keep running without tests. However, if those parts of the system continually change, the tests being added can add maintenance cost without having time to deliver a return on their investment. In a system where the user interface behaviour changes less frequently than its implementation, investing in end-to-end tests can be worthwhile. The end-to-end tests have the opportunity to find many different bugs in different revisions of the software, which is changing faster than the user interface, but this needs to be weighed against the high maintenance costs of end-to-end tests and the difficulty of diagnosing problems when they&nbsp;fail.</p> <p>In all of these cases the goal is to write the tests with the highest expected return, or at least write those tests first. The problem with just striving for 100% code coverage as an investment strategy is that it values all tests equally. Any test which adds to code coverage is considered valuable: even those which will never fail and just add maintenance overhead. 
As the tests which can&#8217;t fail are the easiest to write, they can often end up being written first. As more of these tests are added, the costs mount without benefits being realised and eventually tests stop being written at 80% code coverage, with many of the most valuable tests missing and a demoralised and disillusioned&nbsp;team.</p> <p>When it comes to investing in testing it pays to invest in&nbsp;alpha.</p>buckd2014-08-18T21:13:00+01:00Jim,2014-08-18:2014/08/18/buckd/<p><a href="" title="BuckGraffiti by Jim Purbrick, on Flickr"><img src="" width="800" height="600" alt="BuckGraffiti"></a></p> <p>One of the things I’ve been working on since joining Facebook is <a href="">Buck</a>, an open source Android <span class="amp">&amp;</span> Java build tool which is significantly faster than many other Java build tools for a <a href="">number of reasons</a>.</p> <p>As well as being fast, Buck gains a lot of power and flexibility by using Python to generate build rules. Once projects become very large, however, this can become a problem as Buck has to execute thousands of Python scripts to build its dependency graph before it can start its parallel build process. When I started working on Buck this parse phase could last tens of seconds. Buck was already much faster than Ant, but <a href="">test driven development</a> could be&nbsp;painful.</p> <p>Our initial work focussed on making the parsing step faster and after some experimentation with <a href="">Jython</a> we discovered that bigger improvements could be made by running a long-lived Python process which could be handed build files to execute as&nbsp;required.</p> <p>As is often the case, the bulk of the improvements could be made by caching. Build files change far less often than source files, so caching the build file output avoids the need to spend a lot of time parsing in the common case when only a small number of source files change. 
After spending some time looking at serialising the build file output to disk it became clear that a more effective approach would be to cache the output in memory by running Buck as a long-lived server process using&nbsp;Nailgun.</p> <p><a href="">Nailgun</a> is a client, protocol, and server for running Java programs without incurring the <span class="caps">JVM</span> startup overhead. Nailgun makes converting Java applications to client-server architectures as simple as passing the name of the class containing your <code>Main</code> method to the Nailgun server and client application. Early experiments running Buck with Nailgun showed a lot of promise, allowing us to reduce parse time to close to zero, but running Buck as a server invalidated several assumptions that required a non-trivial amount of work to&nbsp;fix.</p> <p>The environment had to be threaded through from the client and calls to <code>System.getenv()</code> replaced; <code>System.exit()</code> could no longer be used for garbage collection, so resource lifetimes had to be managed with <a href="">try-with-resources</a> blocks; and Nailgun needed to be extended to detect client disconnections, which could be thrown as <a href="">InterruptedException</a>s to ensure that killing the Nailgun client cancelled builds as expected. It’s also worth noting that for large, long-running applications like Buck the <span class="caps">JVM</span> startup overhead saved by Nailgun is not significant, but the time saved by the long-running process being able to maintain a <span class="caps">JIT</span> cache of Java class files&nbsp;is.</p> <p>With Buck running as a long-running server process the next step was to make it correctly invalidate cached build rules when build files changed. In order to avoid building outputs each time a file is touched, Buck hashes the contents of input files to see if they have actually changed. 
While this saves a lot of time when switching between similar source control branches it requires reading each input file each time a build is run: something which was adding several seconds to the per-build overhead that we were trying to&nbsp;reduce.</p> <p>To avoid this overhead we switched to a composite approach which watches the file system for changes and then checks the hashes of the contents of changed files. In the case where a few files are edited only a few hashes are generated and compared, in the case where source control systems touch many files without changing their contents comparing hashes avoids unnecessary&nbsp;rebuilding.</p> <p>Initially we used the standard <a href="">Java WatchService</a> to generate file change events, but found that in practice the latency between changing a file and the FileWatcher generating events was far too high. Luckily <a href="">wez</a>, <a href="">sid0</a> and friends had built <a href="">Watchman</a> which provides very low latency file change events and an easy to use <span class="caps">JSON</span> based interface which only took a day to wire in to Buck. Watchman is an amazing piece of technology, but requires some tweaking of <span class="caps">OS</span> settings to work well, so if you notice Buck taking a long time to parse project files you may need to check the <a href="">system specific preparation</a>.</p> <p>When combined with exopackage and a number of other optimisations, the benefits of the Buck daemon are significant. 
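The composite watch-then-hash step described above can be sketched as follows. This is a hypothetical Python illustration, not Buck's actual (Java) implementation: only the paths the watcher reports are rehashed, and touched-but-unchanged files are filtered out so they trigger no rebuilding.

```python
import hashlib

def stale_paths(changed_paths, read_bytes, known_hashes):
    """Return the paths whose contents really changed.

    changed_paths: paths the file watcher reported as touched.
    read_bytes:    function mapping a path to its current bytes.
    known_hashes:  dict of path -> last seen hex digest (updated in place).
    """
    stale = []
    for path in changed_paths:
        digest = hashlib.sha1(read_bytes(path)).hexdigest()
        if known_hashes.get(path) != digest:
            # Contents actually differ: remember the new digest and rebuild.
            known_hashes[path] = digest
            stale.append(path)
    return stale
```

In the source-control case where a branch switch touches many files without changing their bytes, every reported path hashes to its recorded digest and the returned list is empty, so no unnecessary rebuilding happens.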
Trivial builds now take a small fraction of the time they used to and in some cases it’s possible to incrementally build and install an app in <a href="">less time than it takes some build systems to do a no-op build</a>.</p> <p>I’ve had a great time working with the amazingly talented Buck team and I’m very happy to see buckd improving build times within Facebook and&nbsp;beyond.</p> <p>Now it’s time to go back to writing a test, watching it fail and making it pass: with a lot less waiting&nbsp;around.</p>Organisational Structures2014-03-20T23:00:00+00:00Jim,2014-03-20:2014/03/20/organisational-structures/<p>There have been a number of <a href="">blog</a> <a href="">posts</a> recently about exciting new organisational structures. As <a href="">Cory</a> <a href="">points out</a>, &#8220;Every early stage company thinks it has reinvented management&#8221;: a very dangerous belief when betting on a new organisational structure can be much riskier than betting on the wrong&nbsp;product.</p> <p>It starts innocently enough: version 1.0 has finally launched, early adopters have arrived, the graphs are definitely starting to point upward and it&#8217;s time to&nbsp;hire.</p> <p>Growing the company is the next challenge, but no one has spent the last couple of years foregoing income and sleep to build another Acme Corp., in fact no one believes that an Acme Corp. would have been capable of launching version 1.0, which was clearly only possible due to having a room full of smart people and no&nbsp;management.</p> <p>So, the first course of action is to hire more smart people and empower them to choose the right thing to do. This works for a while but eventually the <span class="caps">CEO</span> can no longer keep track of what everyone&#8217;s doing and some things are falling through the&nbsp;cracks.</p> <p>This is clearly a scaling problem, so a scalable system is devised.
Maybe everyone picks N tasks a week and votes on the most important, maybe this is 2 years ago and some bonuses are sprinkled on, gamification style, or this is in 6 months time and some sharing economy social mechanics are wedged&nbsp;in.</p> <p>Like all alphas there are some wrinkles and so the system is tweaked and iterated on, but now the organisational structure has become a second product. While some people are enthusiastically hacking on the organisation, they&#8217;re not working on version 1.3 which is late. Other people are frantically trying to get version 1.3 out of the door while another no longer buys the new organisational structure and has cashed in their last 2 years&#8217; vacation along with the money they gamed from the bonus system to pay for a month long holiday in the&nbsp;sun.</p> <p>Eventually it becomes clear that fighting on two fronts is not sustainable and the <span class="caps">CEO</span> decides to pivot to a more conventional organisational structure to concentrate on getting version 1.4 out on time. Some of the people who heroically got 1.3 out of the door would make great managers, but by this time they have either already burned out and left or are shopping their CVs to Acme&nbsp;Corp.</p> <p>Luckily the product is still generating buzz, so a brace of experienced managers can be drafted in, but this generates even more organisational churn as some more of the smart people look up 4.5 years in to their 3 month project to discover that a lot of their friends have&nbsp;left.</p> <p>The organisational structure that looked like a trivial problem compared to building version 1.0 results in version 2.0 never&nbsp;shipping.</p> <p>Getting product decisions wrong costs time and money. Getting organisational decisions wrong burns even more valuable human capital and goodwill.
Engineers at startups expect to make product pivots, but they don&#8217;t expect to be alpha testing an ever-changing series of organisational MVPs at the same&nbsp;time.</p> <p>In many cases the ground breaking tech product and/or service is mostly a clever combination of commodity hardware and open source software connected to the internet and maintained by a small and heroic ops&nbsp;team.</p> <p>Similarly, the innovative company is often a clever combination of existing organisational structures maintained by a small and heroic management&nbsp;team.</p> <p>I&#8217;m currently very happy as one of the nodes in the middle left picture below. <a href="">We&#8217;re hiring</a>.</p> <p><img alt="Organizational Charts" src="" title="Organizational Charts" /> (via <a href=""></a>&nbsp;)</p>Beyond Time Dilation?2014-01-29T06:38:00+00:00Jim,2014-01-29:2014/01/29/beyond-time-dilation/<p><img alt="The Battle of B-R5RB" src="" title="The Battle of B-R5RB" /></p> <p><span class="caps">EVE</span> online is <a href="">a remarkable game</a>. On Monday <a href="">over 2000 people spent over 20 hours destroying virtual spaceships worth 200,000 <span class="caps">USD</span> in real money</a> in what was likely the largest battle in a video game ever. That <span class="caps">EVE</span> is capable of supporting such large engagements is an amazing technical achievement; that thousands of players are willing to invest huge amounts of time and money in to a game that recently celebrated its 10th anniversary is an amazing game design&nbsp;achievement.</p> <p>The reason that the scale of <span class="caps">EVE</span> battles continues to break records and make headlines is the introduction of <a href="">Time Dilation</a> (TiDi) in 2012. When the server simulating a solar system in <span class="caps">EVE</span> becomes unable to keep up with the rate of updates from connected clients it slows the simulation down so it can keep up.
Before TiDi the server would become overloaded and start failing to process commands from players. After time dilation, game time slows down, but commands continue to be processed fairly for all players. Up to a&nbsp;point.</p> <p>In order to ensure that battles eventually get resolved, TiDi is only allowed to slow the simulation down to 10% of its normal speed, so at 10% TiDi one second of simulation time passes for every 10 seconds of real time. The 20+ hour Battle of B-<span class="caps">R5RB</span> on Monday was eventually brought to a halt due to a server upgrade; if TiDi were allowed to continue past 10% this would become increasingly common and battles would often be decided by one side&#8217;s willingness to put up with more simulation slowdown than the&nbsp;other.</p> <p>In part due to the extra real time that TiDi gives players to join in, the scale of large <span class="caps">EVE</span> battles has now grown to the point where they regularly push TiDi to 10%, server <span class="caps">CPU</span> to 100% and cause the old problems of failing updates to reappear, as was seen in <a href="">The Battle Of <span class="caps">HED</span>-<span class="caps">GP</span></a> a week ago in which many of the attacking forces were unable to issue commands successfully and so were destroyed. The limits of TiDi to allow battles in <span class="caps">EVE</span> online to scale up have been&nbsp;reached.</p> <p>In his <a href="">analysis of The Battle of <span class="caps">HED</span>-<span class="caps">GP</span></a>, <a href=""><span class="caps">CCP</span> Veritas</a> talks about &#8220;one of the bounding scaling factors in large fleet fights, the unavoidable O(n²) situation where n people do things that n people need to see&#8221;.
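A toy model makes the interaction between this quadratic fan-out and the 10% dilation floor concrete. This is purely illustrative: the capacity numbers are invented and CCP's real load model is not public.

```python
def required_tidi(pilots, capacity, floor=0.1):
    """Simulation speed factor in a toy model of TiDi.

    With n pilots each doing things that all n pilots must see, per-tick
    work grows as n squared. capacity is the work the server can do per
    real-time tick. Above capacity, game time slows in proportion, but
    never below the floor: at 0.1, one simulated second takes 10 real
    seconds.
    """
    work = pilots * pilots  # the O(n^2) fan-out
    if work <= capacity:
        return 1.0          # server keeps up in real time
    return max(capacity / work, floor)
```

Because the load is quadratic, doubling the fleet quarters the simulation speed in this model, which is why battles hit the 10% floor long before pilot counts grow tenfold.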
Avoiding this bounding scaling factor may be one way to allow <span class="caps">EVE</span> battles to scale beyond the limits of&nbsp;TiDi.</p> <p>As the <a href="">amazing screenshots from B-<span class="caps">R5RB</span></a> show, large scale battles in <span class="caps">EVE</span> quickly become difficult to navigate. With 1000s of pilots in space, simply finding the ship you want to target becomes hard. As a result <span class="caps">EVE</span> provides an overview that lists the ships in nearby space. This overview can be sorted in a number of ways allowing a particular ship to be selected and multiple lists can be set up filtering out different subsets of things in space. Often during a battle pilots will use an overview set up to only show enemy ships to avoid targeting friendly ships, for&nbsp;example.</p> <p>Pushing these filtering settings to the server may be one way to scale <span class="caps">EVE</span> battles beyond the limits of TiDi. In the case where all pilots have their overview set up to only show enemy ships we would have a situation where n people do things that n/2 people need to see. With more aggressive filtering that only showed interacting ships, plus the nearest ship of each type, plus ships piloted by known pilots, plus ships broadcast to your fleet, it might be possible to get to a situation where the number of things people need to see does not depend on the number of people in a battle. The O(n²) problem could be avoided without significantly changing <span class="caps">EVE</span>&#8217;s gameplay mechanics.
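Server-side overview filtering might look something like this minimal sketch, in which the server only delivers the updates each pilot's filter admits (the pilot names, filter predicates and update fields are invented for illustration):

```python
def fan_out(updates, subscribers):
    """Route ship updates to pilots through server-side overview filters.

    updates:     list of dicts describing ship events.
    subscribers: dict of pilot -> predicate over an update.
    Returns dict of pilot -> the updates their filter admits.

    With everyone subscribed to everything this is the O(n^2) case; an
    "enemy ships only" filter halves the deliveries, and tighter filters
    can make per-pilot traffic independent of fleet size.
    """
    delivered = {pilot: [] for pilot in subscribers}
    for update in updates:
        for pilot, wants in subscribers.items():
            if wants(update):
                delivered[pilot].append(update)
    return delivered
```

The server still evaluates each filter per update, but the expensive part of the pipeline, serialising and sending state to clients, now scales with what each pilot has chosen to see rather than with everything happening on the grid.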
If the number of ships of each type were also sent to the client, the overview could still be populated with one entry per ship, albeit with missing information, allowing for situations where fleets want to spread their fire rather than focussing&nbsp;it.</p> <p>One potential weakness with this approach is that it would allow pilots to force the server back in to doing O(n²) work by simply disabling their overview filters, something that might become impossible to resist in a situation where it&#8217;s clear that your fleet is going to lose hundreds of thousands of real dollars&#8217; worth of virtual spaceships. In the Battle of B-<span class="caps">R5RB</span>, one side had ships filled with autonomous drones which they apparently chose not to deploy to avoid server load, but might well have decided to deploy if they had started losing the&nbsp;battle.</p> <p>A potential solution here would be to prioritise updates from players with more aggressive filtering settings. If the server runs out of time at the limits of TiDi it simply stops processing updates and starts on the next frame, penalising pilots with too liberal filtering settings, who could be made aware that their updates are being dropped by the <span class="caps">UI</span>. This would encourage pilots engaged in the battle to aggressively filter while potentially still allowing non-combatants to capture the occasional screenshot or even video of the entire&nbsp;battle.</p> <p>Another weakness of this approach is that pilots will no longer see every ship in the battle rendered at the correct position in space. A similar approach to populating the overview could be taken here by rendering the correct number of ships of each type at ranges beyond the minimum range for each type, or more symbolic approaches could be used to display the number of ships in the 3D scene.
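That symbolic fallback could be as simple as collapsing the ships a pilot's filter hides into per-type counts, a hypothetical sketch (the ship tuples and field layout are invented for illustration):

```python
from collections import Counter

def symbolic_overview(ships, visible_ids):
    """Split ships into fully-visible entries and per-type counts.

    ships:       list of (ship_id, ship_type) tuples known to the server.
    visible_ids: ids the pilot's filter admits for full updates.
    Returns (visible ships, Counter of types for the remainder), letting
    the client render the right number of each hull type at plausible
    ranges without receiving per-ship updates for all of them.
    """
    visible = [s for s in ships if s[0] in visible_ids]
    hidden_types = Counter(t for i, t in ships if i not in visible_ids)
    return visible, hidden_types
```

The counts are tiny and change slowly compared with per-ship position updates, so sending them to every client adds almost nothing to the fan-out cost.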
I did some related work on rendering <a href=";arnumber=840515">symbolic abstractions of 3D environments in <span class="caps">MASSIVE</span>-3</a> many moons&nbsp;ago.</p> <p>I no longer have a copy of the <span class="caps">EVE</span> source code to check whether any of this is possible and even when I was working on <a href=""><span class="caps">CREST</span></a> I didn&#8217;t look at the simulation code in <span class="caps">EVE</span> much at all, but this might be one way around the O(n²) problem that currently limits the size of <span class="caps">EVE</span> battles. However they choose to tackle it, I wish <span class="caps">CCP</span> Veritas and the newly rehydrated Team Gridlock all the best in their scaling work and look forward to reading stories of ever more epic space battles for the next 10&nbsp;years.</p>Osprey Therian2013-12-15T23:50:00+00:00Jim,2013-12-15:2013/12/15/osprey-therian/<p>In mid-2004 I first started exploring Second Life. Version 1.4 had just been released and Philip Rosedale had said in the press release <a href="">&#8220;My fantasy is to be Uma Thurman in Kill Bill, and now I can. I&#8217;d pay $10 for her yellow jumpsuit and sword moves and I&#8217;m sure other people would too. Custom Animations are going to liberate the world of Second Life in ways I&#8217;m sure I can&#8217;t imagine.&#8221;</a> Plenty of people responded by making sword fighting moves, but the software and network latency in Second Life made actually having a satisfying sword fight hard, so I decided to play around with Second Life&#8217;s scripting language, <span class="caps">LSL</span>, to see if I could build an interesting turn based sword fighting game. Once I had a prototype up and running I realized that it might be fun to turn it in to a trading card game and so asked on the Second Life forums if anyone would like to help with the artwork.
<a href="">Osprey Therian</a>&nbsp;replied.</p> <p>I had no idea who Osprey was in real life, but in Second Life she was friendly, funny, enthusiastic, determined and a ball of energy. Together we photographed hundreds of avatars and Osprey turned them in to gorgeous <a href="">Combat Cards</a>, she created hilarious custom animations with rotating limbs for our robot themed cards and dozens more to accommodate avatars curled up in to a ball as tinies. We sold thousands of combat cards, made thousands of real life dollars and ploughed some of that in to getting real life Combat Cards printed which we sold for Linden Dollars using a vending machine in Second Life. We built a community that met up every Sunday to play Combat Cards and ran tournaments with every new release for several years in <a href="">Europa</a>. Much of this was only possible thanks to the way that Osprey threw herself in to the project with an energy, passion, determination and commitment that I have rarely experienced in Second Life or real&nbsp;life.</p> <p>We rarely talked about real life, mostly just meeting up in <span class="caps">SL</span> and getting things done. Partly this was because I started working for Linden Lab and didn&#8217;t want that to affect our relationship. Eventually I let it slip somehow and Osprey confessed that she sort of knew anyway, from various hints and things I&#8217;d mentioned over the years. With that secret out we vaguely started making plans to meet up in real life and in 2009 got the&nbsp;chance.</p> <p>While visiting Linden Lab&#8217;s Seattle office I hired a car and drove to Olympia where a woman called Vivian Kendall, who I had never met before, let me in to her house and made me a salad. In some ways it was a very strange encounter.
I wasn&#8217;t a bald, yellow, Simpsons-like mad scientist, Osprey wasn&#8217;t a rakish, swashbuckling rogue and we weren&#8217;t swordfighting, dancing or floating across Second Life in a hot air balloon, but we knew each other. &#8220;You&#8217;re just you&#8221;, Vivian said as we ate together. And Vivian was just Osprey. Much frailer in real life, but full of the same energy and passion that spilled out across Second Life. After lunch we looked at some of Vivian&#8217;s real life paintings and although she regretted not being able to paint any more she talked enthusiastically about the second life that Second Life had given her to create and socialize as her real life had closed in. After an hour or so of chatting we took a picture, I left and we went back to our second&nbsp;lives.</p> <p><a href="" title="Osprey by Jim Purbrick, on Flickr"><img src="" width="500" height="500" alt="Osprey"></a></p> <p>I only heard about Vivian&#8217;s passing a couple of hours ago via Joe Miller and so I unfortunately missed her memorial yesterday, but I am very grateful that I got to meet up with Osprey one last time to celebrate Second Life&#8217;s 10th birthday a few months ago. We talked and danced, listened to music and Osprey showed off yet another amazing Second Life avatar that looked incredibly lifelike until she pushed a virtual button and made her feet enormous. It was&nbsp;perfect.</p> <p><a href="" title="SL10B by Jim Purbrick, on Flickr"><img src="" width="500" height="267" alt="SL10B"></a></p> <p>Whenever people claim that <a href="">Second Life failed</a> I think of Osprey. Second Life allowed many of our lives to be enriched by the wonderful Osprey Therian and provided her with a platform for her creativity when painting in real life became impossible. That is success enough for me. But Second Life was just the platform: the art, energy, love, creativity and passion were Osprey&#8217;s alone.
I will miss her enormously and remember our time making Combat Cards together&nbsp;forever.</p>Parse By The Sea2013-10-19T18:37:00+01:00Jim,2013-10-19:2013/10/19/parse-by-the-sea/<p><a href="" title="#parsebythesea by Jim Purbrick, on Flickr"><img src="" width="500" height="334" alt="#parsebythesea"></a></p> <p>A few weeks ago <a href="">Facebook London</a> hosted the <a href="">Parse By The Sea</a> hackathon at the <a href="">Brighton Dome</a> as part of the <a href="">Brighton Digital Festival</a>. The idea was to take one of our internal hackathons on the road and invite members of the public to join us, turning a hackathon in to an open studio offering a glimpse of an important part of <a href="">Facebook Culture</a>.</p> <p>The Dome cafe bar was a great venue with the Founders Room soon packed with eager hackers listening to presentations and watching live coding demos from <span class="caps">API</span> partners <a href="">Withings</a>, <a href="">Deezer</a>, <a href="">Pusher</a>, <a href="">Unity</a> and <a href="">Parse</a>. Soon the rest of the Cafe and Mezzanine were colonized by teams of hackers. The WiFi and 100Mbps connection supplied by Metranet held up admirably to the 12 hour pounding dished out by 100 hackers each with multiple devices and before we knew it, it was time to head back in to the Founders Room for the prototype forum where hacks were presented and prizes&nbsp;awarded.</p> <p>Deezer were delighted by the number of high quality music hacks including <a href="">Party Relative Track Evaluator</a> by Paul Blundell, <a href="">Music Puzzle</a> by Alice Lieutier, <a href="">Flapdoodle</a> by Ryk, Charlie, Phil, Luke and Matt and <a href="">PlayHear</a> by Joe Birch, Ivan Carballo and Mark&nbsp;Dessain.</p> <p>Inspired by the talk of <a href="">life logging</a> at <a href="">dConstruct</a> this year I decided to work on a music based hack with Andy Pincombe from Facebook and Sara Gozalo from the <span class="caps">BBC</span>.
<a href="">Mood Music</a> analyses the mood of your music listens using the <a href="">EchoNest</a> <span class="caps">API</span> and plots it alongside a sentiment analysis of your Facebook status posts. We didn&#8217;t get far enough along to draw any conclusions from our experiments, but it would be nice to see if music listens are a trailing or leading indicator of mood and maybe even build a Samaritunes app which determines the songs which pick you up and suggests them to you if you start to get&nbsp;down.</p> <p>Other interesting hacks included <a href="">Batsh</a>, a compiler which generates bash and Windows batch files, <a href="">Identify</a> by Saqib Shaikh, an app which identifies objects by their bar codes for blind users, and <a href="">Ninja Ear</a>, an audio based game for blind users that moves objects around the stereo&nbsp;field.</p> <p>The clear winner of the Parse and Facebook prize was <a href="">Frictionless Photo Sharing</a> by Ben Chester, Nick Kuh and Jose Jimenez. The app automatically saves new photos to albums and pushes them immediately to other devices sharing that album. The team managed to build iOS and Android versions of the app overnight and it was amazingly slick: Nick demoed the real time photo sharing by taking pictures of the prototype forum audience and having them appear immediately on another device connected to the demo screens. A worthy winner of the Parse Pro account and Facebook ads prizes, which we hope the team use to get Frictionless Photo Sharing in to the iOS and Android app stores and on to everyone&#8217;s&nbsp;devices.</p> <p>There are a few things I&#8217;d change if I organize something like Parse By The Sea again. We normally run internal hackathons overnight on Thursday and wanted to stick to that schedule for authenticity, but it meant that lots of people who wanted to come couldn&#8217;t make it: if we open up a hackathon again we should definitely run it over a weekend.
It was also unfortunate that Parse By The Sea clashed with <a href="">Over The Air</a> which meant people who would have liked to go to both had to pick one or the other. Somehow we also managed to forget to bring any Red Bull, a problem which <a href="">Elena</a> quickly fixed by buying up all of the supplies at the local Tesco, but an inexcusable slip when hackers are trying to stay up all&nbsp;night.</p> <p>Overall though, everyone seemed to have a great time and built some amazing hacks: thanks for coming along and making Parse By The Sea a big&nbsp;success.</p>Facebook Hackathons2013-09-16T21:50:00+01:00Jim,2013-09-16:2013/09/16/facebook-hackathons/<p>I&#8217;ve been a big fan of hackathons since one of the first Yahoo! Hack Days I attended at Alexandra Palace was <a href="">struck by lightning</a>. The lightning caused the fire alarms to go off which opened the roof to let the torrential rain pour on to hundreds of geeks and laptops. The lightning strike also did a wonderful job of breaking the ice: within minutes hundreds of attendees were huddling in the foyer talking about the weather while others busily <a href="">snapped</a> the few stalwart hackers who cracked open umbrellas and continued to code&nbsp;regardless.</p> <p>Since then I&#8217;ve been to more Yahoo! Hack Days, <a href="">Masheds</a>, <a href="">Music Hack Days</a> and a <a href="">Euro Foo Camp</a> and built a <a href="">realtime multiplayer augmented reality submarine torpedo game</a>, an <a href="">augmented virtual reality carbon emission visualizer</a>, <a href="">turned snake in to a music sequencer</a>, put together the <a href="">London Geek Community iPhone Oscestra</a> and learned a pile of technologies from <a href="">Reactable</a> and <a href="">Processing</a> to <a href="">Ubuntu</a> and <a href="">Django</a>.
But no one hackathons like <a href="">Facebook</a>.</p> <p>Every six weeks hundreds of Facebook engineers get together on a Thursday evening under a huge crane to form teams and hack through the night on anything that isn&#8217;t their normal job before taking the next day off. It&#8217;s a huge opportunity to learn, meet people, experiment, try new ideas, learn new technologies, have fun and grow as a software developer. Running hackathons this frequently, on this scale and giving engineers a day off every six weeks is a huge, risky investment for Facebook, but one that pays off handsomely: many of Facebook&#8217;s features, dozens of internal tools that I use every day and the open source <a href="">Buck</a> build tool that I currently work on were all started at&nbsp;hackathons.</p> <p>Now <a href="">Facebook London</a> has taken hackathons even further by adding <a href="">cream teas at midnight</a> and taking them on&nbsp;tour.</p> <div class="flex-video widescreen"><iframe src="" frameborder="0" allowfullscreen=""></iframe></div> <p>I had enormous fun building a catapult on top of Whitstable Castle and reenacting Monty Python and the Holy Grail at my first Facebook hackathon and I&#8217;m looking forward to the next one at the Brighton Dome on the 26th of September. The <a href="">Brighton hackathon</a> will be special as it&#8217;s the first time we&#8217;re opening a Facebook London hackathon up to the public as part of the <a href="">Brighton Digital Festival</a>. The format will be the same as a normal hackathon: it will start on Thursday night and dozens of the super smart Facebook London engineers will be there hacking on amazing projects until the next morning.
The only differences from a normal internal Facebook hackathon are that external developers will be joining us and so we&#8217;ll be building on top of the <a href="">Facebook</a>, <a href="">Parse</a>, <a href="">Deezer</a>, <a href="">Unity</a> and <a href="">Withings</a> APIs rather than hacking on secret Facebook&nbsp;code.</p> <p>If that sounds fun and you can convince yourself or your boss that taking a day off work to learn is worthwhile, send an email to <a href=""></a> and let us know that you&#8217;d like to experience a Facebook hackathon for yourself. I&#8217;m looking forward to seeing what we can build&nbsp;together.</p>Brighton Digital Festival2013-09-04T13:49:00+01:00Jim,2013-09-04:2013/09/04/brighton-digital-festival/<div class="flex-video widescreen"><iframe src="" frameborder="0" allowfullscreen=""></iframe></div> <p>The <a href="">Brighton Digital Festival</a> starts this week and I&#8217;m very happy to be helping out with <a href="">Facebook London</a>&#8216;s contributions: <a href="">Parse By The Sea</a>, a mobile app Hackathon featuring <a href="">Parse</a> on the 26th of September, and helping to <a href="">Connect The Brighton Digital Festival</a> by sponsoring <a href="">Metranet</a> to provide high speed internet connectivity and improved WiFi infrastructure in the <a href="">Brighton Dome</a>. If you&#8217;d like to come to the hackathon, please send an email to <a href=""></a> and tell us about your mobile apps. I&#8217;m looking forward to seeing lots of great hacks at Parse By The Sea and to finally get online to <a href="">blog</a>, <a href="">tweet</a> and <a href="">post</a> from <a href="">dConstruct</a> and the <a href="">Brighton Mini Maker Faire</a> this year.
I hope to see you&nbsp;there.</p>Final Score2013-07-04T22:11:00+01:00Jim,2013-07-04:2013/07/04/final-score/<p><img alt="Google Reader" src="" title="Google Reader" /></p> <p>Using Reader on my <a href=""><span class="caps">HTC</span> Wizard</a> on the loo was probably responsible for my biggest increase in clue&nbsp;ever.</p> <p>Goodbye <a href="">Reader</a>, you&#8217;ll be&nbsp;missed.</p>Pelican Powered2013-07-02T19:13:00+01:00Jim,2013-07-02:2013/07/02/pelican-powered/<p>Almost exactly 5 years ago I set up <a href="">The Creation Engine No. 2</a> as a <a href="">Byteflow</a> blog running on <a href="">Django</a> when the original <a href="">Creation Engine</a> blog hosted by <a href="">Linden Lab</a> stopped being a suitable place for my thoughts on technology as a platform for&nbsp;creativity.</p> <p>Byteflow and Django served me well until late last year when <a href="">Recaptcha</a> finally crumbled and I found myself spending an increasingly tedious amount of time cleaning up spam&nbsp;comments.</p> <p>I considered just replacing the comment system with Disqus and Akismet, a particularly slick combination which I used on <a href="">Creatarr</a>, but without comments Byteflow&#8217;s full Django openid account system started looking pretty archaic and heavyweight, especially when compared to the <a href="">Octopress</a> on <a href="">github</a> blogs that all the cool kids are&nbsp;using.</p> <p>After some playing around with various <a href="">modern static</a> frameworks I settled on Pelican, a Python framework with some nice, responsive <a href="">themes</a> built with Django-inspired Jinja 2 templates which would be easy for me to hack&nbsp;around.</p> <p>In a couple of hours I had a new <a href="">git repository</a> containing my blog posts <a href="">imported</a> in to a content directory, a <a href=""> fork of the pelican themes</a> in a theme directory, my <a href=""></a> repository in an output directory and all of the pelican dependencies listed in a
requirements.txt ready to be pip installed in to a Python virtual&nbsp;environment.</p> <p>Moving to gh-pages simply required adding a <a href=""><span class="caps">CNAME</span> file</a> to the <a href=""></a> repository and pointing <span class="caps">DNS</span> to github and it was possible to emulate Byteflow&#8217;s clean URLs with Pelican by combining the post date and slug in its <span class="caps">URL</span> and storing each post in an index.html file so that requests to the clean <span class="caps">URL</span> at github return the index.html&nbsp;file. </p> <p>The same hack works to support the existing clean feed URLs which are valid <a href="">Atom</a> and <a href=""><span class="caps">RSS</span></a> feeds, which could be consumed by the late <a href="">Google Reader</a> despite the &#8220;.html&#8221; extension causing github to return the feeds with an <span class="caps">HTML</span> content&nbsp;type.</p> <div class="highlight"><pre><span class="n">ARTICLE_URL</span> <span class="o">=</span> <span class="s">&quot;{date:%Y}/{date:%m}/{date:</span><span class="si">%d</span><span class="s">}/{slug}/&quot;</span> <span class="n">ARTICLE_SAVE_AS</span> <span class="o">=</span> <span class="s">&quot;{date:%Y}/{date:%m}/{date:</span><span class="si">%d</span><span class="s">}/{slug}/index.html&quot;</span> <span class="n">FEED_ATOM</span> <span class="o">=</span> <span class="s">&quot;feeds/atom/blog/index.html&quot;</span> <span class="n">FEED_RSS</span> <span class="o">=</span> <span class="s">&quot;feeds/rss/blog/index.html&quot;</span> </pre></div> <p>A few theme tweaks later and I have a responsive, lightweight blog that allows me to author posts offline, supports all of the existing permalinks and is hopefully ready for the next 5 years of The Creation Engine in the mobile first, post Google Reader&nbsp;era.</p>One Universe, Many Scales2013-01-10T23:19:00+00:00Jim,2013-01-10:2013/01/10/one-universe-many-scales/<p> <div class="flex-video
widescreen"><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></div> </p> <p>One epic meta-game design I first remember talking about a decade ago while working on <a href="">Warhammer Online</a> is the multi-scale online game: a system of interconnected games in which you choose to be a solo operative, work in a small group, or command epic forces or huge space fleets and influence a single universe however you choose to contribute. It’s a recurring game design meme that I’ve talked about several times since and I’m sure many others have had similar discussions around the globe ever since multiple computers were&nbsp;connected together.</p> <p>So, I’m super excited to see <a href=""><span class="caps">CCP</span> games</a> take a huge step towards realizing this epic game design dream today by linking the internet spaceship game <a href=""><span class="caps">EVE</span> online</a> with the brand new <span class="caps">PS3</span> first person shooter <a href="">Dust 514</a>. Players piloting giant spacecraft in <span class="caps">EVE</span> online will be able to swoop down from space and interact with other players running around on the planet’s surface playing Dust 514. This is&nbsp;very cool.</p> <p>I’m also very excited that the two games are connected using the <a href=""><span class="caps">CREST</span> <span class="caps">API</span></a> that I helped to build: an <span class="caps">API</span> that will be usable everywhere.
Not only will you be able to interact with the <span class="caps"><span class="caps">EVE</span></span> universe at every scale from the human to the intergalactic, the clients used to connect to the universe will work on every scale of device from phones to desktops and every scale of interaction from web tools built by 3rd parties, through 5 minute casual experiences to epic space battles played out in full 3D over hours and campaigns played out&nbsp;over years.</p> <p>Today’s connection of <span class="caps"><span class="caps">EVE</span></span> and Dust is a historic moment, but it’s just the beginning of what will be a very exciting second decade for the universe of&nbsp;New Eden.</p>Creatarr2013-01-09T15:28:00+00:00Jim,2013-01-09:2013/01/09/creatarr/<p><img alt="" src="" /></p> <p><a href="">cc</a> image by <a href="">vdu</a>, <a href="">j4mie</a></p> <p>One of the things I’ve been tinkering with since leaving <a href="">Linden Lab</a> is Creatarr: a creative, collaborative social game. Creatarr’s goal is to bring some of the magical collaborative creation found in <a href="">Second Life</a> to a wider audience and to push creativity in social games and the web beyond virtual farming and impact text on&nbsp;animal pictures.</p> <p>I quickly got busy building the <a href=""><span class="caps"><span class="caps">CREST</span></span> <span class="caps"><span class="caps">API</span></span></a> for <a href=""><span class="caps"><span class="caps">CCP</span></span></a> and I’m likely to have even less free time when I start <a href="">my new gig</a>, but <a href="">Facebook</a> are happy for me to continue to tinker with Creatarr and I like shipping code, so I’ve decided to make Creatarr public and run it for as long as people are playing. 
There are plenty of rough edges, but there are also some <a href="">neat ideas</a> and plenty of fun to be had, so head over to <a href=""></a> and&nbsp;dive in.</p> <p>Thanks to <a href="">@mrkemeny</a>, <a href="">Irene Soler</a>, <a href="">Chris James</a>, <a href="">@yandle</a> and <a href="">@profaniti</a> for helping to&nbsp;build Creatarr.</p>Following In My Father’s Footsteps2012-11-12T19:28:00+00:00Jim,2012-11-12:2012/11/12/following-my-fathers-footsteps/<p><a href="" title="Tintin Hair by Jim Purbrick, on Flickr"><img src="" width="318" height="500" alt="Tintin Hair"></a></p> <p>From 2 years before I was born, until just before I started working on Second Life at Linden Lab, my Dad worked at an innovative technology company with a large consumer photography business: <a href="">Kodak</a>. From January next year I’ll be working at an innovative technology company with a large consumer photography business: <a href="">Facebook</a>.</p> <p>Looking at the march of technology from the perspectives of these companies is amazing. I had a summer job building my first web application for Kodak while I was studying Computer Science in Nottingham and remember one of the researchers there joking that film needed to last a long time, as a roll would often have pictures of Christmas trees at either end with a summer holiday in the middle. Photography was so expensive that people would only take a few dozen pictures a year. Now we happily take a dozen pictures of our lunch, wouldn’t consider buying a telephone without a built-in multi-megapixel camera and people upload hundreds of millions of images to Facebook&nbsp;every day.</p> <p>While the cost of creating photos has fallen to almost zero, their value hasn’t. Some of my most enjoyable moments recently have been looking at and commenting on the latest pictures of my brand new nephew, Charlie, on Facebook and so it goes for Facebook’s other billion users. 
Photos that are now so cheap to create that Kodak has <a href="">filed for chapter 11 protection</a> become <a href="">social objects</a> that are so valuable that Facebook can host the photos for free and still make a good business from advertising around&nbsp;the conversation.</p> <p>Working for Facebook might seem like a strange move after a decade working on 3D environments, but virtual worlds like <a href="">Second Life</a> and <a href=""><span class="caps"><span class="caps">EVE</span></span></a> are also social spaces, just with virtual nightclubs or space battles as the social objects. While 3D environments allow more immersion than Facebook, the price is a much higher barrier to entry. Although a few people from my family tried Second Life while I worked at Linden Lab: most of my family use Facebook already. My brother could create a gallery of pictures of my nephew in Second Life and we could meet there to talk about them, but then most of my nephew’s other aunts and uncles wouldn’t be able to join us. Ubiquity trumps immersion. Virtual worlds like Second Life still need their iPod moment if they’re going to cross the chasm from niche technology used by gamers, early adopters and academics to become a mainstream communication technology. Even though Second Life is free to use and paid for by the publishers of the 3D content, it’s still too hard to navigate for most people to use almost a decade after&nbsp;its launch.</p> <p>Facebook is already used by a billion people to keep in touch, while still evolving and developing at an incredible pace. I’m going to help new uncles connect with new nephews around the world while working on new technologies, which I think is going to make Facebook a fun and rewarding place&nbsp;to work.</p> <p>After that, who knows? 
My Dad’s working on some pretty amazing stuff these days: if I keep following in his footsteps and change keeps accelerating, the next thing is science fiction now, just as Second Life and Facebook were&nbsp;in 1975.</p>Caching Shared, Private Data With Nginx2012-11-11T20:23:00+00:00Jim,2012-11-11:2012/11/11/caching-restricted-data-ningx/<p>As with many other social services, a large amount of the data in <a href=""><span class="caps"><span class="caps">EVE</span></span> Online</a> and <a href="">Dust 514</a>’s New Eden universe is shared between subsets of users. Some corporation data should only be accessible to the corporation’s members, and market prices should only be accessible to capsuleers and infantry in the region,&nbsp;for example.</p> <p>In order to enforce these rules, the <span class="caps"><span class="caps">EVE</span></span> cluster performs a number of access control checks whenever a request is made from an <span class="caps"><span class="caps">EVE</span></span> client to the cluster. As a large fraction of calls to the <a href=""><span class="caps"><span class="caps">CREST</span></span> <span class="caps"><span class="caps">API</span></span></a> require these checks to be performed, it would be nice to perform them in Nginx to avoid the overhead of having to make a request to the <span class="caps"><span class="caps">EVE</span></span> proxy before returning the cached responses from Nginx. However, duplicating the access control logic within Nginx and trying to keep the two access control implementations in sync is likely to be error-prone. As the spying metagame in <span class="caps"><span class="caps">EVE</span></span> is arguably bigger than the game itself, the consequences of getting the access control logic wrong could be <a href="">huge</a>. 
<a href="">Internet spaceships are serious business</a>.</p> <p>Fortunately, it’s possible to combine and reuse the <a href="">load balancing</a> and <a href="">vary header</a> support techniques previously discussed to avoid both excessive calls from Nginx to the cluster and access control&nbsp;logic duplication.</p> <p>In addition to annotating responses from the cluster with the address of the proxy containing the character’s session, we also annotate the response with the character’s location, corporation and various other character metadata. The same logic that performs access control checks in the cluster can then add these response headers to the list of vary headers when generating a cache key for a later request on behalf of the same character. Rather than duplicating access control logic, Nginx just needs to make sure that only response headers from the cluster are used for these access control vary headers. If a particular <span class="caps"><span class="caps">URI</span></span> is annotated to vary on language and region, for example, Nginx will allow the language to be supplied by the client, but the region must be supplied by the cluster in a previous response for the&nbsp;same character.</p> <p>By reusing the stateful load balancing and vary header support we added to Nginx we’re able to cache data shared between multiple characters without duplicating complex access control logic implemented by the <span class="caps"><span class="caps">EVE</span></span> cluster: reducing the <span class="caps"><span class="caps">CREST</span></span> load on the <span class="caps"><span class="caps">EVE</span></span> cluster without breaking&nbsp;the metagame.</p> <p>Thanks to <a href="">@jonastryggvi</a> for working with me on the Caching support and <a href="">@CCPGames</a> for allowing me to blog&nbsp;about it.</p>Adding Vary Header Support To 
Nginx2012-10-14T17:12:00+01:00Jim,2012-10-14:2012/10/14/adding-vary-header-support-nginx/<p>Although Nginx supports proxy caching it doesn&#8217;t provide support for the <a href=""><span class="caps">HTTP</span> Vary</a> header out of the box. This is a problem if you want to use Nginx to proxy different versions of the same <span class="caps">URI</span> which Vary on <a href="">Content-Language</a> or proxy different representations of a RESTful resource specified via the <a href="">Accept</a>&nbsp;header.</p> <p>Fortunately it&#8217;s relatively easy to add support for the Vary header using the <a href="">Nginx Lua</a> module and a small amount of Lua, which is much easier than building and maintaining a 3rd party module and doesn&#8217;t greatly impact&nbsp;performance.</p> <p>First, we define a dictionary in the nginx config which will store a mapping from URIs to Vary&nbsp;headers:</p> <div class="highlight"><pre><span class="k">lua_shared_dict</span> <span class="s">uriToVary</span> <span class="mi">10m</span><span class="p">;</span> </pre></div> <p>Next we define the default location in the nginx&nbsp;config.</p> <div class="highlight"><pre><span class="k">location</span> <span class="s">/</span> <span class="p">{</span> <span class="c1"># make subrequest to /proxy_request, then store response headers</span> <span class="kn">content_by_lua</span> <span class="s">&#39;</span> <span class="s">local</span> <span class="s">vary</span> <span class="p">=</span> <span class="s">require(&quot;vary&quot;)</span> <span class="s">local</span> <span class="s">response</span> <span class="p">=</span> <span class="s">vary.ProxyRequest()</span> <span class="s">&#39;</span><span class="p">;</span> <span class="p">}</span> </pre></div> <p>This will use the lua ProxyRequest function to make a subrequest to /proxy_request then store the Vary header in the response in the&nbsp;dictionary.</p> <div class="highlight"><pre><span class="k">function</span> <span 
class="nf">ProxyRequest</span><span class="p">()</span> <span class="c1">-- make subrequest and capture response</span> <span class="kd">local</span> <span class="n">response</span> <span class="o">=</span> <span class="n">ngx</span><span class="p">.</span><span class="n">location</span><span class="p">.</span><span class="n">capture</span><span class="p">(</span><span class="s2">&quot;</span><span class="s">/proxy_request&quot;</span><span class="p">,</span> <span class="p">{</span> <span class="n">method</span> <span class="o">=</span> <span class="n">GetRequestMethod</span><span class="p">(</span><span class="n">ngx</span><span class="p">.</span><span class="n">var</span><span class="p">.</span><span class="n">request_method</span><span class="p">),</span> <span class="n">body</span> <span class="o">=</span> <span class="n">ngx</span><span class="p">.</span><span class="n">req</span><span class="p">.</span><span class="n">get_body_data</span><span class="p">()})</span> <span class="c1">-- forward <span class="caps">HTTP</span> headers from response</span> <span class="k">for</span> <span class="n">k</span><span class="p">,</span><span class="n">v</span> <span class="k">in</span> <span class="nb">pairs</span><span class="p">(</span><span class="n">response</span><span class="p">.</span><span class="n">header</span><span class="p">)</span> <span class="k">do</span> <span class="n">ngx</span><span class="p">.</span><span class="n">header</span><span class="p">[</span><span class="n">k</span><span class="p">]</span> <span class="o">=</span> <span class="n">v</span> <span class="k">end</span> <span class="n">ngx</span><span class="p">.</span><span class="n">shared</span><span class="p">.</span><span class="n">uriToVary</span><span class="p">:</span><span class="n">set</span><span class="p">(</span><span class="n">ngx</span><span class="p">.</span><span class="n">var</span><span class="p">.</span><span class="n">request_uri</span><span class="p">,</span> <span 
class="n">response</span><span class="p">.</span><span class="n">header</span><span class="p">[</span><span class="s2">&quot;</span><span class="s">Vary&quot;</span><span class="p">])</span> <span class="c1">-- forward status and body from response</span> <span class="n">ngx</span><span class="p">.</span><span class="n">status</span> <span class="o">=</span> <span class="n">response</span><span class="p">.</span><span class="n">status</span> <span class="n">ngx</span><span class="p">.</span><span class="n">print</span><span class="p">(</span><span class="n">response</span><span class="p">.</span><span class="n">body</span><span class="p">)</span> <span class="k">return</span> <span class="n">response</span> <span class="k">end</span> </pre></div> <p>Finally, we define the /proxy_request&nbsp;location.</p> <div class="highlight"><pre><span class="k">location</span> <span class="s">/proxy_request</span> <span class="p">{</span> <span class="kn">internal</span><span class="p">;</span> <span class="c1"># set defaults</span> <span class="kn">set</span> <span class="nv">$noCache</span> <span class="mi">1</span><span class="p">;</span> <span class="kn">set</span> <span class="nv">$cacheBypass</span> <span class="mi">1</span><span class="p">;</span> <span class="kn">set</span> <span class="nv">$cacheKey</span> <span class="s">nil</span><span class="p">;</span> <span class="c1"># rewrite using stored data</span> <span class="kn">rewrite_by_lua</span> <span class="s">&#39;</span> <span class="s">local</span> <span class="s">vary</span> <span class="p">=</span> <span class="s">require(&quot;vary&quot;)</span> <span class="s">vary.RewriteCache()</span> <span class="s">&#39;</span><span class="p">;</span> <span class="c1"># proxy request</span> <span class="kn">proxy_cache_bypass</span> <span class="nv">$cacheBypass</span><span class="p">;</span> <span class="kn">proxy_no_cache</span> <span class="nv">$noCache</span><span class="p">;</span> <span 
class="kn">proxy_cache_key</span> <span class="nv">$cacheKey</span><span class="p">;</span> <span class="kn">proxy_cache</span> <span class="s">API_CACHE</span><span class="p">;</span> <span class="kn">proxy_pass</span> <span class="nv">$proxy$request_uri</span><span class="p">;</span> <span class="p">}</span> </pre></div> <p>This will use the lua RewriteCache function to combine the uri with the vary headers to generate the final cache key used by the proxy_cache&nbsp;module.</p> <div class="highlight"><pre><span class="k">function</span> <span class="nf">RewriteCache</span><span class="p">()</span> <span class="kd">local</span> <span class="n">varyOn</span> <span class="o">=</span> <span class="n">ngx</span><span class="p">.</span><span class="n">shared</span><span class="p">.</span><span class="n">uriToVary</span><span class="p">:</span><span class="n">get</span><span class="p">(</span><span class="n">ngx</span><span class="p">.</span><span class="n">var</span><span class="p">.</span><span class="n">request_uri</span><span class="p">)</span> <span class="kd">local</span> <span class="n">cacheKey</span> <span class="o">=</span> <span class="kc">nil</span> <span class="c1">-- if vary unknown for this uri, bypass cache and do not cache</span> <span class="k">if</span> <span class="n">varyOn</span> <span class="o">==</span> <span class="kc">nil</span> <span class="k">then</span> <span class="k">return</span> <span class="k">end</span> <span class="n">cacheKey</span> <span class="o">=</span> <span class="n">ngx</span><span class="p">.</span><span class="n">var</span><span class="p">.</span><span class="n">request_uri</span> <span class="o">..</span> <span class="n">GenerateCacheKey</span><span class="p">(</span><span class="n">varyOn</span><span class="p">,</span> <span class="n">ngx</span><span class="p">.</span><span class="n">req</span><span class="p">.</span><span class="n">get_headers</span><span class="p">())</span> <span class="n">ngx</span><span 
class="p">.</span><span class="n">var</span><span class="p">.</span><span class="n">noCache</span> <span class="o">=</span> <span class="mi">0</span> <span class="n">ngx</span><span class="p">.</span><span class="n">var</span><span class="p">.</span><span class="n">cacheKey</span> <span class="o">=</span> <span class="n">cacheKey</span> <span class="n">ngx</span><span class="p">.</span><span class="n">var</span><span class="p">.</span><span class="n">cacheBypass</span> <span class="o">=</span> <span class="mi">0</span> <span class="k">end</span> <span class="k">function</span> <span class="nf">GenerateCacheKey</span><span class="p">(</span><span class="n">varyOnStr</span><span class="p">,</span> <span class="n">requestHeaders</span><span class="p">)</span> <span class="kd">local</span> <span class="n">cacheKey</span> <span class="o">=</span> <span class="s2">&quot;</span><span class="s">&quot;</span> <span class="k">for</span> <span class="n">part</span> <span class="k">in</span> <span class="nb">string.gmatch</span><span class="p">(</span><span class="n">varyOnStr</span><span class="p">,</span> <span class="s2">&quot;</span><span class="s">([^,%s+]+)&quot;</span><span class="p">)</span> <span class="k">do</span> <span class="k">if</span> <span class="n">requestHeaders</span><span class="p">[</span><span class="n">part</span><span class="p">]</span> <span class="k">then</span> <span class="n">cacheKey</span> <span class="o">=</span> <span class="n">cacheKey</span> <span class="o">..</span> <span class="s2">&quot;</span><span class="s">:&quot;</span> <span class="o">..</span> <span class="n">requestHeaders</span><span class="p">[</span><span class="n">part</span><span class="p">]</span> <span class="k">end</span> <span class="k">end</span> <span class="k">return</span> <span class="n">cacheKey</span> <span class="k">end</span> </pre></div> <p>The first time a <span class="caps">URI</span> is requested the cache will be bypassed, but the Vary header from the response 
will be stored in the shared dictionary. The second time the <span class="caps">URI</span> is requested the cache key will be generated from the <span class="caps">URI</span> and the appropriate request headers specified in the vary header and the response will be cached. When the <span class="caps">URI</span> is subsequently requested with the same set of headers it will be served from the&nbsp;cache.</p> <p>Note that when the shared dictionary is full it will evict old entries using an <span class="caps">LRU</span> scheme. Nginx will generate &#8220;ngx_slab_alloc() failed&#8221; errors when this occurs, but these can <a href="">safely be ignored</a>.</p> <p>Thanks to <a href="">@jonastryggvi</a> for working with me on the Vary support and <a href="">@CCPGames</a> for allowing me to blog about&nbsp;it.</p>Load Balancing Stateful Services With Nginx2012-07-30T05:49:00+01:00Jim,2012-07-30:2012/07/30/load-balancing-stateful-services-nginx/<p>The <a href=""><span class="caps">EVE</span> online</a> network architecture uses stateful proxy servers which manage sessions for players connected to the cluster via the <span class="caps">EVE</span> client. The client sends requests to the proxy which are forwarded on to sol servers maintaining the game state and the sols send notifications to the proxy which are sent on to the&nbsp;client.</p> <p>In developing the <a href=""><span class="caps">CREST</span> <span class="caps">API</span></a> we extended the <span class="caps">EVE</span> proxies to talk <span class="caps">HTTP</span>, then added <a href="">nginx</a> reverse proxies to the service to provide <span class="caps">SSL</span> termination and caching while shielding the <span class="caps">EVE</span> proxies from potentially malicious&nbsp;requests.</p> <p>So, how does nginx know which <span class="caps">EVE</span> proxy to send a request to? In the first instance, it just guesses. 
We set up a set of proxies and use <a href="">proxy_pass</a> to have nginx just pick&nbsp;one.</p> <div class="highlight"><pre><span class="k">upstream</span> <span class="s">eveproxies</span> <span class="p">{</span> <span class="c1"># List all eveproxies</span> <span class="p">}</span> <span class="k">location</span> <span class="s">/</span> <span class="p">{</span> <span class="kn">proxy_pass</span> <span class="s">http://eveproxies</span><span class="p">;</span> <span class="p">}</span> </pre></div> <p>The proxy can then use the <span class="caps">CCP</span> cluster&#8217;s <span class="caps">RPC</span> machinery to find the character&#8217;s session. If nginx has been lucky the request is processed and the response sent back to nginx and from there to the player. If no session exists for the character on any proxy a new session is created and then the request processed as above. If the character session is on a different node the proxy returns an <a href="">X-accel</a> response to a location which extracts the correct proxy <span class="caps">URI</span> from the path and resends the&nbsp;request.</p> <div class="highlight"><pre><span class="k">location</span> <span class="p">~</span><span class="sr">*</span> <span class="s">^/internal_redirect/(.*)</span> <span class="p">{</span> <span class="kn">internal</span><span class="p">;</span> <span class="kn">proxy_pass</span> <span class="s">http://</span><span class="nv">$1$is_args$args</span><span class="p">;</span> <span class="p">}</span> </pre></div> <p>The performance of this approach can be greatly improved by caching the mapping of authorization headers to proxies, which can be done using a dict and a small piece of <a href="">lua</a>.</p> <div class="highlight"><pre><span class="k">lua_shared_dict</span> <span class="s">tokenToProxy</span> <span class="mi">10m</span><span class="p">;</span> <span class="k">location</span> <span class="s">/</span> <span class="p">{</span> <span 
class="kn">content_by_lua</span> <span class="s">&#39;</span> <span class="s">--</span> <span class="s">make</span> <span class="s">subrequest</span> <span class="s">and</span> <span class="s">capture</span> <span class="s">response</span> <span class="s">local</span> <span class="s">response</span> <span class="p">=</span> <span class="s">ngx.location.capture(&quot;/proxy_request&quot;,</span> <span class="p">{</span> <span class="kn">method</span> <span class="p">=</span> <span class="s">GetRequestMethod(ngx.var.request_method),</span> <span class="s">body</span> <span class="p">=</span> <span class="s">ngx.req.get_body_data()</span><span class="err">}</span><span class="s">)</span> <span class="s">--</span> <span class="s">forward</span> <span class="s"><span class="caps">HTTP</span></span> <span class="s">headers</span> <span class="s">from</span> <span class="s">response</span> <span class="s">for</span> <span class="s">k,v</span> <span class="s">in</span> <span class="s">pairs(response.header)</span> <span class="s">do</span> <span class="s">ngx.header[k]</span> <span class="p">=</span> <span class="s">v</span> <span class="s">end</span> <span class="s">--</span> <span class="s">forward</span> <span class="s">status</span> <span class="s">and</span> <span class="s">body</span> <span class="s">from</span> <span class="s">response</span> <span class="s">ngx.status</span> <span class="p">=</span> <span class="s">response.status</span> <span class="s">ngx.print(response.body)</span> <span class="s">--</span> <span class="s">cache</span> <span class="s">backend</span> <span class="s">for</span> <span class="s">next</span> <span class="s">request</span> <span class="s">ngx.shared.tokenToProxy:set(ngx.var.http_authorization,</span> <span class="s">response.header[&quot;X-Backend&quot;])</span> <span class="s">&#39;</span><span class="p">;</span> <span class="p">}</span> <span class="kn">location</span> <span class="s">/proxy_request</span> <span class="p">{</span> 
<span class="kn">internal</span><span class="p">;</span> <span class="kn">set</span> <span class="nv">$crestProxy</span> <span class="s">&quot;http://eveproxies&quot;</span><span class="p">;</span> <span class="kn">rewrite_by_lua</span> <span class="s">&#39;</span> <span class="s">ngx.var.crestProxy</span> <span class="p">=</span> <span class="s">ngx.shared.tokenToProxy:get(</span> <span class="s">ngx.var.http_authorization)</span> <span class="s">&#39;</span><span class="p">;</span> <span class="kn">proxy_pass</span> <span class="nv">$crestProxy$request_uri</span><span class="p">;</span> <span class="p">}</span> </pre></div> <p>In a configuration with multiple load balancers we potentially have to pay the price of one proxy redirection per nginx process. This could potentially be improved by using a shared cache for the authorization-to-proxy mapping or by using IP affinity to map all requests from a client to a single nginx box, but in practice, where the number of requests from a client is much larger than the number of load balancers, this improvement is likely to be&nbsp;negligible.</p> <p>This mechanism ensures that most <span class="caps">HTTP</span> requests go straight to the correct proxy without the load balancers having to maintain any state. A new load balancer can be added to the cluster just by being told the addresses of the <span class="caps">EVE</span> proxies and will quickly start routing requests to the correct&nbsp;location.</p>Brighton Mini Maker Faire: The Movie2012-05-24T14:08:00+01:00Jim,2012-05-24:2012/05/24/brighton-mini-maker-faire-movie/<p>A great video of the Brighton Mini Maker Faire last year by Andrew Sleigh showing the making of <a href="">You’re The Boss 2</a>. 
Applications for this year’s Maker Faire are <a href="">now open</a> and I can’t wait to see what everyone comes up with&nbsp;this year!</p> <div class="flex-video vimeo"><iframe src="" width="500" height="281" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe></div>Super Hyperpolyglot2012-05-05T19:03:00+01:00Jim,2012-05-05:2012/05/05/super-hyperpolyglot/<p>A few years ago nearly all the code I wrote was in C++, but increasingly I’m finding myself writing in a variety of mostly C-style languages and having to perform crunching mental gear changes as I switch&nbsp;between them.</p> <p>In the interests of making these language switches less painful I thought about listing the commonly used features of the languages I commonly use in a side-by-side format. Luckily I’m a lazy programmer, the web is large and there’s nothing new under the sun, so I quickly found <a href="">Hyperpolyglot</a> which provides commonly used programming language features in a side-by-side format, which is what I&nbsp;wanted. Nearly.</p> <p>Hyperpolyglot organizes its language comparisons into several categories: scripting languages, C++ family languages, embeddable languages and so on. In my case (and I suspect in many cases) the languages I wanted to compare were spread across&nbsp;several pages.</p> <p>After briefly considering some cut and paste to get what I wanted, I started playing with Google Spreadsheets, which has a very nifty importHtml function which allowed me to pull the Hyperpolyglot data into several sheets which can be combined to produce arbitrary&nbsp;language comparisons.</p> <p>It’s not perfect, as different languages have different features, in some cases the Hyperpolyglot data doesn’t use exactly the same terms across tables (“version used” vs “versions used”) and I’m not a spreadsheet ninja, but it’s good enough to generate PDFs like this <a href="">JavaScript Python Java C++ Comparison</a>. 
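Each per-page sheet can pull one Hyperpolyglot table with a single importHtml formula; a sketch of the idea (the page URL and table index here are guesses, not necessarily the ones the spreadsheet actually uses):

```text
=IMPORTHTML("http://hyperpolyglot.org/scripting", "table", 1)
```

Combining the per-page sheets into an arbitrary comparison is then just a matter of referencing the imported columns from a summary sheet.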
As a Hyperpolyglot derivative work, The <a href="">Super Hyperpolyglot Spreadsheet</a> is licensed under the <a href="">Creative Commons Attribution-ShareAlike 3.0 License</a>; please let me know if you&nbsp;improve it.</p>100 robots Vs The Audience2012-01-04T16:44:00+00:00Jim,2012-01-04:2012/01/04/100-robots-vs-audience/<p>A couple of years ago I had great fun putting together the <a href="">London Geek Community iPhone OSCestra</a> at <a href="">Open Hack London</a> and I’ve been controlling Ableton Live with an iPhone taped to my guitar as part of <a href="">100 robots</a> for <a href="">a couple of years now</a>, so when <a href="!/andybudd">@andybudd</a> suggested I do a digital music thing for the <a href="">Brighton Digital Festival</a> I immediately thought that it would be fun to combine the 2 projects by doing a 100 robots performance with&nbsp;audience participation.</p> <p>The iPhone OSCestra was effectively a distributed collaborative mixing desk, with each person controlling the volume and effect parameters on one channel of an Ableton Live set as it played back. For the 100 robots performance I wanted to go further and have the audience actually adding parts to the musical performance, so <a href="">@toastkid</a> and I added extra drum, bass, synth and sample tracks to the 100 robots live set and filled them full of samples that could be triggered by&nbsp;the audience.</p> <p>While having the samples adjust in tempo to match each song was relatively simple, transposing them to match the key of each song was more complicated. 
First I built a <a href=";t=90512">custom slice to midi preset</a> which mapped the sample transpose to a macro control and used it to slice all of the samples to <span class="caps"><span class="caps">MIDI</span></span> tracks, then mapped all of the transpose controls to a single <span class="caps"><span class="caps">MIDI</span></span> controller and added a <span class="caps"><span class="caps">MIDI</span></span> track which output the appropriate controller value for each song to a <span class="caps"><span class="caps">MIDI</span></span> output which was looped back into Live to transpose&nbsp;the samples.</p> <p>The next question was how to avoid the performance turning into a mush if multiple drum tracks or bass parts were playing concurrently. To avoid this we put <a href="">dummy clips</a> on the normal 100 robots tracks which muted the normal parts when the audience-triggered parts were playing. In some cases we let the audience parts add to the music; in others the audience parts would play instead of the&nbsp;normal tracks.</p> <p>A final question was how to avoid Max and me getting lost when the normal parts we played along to were replaced by unfamiliar samples. To deal with this we set the clip quantization on the audience-triggered clips to values longer than the clip length. This meant that even if alternate basslines were constantly being launched, we would still hear the normal bassline for a while at the end of each quantization period, so we would know where we were with the track. 
To tune these settings we did some <a href="">fuzz testing</a> with semi-random <span class="caps"><span class="caps">MIDI</span></span> data to see how much madness we could deal with and still manage to play&nbsp;the songs.</p> <p>With the tests done it was time to perform with 100 robots and 100s of people at the Brighton Dome&nbsp;and Museum.</p> <div class="flex-video widescreen"><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></div> <div class="flex-video widescreen"><iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe></div> <div class="flex-video widescreen"><iframe width="560" height="315" src=";hl=en_GB&amp;hd=1" frameborder="0" allowfullscreen></iframe></div> <p>Many thanks to Steve Liddell for recording the Brighton Museum set, <a href="!/aral">@aral</a> for letting us experiment on his <a href="">update conference</a> and to everyone who participated and watched. If you’d like to host another performance, please get in touch and if you like the music, please check out the <a href="">100 robots blog</a> and consider buying our album from <a href="">bandcamp</a>.</p>100 robots Attack!2011-12-09T15:06:00+00:00Jim,2011-12-09:2011/12/09/100-robots-attack/<p>Lots of exciting <a href="">100 robots</a> news! Our debut album, Attack!, has been professionally mastered by Chris at <a href="">Melograf Mastering</a> who has done an amazing job and made the album sound incredible. The new version is already available at <a href="">bandcamp</a> and will be available on iTunes, <a href=";qid=1323472257&amp;sr=8-16">Amazon</a> and many other download services on Monday. 
To celebrate the launch we’re playing live at The Hope in Brighton <a href="">tomorrow night</a> and have set up a <a href="">new blog</a> where we’ll be giving away a free track from the album every month and where I’ll be doing most of my 100 robots related music blogging from now on. Head on over and subscribe to the feed so you don’t miss out. Hope to see you at <a href="">The Hope</a> tomorrow!</p>The JavaScript Jungle2011-10-03T02:47:00+01:00Jim,2011-10-03:2011/10/03/javascript-jungle/<p>There was a slide about alien abductions in the early talks that <a href="">Cory Ondrejka</a> used to give about <a href="">Second Life</a>. One of the most exciting moments in Second Life for the early Lindens was when a resident constructed a <span class="caps">UFO</span> and flew around the world abducting other residents and then returning them to the world with a commemorative t-shirt. It was exciting because it was unanticipated. The Lindens had created a virtual world that enabled interaction and someone had taken it and run with it to create a fun and engaging&nbsp;experience.</p> <p>So, once I&#8217;d finished implementing a simple <a href=";dl=ACM&amp;coll=DL&amp;CFID=53271178&amp;CFTOKEN=59247649">interest management</a> and collision detection system for the <a href="">Brighton Digital Festival</a> <a href="">JavaScript Jungle</a> to enable interactions, I thought I would implement an alien abductor as a hat tip to Second&nbsp;Life.</p> <p>The <a href="">JavaScript</a> code first adds a <span class="caps">UFO</span> from <a href="">You&#8217;re The Boss 2</a> to the supplied div along with an <a href=""><span class="caps">SVG</span></a> canvas containing a hidden translucent tractor beam path before binding to the see and tick events.
The tick handler implements a state machine which either moves the <span class="caps">UFO</span> towards a random spot or a target creature that the <span class="caps">UFO</span> has seen, or drags the target off screen for diabolical&nbsp;experimentation.</p> <p>The most interesting part of the code is on line 155, which replaces the target&#8217;s position method with one which returns the target&#8217;s position but doesn&#8217;t update it. This allows the <span class="caps">UFO</span> to move the target while the position updates made by the target&#8217;s own code call the new read-only position method. <a href="">Tom Parslow</a>&#8217;s boids look especially mournful flapping around and turning towards the flock while being&nbsp;captured.</p> <p><a href=""><img src=""></a></p> <p>While the alien abductions in Second Life and the JavaScript Jungle are meant to be fun and mostly harmless, the same mechanisms that enable them can be used for griefing in virtual environments and malware in software at large.
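</p>

<p>The position-method swap described above is plain monkey-patching, and the same trick can be sketched in a few lines of Python; the <code>Creature</code> class and its methods are invented for illustration, not taken from the Jungle code:</p>

```python
# Freeze a creature by shadowing its position method with one that keeps
# returning the position captured at abduction time: the creature's own
# movement code still runs, but observers no longer see it move.
class Creature:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def position(self):
        return (self.x, self.y)

    def flap(self):
        # the creature's own movement logic keeps updating its state
        self.x += 1

def abduct(creature):
    frozen = creature.position()
    # an instance attribute shadows the class method; the original
    # method survives untouched on the class
    creature.position = lambda: frozen

bird = Creature(10, 20)
abduct(bird)
bird.flap()                # the bird still "moves" internally...
print(bird.position())     # ...but keeps reporting (10, 20)
```

<p>Deleting the instance attribute (<code>del bird.position</code>) un-shadows the class method and releases the captive.</p>

<p>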
The ability for scripted objects in Second Life to self replicate caused dozens of problems with <a href="">grey goo</a> attacks for every amazing <a href="">virtual ecosystem</a> and many malicious cage attacks for every playful alien&nbsp;abductor.</p> <p>The <a href="">message passing concurrency</a> model adopted by <span class="caps">LSL</span> actually made direct attacks on other scripts of the kind used by the JavaScript Jungle <span class="caps">UFO</span> very hard, but things are much harder in JavaScript&#8217;s browser environment even when separating scripts in&nbsp;iFrames.</p> <p>Luckily projects like <a href="">Caja</a> and <a href="">Belay</a> (which is being worked on by another ex-Linden, <a href="">Mark Lentczner</a> ) are working on the problem of making multiple scripts work safely in the same&nbsp;browser.</p> <p>The challenge for sandboxes like Second Life and the JavaScript jungle is to allow interesting and meaningful interactions with emergent properties and unanticipated consequences without allowing malicious scripts to destroy that environment. Building the <a href="">JavaScript Jungle</a> was a lot of fun and made for another great <a href="">Brighton Digital Festival</a> project. Many congratulations to <a href="!/premasagar">@premasagar</a>, <a href="!/ac94">@ac94</a>, <a href="!/purge">@purge</a> and everyone else for making it a success. 
Maybe next time we can try to build a JavaScript Jungle that is both secure and&nbsp;expressive.</p>Data Is Not Art2011-10-01T00:11:00+01:00Jim,2011-10-01:2011/10/01/data-not-art/<p>This week I experienced two remarkable combinations of music and the&nbsp;moving image.</p> <div class="flex-video vimeo"><iframe src=";byline=0&amp;portrait=0&amp;color=ffffff" width="400" height="150" frameborder="0" webkitAllowFullScreen allowFullScreen></iframe></div> <p><a href="">Natures 3B</a> from <a href="">Quayola</a> on <a href="">Vimeo</a>.</p> <p>This evening I watched <a href="">Nature</a> — Mira Calix and Quayola’s audio-visual piece which took video footage of flowers blowing in the wind and used motion tracking technology to generate music from the footage. As a concept it was interesting; unfortunately, as music it was terrible. The beauty of the footage betrayed the folly of the concept: if a human were to compose music based on the beauty of flowers, the way they moved in the breeze might feature, but wouldn’t be the basis of the entirety of the piece. The colour, form and memories triggered by the flowers would surely feature. Turning the flowers into a network of points modulating parameters reduced them to an interesting if pseudo-random system and the resultant synthesised music was predictably cold and&nbsp;pseudo-random.</p> <p>By contrast, a few days ago I had the pleasure of watching <a href="">Manhatta</a>, a black and white movie about Manhattan made in 1920 by Charles Sheeler and Paul Strand and accompanied by a new soundtrack by the Cinematic Orchestra. Where Nature used machines to generate its soundtrack based on an algorithmic interpretation of the movement of flowers, Manhatta uses humans to generate its soundtrack based on the emotional impact of the moving image on the musicians. The result is infinitely more moving.
The music adds emotion to the moving image, combining feelings of wonder, awe, fragility and insignificance — a uniquely human reaction to the images of the world’s most amazing city that cannot possibly be understood or rendered by an algorithm, no matter&nbsp;how clever.</p> <p>Art is a human reaction to our world, not something that can be captured in&nbsp;an algorithm.</p>You’re The Boss 22011-09-12T22:53:00+01:00Jim,2011-09-12:2011/09/12/youre-boss-2/<p><a href=""><img src="" title="You're The Boss 2 Screenshot" alt="You're The Boss 2 Screenshot"/></a></p> <p>A week ago over 5000 people streamed through the foyer of the Brighton Dome to see and build hundreds of amazing things at the first <a href="">Brighton Mini Maker Faire</a>. Luke and I went along with 2 laptops, a scanner and a pile of pens, paper, glue and scissors to make a video game with what felt like most of those&nbsp;5000 people.</p> <p>We arrived at 9:30 in the morning and were still working out how to plug our laptop into the big plasma screen when the doors opened at 10:00. From then until the doors closed at 17:00 our table was a tornado of cutting, gluing, drawing and colouring as dozens of children and adults dived into the task of drawing bosses for our shoot ’em up with wild abandon. For a while my picture scanning, data wrangling and game copying efforts kept up with the stream of submissions and people were delighted to see their creations flying around on the big screen within minutes of their creation. Soon enough though, the stream turned into a deluge and by midday I had a sizable backlog of pictures&nbsp;to process.</p> <p>Despite working non-stop all day with only a 20-minute break to grab a milkshake and have a quick look around I ended up with a backlog of dozens of pictures at the end of the day.
At that point another problem emerged: the game is designed to slowly get harder at each level, but with so many bosses to add the game would get impossibly hard before half of the bosses were seen. Realizing that I had a lot more work to do before the game would be finished I released an initial version at the end of the faire and collapsed in an exhausted heap at the&nbsp;after party.</p> <p>All of this is by way of being a long winded explanation as to why “You’re The Boss 2” wasn’t finished a week ago. Last night I finally got around to scanning in all of the remaining images, tweaked the difficulty curve to make it possible to get to the end and released “You’re The Boss 2 Extended” which can now be downloaded <a href="">here</a>.</p> <p>Despite being one of the most exhausting days of my life, it was also one of the most enjoyable. It was incredibly rewarding seeing dozens of children and adults alike delighting in creating something fun together and watching <a href="">Thomas Truax</a> perform with his <span class="caps"><span class="caps">DIY</span></span> instruments while talking to a professional gingerbread house maker made for a truly magical end to the day. 
I’m very proud to have been part of the first ever Brighton (not-so) Mini Maker Faire and look forward to taking part in many more (although I might bring along a friend to help&nbsp;next time!).</p> <p>I hope you enjoy playing You’re The Boss 2 as much as we enjoyed&nbsp;making it.</p>From Magic Circles To Magic Portals2011-09-11T21:18:00+01:00Jim,2011-09-11:2011/09/11/magic-circles-magic-portals/<div class="flex-video"><object width="640" height="506" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000"><param value="true" name="allowfullscreen"/><param value="always" name="allowscriptaccess"/><param value="high" name="quality"/><param value="true" name="cachebusting"/><param value="#000000" name="bgcolor"/><param name="movie" value="" /><param value="config={'key':'#$aa4baff94a9bdcafce8','playlist':['format=Thumbnail?.jpg',{'autoPlay':false,'url':'magic_circles.mp4'}],'clip':{'autoPlay':true,'baseUrl':'','scaling':'fit','provider':'h264streaming','showCaptions':true},'canvas':{'backgroundColor':'#000000','backgroundGradient':'none'},'plugins':{'controls':{'playlist':false,'fullscreen':true,'height':26,'backgroundColor':'#000000','autoHide':{'fullscreenOnly':true}},'h264streaming':{'url':''},'captions':{'url':'','captionTarget':'content'},'content':{'display':'block','url':'','bottom':26,'left':0,'width':640,'height':50,'backgroundGradient':'none','backgroundColor':'transparent','textDecoration':'outline','border':0,'style':{'body':{'fontSize':'14','fontFamily':'Arial','textAlign':'center','fontWeight':'bold','color':'#ffffff'}}}},'contextMenu':[{},'-','Flowplayer v3.2.1']}" name="flashvars"/><embed src="" type="application/x-shockwave-flash" width="640" height="506" allowfullscreen="true" allowscriptaccess="always" cachebusting="true" bgcolor="#000000" quality="high" 
flashvars="config={'key':'#$aa4baff94a9bdcafce8','playlist':['format=Thumbnail?.jpg',{'autoPlay':false,'url':'magic_circles.mp4'}],'clip':{'autoPlay':true,'baseUrl':'','scaling':'fit','provider':'h264streaming','showCaptions':true},'canvas':{'backgroundColor':'#000000','backgroundGradient':'none'},'plugins':{'controls':{'playlist':false,'fullscreen':true,'height':26,'backgroundColor':'#000000','autoHide':{'fullscreenOnly':true}},'h264streaming':{'url':''},'captions':{'url':'','captionTarget':'content'},'content':{'display':'block','url':'','bottom':26,'left':0,'width':640,'height':50,'backgroundGradient':'none','backgroundColor':'transparent','textDecoration':'outline','border':0,'style':{'body':{'fontSize':'14','fontFamily':'Arial','textAlign':'center','fontWeight':'bold','color':'#ffffff'}}}},'contextMenu':[{},'-','Flowplayer v3.2.1']}"> </embed></object></div> <p>The <a href="">Brighton Digital Festival</a> continued this weekend with <a href="">BarCamp Brighton 6</a> which was super interesting and lots of fun&nbsp;as always.</p> <p>I was a bit worried that my <a href="">Terra Nova</a> style talk on the philosophy of games, virtual worlds and magic circles would be too esoteric, but the room was packed and the talk generated some&nbsp;great discussion.</p> <p>A video of the talk is now available at <a href="">The Internet Archive</a> thanks to <a href="!/stevepurkiss">@stevepurkiss</a> and the slides are available on <a href="">SlideShare</a>. 
Thanks to everyone who came along to my talk and BarCamp and to <a href="!/jaygooby">@jaygooby</a> and <a href="!/profaniti">@profaniti</a> for organising a&nbsp;wonderful event.</p>dConstructing Augmented Reality2011-09-08T15:45:00+01:00Jim,2011-09-08:2011/09/08/dconstructing-augmented-reality/<p>One of the events that kicked off <a href="">Brighton Digital Festival</a> was <a href="">dConstruct</a>, the always thought provoking conference run by <a href="">clearleft</a>.</p> </p> <p>As usual I found most of the sessions interesting, but not always relevant as there’s a heavy design rather than development focus. The most relevant talk this year was Kevin Slavin’s final talk, <a href="">Reality is Plenty</a>, which argued that augmented reality is not the next big thing, just as it wasn’t&nbsp;in 2005.</p> </p> <p>Despite Kevin having a dig at <a href="">Second Life</a> and having spent a lot of time working on Augmented Reality with <a href="">Blast Theory</a> while at Nottingham University, I mostly agreed. While there are definitely use cases which benefit from augmented reality (fighter pilot navigation systems and things like Carbon Goggles which are all about making invisible aspects of objects visible) and virtual reality (simulation and virtual meeting spaces) there are plenty of others which are better served by other interfaces. Environments like Second Life are particularly exciting as they allow people to quickly prototype systems to discover which applications work and&nbsp;which don’t.</p> </p> <p>With both <span class="caps"><span class="caps">AR</span></span> and <span class="caps"><span class="caps">VR</span></span> it’s tempting to argue that they allow for intuitive interfaces as they model or overlay the real world: people know how to navigate a 3D space so they know how to use a 3D environment and they know how to use <span class="caps"><span class="caps">AR</span></span> as they can see. 
Anyone who has done their time climbing the Second Life learning curve or trying to use <span class="caps">AR</span> to find their way around will know this clearly isn’t true. Apparently more abstract interfaces like maps, which talk to the mind rather than the senses, are often much easier&nbsp;to use.</p> <p>There’s a lot of work to be done to make both <span class="caps">AR</span> and <span class="caps">VR</span> as easy to use as 2D interfaces, let alone as natural as using real world senses. Now that the huge technical problems around networking virtual environments and tracking real world objects with mobile devices are starting to be solved, it is mostly <span class="caps">UI</span> work that needs to be done to make these technologies more&nbsp;widely used.</p> <p>Even if the <span class="caps">UX</span> issues are solved there will still be many cases where speaking to the mind is much better than speaking to&nbsp;the senses.</p>Introspecting Python Decorators2011-08-25T17:05:00+01:00Jim,2011-08-25:2011/08/25/introspecting-python-decorators/<p>Over the last couple of years I&#8217;ve found myself using Python decorators to annotate handlers for web requests more and more, both when using <a href="">Django</a> and with micro-frameworks like <a href="">mnml</a> and <a href="">newf</a>.</p> <p>Where the same functionality is required for all handlers, or the required functionality can be determined from standard request or response headers, using <a href=""><span class="caps">WSGI</span></a> or <a href="">Django</a> middleware is fine, but where the required functionality varies based on the handler it&#8217;s much cleaner to use a parameterised decorator than polluting the environment or response objects just to control the middleware.
Functionality can be added to a framework as a suite of decorators and plugged together in an <a href="">aspect oriented</a> way like Lego to easily build up sophisticated&nbsp;behaviours.</p> <p>Unlike other mechanisms for implementing macros, templating or aspect orientation that introduce a new language, Python decorators are pure syntactic sugar that under the hood are simply rewritten as Python&nbsp;expressions:</p> <div class="highlight"><pre><span class="nd">@requires_oauth_scope</span><span class="p">(</span><span class="s">&quot;email&quot;</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">notify_friends</span><span class="p">(</span><span class="n">request</span><span class="p">):</span>
    <span class="k">pass</span>
</pre></div> <p>Is simply shorthand&nbsp;for:</p> <div class="highlight"><pre><span class="k">def</span> <span class="nf">notify_friends</span><span class="p">(</span><span class="n">request</span><span class="p">):</span>
    <span class="k">pass</span>
<span class="n">notify_friends</span> <span class="o">=</span> <span class="n">requires_oauth_scope</span><span class="p">(</span><span class="s">&quot;email&quot;</span><span class="p">)(</span><span class="n">notify_friends</span><span class="p">)</span>
</pre></div> <p>This simplicity is powerful as it allows decorators to also be used as normal functions, for example to build up higher level decorators that bundle common decorator configurations, but it also means that decorators potentially interact badly with another powerful Python feature:&nbsp;introspection.</p> <p>In the above example the undecorated notify_friends function has the <code>__name__</code> &#8220;notify_friends&#8221;, but the decorated function has the <code>__name__</code> of whatever inner wrapper function requires_oauth_scope&nbsp;returns.
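</p>

<p>The name loss is easy to demonstrate with a toy decorator (the names here are invented for illustration):</p>

```python
# A toy decorator written without functools.wraps, showing how the
# decorated function ends up reporting the inner wrapper's __name__.
def shout(target):
    def wrapper(*args, **kwargs):
        return target(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return "hello %s" % name

print(greet("world"))   # HELLO WORLD
print(greet.__name__)   # wrapper, not greet
```

<p>Every function wrapped this way reports the same anonymous-looking name, which is what makes stack traces and generated documentation so much less useful.</p>

<p>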
When decorators are used extensively, this can seriously impact the usefulness of introspection for debugging or generating&nbsp;documentation.</p> <p>Decorating your decorators with the <a href="">functools</a> @wraps decorator, which copies the <code>__name__</code> of the wrapped function over to the wrapping function, solves this introspection problem, but introduces another: the decorators now become invisible to introspection. In the example above the <code>__name__</code> of the decorated function would now be &#8220;notify_friends&#8221; as in the undecorated case, but we wouldn&#8217;t know whether the function had been decorated or&nbsp;not.</p> <p>A potential solution to this new problem is to store the details about the decoration in another attribute that can be inspected at runtime. In addition to copying over the <code>__name__</code> attribute, functools.wraps also copies over the target <code>__dict__</code> by default, allowing it to be used to store information about the decoration and be correctly copied over when decorators are&nbsp;chained:</p> <div class="highlight"><pre><span class="kn">from</span> <span class="nn">functools</span> <span class="kn">import</span> <span class="n">wraps</span>

<span class="k">def</span> <span class="nf">requires_oauth_scope</span><span class="p">(</span><span class="n">scope</span><span class="p">):</span>
    <span class="k">def</span> <span class="nf">decorator</span><span class="p">(</span><span class="n">target</span><span class="p">):</span>
        <span class="n">target</span><span class="o">.</span><span class="n">__dict__</span><span class="p">[</span><span class="s">&quot;my_project_requires_oauth_scope&quot;</span><span class="p">]</span> <span class="o">=</span> <span class="n">scope</span>
        <span class="nd">@wraps</span><span class="p">(</span><span class="n">target</span><span class="p">)</span>
        <span class="k">def</span> <span class="nf">wrapper</span><span class="p">(</span><span class="o">*</span><span class="n">args</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
            <span class="c"># return <span class="caps">FORBIDDEN</span> if token does not have required scope, otherwise:</span>
            <span class="k">return</span> <span class="n">target</span><span class="p">(</span><span class="o">*</span><span class="n">args</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span>
        <span class="k">return</span> <span class="n">wrapper</span>
    <span class="k">return</span> <span class="n">decorator</span>
</pre></div> <p>By constructing decorators in this way we get the benefits of both Python decorators and more declarative C#-style attributes that are visible to&nbsp;introspection.</p>You’re The Boss Lives!2011-08-17T18:21:00+01:00Jim,2011-08-17:2011/08/17/youre-boss-lives/<p><img alt="You're The Boss Screenshot" src="" title="You're The Boss" /></p> <p>Back in 2005, while I was working on <a href="">Second Life</a> in Nottingham, before <a href="">Linden Lab Brighton</a> existed, I ran a workshop as part of the Screenplay “Boss Frenzy!” day at the Radiator Festival which allowed children to collaboratively create a computer game by drawing or making bosses&nbsp;with collage.</p> <p>Dozens of people came to the Broadway in Nottingham and got busy with pens, pencils, paper, scissors, glue and magazines to design bosses for our “You’re The Boss!” shmup. We had an amazing time and created a charming and delightful game which I talked about on the original <a href="">Second Life blog</a>.</p> <p>I immediately thought of it when we started planning the <a href="">Brighton Maker Faire</a> a couple of months ago and was delighted when the project was accepted. Unfortunately 6 years of bit rot had taken its toll and disaster loomed when I discovered that I’d hosted the <a href="">Game Maker</a> files on the web space provided by an old <span class="caps">ISP</span> account and didn’t have them on my patchy backups.
Luckily the ever amazing <a href="">Torley</a> had a copy of the executable and, with the help of a <a href="">decompiler</a>, I was able to recover the Game Maker files I needed to run the&nbsp;project again.</p> <p>So, if you’re near Brighton on the 3rd of September and like the idea of collaboratively making an arcade game with scissors, glue and pens then please come along. If you have a Windows machine then check out the game we made in Nottingham in 2005. I think it’s still charming and delightful 6 years on. You can download it <a href="">here</a>.</p> <p>This time round I’d like to make the game completely out of <a href="">Creative Commons</a> licensed works, so please suggest <span class="caps">CC</span> licensed books, comics and pictures that might make good source material in the comments, or bring them along on&nbsp;the day.</p>Google+ First Thoughts2011-06-30T04:55:00+01:00Jim,2011-06-30:2011/06/30/google-first-thoughts/<p>After months of rumours it’s finally here, so what is Google+ like? My first thoughts are that it’s super slick and that Circles definitely makes it different, but I’m not&nbsp;sure better.</p> <p>Limiting the distribution of shared information will likely also limit the growth of the network, something that’s not going to help Google+ grow to Facebook’s size. Will the limited sharing encourage more sharing? I’m not sure. There have certainly been a couple of times when I’ve held back from sharing inside Google Reader as I know not all of my Facebook friends will want to see it when it’s sent to my wall via FriendFeed, but that doesn’t&nbsp;happen often.</p> <p>When sharing inside Facebook directly I already have the choice of publishing to my friends via my wall, or to a particular group wall and I make that choice when I share. People have the choice of joining a group, or adding me as a friend, or both.
I choose where I publish when I publish and they choose what they subscribe to. Having to partition contacts up front, before I have anything to share, is harder. Maybe it will become more natural, but currently it feels weird. The sharing itself is also harder, feeling a lot more like composing an email, having to choose recipients than just a place&nbsp;to post.</p> </p> <p>Circles apart, Google+ definitely feels somewhat early and skeletal. Hangouts look like a nice group video chat feature, but it’s difficult to judge with so few people currently on the network. Sparks doesn’t look like it will ever be as useful as Google Reader to filter information on interests and, given the focus on social sharing, feels like a bootstrapping mechanism more than a long term feature. Mostly Google+ feels like a weird, empty, alternate reality version of Facebook with more primary colours, which is to&nbsp;be expected.</p> </p> <p>There’s a long way to go, but I’m glad Google has made a start. Google+ feels like it could be both a real competitor and alternative to Facebook, which is a good thing. One horse races are never very interesting&nbsp;to watch.</p> </p>“100 robots attack!” Album Out Now!2011-05-19T09:30:00+01:00Jim,2011-05-19:2011/05/19/100-robots-attack-album-out-now/<p>100 robots first album, “Attack!” is now finished and available to download now from <a href="">bandcamp</a>. I’m so glad that it is done and very proud of the result. It’s the first album I’ve made <a href="">since 2005</a> and the first I’ve produced using <a href="">Ableton Live</a>, which once again proved to be an amazing piece of software. 
The ability to quickly cycle through libraries of samples to find the right sound, easy parameter automation and super flexible routing using drum racks were a&nbsp;huge help.</p> <p>It’s also the first time I’ve tried to put together a big rock/dance production at home, which made mastering tricky as we wanted both a big loud rock sound and huge slabs of sub bass. Because of the huge sub bass, the <a href="">Fletcher-Munson</a> effect meant that while the <span class="caps">RMS</span> loudness of our initial mixes was normal, the <a href="">a-weighted</a> <span class="caps">RMS</span> values were very low, making the mixes sound very quiet when burned to <span class="caps">CD</span>.</p> <p>The mixes couldn’t simply be turned up without clipping or compressing them, which would introduce distortion or reduce the dynamic range of the music. Not wanting to get sucked into the <a href="">loudness war</a> we ended up in a complicated three-way trade-off between a-weighted <span class="caps">RMS</span>, sub bass weight and dynamic range. <a href="">Audioleak</a> and the <a href="">Pleasurize Music</a> tools were both really helpful during this process. We ended up with an album that hit the <a href="">sweet spot</a> of -14 dBFS A-weighted <span class="caps">RMS</span>, has an average dynamic range of <span class="caps">DR6</span> (admittedly less than the recommended <span class="caps">DR10</span>) and hopefully still has enough sub bass to work on a big system.
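</p>

<p>For concreteness, the unweighted <span class="caps">RMS</span> level of a mix can be expressed in dBFS with a few lines of code; this sketch assumes samples normalised to [-1.0, 1.0] (an A-weighted figure like the one above would additionally filter the samples with the frequency-dependent A curve, which is omitted here):</p>

```python
import math

def rms_dbfs(samples):
    # unweighted RMS level in dBFS for samples normalised to [-1.0, 1.0]
    if not samples:
        raise ValueError("need at least one sample")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

square = [1.0, -1.0] * 100                                    # full-scale square wave
sine = [math.sin(2 * math.pi * i / 100) for i in range(100)]  # full-scale sine wave
print(round(rms_dbfs(square), 1))  # 0.0
print(round(rms_dbfs(sine), 1))    # -3.0
```

<p>The 3 dB gap between a full-scale square and a full-scale sine is one illustration of why peak level alone says so little about loudness.</p>

<p>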
A couple of people have commented that it sounds over-compressed, but most people seem to like where we&nbsp;ended up.</p> <p>I hope you enjoy the album and that you can join us at the launch parties at the <a href="">Hydrant in Brighton</a> on the 31st of May or at the <a href="">Maze in Nottingham on the 14th June</a> — they’re going to be&nbsp;great nights!</p> <iframe width="400" height="100" style="position: relative; display: block; width: 400px; height: 100px;padding: 20px" src="" allowtransparency="true" frameborder="0"></iframe> <p><a href="">Attack! by 100&nbsp;robots</a></p>21st Century JavaScript2011-03-12T14:05:00+00:00Jim,2011-03-12:2011/03/12/21st-century-javascript/<div class="flex-video"><embed src="" type="application/x-shockwave-flash" width="640" height="510" allowscriptaccess="always" allowfullscreen="true"></embed></div> <p>The <a href="">slides</a> and <a href="">video</a> of my talk at <a href="">AsyncJS</a> on Thursday are now online. The video is pretty murky, but the sound has come out fine and you can see enough of the slides to be able to follow along at home. 
The talk focuses on ways to bring useful software engineering patterns to JavaScript, patterns that will be increasingly important as JavaScript applications become larger and&nbsp;more complex.</p> <p>Thanks to <a href="">Prem</a> for inviting me to talk and to everyone who came along to the <a href="">Async</a> session for the&nbsp;fascinating discussion.</p>The Why and How of Automated Testing with Python and Django2010-11-04T09:54:00+00:00Jim,2010-11-04:2010/11/04/why-and-how-automated-testing-python-and-django/<div class="flex-video"><embed src="" type="application/x-shockwave-flash" width="480" height="302" allowscriptaccess="always" allowfullscreen="true"></embed></div> <p>Jamie has just uploaded the <a href="">movie</a> of my talk “The Why and How of Automated Testing with Python and Django” which I gave at <a href="">BrightonPy</a> a week ago (and this time it really is a movie, clocking in at a feature length 1 hr and 35 minutes). The audio on the video is fine (and arguably the laptop-eye-view video is improved by chopping my head off for large parts of the talk), but it’s tricky to see the slides on the video, so I’ve uploaded them to <a href="">slideshare</a>.</p> <p>The talk rambles a bit in places and there are a couple of things that betray my static language roots for example you can’t actually use unit tests to discover dependencies as easily in python as you can in C++. I’m also already evolving the <span class="caps"><span class="caps">JS</span></span> testing stack I talk about here: moving from <a href="">qunit</a>, <a href="">qmock</a> and <a href="">Selenium</a> to <a href="">jsmockito</a> and possibly <a href="">JsTestDriver</a>. 
Overall I think it’s a pretty good overview of how an agile software engineering process can be&nbsp;screwed together.</p> <p>Many thanks to <a href="!/garethr">@garethr</a> for donating his <a href="">Fabric</a> scripts, Spike for his database migration cameo, <a href="">Si</a> for recommending <a href="">Hudson</a>, <a href="">Dave</a> for hooking me on automated testing and <a href="">j4mie</a> for organising the night and wrangling the video. If you’d like me to help your organisation improve its agile engineering process, please <a href="">get in touch</a>.</p>Goodbye Babbage Linden, Hello Doc Boffin2010-10-23T14:04:00+01:00Jim,2010-10-23:2010/10/23/goodbye-babbage-linden-hello-doc-boffin/<p>In June 2004, not long after <a href="">Cory</a> had introduced me to <a href="">Second Life</a>, version 1.4 was released which added Custom Character Animations. In the accompanying <a href="">press release</a> Philip said “My fantasy is to be Uma Thurman in Kill Bill”, “I’d pay \$10 for her yellow jumpsuit and sword moves and I’m sure other people would too.” I’d been looking for something to build in <span class="caps"><span class="caps">SL</span></span> and also been thinking about melee combat systems in RPGs which traditionally just leave the tanks hacking away while the others get loads of different fun and interesting abilities to use. At the other end of the spectrum arcade fighting games give players lots interesting choices to make, but require twitch reflexes that require low latencies that are difficult to achieve over networks let alone in <span class="caps"><span class="caps">SL</span></span>. 
Building a tactical melee combat game in Second Life sounded like the kind of interesting challenge I was looking for, so at the end of 2004 Doc Boffin and Jaladan Codesmith set out to build what would become <a href="">Combat Cards</a>.</p> <p>The early versions of the game were built into weapons and employed a simple <a href="">llDialog</a> interface for selecting moves, but the core mechanics were very much as they are now. HUDs were introduced in October 2005 with Second Life 1.7 and I immediately started thinking about converting the game into a trading card game — a business model that seemed to fit perfectly with Second Life’s micro-currency&nbsp;based economy.</p> <p>A trading card game needed an artist and after looking for one on the <span class="caps">SL</span> forums I was very lucky to find the wonderful <a href="">Osprey Therian</a>, who proceeded to blow my mind producing amazing artwork and taking incredible pictures of the fantastic avatars of Second Life for what became&nbsp;Combat Cards.</p> <p>Working on the game while working at Linden Lab gave me insights into how Second Life felt from a resident’s perspective. Despite Second Life’s flexibility, it’s a lot harder to build complex systems than it should be. Building systems that can send out product updates is fiddly, error prone and something that should be in the platform, and <span class="caps">LSL</span>’s memory limitations meant that I often spent more time cutting scripts up or trying to save memory than building features. When the number of cards, and so the amount of data, increased, Combat Cards ended up having to incorporate a paging system to load lines of notecard data into memory asynchronously in order to continue to work.
This hugely frustrating and time-consuming experience led directly into the discussions and design around <a href="">Script Limits</a>, which will allow Mono scripts to request as much memory as&nbsp;they need.</p> <p>Learning about building businesses in Second Life was also incredibly valuable. As a multi-player-only game, Combat Cards’ biggest challenge has always been getting enough people together at the same time to play, something that has resulted in a series of wonderful parties and regular events often hosted by the amazing Kat Burger. It also resulted in the exploration of linking Second Life with social media that led to Combat Cards arenas tweeting game results and then the <a href=""><span class="caps"><span class="caps">LSL</span></span> Twitter OAuth Library</a> that allowed players to tweet results from their own accounts without <a href="">disclosing their Twitter passwords</a>. When we finally found a print-on-demand service that allowed Combat Cards to make the jump to <span class="caps"><span class="caps">RL</span></span>, it also allowed us to explore the possibilities for linking <span class="caps"><span class="caps">RL</span></span> and <span class="caps"><span class="caps">SL</span></span> businesses, resulting in the system for buying gift certificates for L$ in <span class="caps"><span class="caps">SL</span></span> that can be redeemed for physical Combat Cards in the online <a href="">web</a> <a href="">shops</a>.</p> <p>Keeping my Babbage Linden and Doc Boffin identities separate for over 6 years has given me incredible insight into what it’s really like to be a Second Life resident, but it has been exhausting.
There was an awkward moment in 2006 when I had to tell Philip that I worked for him when he came to check out Combat Cards; Osprey only found out that I was a Linden in 2008, when I emailed her a version of the <span class="caps"><span class="caps">RL</span></span> rules sheet that Word had helpfully annotated with my name; and I had to come up with a dweeby Doc Boffin voice to disguise my identity when commentating on Combat Cards matches on <a href="">YouTube</a>. It’s a huge relief to finally be able to come out of the closet and talk about Combat Cards openly. I’m incredibly proud of what Osprey, Jaladan and I have achieved with the help of Kat, Comragh, Spin and our amazing player base, to whom I apologize for sometimes not being able to devote as much time as I’d like to Combat Cards. My other Second Life as Babbage Linden often kept me&nbsp;pretty busy.</p> <p>Now that I’ve left Linden Lab I hope to still find some time to work on Combat Cards and hope that it will now be easier to pursue the full publication of Combat Cards in real life that Osprey’s amazing artwork deserves. I’m very happy to announce that Combat Cards 3.0 and the long-awaited <a href="">Robot Series</a> of cards will be launching on 31 October and I hope to see you all at the launch party at <span class="caps"><span class="caps">2PM</span></span> Pacific (Second Life time) at the <a href="">Combat Cards Arenas in Europa</a>.
I’ll leave you with Osprey’s latest amazing promo for&nbsp;the event.</p> <div class="flex-video"><object width="640" height="385"><param name="movie" value=";hl=en_GB"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src=";hl=en_GB" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="640" height="385"></embed></object></div>Spawning Django Blogs2010-10-18T16:25:00+01:00Jim,2010-10-18:2010/10/18/spawning-django-blogs/<p>Since leaving <a href="">Linden Lab</a> I have been talking to a number of people about doing freelance consulting and development work while I get my start-up off the ground, and last week I got round to setting up a <span class="caps"><span class="caps">UK</span></span> limited company so that people will actually be able to&nbsp;pay me.</p> <p>Setting up a company is insanely easy these days: if you go to <a href="">companies made simple</a> it will cost you less than £17 and 10 minutes of form filling. Coming up with a name is harder, but within a couple of hours I found that 18dex was available as a .com <span class="caps"><span class="caps">TLD</span></span>, twitter account and facebook username. Meaningful 5-character .coms are pretty tricky to come by these days, so I snapped it up and 18 Dexterity Ltd. was born — a pretty fantastically geeky name for an agile software engineering company, I hope&nbsp;you’ll agree.</p> <p>A few minutes later I had a holding page up for, but it looked pretty sad with no content, so I started thinking about setting it up as a blog. I have a stack of relevant software engineering posts on from the last few years, but they are sandwiched between less relevant posts on <a href="">100robots</a>, <a href="">Second Life</a> and various miscellany.
I didn’t want to move the software engineering posts from <a href=""></a> as they’re part of what I do, and regularly updating a single blog is quite enough work. I also didn’t want to copy the posts from one blog to another as it would potentially end up with two independent comment threads, one on each blog. There would be no definitive version of a post, a blatant violation of <a href="">Don’t Repeat Yourself</a>.</p> <p>Luckily, Django includes a piece of machinery to deal with this problem in its <a href="">sites framework</a>, something I’ve been meaning to have a closer look at for some time. The sites machinery simply lets you associate a piece of content with a site and keeps track of the current site, allowing you to filter the content in the database to only show a subset on&nbsp;each site.</p> <p>While the <a href="">byteflow</a> blog engine I use for supports the sites framework, each post is associated with a single site via a ForeignKey. In order to allow posts to be shown on both and I had to change that ForeignKey field to be a ManyToManyField: a single-line change in the Python code, but something that requires a little wrangling to massage the existing data to fit the&nbsp;new model.</p> <p>I’ve been using the excellent <a href="">South</a> in all my recent projects to allow me to easily migrate data across Django model changes. Although dates from long before South was available, I managed to convince South to manage the migration by dumping the blog_post table to JSON, dropping the table and recreating it with South, reloading the data and then letting South migrate the data to the new ManyToMany schema.
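The shape of the change is easy to sketch in plain Python. This is an illustration of the idea only, not Byteflow’s actual models: in Django the one-line change is swapping models.ForeignKey(Site) for models.ManyToManyField(Site), with the sites framework doing the per-site filtering, and the site names below are made up.

```python
# Sketch of the sites idea: each post is associated with a set of sites
# (the ManyToManyField shape) and each site only shows its own subset.
# Plain-Python stand-in for Django's sites framework; names are illustrative.

class Post:
    def __init__(self, title, sites):
        self.title = title
        self.sites = set(sites)  # many-to-many: one post, several sites

def posts_for(current_site, posts):
    """Mimic the sites framework: only show posts tagged for this site."""
    return [p.title for p in posts if current_site in p.sites]

posts = [
    Post("Spawning Django Blogs", {"personal", "company"}),  # shown on both
    Post("Always Watching Out Now!", {"personal"}),          # personal only
]

assert posts_for("company", posts) == ["Spawning Django Blogs"]
assert posts_for("personal", posts) == ["Spawning Django Blogs",
                                        "Always Watching Out Now!"]
```

With a ForeignKey each post would instead hold exactly one site, which is why the old schema could not show a post on both blogs without duplicating it.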
While this was slightly more fiddly than it could have been, it means that the blog app is now being managed by South, which will make future development on the blogs&nbsp;much easier.</p> <p>Once I had migrated the data to the new model and associated the software engineering posts in with both sites in the Django admin interface, all that remained was for me to clone the directory with Mercurial to create an directory and choose and tweak a Byteflow theme for the&nbsp;new site.</p> <p>Once again I’ve been very impressed with Django and Byteflow, which have proven to be incredibly powerful tools that are very easy to work with. In a few hours I was able to create professional and personal views onto my blogging which can be easily administered from a single interface and allow comment threads and users to easily flow between them. If you’re just interested in my software engineering posts, head over to <a href=""></a>; if you want to hear about music, Second Life and everything else I get up to, stay subscribed to <a href=""></a>. If you notice anything broken on either blog, then please leave a comment to let&nbsp;me know.</p>Another Age Must Be The Judge2010-09-29T16:15:00+01:00Jim,2010-09-29:2010/09/29/another-age-must-be-judge/<p><a href="" title="Babbage Linden by Jim Purbrick, on Flickr"><img src="" width="500" height="281" alt="Babbage Linden" /></a></p> <p>Almost exactly 6 years ago, the incredible <a href="">Cory Ondrejka</a> and I met for the first time in real life (having previously blogged together on <a href="">Terra Nova</a>) at the <a href="">Austin Game Conference 2004</a>, where we got on like a house on fire. Several months later I joined Linden Lab and (as James and Jim Linden were already taken) Babbage Linden was born. The first task Cory asked me to do was embed the <a href="">Mono virtual machine</a> into <a href="">Second Life</a> as a next-generation scripting engine.
It was a wonderful project to work on, involving authoring a new <span class="caps"><span class="caps">LSL</span></span> compiler back end to generate <span class="caps"><span class="caps">CIL</span></span> bytecode, a scavenging garbage collector to allow assembly unloading and a microthread injector to allow tens of thousands of scripts to run concurrently on Mono in a single process (work that has been described in detail in talks at <a href="">ooPSLA</a>, <a href="">Lang.<span class="caps"><span class="caps">NET</span></span></a> and <a href=""><span class="caps"><span class="caps">FOSDEM</span></span></a>). As a long-term R&amp;D project it was put on hold a number of times to make way for more important projects like <a href=""><span class="caps"><span class="caps">HTTP</span></span>-Out</a>, <a href="">Message Liberation</a> and <a href="">Het-Grid</a>, but eventually we shipped a <a href="">Second Life simulator that embedded Mono</a>&nbsp;in 2008.</p> <p>Running <span class="caps"><span class="caps">LSL</span></span> on Mono in Second Life was a huge win, allowing scripts to run hundreds of times faster in some cases and reducing the average memory footprint of scripts in Second Life by a third, but the big hairy audacious goal for Mono in Second Life was always to enable other languages that targeted the Common Language Infrastructure to run in Second Life. After waiting until the end of last year for <a href="">Mono 2.6</a> to implement the bytecode verifier and <a href="">CoreCLR security</a> sandbox, which allowed us to safely run other languages on Mono inside the simulator, we started work on adding support for C# in Second Life at the beginning of this year.
A team of Linden engineers in Brighton and California did an amazing job overcoming an array of challenges and got to the point where we had Silverlight chess demos, run time configurable script profilers and scripts that used .<span class="caps"><span class="caps">NET</span></span> reflection APIs to visualize other C# scripts running in our development simulators earlier&nbsp;this summer.</p> <p>Alas, tomorrow is my last day at Linden Lab and Babbage Linden will never get to see C# scripts running in the wild in Second Life, but I very much hope that I do. I hope that C# support is eventually added to Second Life and that I don’t have to wait 170 years to turn the handle. As another <a href="">Babbage</a> said when he failed to build the <a href="">Difference Engine</a>: “Another age must be&nbsp;the judge”.</p>Meaningful Choices2010-09-27T17:23:00+01:00Jim,2010-09-27:2010/09/27/meaningful-choices/<div class="flex-video"><object width="400" height="300"> <param name="flashvars" value="offsite=true&lang=en-us&page_show_url=%2Fphotos%2Ftags%2Fthisisplayful%2Fshow%2F&page_show_back_url=%2Fphotos%2Ftags%2Fthisisplayful%2F&tags=thisisplayful&jump_to=&start_index="></param> <param name="movie" value=""></param> <param name="allowFullScreen" value="true"></param><embed type="application/x-shockwave-flash" src="" allowFullScreen="true" flashvars="offsite=true&lang=en-us&page_show_url=%2Fphotos%2Ftags%2Fthisisplayful%2Fshow%2F&page_show_back_url=%2Fphotos%2Ftags%2Fthisisplayful%2F&tags=thisisplayful&jump_to=&start_index=" width="400" height="300"></embed></object></div> <p>On Friday I jumped on the train to London to attend <a href="">Playful 2010</a>, a one day conference put on by <a href="">mudlark</a> of <a href="">World of Love</a> fame. 
Despite billing itself as a day of “cross-disciplinary frolicking” and featuring <a href="">designers</a>, <a href="">podcasts</a>, <a href="">discussions of narrative</a>, <a href="">iPhone-augmented paper games</a> and <a href="">Disco Snake</a>, the thing that stood out for me was a thread running through the talks that addressed a fundamental of game design:&nbsp;meaningful choices.</p> <p><a href="">Jonathan Smith</a> talked about the dangers of giving people too much freedom in his talk about the Lego Games. Lego is almost a shorthand for freedom: the easy-to-understand system of knobs and anti-knobs that allows 2 4x2 blocks to be combined in 9 million ways, an ultimate sandbox aspired to by games and virtual worlds like Second Life. This open, free system led <a href="">Travellers Tales</a> to add lots of open, free features to its early Lego games that were largely ignored by players who need boundaries and feedback from the game to determine ‘what I want versus what’s expected of me’. Choosing freedom and rebellion is more meaningful when it is clear that I am exercising my freedom and not doing&nbsp;the expected.</p> <p><a href="">Margaret Robertson</a> talked about the current sandbox game du jour, <a href="">Minecraft</a>, which has enough terror and threat in its horror-filled night to make the choices made during the day meaningful and to reward mastery of its sandbox — a sandbox that compelled Margaret to stay up until early in the morning carving her slides out of earth, building them out of wood and animating them with flowing water and flames burning down the assertion that “games&nbsp;= points”.</p> <p>It was this misguided assertion that <a href="">Sebastien Deterding</a> talked about in his look at the ‘gamification’ of the world around us.
When all that gamified web sites like foursquare do is allow the accumulation of points and badges, there are no meaningful choices, no mastery, no way to rebel against expectations, no play and no fun. Gamification results in loyalty schemes that are no more meaningful than <a href="">Progress Quest</a>.</p> <p>The importance of being able to rebel against expectations was echoed by <a href="">Alexis Kennedy’s</a> talk about delicious misery in <a href="">Echo Bazaar</a>, a social game that would be another meaningless progression to inevitable success if it weren’t for contrarian missions that allow players to opt in to getting their characters exiled for scandal or driven insane by demons. These missions inflict real harm on characters, but when properly signposted are the most enjoyed and shared missions: allowing players to be badass. When a game makes success inevitable, misery and failure is play and&nbsp;meaningful escape.</p> <p>Pat Kane, formerly of Hue and Cry and more recently author of <a href="">The Play Ethic</a>, gave a fascinating talk about wordplay, humour and his journey from disillusionment with the comedy industry to fascination with humour through the <a href="">Old Jews Telling Jokes’</a> stories of Jews laughing in the face of persecution. When misery and failure is inevitable, humour and play is rebellion: an ultimate, meaningful demonstration of freedom and humanity when all hope of victory&nbsp;is gone.</p>Disco Snake2010-09-15T17:25:00+01:00Jim,2010-09-15:2010/09/15/disco-snake/<div class="flex-video"><object style="height: 390px; width: 640px"><param name="movie" value=""><param name="allowFullScreen" value="true"><param name="allowScriptAccess" value="always"><embed src="" type="application/x-shockwave-flash" allowfullscreen="true" allowScriptAccess="always" width="640" height="390"></object></div> <p>Rock Band does a great job of inspiring people to play music; can you develop a game that inspires composition?
<a href="">Lumines</a> and <a href="">Rez</a> create music while you play; can you make games where music creation is the goal, not a side effect? <a href="">Pictionary</a> does a great job of using game mechanics to overcome creative block; can you use other game-like constraints to inspire creativity? These were among the questions I asked at <a href="">GameCamp 2</a> a few months ago, and I was keen to explore them at <a href="">Music Hack Day London</a> a&nbsp;week ago.</p> <p>The spectrum of potential game-like musical composition tools is huge, ranging from traditional recognisable music interfaces like keyboards and step sequencers at one end, through things that are designed to be both music interfaces and games like <a href="">Fractal</a>, Rez and Lumines in the middle, to things that are recognisably games at the other. While the middle ground is incredibly interesting, 24 hours at a hack day isn’t really long enough to develop a brand new revolutionary hybrid game/music interface, so instead I decided to repurpose an existing game as a sequencer and picked the simplest one I could think of — <a href="">Snake</a>.</p> <p>With the interface chosen, the next thing to do was to think about how to map the interface to music composition. The core mechanic in Snake is eating food placed on a grid. Grids have a venerable history in music as <a href="">step sequencer</a> interfaces, with time growing from left to right and pitch or samples selected on the Y axis. It seemed natural to map food position to note parameters in a similar way. Using the order in which food is selected to determine the order of notes played frees up the X axis to map to a parameter instead of time and also makes playing the game feel more like a progression through a composition: each piece of food adds to the continually looping sequence; the music plays, the composition progresses, and there is no turning back or revising.
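The mapping can be sketched in a few lines of Python. This is an illustration of the design described here, not the actual Disco Snake code, which was written in Processing; the grid size and scale are made up.

```python
# Sketch of Disco Snake's note mapping: the grid acts as a step sequencer
# where the order food is eaten gives the note order, the Y axis picks the
# pitch and the X axis sets the velocity. Illustrative values throughout.

GRID_W, GRID_H = 16, 16
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale as MIDI note numbers

def food_to_note(x, y):
    """Map a piece of food at grid position (x, y) to (pitch, velocity)."""
    pitch = SCALE[y * len(SCALE) // GRID_H]
    velocity = (x + 1) / GRID_W  # food at the far left is almost a rest
    return pitch, velocity

# Eating food appends to the continually looping sequence: no going back.
sequence = [food_to_note(x, y) for x, y in [(15, 0), (0, 8), (7, 15)]]
assert sequence[0] == (60, 1.0)     # first note: full velocity
assert sequence[1] == (67, 0.0625)  # far-left food: a pseudo-rest
```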
By mapping the X axis to velocity, pseudo-rests can be added to the sequence by selecting food on&nbsp;the left.</p> <p>Selecting notes requires some deviation from the normal Snake mechanics, which only make a single piece of food available at a time. This restriction would mean that players wouldn’t compose music, but simply reveal it as they ate one piece of food at a time. At the other end of the spectrum, turning every square into food would mean that the next selected note would have to be adjacent to the last note, which is also overly restrictive. Making a limited number of pieces of food available at any time provides a nice middle ground, allowing the player some freedom in the choice of the next note selected, but not total freedom, a restriction which can lead to&nbsp;serendipitous melodies.</p> <p>The other major mechanic in Snake is colliding with your tail, which ends the game, but becomes harder to avoid as you eat food and get longer. One option would be to use that mechanic to intentionally end the game and the composition, but instead I mapped it to sample selection, allowing the player to switch between sounds and start a new sequence to build up multi-timbral polyphonic music. By making the world toroidal, players can simply let the snake circle around the world when they have&nbsp;finished composing.</p> <p>A lot of these design decisions came out while implementing the game using <a href="">processing</a>/<a href="">processing.js</a> and <span class="caps"><span class="caps">HTML</span></span> 5 audio — a technology stack <a href="">I’d played around with a bit previously</a>, but wanted to explore further. In the end, for this kind of application I don’t think Processing brings enough benefit to outweigh the difficulties it adds to debugging. When running on top of Java, errors are often reported as mangled Java call stacks, and when running in the browser, different errors appear as mangled JavaScript.
While I can see the attraction to language designers and implementers of building on top of existing technology, it often results in having to implement in one language and debug horrible unrecognisable code in another. Incompatibilities are also horrible. With a couple of hours to go I had the entire game running on Java, but was presented with a blank canvas and no useful Firebug errors when I exported to processing.js, and had to perform a binary search by commenting out chunks of code to find the error.&nbsp;Not pleasant.</p> <p><span class="caps"><span class="caps">HTML5</span></span> audio is also a somewhat fragile technology. Generating an Audio element for each sample playback event leads to current browsers grinding to a halt, while resetting and restarting audio elements often causes glitches and delays. Another problem is that JavaScript timers don’t provide enough accuracy for tight sequence playback timing. In the end I rebranded both bugs as features by switching from very transient drum samples, which sounded messy, to dubby bass and melancholy bell samples that work quite nicely with glitches and unintentionally&nbsp;loose timing.</p> <p>At <span class="caps"><span class="caps">10PM</span></span> on Saturday night everything had come together enough for me to lose myself in an hour of ambient bleepy electronica, and by the time the presentations started at <span class="caps"><span class="caps">3PM</span></span> on Sunday, <a href="">Disco Snake</a>&nbsp;was done.</p> <p>I’d like to thank all of the organisers and hackers that made Music Hack Day London a wonderful experience, and I have been pleasantly surprised at the positive reaction that Disco Snake has generated over the last week.
The space between music interfaces and games is a very fertile one that I’ll be exploring further in the future, and while it’s not there yet, I hope <span class="caps"><span class="caps">HTML5</span></span> audio fulfils its promise of bringing interactive music applications to everyone on the web in the very&nbsp;near future.</p>HTML 5 Audio Redux2010-09-04T12:01:00+01:00Jim,2010-09-04:2010/09/04/html-5-audio-redux/<p>My recent experiments into using <a href="">Processing.js</a> and <a href=""><span class="caps">HTML5</span> audio</a> to generate multimedia web applications <a href="">didn&#8217;t get very far</a>. I first tried generating a new <span class="caps">HTML</span> 5 audio element for each audio event, which quickly caused the browser to grind to a halt, and my attempts to reuse audio elements by resetting the playback position didn&#8217;t seem to work, leading me to conclude that <span class="caps">HTML</span> 5 audio was only really useful for playing back long audio files, not for building sequencers that play back many short samples. When I spoke to <a href="">@rem</a> about my findings, he was convinced that resetting audio elements should be possible, and this weekend&#8217;s <a href="">Music Hackday London</a> has provided the perfect incentive and opportunity to dust off my experiments and start tinkering again. An hour in and sure enough I&#8217;ve managed to get audio elements to reset: it seems that the trick is to set currentTime after calling play() on the element, something that seems very counter-intuitive but works (at least in Firefox 3.6.8 and Safari 5.0.1 on <span class="caps">OS</span> X 10.6.4).
Now that I have reliable sample playback, it&#8217;s time to start playing around with more interesting interfaces in Processing, and there are 26 hours of hacking left: game&nbsp;on!</p>Some Games Never Die2010-08-02T01:30:00+01:00Jim,2010-08-02:2010/08/02/some-games-never-die/<p><img alt="Law screenshot" src="" /></p> <p>While goofing around asking <span class="caps"><span class="caps">UK</span></span> indie game developers for their top 5 games of all time at <a href="">World of Love</a>, I was very pleased to hear that the amazing <span class="caps"><span class="caps">ZX</span></span> Spectrum strategy game <a href="">Chaos</a> featured in&nbsp;multiple lists.</p> <p>I love Chaos so much that I developed Law, a Chaos remake, at university as a way to learn Java. It turns out that a bunch of other people love Chaos too: a year or so ago one of the administrators of the Chaos Remakes Wiki got in touch with me to say that he’d been able to pull a copy of my original web site from an archive and wondered if I still had a copy of the game itself. The reminiscing at World of Love was enough for me to finally wade through decade-old backups looking for a copy of Law, and I’m happy to say that I found a copy and that it still runs on modern&nbsp;Java runtimes.</p> <p>If you remember the original Chaos, head over to the <a href="">Chaos Remakes Wiki</a> and indulge in some retro gaming nostalgia. If you’d like to tinker with it (please fix the yellow text on grey dialogs and add <span class="caps"><span class="caps">AI</span></span> wizards!)
the code is available under the <span class="caps"><span class="caps">GPL</span></span> license from <a href="">BitBucket</a>.</p>World of Love2010-06-26T17:03:00+01:00Jim,2010-06-26:2010/06/26/world-love/<p>Thanks to a tip-off from <a href="">David Hayward</a>, I managed to snag a last-minute ticket for the <a href="">World Of Love</a> independent games conference organized by <a href="">Pixel Lab</a>, sponsored by <a href="">Preloaded</a> and hosted by <a href="">Channel 4</a>. I’m glad&nbsp;I did.</p> <p>The day kicked off with <a href="">Chris Delay</a> from Introversion demonstrating <a href="">Subversion</a>, a typically minimal game inspired by Mission Impossible (the <span class="caps"><span class="caps">TV</span></span> show, not the movies). It ticked a huge number of boxes for me as a game, but the most interesting thing about the demo is that the game is nowhere near finished, and it’s not often you get to see a game this early in production. I suggested to Chris that the game should feature Sporty Spy, Posh Spy, Ginger Spy and Scary Spy. Look out for those in the&nbsp;final version.</p> <p>Possibly the most inspiring session of the day was <a href="">Terry Cavanagh’s</a> talk on game jams. Setting the scene with a description of his stalled, over-ambitious uber project, Terry then rattled off an amazing list of games he made at game jams, like <a href="">Sinewave Ninja</a>, which ultimately morphed into the amazing <a href="">VVVVVV</a>. I’ve had a ton of fun building <a href="">mashups</a> at <a href="">Hack Days</a>, so I can imagine adding games to the mix would be amazing. I predict game jams will be coming to Brighton in the very&nbsp;near future…</p> <p>Definitely the most amazing talk of the day was by <a href="">Eskil Steenberg</a>, who was delighted to be talking at the first conference dedicated to his game, Love.
After a brief tour around a beautiful procedurally generated and fully dynamic world, Eskil showed off the amazing tools that have allowed him to build an online game on his own. A suite of networked art tools allowed him to model a 3D object, procedurally <span class="caps"><span class="caps">UV</span></span> map and texture it, and then demonstrate how altering the model would result in updated <span class="caps"><span class="caps">UV</span></span> maps and procedural textures in real time. It was like being given a tour of the future, and when Eskil ran out of time I fully expected him to pull out a time machine to allow him to&nbsp;keep going.</p> <p>The rest of the day included lots of really useful advice for indie game development, including talks on business, finance, law and marketing, which ended with the <a href="">Frozen Synapse</a> developers bursting in to blast the attendees with water pistols to demonstrate <a href="">Kieron Gillen’s</a> last marketing rule — you’re creative, so be creative&nbsp;with marketing.</p> <p>Overall I prefer the many-tracks, everyone-gets-involved format of <a href="">GameCamp</a>, and at times the talks devolved into extended biographies, but mostly it was a fun, thought-provoking and inspiring day. I spent the evening asking the attendees for their top 5 games (it felt like it was that kind of event) and was very pleased to hear <a href="">Chaos</a> featured in multiple lists (it’s one of my favourites too; I developed a <a href="">remake</a> at university).
On the strength of the <span class="caps"><span class="caps">UK</span></span> indie scene demonstrated at World of Love, I’m sure we’ll be seeing more indie games in top 5 lists in the&nbsp;near future.</p>HTML 5 multimedia2010-06-07T23:14:00+01:00Jim,2010-06-07:2010/06/07/html-5-multimedia/<p>I’ve been morbidly fascinated by the <a href="">Rich Internet Application</a> technology bloodbath for a while now: <a href="">Whirled</a>, <a href="">Metaplace</a> and others tried to stuff virtual worlds into web pages using Flash, <a href="">Second Life</a> stuffed Flash into virtual worlds via <a href="">Webkit</a>, <a href="">Unity</a> stuffed <a href="">Mono</a> into a 3D engine and then took on the world, and <a href="">Silverlight</a> and <a href="">Moonlight</a> stuffed the <span class="caps"><span class="caps">CLR</span></span> into web browsers and <a href="">Erik Meijer</a> stuffed a <span class="caps"><span class="caps">CIL</span></span> interpreter straight&nbsp;into JavaScript.</p> <p>All good fun and there are fortunes to be won and lost to be sure, but the smart money seemed to be on waiting for the dust to settle and then using the winning technology. Recently, however, amazing technologies like <a href="">V8</a>, <a href="">Node.js</a> and the resulting browser JavaScript arms race have been adding weight to the Google viewpoint that all you need is JavaScript: a philosophy made more pragmatic by Apple’s <a href="">decree</a> that all you get&nbsp;is JavaScript.</p> <p>A week or so ago I decided to test the hypothesis by building a drum machine using only <a href=""><span class="caps"><span class="caps">HTML</span></span> 5</a> and JavaScript. My first discovery was that while the canvas element is perfectly capable, it’s a very low-level <span class="caps"><span class="caps">API</span></span>, even for building something as rudimentary as a step sequencer interface.
After looking at a number of drawing libraries, I settled on <a href="">processing.js</a> as a higher-level drawing <span class="caps"><span class="caps">API</span></span>, something I’ve been meaning to play with since we used it to build <a href="">SLorpedo</a> at Hack Day a few years ago. Processing.js is a neat hack that, despite an incomplete <span class="caps"><span class="caps">API</span></span> and some subtleties around casting, does a great job of running Processing sketches within a browser without a plugin. It also uses a sloppy parser, enabling you to drop arbitrary JavaScript into your Processing sketch, which makes it easy to just create Audio() objects within the sketch to play back audio. Unfortunately, while it was easy to add audio playback, the playback itself was pretty disappointing: Firefox just spluttered and belched sadly, while Safari did a decent job of playing beats for a couple of minutes before its timing went to hell and then the browser crashed. The shiny future may yet be <span class="caps"><span class="caps">HTML</span></span> 5 and JavaScript, especially when the <a href="">experimental extensions to Firefox</a> become widely supported, but we’re not&nbsp;there yet.</p>Always Watching Out Now!2010-05-17T10:14:00+01:00Jim,2010-05-17:2010/05/17/always-watching-out-now/<p>Always Watching is now available to download from <a href=";affId=1108120">iTunes</a>, <a href="">amazon</a>, <a href="">7digital</a>, <a href="">juno download</a> and <a href="">songrilla</a> — buy&nbsp;it now!</p> <p>If you’d like to remix the song or video, you can get the parts under a <a href="">Creative Commons Sampling Plus Licence</a> via <a href="">BitTorrent</a>.</p> <p>Read more about the making of the track and the encroaching surveillance state <a href="">here</a>.</p>GameCamp 22010-05-10T20:56:00+01:00Jim,2010-05-10:2010/05/10/gamecamp-2/<p>A couple of years ago, <a href="">Aleks Krotoski</a> and a group of friends spanning the web,
games and technology fields decided to bring the <a href="">FOOCamp</a> and <a href="">BarCamp</a> model of <a href="">unconferences</a> to the world of games and invited me along. I had a great time at the original <a href="">GameCamp</a> and missed it last year, so when I heard about the return of <a href="">GameCamp</a> a couple of months ago, I jumped on a ticket and eagerly got on the train early on Saturday morning to participate in another inspiring, mind-expanding day. I&nbsp;wasn’t disappointed.</p> <p>The cultural differences between the worlds of games and the web were touched on in a couple of sessions. “The <span class="caps"><span class="caps">PC</span></span> Is Dead, Long Live The <span class="caps"><span class="caps">PC</span></span>” quickly turned into a discussion of open platforms like the <span class="caps"><span class="caps">PC</span></span> versus closed, console-like channels. While the long-term view was that in 50 years open will prevail, the present sees controlled channels like <span class="caps"><span class="caps">XBLA</span></span>, the various Apple stores and Steam in the ascendant. Some people will wait until titles are available on a particular channel, suggesting that they offer some advantages in terms of convenience, support or peace of mind. While this may be <span class="caps"><span class="caps">OK</span></span> where there is competition between channels, games cannot be as easily distributed via multiple channels, with platforms like Steam requiring relatively invasive changes to be made to allow things like overlay menus to be displayed over game UIs. Where choice between channels is not possible, the whims of platform controllers can make development extremely risky, as policy changes or simply understaffing can delay or scupper&nbsp;a release.</p> <p>Steven Goodwin asked why open source can build Linux, but not games.
While the first answer is that Linux is a platform that can be shared by anyone whereas games are more individual expressions of creativity, that still raises the question: why aren’t there more open source game platforms? Games seem to be slowly moving towards more shared and open infrastructure, but still lack the equivalent of the <a href=""><span class="caps">LAMP</span></a> stack; <span class="caps">HTML</span>, <span class="caps">CSS</span> and JavaScript provide a shared web platform that allows web developers to share tips and tricks&nbsp;at BarCamps.</p> <p>Luckily, the inability of game developers to give talks on the equivalent of squid configuration leaves room for lots of interesting talks on higher level game design concepts. Two of the most interesting were hosted by <a href="">Margaret Robertson</a>. The first was on Curiosity — an urge which is hard to explain in classical or behavioural economics, and which is more powerful when the unknown is close and definite (a closed box on a table that must be opened) than when the unknown is distant and amorphous. The session turned into an interesting comparison between risk, where outcomes are known, and curiosity, where the outcome is unknown: how both can be manipulated to make games more compelling, and whether that manipulation could be a force for good as well as evil. The second of Margaret’s talks was about a forthcoming audio only binaural iPhone game and the challenges of navigating a world without light, which sounds fascinating,&nbsp;if hard.</p> <p><a href="">Tom Armitage</a> hosted a session which asked “What Do Cows Call Thermodynamics?”. The answer is, probably nothing, but it still affects them. So it is with games, which are ultimately defined by their rules and mechanics, which can be used very creatively.
The rule that Yorda is afraid if you don’t hold her hand dramatically shapes your relationship with her; the rule that you aim better while doing a stunt is what turns Action Quake into a John&nbsp;Woo simulator.</p> <p>Another arc that ran through GameCamp 2 was a discussion about creativity. Pictionary and Scrabble were used as examples of games that foster creativity, while players of World of Warcraft show amazing creativity despite the game, in a session that had many echoes of the Playfish talk at <a href="">Develop</a> last year. At the other end of the spectrum, a session on procedurally generated content asked: could algorithms create good content and, if they could, how would&nbsp;they know?</p> <p>My “Social Music Composition Games?” session at the end of the day continued this arc. I first played Rock Band at the first GameCamp and have since played it so much that I ended up starting 100 robots with Max. Since then I’ve been playing with tools like <a href="">Ableton</a> that have very game-like interfaces and games like <a href="">Lumines</a> that have very sequencer-like interfaces. Could you use interfaces like these to build games that do for music composition what Rock Band does for music performance? If so, how would you judge the compositions? Something that combined the interface of Lumines or the <a href="">Tenori-On</a> with <a href="">Digg</a>-like filtering and tagging might be very&nbsp;interesting here…</p>Always Watching The Watchers2010-05-01T15:44:00+01:00Jim,2010-05-01:2010/05/01/always-watching-watchers/<p>On May 17th, the first <a href="">100 robots</a> single, <a href="">Always Watching</a>, will be released online via <a href="">Amazon</a>, iTunes, emusic, Rhapsody, napster, spotify and many more&nbsp;digital outlets.</p> <p>Always Watching has been one of the most satisfying projects I’ve ever worked on.
Using a commodity <span class="caps">PC</span> and the incredible <a href="">Ableton Live</a> music production software, <a href="">Max</a> and I were able to compose, record, arrange and produce the track, sharing versions online using the free and open source <a href="">Subversion</a> revision control software and downloading freely reusable samples from the amazing <a href="">freesound</a> online&nbsp;sample site.</p> <p>With the music done we were able to produce a <span class="caps">DIY</span> video for the track, filmed and directed by our friend <a href="">Chris Cole</a>, again using digital technology that would have been out of reach of anyone but film studios a few years ago. After an incredibly fun day running around Brighton recording shots, we could again get additional material from the internet, in this case footage of internet luminaries Chris Anderson, John Battelle, Sherry Turkle and Lee Tien interviewed about online privacy for the <a href=""><span class="caps">BBC</span> virtual revolution</a> series, which were released online with a permissive license that allowed us to&nbsp;reuse them.</p> <p>With the video in the can it was time to promote the track online using the social network sites <a href="">MySpace</a>, <a href="">Facebook</a>, <a href="">Sound Cloud</a> and <a href=""></a>, make a remix pack available via <a href="">Bit Torrent</a> and sell the track via <a href="">Zimbalam</a>, a site that makes music available for sale on all of the major online stores, using artwork that we found on <a href="">Flickr</a> and that Rainer Messerklinger kindly let&nbsp;us use.</p> <p>It really has been an amazing and eye-opening experience.
Using cheap digital technology, the internet and a <span class="caps">DIY</span> spirit we have been able to create, promote and sell our music to the world, and while I don’t expect to make a ton of money from selling the track, being able to sell it&nbsp;is important.</p> <p>Computers and the internet have put the means of production back into the hands of musicians, creatives and other workers in the digital economy. Whereas musicians in decades past would have had to rely on recording facilities and production and distribution chains owned and controlled by major labels, musicians now can choose to do it all themselves and potentially get much better deals. Historically, record deals have been incredibly unfair on artists, who have to pay for their recordings from their royalties but still don’t own their recordings once they are paid for. Right now sites like Zimbalam, TuneCore and <span class="caps">CD</span> Baby are fiercely competing to provide the best deal to musicians who are doing&nbsp;it themselves.</p> <p>The last time ownership of the means of production changed hands from workers to factory owners, the <a href="">disenfranchised rose up to smash the machines</a> until they were suppressed by the government. This time the <a href="">disintermediated</a> are turning to the government to defend and enforce the old business models by crippling the new machines that are handing the means of production back to&nbsp;the workers.</p> <p>Always Watching is a song about biometrics, click tracking, online privacy, <a href="">Phorm</a>, governmental data loss, corruption and the increasingly Orwellian surveillance state. Even while we’ve been recording it the state has rushed through the <a href="">Digital Economy Bill</a>, which further endangers our digital rights and freedoms.
With a general election coming up, it’s even more important to always watch&nbsp;the watchers.</p>Battle of the Battle of the Bands2010-03-25T19:11:00+00:00Jim,2010-03-25:2010/03/25/battle-battle-bands/<p>Somehow, <a href="">100 robots</a> have ended up playing two different Battle of the Bands competitions on consecutive nights in Brighton: at <a href="">The Providence on April 2nd</a> and <a href="">The Lectern on April 3rd</a>.</p> <p>So, which band is the best and which battle of the bands is better? Early indications favour The Providence, which seems to have a better <span class="caps">PA</span>, but The Lectern is near the University. It could go&nbsp;either way.</p> <p>Anything you can do, I can do meta: join us for both nights of the Battle of the Bands tour and see who we declare the winner of the Battle of the Battle of&nbsp;the Bands!</p>FOSDEM X: The Movie2010-03-14T22:06:00+00:00Jim,2010-03-14:2010/03/14/fosdem-x-movie/<p>A video of my <span class="caps">FOSDEM</span> talk about Mono in Second Life and our plans for the future of scripting is now online (the slides are also available <a href="">here</a>):</p> <div class="flex-video"><iframe width="640" height="360" src="" frameborder="0" allowfullscreen></iframe></div> <p>Watching it back, I was surprised to hear myself say “Hooray!”, “Shit” and “Crap” quite&nbsp;so often…</p> <p>While you’re catching up on <span class="caps">FOSDEM</span> talks, I recommend <a href="">Miguel</a>, <a href="">Jeremie</a> and <a href="">Alan</a>’s talks from the Mono track.
Unfortunately it doesn’t look like the very interesting talks on the <span class="caps">XMPP</span> track are available online, but I’m going to be catching up on <a href="">lots of the talks I missed out on</a>, including the keynote talks on Saturday morning, which I missed due to the very enjoyable beer event on&nbsp;Friday evening…</p>FOSDEM X2010-02-10T13:52:00+00:00Jim,2010-02-10:2010/02/10/fosdem-x/<div class="flex-video"><object width="400" height="300"> <param name="flashvars" value="offsite=true&lang=en-us&page_show_url=%2Fsearch%2Fshow%2F%3Fq%3Dfosdem%26d%3Dtaken-20100204-20100208%26ss%3D0%26ct%3D0%26mt%3Dall%26adv%3D1&page_show_back_url=%2Fsearch%2F%3Fq%3Dfosdem%26d%3Dtaken-20100204-20100208%26ss%3D0%26ct%3D0%26mt%3Dall%26adv%3D1&"></param> <param name="movie" value=""></param> <param name="allowFullScreen" value="true"></param><embed type="application/x-shockwave-flash" src="" allowFullScreen="true" flashvars="offsite=true&lang=en-us&page_show_url=%2Fsearch%2Fshow%2F%3Fq%3Dfosdem%26d%3Dtaken-20100204-20100208%26ss%3D0%26ct%3D0%26mt%3Dall%26adv%3D1&page_show_back_url=%2Fsearch%2F%3Fq%3Dfosdem%26d%3Dtaken-20100204-20100208%26ss%3D0%26ct%3D0%26mt%3Dall%26adv%3D1&" width="400" height="300"></embed></object></div> <p>Last weekend I went to the 10th Free and Open Source Software Developers’ European Meeting in Brussels. This year was the first time that <a href=""><span class="caps">FOSDEM</span></a> had hosted a track on <a href="">Mono</a>, so I went along to find out what’s going on with Mono, tell the Mono folk what our plans are for C# and fill in&nbsp;some gaps.</p> <p>I started Saturday on the monitoring track, which included a terrible talk about an interesting tool that I hadn’t heard of: <a href="">SystemTap</a> — a scriptable system monitoring tool designed to allow diagnosis of problems at run time on production servers without the usual instrument, compile, analyse loop.
stap looks really interesting and useful as a tool for augmenting or replacing our current simulator&nbsp;performance tools.</p> <p>After the stap talk I headed over to the <a href=""><span class="caps">XMPP</span></a> track for the rest of the day and saw some great talks on <a href=""><span class="caps">BOSH</span></a>, <a href="">onsocialweb</a>, <a href="">pubsub</a>, <a href="">strophe.js</a>, <a href="">collecta</a> and <a href="">node.js</a>. Federated, open social networks seem to be a big thing at the moment (@blaine was talking about them at <a href="">@scalecampuk</a>) and there was lots of interest in using <span class="caps">XMPP</span> to build them, as it already has standards for identity, presence, friends and events. Given that Jabber and <span class="caps">XMPP</span> are a decade old, it makes you wonder why the standards weren’t used the first time round. A possible reason is that <span class="caps">XMPP</span> doesn’t have its Rails/Django yet and still looks pretty clunky to work with, although Strophe.js may help. Another reason may be that <span class="caps">XMPP</span> hasn’t spoken the language of the web up until now, although <span class="caps">BOSH</span> may be the bridge that’s&nbsp;needed there.</p> <p>While everyone else was thinking about using <span class="caps">XMPP</span> to build Twitter and Facebook, I started thinking about what an open, federated virtual world platform built on <span class="caps">XMPP</span> might look like. I wasn’t the only one.
The <a href="">realXtend</a> folks turned up to show off their latest, built-from-the-ground-up viewer, which uses <span class="caps">XMPP</span> and jingle for voice and <span class="caps">IM</span> integration, and were obviously thinking along the same lines. As realXtend seem to be moving away from <span class="caps">SL</span> tech now, expect an <span class="caps">XMPP</span> based back end from&nbsp;them soon…</p> <p>I spent Sunday camped out in the Mono room, which was interesting from a cultural point of view. Miguel de Icaza seems to have completed his transition from champion to enemy of freedom in the open source community’s eyes. Last time he was at <span class="caps">FOSDEM</span> he was accused of talking about “Coca Cola” when showing off the closed source <a href="">Unity</a> engine that uses Mono, so he stayed strictly to open technologies in his talk this year. Luckily, due to <a href="">Microsoft’s Community Promise</a> and open sourcing of the <a href=""><span class="caps">DLR</span></a>, <a href="">IronPython</a> and <a href="">IronRuby</a>, that includes a lot of the .<span class="caps">NET</span> platform and gave him lots to talk about. Most of the legitimate worries about patents around Mono have been resolved, but talking about C# and the <a href=""><span class="caps">CLI</span></a> at <span class="caps">FOSDEM</span> still requires a poster asking people to leave their religion at the&nbsp;door, apparently.</p> <p>In my Second Life talk, I gave a summary of the interesting things we did to get <span class="caps">LSL</span> on Mono working in 2008 and then outlined our plans for C# in the future, including lots of question marks around how we’re currently planning to implement them.
Lots of my questions were answered after the talk and it turns out that the Unity developers are wrestling with the same problems at the moment, so we should be able to work together over the next few months to make a lot&nbsp;of progress.</p> <p>Overall <span class="caps">FOSDEM</span> was hugely informative and enjoyable and I have a big shopping list of exciting new technologies to play with over the next few months. Hopefully the Mono room will become a regular fixture and we’ll be able to head back&nbsp;next year.</p>An Open Source, Guitar Mounted, Multi Touch, Wireless, OSC Interface for Ableton Live2009-12-17T10:58:00+00:00Jim,2009-12-17:2009/12/17/open-source-guitar-mounted-multi-touch-wireless-osc-interface-ableton-live/<p><img alt="Guitar mounted iPhone controller" src="" /></p> <p>(<a href="">100 robots images</a> by <a href="">Steve Marshall</a>)</p> <p>Ever since playing with iPhones as music interfaces with the <a href="">London Community iPhone OSCestra</a> at Open Hack London in May, I’ve been wondering how I could use my iPhone as a controller in my rock/electronic band <a href="">100 robots</a>. The 100 robots set up has Max and me playing live drums and guitar over a number of tracks played from <a href="">Ableton Live</a>. If I could mount an iPhone on the guitar I could manipulate the tracks playing from Ableton, making the whole experience more varied and live and less like two people playing over a&nbsp;backing track.</p> <p>The biggest problem with using the OSCestra set up with 100 robots is that we run Ableton on Windows with 100 robots so that we can use a number of plugins that aren’t available on <span class="caps">OS</span> X.
This stopped us using the <span class="caps">OS</span> X <a href="">Osculator</a> to convert <span class="caps">OSC</span> data from Mrmr into <span class="caps">MIDI</span> data that we could feed to Ableton — the really simple and easy solution that I recommend if you’re using <span class="caps">OSC</span> and Ableton on <span class="caps">OS</span> X.</p> <p>Some quick Googling showed that we could potentially use the open source and incredibly powerful pd to read <span class="caps">OSC</span> and send <span class="caps">MIDI</span> to Ableton, but pd seemed a bit heavyweight and complex to use just for controller mapping in a live environment, which needs to be stable and predictable under heavy load while streaming six audio tracks and recording two&nbsp;tracks live.</p> <p>After some more research I discovered <a href="">LiveAPI</a>: an open source project that gives access to the Python interpreter embedded in Ableton Live. LiveAPI allows you to write Python code which listens to events from Ableton and allows Ableton to be controlled from Python code.
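</p>

<p>At its heart LiveOSC is a small dispatcher that maps OSC address strings to Python handler methods. Here is a rough, self-contained sketch of that shape (illustrative only: the class mirrors LiveOSC’s callbackManager.add() convention, but it is not LiveOSC’s actual source and nothing here talks to Ableton):</p>

```python
# Illustrative sketch of the dispatch LiveOSC performs: OSC address
# patterns are mapped to Python handlers. The real LiveOSC runs inside
# Ableton; this stand-in just demonstrates the shape of the mapping.
class CallbackManager:
    def __init__(self):
        self.callbacks = {}

    def add(self, callback, address):
        # Same convention as LiveOSC: add(handler, "/osc/address")
        self.callbacks[address] = callback

    def handle(self, msg):
        # msg is a list of [address, typetags, arg0, ...], so handlers
        # read their first argument from msg[2]
        callback = self.callbacks.get(msg[0])
        if callback is not None:
            callback(msg)


received = []
manager = CallbackManager()
manager.add(received.append, "/mrmr/pushbutton/0/iPhone1")
manager.handle(["/mrmr/pushbutton/0/iPhone1", ",i", 1])
# received[0][2] is now 1: the button value sent by Mrmr
```

<p>The [address, typetags, arg0, …] message layout is why handlers read their first widget value from msg[2].</p>

<p>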
Conveniently, LiveAPI also includes LiveOSC, an <span class="caps">OSC</span> server that quickly allows you to map <span class="caps">OSC</span> messages from applications like Mrmr to <a href="">Python</a> methods that use LiveAPI to&nbsp;control Ableton.</p> <p>In the end it only took a couple of hours and a few lines of Python code to rig up Mrmr on the iPhone to control Ableton on Windows, and a few minutes more to build a guitar mount for the iPhone from a cable tie and a piece of&nbsp;gaffa tape.</p> <p>Using the iPhone live with&nbsp;100 robots:</p> <p>(Cinematography by <a href="">David Packer</a>)</p> <p>Despite being a really quick hack to build, LiveAPI is somewhat fiddly to set up, so I thought I should document the process of wiring things up. If you’d like to build your own open source, guitar mounted, multi touch OSC interface for Ableton Live 8 running on Windows, follow&nbsp;these instructions:</p> <ol> <li>Install <a href="">Mrmr</a> on your iPhone or&nbsp;iPod touch</li> <li>Design your Mrmr interface using a text editor. If you have a Mac, you can use the <a href="">interface builder</a>. The 100 robots mmr file is <a href="">here</a></li> <li>Upload the mmr file to a&nbsp;web server.</li> <li>Connect the iPhone to the same network as the computer&nbsp;running Ableton.</li> <li>Run Mrmr on the iPhone and download the mmr file from your web server to get your interface running on&nbsp;the iPhone</li> <li>Download <a href="">LiveOSC</a></li> <li>Unpack the LiveOSC zip into a new LiveOSC folder created inside the Resources/<span class="caps">MIDI</span> Remote Scripts directory in your&nbsp;Ableton installation.</li> <li> <p>Edit Resources/<span class="caps">MIDI</span> Remote Scripts/ to add new callbacks for Mrmr to the __init__ method of the LiveOSCCallbacks class.
Widgets are numbered sequentially from 0 in the mmr file, so this line registers a callback for the first widget in the file. The 100 robots callbacks look&nbsp;like this:</p> <pre><code>""" 100robots callbacks """
self.callbackManager.add(self.toggleBass, "/mrmr/pushbutton/0/iPhone1")</code></pre> </li> <li> <p>Add methods to Resources/<span class="caps">MIDI</span> Remote Scripts/ to respond to the messages from Mrmr. These can use LiveAPI freely to control Ableton. The 100 robots callbacks toggle the first parameter of effect racks on different tracks to turn them on&nbsp;and off:</p> <pre><code>def toggleBass(self, msg):
    """Called when a /mrmr/pushbutton/0/iPhone1 message is received."""
    track = 38
    device = 0
    parameter = 0
    value = msg[2]
    LiveUtils.getSong().visible_tracks[track].devices[device].parameters[parameter].value = value</code></pre> </li> <li> <p>Select LiveOSC as a control surface in the Ableton preferences. This will load the script and set up your mappings. Interact with the controls in Mrmr and see Ableton&nbsp;Live respond!</p> </li> </ol> <p>The full, modified 100 robots <a href=""></a>.</p> <p><a href="">My presentation on the hack</a> at the <a href="">£5 App Christmas Special</a> includes more details on the Ableton Live routing&nbsp;we use.</p>@scalecamp2009-12-07T15:49:00+00:00Jim,2009-12-07:2009/12/07/scalecamp/<p>On Friday I jumped on the train to London to attend the first <a href="">scalecampuk</a>, an unconference about scalability, at the <a href="">Guardian</a> offices.</p> <p>The sessions were all very interesting and mostly very relevant.
I learned new things about <a href=""><span class="caps">XSS</span></a> and <a href=""><span class="caps">CSRF</span></a> and <a href="">Django’s defences against them</a> from <a href="">Simon Willison</a>; new things about <a href="">messaging</a>, <a href="">pubsub</a>, queues and data stores (process one message at a time, use message hospitals, send URLs to unavailable data that can be polled for with JavaScript and that just check memcache entries); and lots about <a href="">Varnish</a> and its use at <a href="">Wikia</a> from <a href="">Artur Bergman</a> (Wikia runs off 18 apaches and 8 varnishes with <span class="caps">60GB</span> of <span class="caps">RAM</span> and SSDs to serve 25 million pages and 950Mbps at peak; Varnish is generally better than <a href="">squid</a>, but you need a&nbsp;modern kernel).</p> <p>Lots of the talks were about moving storage, caching and queuing out of the application and just writing a small piece of business logic to tie them together. Against this background, <a href="">Alex Evans’</a> talk about the back end for Media Molecule’s Little Big Planet stood out like a sore thumb. Having not enjoyed using a Java web stack, Alex has just rewritten the whole of the back end as proprietary technology, in a single binary, in order to know the code from end to end. While it may be true that having custom physics or rendering middleware might distinguish Little Big Planet from other games, I can’t believe that custom technology to serve <span class="caps">HTTP</span> requests is going to be a competitive advantage. I hope Alex’s good ideas become <a href="">Redis</a> contributions rather than a maintenance nightmare and a barrier&nbsp;to agility.</p> <p>The lightning talks were also very good.
Simon’s “ScaleFail” talk about the <a href="">Guardian <span class="caps">MP</span> expenses app</a> was hilarious (lesson: do load testing) and Gareth’s talk about <a href="">Dumbo</a> (a Python Hadoop client) was&nbsp;very useful.</p> <p>At times it felt like the talks were suited to an ops audience, but “Devs should know about this!” was a regular refrain. Don’t worry: I listened and learned a lot. Thanks to everyone who made it a&nbsp;great day.</p>4 Robot Attacks!2009-11-21T18:06:00+00:00Jim,2009-11-21:2009/11/21/4-robot-attacks/<p>Incredibly, 100 robots have 4 gigs lined up in the next 3 weeks: tomorrow we’re playing at an electro/rock night at <a href="">The Freebutt</a> with <a href="">Bang Bang Eche</a>, <a href="">Son of Robot</a> and <a href="">labasheeda</a>, then next Thursday we’re playing at a more hip hop themed night at <a href="">The Hope</a> with <a href="">Tactical Thinking</a> and <a href="">L-Mo</a>. On the 2nd of December we’re playing at the music hack themed <a href="">£5 App Christmas Special</a>, where I’ll also be giving a talk about my open source, guitar mounted, iPhone multitouch Mrmr/LiveAPI/<span class="caps">OSC</span> wireless Ableton Live interface and then using it to play live. Finally, on the 11th of December we’re playing at the Linden Lab Brighton Christmas party along with the other real rock band that formed from the remnants of the all conquering Linden Lab Brighton virtual&nbsp;Rock Band.</p> <p>Whew. It’s going to be a busy few weeks. If you can come to any of the gigs, please do. It’s going to be fun: we’ll be playing brand new songs, including our first excursion into Rock/Dubstep, and trying out new ways of performing.
If all the gigs weren’t in Brighton I’d call it&nbsp;a tour…</p>Bouncaline2009-11-03T20:28:00+00:00Jim,2009-11-03:2009/11/03/bouncaline/<p>Last week I took some time off to spend with Luke and Natty during half term and we spent Wednesday having a lovely time finishing off a game we started a couple of months&nbsp;ago: Bouncaline.</p> <p>Luke has been interested in making games for a while: he made a level for the <a href="">You’re The Boss</a> game at the <a href="">Radiator festival</a> in Nottingham in 2006 — when he&nbsp;was 3!</p> <p>More recently, Luke started designing a game that I was helping him put together in <a href="">Game Maker</a>. He drew lots of backgrounds and characters that we scanned in and there were vague ideas about treasure hunting game play, but it felt a bit like Luke was biting off more than he&nbsp;could chew.</p> <p>So, when Luke and Natty inherited a trampoline over the summer, I suggested that we build a bouncing game, and we started building it with <a href="">Scratch</a>, an educational programming environment that I’d been meaning to experiment with since seeing that it had been <a href="">ported to Second Life</a>.</p> <p>Scratch has a very simple model based on plugging together blocks that is similar to the <a href="">Lego Mindstorms</a> environment. Luke quickly got the hang of it and built a significant portion of the logic with just a few leading questions. Like Mindstorms and <span class="caps">LSL</span>, it uses multiple flows of control within the same scripted object for complex behaviour, which can take some getting used to when making an object that simultaneously waits to be touched and for a timer,&nbsp;for example.</p> <p>In some respects I wish Scratch was a little purer — although message passing concurrency is possible, it’s very easy to share state between objects — something we shouldn’t be encouraging the programmers of tomorrow to do.
It’s also harder to do multiple levels or screens than with Game Maker, but given Luke’s propensity to lose himself in Zelda style epics, the tight focus might help him learn the basics&nbsp;of logic.</p> <p>Overall it’s a delightfully easy and rewarding environment to use. After spending a couple of hours finishing the logic, we went into the garden to take pictures of the trampoline and of Luke and Natty striking poses for the animations, and quickly got them imported into Scratch along with some very cute drawings and sound effects&nbsp;by Luke.</p> <p>Scratch also makes it very easy to share your work on the web, allowing Luke to proudly show off his handiwork to his grandparents over the weekend and me to proudly share the game with you here. I hope you&nbsp;enjoy Bouncaline!</p> <p>Use the left and right arrow keys to move and try to collect the food. <a href="">Learn more about&nbsp;this project</a></p>100 robots vs 100 geeks2009-09-05T13:42:00+01:00Jim,2009-09-05:2009/09/05/100-robots-vs-100-geeks/<p>We’ve just about finished setting up the 100 robots gear at <a href="">BarCamp Brighton 4</a> in a derelict building that’s going to make the gig feel like an illegal rave. If you’re at BarCamp, please come downstairs to hear us sing songs about the surveillance state, Twitter and the Iran election, gene therapy cures for AIDS, World of Warcraft love affairs and lots more at 8:15 this evening. If you can’t make it, we’re also playing at the Prince Albert on Trafalgar Street in Brighton at 8 <span class="caps">PM</span> on the 24th of October, and if you can’t make it to either you can watch the videos from the Dance of the Dead zombie prom, which are now on YouTube.
You can also listen to and download some of our tracks for free from <a href=""></a> or <a href="">myspace</a> and add yourself as a fan on the <a href="">facebook page</a>.</p>Evolving Develop2009-07-20T23:41:00+01:00Jim,2009-07-20:2009/07/20/evolving-develop/<p>As usual, I headed down to the Metropole on the sea front last week to attend the annual <a href="">Develop conference</a> in Brighton. Unusually, this time I was attending the <a href="">Evolve day</a>, which shifts the focus from console development to online, mobile and social games and which I had helped create as part of the steering committee. I was&nbsp;very impressed.</p> <p>The first talk I attended was not strictly part of the Evolve day: I switched over to the <a href="">Games:Edu</a> track to listen to <a href="">Alice Taylor</a> talk about her work building games for Channel 4. In a previous era, Channel 4 spent £6 million a year on public service programming for young people, which ended up being shown in the mornings when the only young people at home were off sick. In order to reach a wider audience, all of that money was switched to online programming and, because 14-19 year olds love them, games. Since then, Alice and Matt Locke have toured the country commissioning small developers to develop games like <a href="">Sneeze</a>, <a href="">Routes</a>, <a href="">1066</a> and future games battling STIs in privates and making science fun for girls with Ada Lovelace. It all sounds like loads of fun and a great fit for Second Life (which already has a fair amount of sex and science adventures of its own), but the key here is availability. Alice is going where the kids are.
Millions of people play the Channel 4 games using technologies like Flash, which are ubiquitous: an order of magnitude more people than use Second Life regularly or used to watch the telly from their&nbsp;sick beds.</p> <p>Next up was “Browser Based Games Past Present Future”, which sounded perfect: which of the competing technologies is going to win the battle for the 3D browser canvas, and which should we use to make Second Life more accessible to some of Alice’s teens? Unfortunately the retrospective and predictive parts of the talk were extremely limited and the talk was mostly a plug for <a href="">Pirate Galaxy</a>: an online, Java based <a href="">Eve</a>. Luckily it looks incredibly well done. In particular, the in world economy, which allows either time or money to be converted into energy and from there into experience and eventually space ships, looks very slick: to begin with, energy is plentiful and little is needed. Slowly the screw is turned, making the urge to invest in energy ever more enticing. There are also limits on the amount of short cutting that can be done by investing money, with game play still required to gain the experience needed to buy the best ships. It’s all a long step forward from the experiments with economies and mudflation we grappled with at <span class="caps">AGC</span>, <span class="caps">SIGGRAPH</span> and on <a href="">Terra Nova</a>.</p> <p>The highlight of the day for me was the talk by Kristian Segerstrale, <span class="caps">CEO</span> and co-founder of <a href="">Playfish</a>, who make some of the most successful games on Facebook, including <a href=";pf_ref=x1030">Pet Society</a>: a game which allows you to collect rainbow poos and is as popular on Facebook as Rihanna and the Simpsons.
Kristian talked about how social games are different: no longer focused on creating paths for solitary players to experience fear, horror, wonder and suspense, social games are toolkits for cooperation, competition and expression for friends. Social games have been responsible for most of the growth in the games industry over the last year and while the Wii and Rock Band have let friends gather and play together, games on social networks let friends play together online. Where traditional games spend years and millions in development and then launch with marketing splurges and shelf space negotiated with retailers, social games might initially be developed for 10s of thousands, sit on an infinite shelf and are marketed virally through recommendations from friends. Where traditional games rely on the intuition of a games designer who hopes to get it right, social games rely on feedback. After launch social games are deluged with information. Marketing is driven by numbers, buy buttons are A/B tested with users, designers analyse play and further investment and development decisions are based on usage. The skill is not in intuitively knowing what players will like, but in mining and filtering a slew of data to find out.
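The machinery behind that data-driven loop can start very small. Stable A/B bucketing, so the same player always sees the same buy button, needs nothing more than a deterministic hash; this sketch uses FNV-1a and hypothetical names, and is my illustration rather than anything Playfish described:

```cpp
#include <cstdint>
#include <string>

// Deterministic A/B bucketing: hash the player id together with the
// experiment name so the same player always gets the same variant,
// without storing any assignment state. FNV-1a is an arbitrary choice;
// any well-mixed hash works.
inline uint64_t fnv1a(const std::string& s) {
    uint64_t h = 1469598103934665603ULL;
    for (unsigned char c : s) {
        h ^= c;
        h *= 1099511628211ULL;
    }
    return h;
}

// Returns "A" or "B" for a given player and experiment.
inline std::string bucket(const std::string& playerId,
                          const std::string& experiment) {
    return (fnv1a(playerId + "/" + experiment) % 2 == 0) ? "A" : "B";
}
```

Because the experiment name is mixed into the hash, assignments stay stable across sessions while different experiments remain uncorrelated.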
Kristian’s talk felt a lot like Raph Koster’s dinosaurs talk from a few years back: it even used a similar meteor strike slide. However, this time around the small mammals weren’t micro user generated virtual worlds but pets doing rainbow poos and restaurant sims where you can hire your mum&nbsp;as chef.</p> <p>It will be interesting to see how the small furry mammals evolve over the&nbsp;coming years.</p>The London Geek Community iPhone OSCestra2009-05-12T00:34:00+01:00Jim,2009-05-12:2009/05/12/london-geek-community-iphone-oscestra/<div class="flex-video"><embed src="" type="application/x-shockwave-flash" width="640" height="510" allowscriptaccess="always" allowfullscreen="true"></embed></div> <p>On Friday evening while mulling over potentially interesting hacks to build at <a href="">Open Hack London</a> I remembered an idea I’d had a while ago: there are now loads of interesting ways to use iPhones as music interfaces and the iPhone to hacker ratio at hack days tends to be around 1, so you could probably put together an entire&nbsp;iPhone orchestra.</p> <p>With only a few hours left before heading to London I started rummaging around on the internet to find the bits I needed. I’d looked at the various iPhone music interface apps over Christmas and <a href=""><span class="caps">ITM</span> MidiLab</a> had been the easiest to use, but although I could start up multiple iTouchMidi servers listening on different ports, I couldn’t send the output of the servers to different <span class="caps">MIDI</span> ports, making it impossible to distinguish between&nbsp;multiple iPhones.</p> <p>Next I looked at the <a href=""><span class="caps">OSC</span></a> based iPhone apps: <a href="">OSCemote</a>, <a href="">TouchOSC</a> and <a href="">mrmr</a>. Of these, mrmr was the easy choice as it is free as in beer and speech, allowing me to extend it if needed.
It also allows custom interface design via scripting, allowing for potentially interesting <span class="caps">UI</span> hacking at Open Hack. <span class="caps">OSC</span> is also an <a href="">open standard</a>, so as a last resort I’d be able to build a server that could listen to&nbsp;multiple devices.</p> <p>With the client settled on I started looking at existing software to run on my laptop to convert <span class="caps">OSC</span> data in to <span class="caps">MIDI</span> to control <a href="">Ableton</a>. The first thing I looked at was <a href="">pd</a>, an incredibly powerful data processing environment that can understand <span class="caps">OSC</span> and generate <span class="caps">MIDI</span>. As well as being incredibly powerful, pd also has an incredibly steep learning curve and time was running out, so despite having used it in the past and wanting to use open source software for my hack, I eventually gave up on pd and tried <a href="">OSCulator</a>.</p> <p>OSCulator is incredibly easy to use. Within minutes I had multiple <span class="caps">OSC</span> servers listening on different ports, my iPhone had connected to each of them and I’d set up mappings from dozens of <span class="caps">OSC</span> inputs to <span class="caps">MIDI</span> controllers. OSCulator also supports up to 8 Wiimotes connected via Bluetooth, so I chucked a couple of Wiimotes in my bag, tested that the iPhone could connect to an Ad Hoc WiFi network created on my MacBook Pro, threw a Dr Who <span class="caps">MIDI</span> file in to Ableton and then got&nbsp;some sleep.</p> <p>After booking a slot for the non-existent iPhone orchestra during the hack demos, I set out to make it exist.
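That last-resort server would have been feasible because the OSC wire format is tiny: each message is an address pattern, a type-tag string and big-endian arguments, with the strings NUL-terminated and padded to four-byte boundaries. A minimal encoder sketch, based on the OSC 1.0 layout rather than any code from the hack:

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Append a string to an OSC packet, NUL-terminated and padded with zero
// bytes to a multiple of four, as the OSC 1.0 encoding requires.
inline void oscPad(std::vector<uint8_t>& out, const std::string& s) {
    out.insert(out.end(), s.begin(), s.end());
    out.push_back(0);
    while (out.size() % 4 != 0) out.push_back(0);
}

// Encode a single-float OSC message such as "/1/fader1 0.5": the address
// pattern, then the type tag string ",f", then the float32 big-endian.
inline std::vector<uint8_t> oscFloatMessage(const std::string& address,
                                            float value) {
    std::vector<uint8_t> out;
    oscPad(out, address);
    oscPad(out, ",f");
    uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits);      // assumes IEEE 754 floats
    for (int shift = 24; shift >= 0; shift -= 8)  // big-endian byte order
        out.push_back(static_cast<uint8_t>(bits >> shift));
    return out;
}
```

The resulting bytes can be sent as a single UDP datagram to whatever port the OSC server is listening on.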
With a combination of arm twisting and volunteering I convinced 8 plucky hackers to join the orchestra, then spent a few hours auditioning synth patches in Ableton, assigning <span class="caps">MIDI</span> controllers to their parameters and tweaking iPhone accelerometer smoothing settings in OSCulator to get a couple of Wiimotes working&nbsp;as drums.</p> <p>I managed to organise an hour’s rehearsal on Saturday afternoon where we spent the first half trying to connect all of the devices and the second huddled around the laptop trying to hear the audio from the built-in speakers. After a bit more tweaking I set up a 3rd Wiimote to launch loops and start and stop the set, allowing me to get in on the fun while conducting, and borrowed an amp for our second and&nbsp;final rehearsal.</p> <p>The performance was a hoot. We’d been having trouble getting all of the devices to connect to OSCulator at the same time and Simon Willison’s iPhone refused to connect for the final performance, which freed him up to concentrate on hamming it up with a look of intense concentration. I also managed to completely lose track of where we were in the music, so Jon Markwell’s haunting theremin solo section ended up following an embarrassing silence when his part wasn’t actually playing. All in all though, I think we did pretty well and it went down a storm with the&nbsp;assembled geeks.</p> <p>Many thanks to <a href="">Ryan Alexander</a>, <a href="">Jonathan Markwell</a>, <a href="">Natalie Downe</a>, <a href="">Nigel Crawley</a>, <a href="">Matt Jarvis</a>, <a href="">Simon Willison</a> and Matthew Smith for indulging me once again at hack day — it was loads of fun.
There are more videos and photos of the performance in my <a href="">delicious stream</a>.</p>Lang.NET 3 Years On2009-04-27T11:12:00+01:00Jim,2009-04-27:2009/04/27/langnet-3-years/<p>It was incredibly satisfying to go back to Lang.<span class="caps">NET</span> 3 years on and be able to say that we actually managed to make <a href="">all the crazy plans we had for Mono in 2006</a> work. My talk is now <a href="">online here</a>. Lots of people hadn’t seen the 2006 talk and were blown away by us adding support for continuations on the <span class="caps">CLR</span> and enough new stuff to keep those who had seen the first talk interested. In particular the anecdote about flywheel exploits for the Mono scheduler got a laugh from&nbsp;Anders Hejlsberg.</p> <p>The other highlights on Tuesday were <a href="">Mads Torgersen’s talk</a>, which gave some nice insights in to the process that led up to the final C# 4.0 Dynamic design. Mads did a good job capturing some of the concerns that static advocates have with dynamic languages. Gilad Bracha’s talk on <a href="">Newspeak and the Hopscotch <span class="caps">IDE</span></a> showed what’s possible with a really dynamic language: extending the <span class="caps">IDE</span> and debugging it from inside itself. The access that Newspeak has to its call stack was particularly interesting: it would be much easier to move a Newspeak stack around than build the infrastructure we needed to move a <span class="caps">CLI</span> stack. I spoke to Gilad about his experiences building Newspeak on Squeak afterwards. His impression is that Squeak is a runtime mostly used for education and has no support for security or running untrusted code.
Having spent a decade seeing the various security problems with bytecode verification in Java, Gilad isn’t convinced that a bytecode level sandbox is the right approach, but although he has some ideas for object capability based security for the post-Squeak Newspeak implementation it seems a long&nbsp;way off.</p> <p>The highlight on Wednesday was definitely <a href="">Lars Bak’s talk on V8</a>, which I had missed last year at <span class="caps">JAOO</span>. The mechanism for discovering emergent types in dynamic languages to allow indexed lookup instead of hash table lookup is a very nice hack. Lars’s super competitive heckling of everyone at Microsoft for the rest of the day was also fun. Other highlights included Erik Meijer throwing coins around while talking about mathematical duality and <a href="">Amanda Laucher</a> and <a href="">Luke Hoban</a> talking about F#. Potentially relevant to Linden Lab, if we get to the point where we can support debugging scripts in Second Life, was Herman Venter’s talk on the <a href="">Common Compiler Infrastructure</a>, a newly open sourced library which allows <span class="caps">CLI</span> assembly rewriting while preserving&nbsp;debug information.</p> <p>The <a href="">Meta Introduction to Domain Specific Languages</a> was a really good opening for the DevCon, but the talk most relevant to Second Life was Paul Cowan talking about <a href="">DSLs in the Horn Package Manager</a>. Paul talked about creating DSLs by extending <a href="">Boo</a>, something that should be possible when we get to the point where Second Life can accept arbitrary <span class="caps">CLI</span> assemblies. I got a first Boo script compiling against the Second Life assemblies at the DevCon and have some plans to experiment with building a <span class="caps">DSL</span> for virtual ecosystems in Second Life over the next few weeks.
Supporting C# as a general purpose language and Boo as a meta language for building DSLs in Second Life would be an&nbsp;excellent combination.</p> <p>A lot of the talks at the DevCon seemed to involve doing terrifying language somersaults to create more natural DSLs that run on the <span class="caps">JVM</span> or <span class="caps">CLI</span>, which then can’t be easily debugged due to the huge chained expressions or nested types that the languages boil down to. There seems to be a large opportunity for disaster here when these DSLs need to be extended or maintained in 10 years’ time after the original authors have moved on. Although Martin deflected a lot of these questions by saying you can get bad frameworks as easily as bad DSLs, I feel a lot more comfortable unpicking a bad framework or wrapping a library than trying to fix or extend a broken language (maybe several years reimplementing <span class="caps">LSL</span>’s quirks&nbsp;did that).</p> <p>The ultimate promise of DSLs was hinted at by <a href="">Magnus Christerson’s talk</a>, which demoed Intentional’s amazing Domain Workbench. Instead of building DSLs on top of mainstream runtimes, the Domain Workbench models the domain and can then project the model as code or through domain specific projections like interactive circuit diagrams that can be manipulated and debugged interactively. Magnus started his talk by saying that we used to have to enter programs on punch cards, which evolved in to sequential programs, and that we don’t need to do that any more.
If the Domain Workbench’s promise is fulfilled it could change the way we develop&nbsp;software forever.</p>100 Robots Vs 200 Zombies2009-03-27T23:50:00+00:00Jim,2009-03-27:2009/03/27/100-robots-vs-200-zombies/<p><img alt="Dance Of The Dead Flyer" src="" /></p> <p>I may not have blogged much recently, but I’ve been hard at work writing new songs about the financial meltdown, the surveillance state, gene therapy cures for <span class="caps">HIV</span>, anger and guilt for the new band I’ve put together with <a href="">Max Williams</a> and <a href="">Aleks Krotoski</a>: <a href="">100 Robots</a>. We’ll be performing the new songs along with rock/dance/electronic versions of Thriller and <a href="">Re: Your Brains</a> among others to 200 zombies at <a href="">Dance Of The Dead</a>: a charity zombie prom in London in aid of St Mungo’s on April 4th. If you’d like to come along, dance, rock, shamble and groan, tickets are available <a href="">here</a>. I’m always happy to <a href=";op=3&amp;o=global&amp;view=global&amp;subj=586260989&amp;id=827265544">dress</a> <a href=";op=3&amp;o=global&amp;view=global&amp;subj=586260989&amp;id=731327819">up</a> <a href=";op=3&amp;o=global&amp;view=global&amp;subj=586260989&amp;id=731327819">as</a> <a href="">a</a> <a href="">zombie</a>; throw in playing music and supporting a band called <a href="">The Tits Of Death</a> and it’s clear that this is going to be the best&nbsp;gig ever!</p>Music Again!2009-01-12T20:42:00+00:00Jim,2009-01-12:2009/01/12/music-again/<p>Since moving to Brighton 18 months ago I’ve been pretty busy finding my feet, moving house twice, sorting out schools and setting up Linden Lab Brighton, so I haven’t had as much time to make music as I’d have liked.
It hasn’t helped that my brother Roo, who collaborated with me on <a href="">Vanishing Trick</a>, has gone from being a medical student to an ophthalmologist, so has been pretty&nbsp;busy too.</p> <p>I’ve still been reading <a href="">create digital music</a> though, so couldn’t resist buying Luke a copy of <a href="">Korg <span class="caps">DS</span>-10</a> for his birthday just before Christmas. It’s a great piece of software that reminds me a lot of ReBirth (which is now <a href="">free</a>!) — the same infectious acid sounds and a fun interface that you can’t resist tweaking. Within hours Luke was coming up with screaming synth noises that would have made the Chemical Brothers proud, so I told him that when he had a track finished we could record it on to the computer, add some more sounds and make a <span class="caps">CD</span>.</p> <p>Thirty minutes later I had Luke to thank for finally convincing me to unpack my bag of <span class="caps">MIDI</span> and audio cables for the first time since we rolled up in Brighton. While adding bits to Luke’s <span class="caps">DS</span>-10 and <a href="">Electroplankton</a> creations, I took the opportunity to finally play around with <a href="">Ableton Live</a> properly and was completely bowled over. It’s one of those pieces of software that, as a software engineer, you can’t help admiring. It makes you want to use it and turns the complex process of sequencing and producing music in to joyful fun. I’d been a Cubase user for 10 years, but I’m not&nbsp;going back.</p> <p>Over Christmas I put together a Live rig that lets me use my ancient Yamaha <span class="caps">MFC05</span> <span class="caps">MIDI</span> controller to switch between drum beats, record guitar loops and automatically switch between effects settings without touching the computer.
With some tweaking I’m going to get it to do the same for my <a href=";clpm=Nord_Modular">Nord Modular</a> and switch patches on both the Nord and my <span class="caps">POD</span>, allowing me to quickly record, compose, arrange and perform music from five foot pedals. I’ve also got <a href=""><span class="caps">ITM</span></a> set up on my iPhone, which I’m planning to mount on my guitar, giving me a wireless X/Y touchpad that can control any number of&nbsp;Ableton parameters.</p> <p>I also finally got round to playing around with circuit bending for the first time. Last Christmas Roo got me a reissue Stylophone, which is a fun toy, but whenever I read about circuit bending online I kept thinking it would be fun to try connecting the headphone output of the Stylophone to the <span class="caps">MP3</span> input. In the end it only took an hour or so of poking around inside the Stylophone while telling Luke about electronics to find some bend points that turn the mild-mannered Stylophone in to a wailing banshee of feedback distortion that’s more Hendrix than Harris. The modded Stylophone is shown below and I’ve added annotations to a <a href="">picture of the opened up Stylophone on Flickr</a> that shows the points that I connected. If you’d like to hear the evil sounds it makes there are a <a href="">selection of Creative Commons licensed samples</a> on Freesound, but I advise turning the volume down and using headphones if you give them&nbsp;a listen.</p> <p><a href="" title="Bent Stylophone by Jim Purbrick, on Flickr"><img src="" width="500" height="334" alt="Bent Stylophone" /></a></p> <p>It’s great to be having fun making music again.
Thanks Luke&nbsp;and Roo!</p>Babbage Linden In Real Life2008-12-14T14:03:00+00:00Jim,2008-12-14:2008/12/14/babbage-linden-real-life/<p><a href="" title="Babbage Linden by Jim Purbrick, on Flickr"><img src="" width="104" height="240" alt="Babbage Linden" align="left" vspace="10" hspace="10"/></a> When I heard that the theme for the Linden Lab Christmas party was going to be steam punk, I knew I had to go as Babbage Linden. Since 2005 my avatar in Second Life has sported a Victorian suit from Neverland and a steam arm, originally from a Steambot avatar, which I updated to a more recent design from Marcos Fonzerelli after Joe Linden started washing his face in my&nbsp;sink.</p> <p>I had a suit that was close enough and a bit of riffling through local shops and eBay got me a waistcoat and cravat that were a pretty good match, but I knew the arm was going to need a lot of construction, even if I went for the first version from the Steambot. After some Googling trying to find out how to construct a conical frustum I remembered the <a href="">export to world</a> project that had converted Second Life objects to paper craft models. So, after checking that Marcos didn&#8217;t mind me exporting his geometry from Second Life, I decided to do&nbsp;that.</p> <p><a href="" title="Setting Render Types Before Capturing GL Data by Jim Purbrick, on Flickr"><img src="" width="100" height="78" alt="Setting Render Types Before Capturing GL Data" align="right" vspace="10" hspace="10" /></a>Installing <a href=""><span class="caps">OGLE</span></a> proved to be really easy - just dragging the replacement OpenGL <span class="caps">DLL</span> and supplied config to the Second Life directory worked fine, but my first few captured scenes were full of unwanted clutter.
In the end I found that using the advanced menu to <a href="">turn off all render types</a> apart from Basic, Volume and Bump and the <a href=""><span class="caps">UI</span> feature</a> allowed me to capture just the steambot arm placed in front of me in Second&nbsp;Life.</p> <p><a href="" title="Steambot Finger in Blender by Jim Purbrick, on Flickr"><img src="" width="100" height="79" alt="Steambot Finger in Blender" align="left" vspace="10" hspace="10" /></a>After opening the .obj file captured by <span class="caps">OGLE</span> in <a href="">Blender</a>, exporting it as a <span class="caps">DXF</span> and opening it in <a href="">Pepakura Designer</a> it was clear that the model was far too complicated - the cones and spheres created dozens of faces. Going back in to Second Life I broke the steambot arm in to 1 piece for each cardboard part I expected to make, stripped off all of the spheres, made the finger tips in to boxes instead of cones and then used the Second Life preferences to set the object mesh complexity to minimum, which reduced the number of faces on each cylinder to&nbsp;6.</p> <p><a href="" title="Steambot Finger In Pepakura Designer by Jim Purbrick, on Flickr"><img src="" width="100" height="76" alt="Steambot Finger In Pepakura Designer" align="right" vspace="10" hspace="10" /></a>With these simplifications made the models looked much more manageable when opened in Pepakura Designer, but it was still worth playing with open faces to simplify the parts in to quad strips. With these tweaks made I determined the correct scale.
I placed my forearm on a couple of sheets of A4 and then adjusted the scale in Pepakura Designer until the steambot arm covered approximately the same amount of paper, then noted down the scale so I could use it when printing each&nbsp;part.</p> <p><a href="" title="Assembled Forearm by Jim Purbrick, on Flickr"><img src="" width="75" height="100" alt="Assembled Forearm" align="left" vspace="10" hspace="10" /></a> After printing the parts out on A4, I pinned them to some thick double-ply corrugated card from some old Dell boxes and then cut out the shapes and assembled them with gaffa tape, <span class="caps">PVA</span> and paper fasteners. I replaced the spheres with some polystyrene spheres from a craft shop and recreated the shine with half a dozen coats of copperchrome spray&nbsp;paint.</p> <p>I had a load of fun and the finished arm is amazingly sturdy, surviving 6 hours of <a href="">Christmas party</a> relatively unscathed. Who knows, maybe I&#8217;ll get another chance to wear it at an <span class="caps">SLCC</span> in the future. A full set of pictures with notes is available <a href="">here</a>.</p> <p><a href="" title="Babbage and Niall by Jim Purbrick, on Flickr"><img src="" width="240" height="191" alt="Babbage and Niall" /></a></p>m0cxx0r And Return Types2008-12-03T22:04:00+00:00Jim,2008-12-03:2008/12/03/m0cxx0r-and-return-types/<p>The core of <a href="">m0cxx0r</a> is the creation of an object that records method calls and compares them to expectations. This is done by using C++ placement new to create a VTableDonor object in allocated memory the same size as the object being mocked and then returning the memory as a m0cxx0r::Mock class which inherits from T, the class of the object&nbsp;being mocked.</p> <p>When methods are called on the mock object, instead of invoking the methods in T, the virtual methods in VTableDonor are called and are able to record the calls made and compare them to expectations.
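To see what this machinery buys you, here is the hand-written equivalent that m0cxx0r generates automatically: a subclass whose overrides record the calls they receive, plus a verify step that compares the recording to the expectations. This is a simplified sketch of the idea, not m0cxx0r's actual implementation:

```cpp
#include <string>
#include <vector>

// The interface the code under test depends on.
struct Production {
    virtual ~Production() {}
    virtual void foo() = 0;
    virtual void bar(int x) = 0;
};

// The test double m0cxx0r saves you from writing by hand: every override
// records the call it received, and verify() compares the recording to
// the expectations set up by the test.
struct RecordingMock : Production {
    std::vector<std::string> expected, actual;

    void expect(const std::string& call) { expected.push_back(call); }
    void foo() override { actual.push_back("foo"); }
    void bar(int x) override { actual.push_back("bar(" + std::to_string(x) + ")"); }
    bool verify() const { return expected == actual; }
};
```

The pain is that every mocked class needs a fresh hand-rolled subclass like this; m0cxx0r's vtable trick exists precisely to avoid that boilerplate.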
The problem is that the signatures of the original method and the VTableDonor method may&nbsp;not match.</p> <p>In order to be able to find and compare parameters the VTableDonor methods take a single parameter which they can use as a fixed point to find other parameters that may be passed to the call via pointer arithmetic. Luckily the rules for parameter layout are fairly simple, so if you know the address of the first parameter, it’s easy to find&nbsp;the others.</p> <p>Unfortunately the same isn’t true for return values. Depending on the return type, space for the return value might be pushed on to the stack as a hidden parameter, a pointer to a heap location might be pushed or the caller may expect the return to be saved in a register. The rules for which mechanism is used depend on some combination of the compiler, platform and sometimes which C++ features the return type uses. To make matters worse the this pointer is also pushed as a hidden parameter, which can become corrupted when there is a return type mismatch. All of this makes it very difficult to call a VTableDonor virtual method with a void return type in place of a virtual method on T with a non-void return type and have everything work correctly. You can see why people generally use the much simpler C <span class="caps">ABI</span> to nail&nbsp;binaries together.</p> <p>After a <a href="">lot</a> <a href="">of</a> <a href="">research</a> and some trial and error I’ve managed to get m0cxx0r working with virtual methods returning primitive types and non-<span class="caps">POD</span> types by value in Visual Studio 2005 on Windows and using g++ 3.3 on Linux. The new code can be found <a href="">here</a>.
I’m still having trouble getting it working on g++ 4.0.1 on Darwin where dyld seems to be noticing my monkeying around, causing the process to exit with a _dyld_misaligned_stack_error — hopefully it will be possible to&nbsp;work around.</p> <p>A potentially better solution is used by <a href="">mockitopp</a>, a brand new dynamic mock framework for C++ that I found on my travels around the internet today. Where m0cxx0r uses a compiler generated VTableDonor class and then attempts to work around the signature mismatch problems, mockitopp builds the mock vtable at run time, which has the advantage that the entries can be made to match the signatures in the class being mocked. It looks to be a promising approach and I’m looking forward to investigating&nbsp;mockitopp further.</p>New Widgets2008-11-25T23:21:00+00:00Jim,2008-11-25:2008/11/25/new-widgets/<p>It’s that time of year again where people start asking what I’d like for Christmas and I start wondering what they’d like in return. It’s just the sort of problem that should be solved with social software. Over the last few years I’ve had an Amazon wish list which suffices for books, music and software, but doesn’t allow me to add fun things like <a href="">board games</a>, <a href="">sensors</a> and&nbsp;<a href="">Lego</a>.</p> <p>I’ve thought about building a wish list service that worked against any web store a few times and was talking to my old friend Tom about this problem at the weekend when he came to stay with his lovely new daughter Beth. We both agreed that someone must have built it already and so it goes: <a href="">boxedup</a> provides you with browser buttons that allow you to easily add any product anywhere on the web to a social wish list service.
It also supports the other essential feature — allowing your friends to reserve items in a way that’s visible to them, but invisible to you, so everything stays a surprise until the&nbsp;big day.</p> <p>I’ve added a boxedup widget to the side bar so you can see what interesting schwag I’ve uncovered from across the web in a <a href="">wonderland</a> style. While I was at it I added a <a href="">friendfeed</a> widget so you can see what I’m reading, bookmarking and uploading in a <a href="">simon willison</a>/<a href="">boingboing</a>&nbsp;style too.</p> <p>Now I just need to get everyone I know to set up a boxedup list too and my Christmas shopping will&nbsp;do itself.</p>Measurement vs Modelling2008-11-19T00:37:00+00:00Jim,2008-11-19:2008/11/19/measurement-vs-modelling/<p>I’ve just been at a really interesting <a href="">cafe scientifique</a> in Brighton where <a href="">Philip ‘Critical Mass’ Ball</a> talked about using physics to model the behaviour of people en masse. When modelling people as particles you can create surprisingly realistic simulations of real behaviour in corridors, traffic jams and panics. As fascinating as this is, I only think this is useful in situations where there is little historical evidence to rely on and where the cost of change is high. In the case of parks it is much better to ship a park in beta without any paths and then close the park in November to pave the cow paths than it is to model the park up front and hope that your model bears some resemblance to reality. Physics has invented models of reality just as reality has invented methods of measurement that don’t require modelling.
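That said, the corridor and traffic jam models Philip described are charmingly small. A deterministic toy in the spirit of the Nagel-Schreckenberg traffic model (my own illustration, not Ball's actual model) shows free flow at low density and persistent congestion at high density:

```cpp
#include <algorithm>
#include <vector>

// Cars on a ring road accelerate by 1 per tick up to vmax but never
// drive into the car ahead. At low density every car reaches top speed;
// at high density the gaps throttle everyone. Requires cars >= 2 and
// cars <= length so initial positions are distinct.
struct Ring {
    int length, vmax;
    std::vector<int> pos, vel;   // positions (sorted around the ring) and speeds

    Ring(int length, int cars, int vmax) : length(length), vmax(vmax) {
        for (int i = 0; i < cars; ++i) {
            pos.push_back(i * length / cars);
            vel.push_back(0);
        }
    }

    void tick() {
        int n = static_cast<int>(pos.size());
        std::vector<int> next(n);
        for (int i = 0; i < n; ++i) {
            // Empty cells between this car and the next one around the ring.
            int gap = (pos[(i + 1) % n] - pos[i] + length) % length - 1;
            vel[i] = std::min(std::min(vel[i] + 1, vmax), gap);
            next[i] = (pos[i] + vel[i]) % length;
        }
        pos = next;   // synchronous update: all cars move at once
    }

    double meanSpeed() const {
        double total = 0;
        for (int v : vel) total += v;
        return total / vel.size();
    }
};
```

With 5 cars on a 100-cell ring everyone reaches top speed; with 50 cars the same rules pin the average speed far below it, which is the sort of emergent jam the talk demonstrated.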
<a href="">Jeremy Keith</a> is Philip Ball’s nemesis, whatever he&nbsp;may say.</p>m0cxx0r on Windows2008-10-27T21:32:00+00:00Jim,2008-10-27:2008/10/27/m0cxx0r-windows/<p>In order for m0cxx0r to be useful for writing tests at Linden Lab, it needs to work on all of the platforms that we target with C++ applications, so today I tried building and running m0cxx0r&nbsp;on Windows.</p> <p>Initially it looked good: m0cxx0r built in the default Visual Studio Debug configuration, but then crashed on construction of Mock objects due to accessing uninitialised memory. This was relatively easy to fix, just requiring a call to memset to zero out the memory that would become the&nbsp;m0cxx0r::Mock object.</p> <p>The next problem was harder to fix. One of the hacks at the core of m0cxx0r is pointing the mock object’s vptr at a donor vtable populated with methods that record calls to the methods. The problem is that the signatures of the original and replacement methods may not match, so multiple parameters may be passed to a method expecting a single parameter. This shouldn’t be a problem as long as the caller manages the stack unwinding: the caller just pushes parameters on to the stack which are ignored and then popped back&nbsp;off again.</p> <p>Although m0cxx0r just worked when compiled with <span class="caps">GCC</span> on darwin, the run time checks performed in Debug by Visual Studio caught the stack pointer mismatch and stopped execution. In Release the situation was even worse: the tests just crashed out without error. Luckily after some poking around I was able to turn off the stack pointer run time check in Debug and after some trial and error I found that disabling optimisations in the Release configuration with the default __cdecl calling convention allowed the tests to run without error&nbsp;in Release.</p> <p>With these property changes made, m0cxx0r built and ran its tests fine in Visual Studio 2005 on Windows.
Get the Visual Studio 2005 project and solution files along with the m0cxx0r code from <a href="">Google Code</a>.</p> </p>m0cxx0r - Compiler Generated Mock Objects For C++2008-10-26T23:45:00+00:00Jim,2008-10-26:2008/10/26/m0cxx0r-compiler-generated-mock-objects-c/<p>A few weeks ago at <a href=""><span class="caps">JAOO</span></a> I felt insanely jealous while watching <a href="">Erik Doernenburg</a> demo <a href="">Mockito</a>: I wanted dynamic mock objects in C++. It turns out that it&#8217;s really hard. However, after a few days hacking around I found that it&#8217;s not completely impossible. The results of my hacking are now available under a <a href=""><span class="caps">BSD</span> license</a> <a href="">here</a>. m0cxx0r lets you write tests like this in&nbsp;C++:</p> <div class="highlight"><pre><span class="k">typedef</span> <span class="n">m0cxx0r</span><span class="o">::</span><span class="n">Mock</span><span class="o">&lt;</span><span class="n">ProductionClass</span><span class="o">&gt;</span> <span class="n">MockClass</span><span class="p">;</span> <span class="n">MockClass</span><span class="o">*</span> <span class="n">mock</span> <span class="o">=</span> <span class="n">MockClass</span><span class="o">::</span><span class="n">create</span><span class="p">();</span> <span class="n">mock</span><span class="o">-&gt;</span><span class="n">expect</span><span class="p">(</span><span class="s">&quot;foo&quot;</span><span class="p">,</span> <span class="o">&amp;</span><span class="n">ProductionClass</span><span class="o">::</span><span class="n">foo</span><span class="p">);</span> <span class="n">mock</span><span class="o">-&gt;</span><span class="n">expect</span><span class="p">(</span><span class="s">&quot;bar&quot;</span><span class="p">,</span> <span class="o">&amp;</span><span class="n">ProductionClass</span><span class="o">::</span><span class="n">bar</span><span class="p">,</span> <span class="mi">42</span><span class="p">);</span> <span 
class="n">mock</span><span class="o">-&gt;</span><span class="n">expect</span><span class="p">(</span><span class="s">&quot;baz&quot;</span><span class="p">,</span> <span class="o">&amp;</span><span class="n">ProductionClass</span><span class="o">::</span><span class="n">baz</span><span class="p">);</span> <span class="n">mock</span><span class="o">-&gt;</span><span class="n">foo</span><span class="p">();</span> <span class="n">mock</span><span class="o">-&gt;</span><span class="n">bar</span><span class="p">(</span><span class="mi">3</span><span class="p">);</span> <span class="n">mock</span><span class="o">-&gt;</span><span class="n">verify</span><span class="p">();</span> <span class="n">MockClass</span><span class="o">::</span><span class="n">destroy</span><span class="p">(</span><span class="o">&amp;</span><span class="n">mock</span><span class="p">);</span> </pre></div> <p>Most importantly you don&#8217;t need to hand code a test double for ProductionClass: m0cxx0r generates it for you. The code needs lots of love: it&#8217;s all in a single file and the interface will need iterating a few times, but I think it&#8217;s a good start. Please download it, have a play and let me know what you come up with. I&#8217;ve only tested it on gcc version 4.0.1 on darwin, so I&#8217;d be interested to know if it works on other platforms as it uses some code layout assumptions that might not be portable. I&#8217;ll write some blog posts over the next few days that explain how it all&nbsp;works.</p>Like Second Life2008-10-23T12:26:00+01:00Jim,2008-10-23:2008/10/23/second-life/<p>Was without a doubt the phrase I heard most often yesterday, especially if you include variants like “Not Like Second Life”, “A bit like Second Life” and “Unlike Second Life”. 
Whatever else it’s achieved, Second Life has definitely become the frame of reference for the small and somewhat myopic crowd that made up the delegates at the sparsely populated <a href="">Virtual Worlds Conference</a> in&nbsp;London yesterday.</p> <p><a href="">Vastpark</a> is not like Second Life because it works in a web browser. Everyone on the web integration panel seemed to agree that virtual worlds in a browser is the next step, so I was glad to be there to question the TechCrunch consensus. How does having a world in a browser help? What does back and forward mean to a virtual world? What does it mean for presence to have 10 tabs open looking into different parts of the same virtual world? Why would you want your view further constrained by extra web browser widgets? Isn’t 3D in the browser going to be a bloodbath for the next few years? Aren’t you really just using the browser as a download path? I suggested that the final question was the real reason that developers are pushing virtual worlds on the web and that the integration that most people want is to be able to use existing web and 2D media while using virtual worlds and use web services as a universal data bus between virtual worlds and other web-aware&nbsp;platforms.</p> <p><span class="caps">MPEG</span>-V is not like Second Life because it’s a standard defined by 35 companies, which is much better than the <a href="">emerging Linden-led standard</a> according to Dr. Yesha Sivan in what was the worst talk I’ve heard in a long time. Not only did he make the standardisation process sound like a 3-year political bun-fight by people who didn’t know much about virtual worlds and who might come up with a bad standard, he managed to spell <span class="caps">MPEG</span> and Google incorrectly, called Sun’s Darkstar, Blackstar and attributed a Ugotrade quote to Philip Rosedale amongst other clangers. 
He was roundly rebutted by a large part of the audience including <a href="">Tara5 Oh</a> who questioned the need for old-fashioned standards processes in the web era. Thank goodness for rough consensus and&nbsp;running code.</p> <p>Most of the virtual worlds talked about in the investment panel were not like Second Life, but were nearly all <a href="">Club Penguin</a> clones. This copy-the-big-exit attitude was called out by one of the audience as it seemed to be at odds with a lot of the talk about wanting to back the first in a market, but at least one of the panel is still looking for a successful 18+ social world play. The panel ended with a show of hands from people wanting money and people wanting to invest, but the economic climate made the whole affair very muted with lots of the panelists saying that they are slowing down rates of investment as it’s difficult to get existing companies off&nbsp;their books.</p> <p>As with <a href="">Virtual Policy 08</a> and the Virtual Worlds Forum, the most valuable parts of the conference were the spaces between sessions. I had another very worthwhile discussion with <a href="">Adam Frisby</a> of <a href="">OpenSim</a> about C# script compatibility between OpenSim and Second Life. The straw man design we talked about was to have an idiomatic .<span class="caps">NET</span> interface for event handling that can be used by C# scripts and adapted for <span class="caps">LSL</span> scripts and a set of static library methods for manipulating the world that would be used directly by <span class="caps">LSL</span> scripts and wrapped by user-created libraries to provide an idiomatic object-oriented interface. Adam was particularly interested in the idea of user-created wrapper libraries as it would allow the creation of an OpenSim interface library that could be ported to Second Life and implemented in terms of the ll* static methods. 
OpenSim could then agree to support the common behaviour of this library in Second Life and OpenSim instead of having to support the gamut of ll* methods, some of which don’t map well to OpenSim internals. As well as defining a common set of events and ll* static methods that are supported on both platforms, there would need to be a way of extending the interface with new events and library methods. In addition, Adam was interested in making the event propagation configurable so that a single script could respond to events on many objects in a scene. This would effectively add a script interest management layer to OpenSim’s scripting interface. Where platforms provide differing interfaces to scripts, we would also need to decide how scripts query the available interfaces or how they behave when interfaces are&nbsp;not available.</p> <p>Overall a worthwhile trip, but not because of the conference. This Friday I’ll be talking at the online <a href="">head conference</a> about conferencing in Second Life, which has the advantage of requiring no travel, making marginal conferences like the Virtual Worlds Conference less risky to attend while allowing all of the serendipitous networking opportunities that make real life&nbsp;conferences worthwhile.</p>Anything But Java2008-10-06T13:28:00+01:00Jim,2008-10-06:2008/10/06/jaoo-denmark/<p><a href="" title="The Shakespeare Language by Jim Purbrick, on Flickr"><img src="" width="500" height="412" alt="The Shakespeare Language" /></a></p> <p>Last week I was invited to talk at <a href=""><span class="caps">JAOO</span> Denmark</a>. 
Originally a Java conference, <span class="caps">JAOO</span> is now a very broad software development conference covering everything from agile to language design to&nbsp;distributed systems.</p> <p>The stand-out talk on the first day was <a href="">Gregor Hohpe</a>&#8217;s <a href="">Programming the Cloud</a>, which enumerated some of the problems with building distributed systems without call stacks, transactions, promises, certainty or ordering constraints, and then outlined some approaches to overcome them, including looking at real-life situations which also have to deal with the lack of distributed transactions. For example, at Starbucks your coffee is made concurrently with your payment being taken and then problems are fixed up afterwards if you can’t pay, they can’t make the coffee or they get the order wrong. The throughput gained from optimistic concurrency is greater than the loss of having to fix things up, even if it means that sometimes you end up giving away&nbsp;free coffee.</p> <p>Unfortunately I missed <a href="">Lars Bak’s V8 keynote</a> on Tuesday, but was really impressed by <a href="">Successfully Applying <span class="caps">REST</span></a> by <a href="">Stefan Tilkov</a>, which enumerated <span class="caps">REST</span> patterns and anti-patterns, shining some light on the subtleties of a technology which initially seems straightforward but turns out to have some potholes for&nbsp;the unwary.</p> <p>The highlights on Wednesday were <a href="">seeing Guy Steele and Dick Gabriel give their 50-in-50 talk</a> again (which is still not available on-line, but one of the highlights is <a href="">here</a>) and <a href="">seeing</a> the new <a href="">WeDo lego robotics platform for kids</a>, which will be available next summer. 
The most relevant talk was <a href="">Test Driven Development, Take 2</a> by <a href="">Erik Doernenburg</a>, which got me thinking about how to do dynamic mock objects in C++. My talk on <a href="">embedding Mono in Second Life</a> went down well and elicited some good questions, although as a fringe topic it wasn’t&nbsp;heavily attended.</p> <p>Other highlights included <a href="">Erik Meijer’s keynote on fundamentalist functional programming</a>, <a href="">Bill Venners</a>&#8217; <a href="">talk on Scala</a>, hearing <a href="">Patrick Linskey</a> conclude that the way to <a href="">make Java scale</a> is to use Scala or Erlang, <a href="">James Coplien</a> reinventing <span class="caps">OO</span>, playing guitar at the jam session and hearing Erik suggest to Lars that we compile <span class="caps">LSL</span> to <span class="caps">CIL</span> and run it on V8 modified to capture thread state while Erik was spilling half bottles of Champagne over people and Lars was swaying and stumbling around&nbsp;the room.</p>dConstructing dConstruct2008-09-18T12:03:00+01:00Jim,2008-09-18:2008/09/18/dconstructing-dconstruct/<p>A couple of weeks ago the great and the good of web development descended on Brighton for the wonderful <a href="">clearleft</a>-produced <a href="">dconstruct</a> conference and once again I’m glad I&nbsp;went along.</p> <p>Steven Johnson kicked off with a talk about how Dr. John Snow’s innovative data visualization of a cholera epidemic and the wisdom of dead crowds helped convince people of the water-borne nature of the disease. 
It was an interesting story, but it mostly ended up being a plug for <a href="">his book</a> and geoblogging aggregator <a href=""></a>.</p> <p>Next up, <a href="">Aleks Krotoski</a> talked about how games had spent decades creating incredibly compelling user experiences in silos without much contact with each other, the academic <span class="caps">HCI</span> community or the web. Meanwhile the web is very interested in creating similarly sticky experiences using virtual rewards to encourage participation. Aleks’ conclusion was that the two communities should talk and&nbsp;I agree.</p> <p>Daniel Burka talked about similar themes in his talk about the evolution of <a href="">Digg</a>. The most interesting anecdotes were about how top diggers started off as a good incentive, but became a disincentive when new users saw how unachievable the scores had become and how the recommendation engine is now a good way to encourage some of Digg’s passive audience to&nbsp;get involved.</p> <p>Matt Jones and Matt Biddulph talked about their successful Silicon Roundabout startup <a href="">dopplr</a>. Jones talked about visual design and delighters, which sounded a lot like Aleks’ virtual rewards in games. <span class="caps">SL</span> uber-hacker <a href="">Biddulph</a> made another gaming analogy, talking about how embedding dopplr in other sites and vice versa achieves a similar seamless experience to streaming maps in games: removing the load screens and jumps that used to bedevil console games and still are the normal experience when using the web. He also talked about the importance of using message queues and asynchronicity in services like dopplr, which pull information from across&nbsp;the web.</p> <p><a href="">Joshua Porter</a>&#8217;s talk on Leveraging Cognitive Bias in Social Design was the stand-out talk for me. 
He talked about exploiting people’s tendency to pattern match by generalising isolated positive case studies on web sites, and about framing account creation as a way to avoid losing features rather than to gain them, playing on our tendency to value losses more than gains. His description of how the 9x mismatch between customers (who over-value the application they already have by 3 times) and developers (who over-value the application they have developed by 3 times) creates a huge barrier to application adoption was&nbsp;particularly interesting.</p> <p><a href="">Tantek Celik</a>&#8217;s talk about using hCard and rel=me links to create portable, auto-updating social network profiles and data to reduce the fatigue induced by inviting all of your friends to many social networks was the most practical session of the day. I’m going to try playing around with rel=me links and Google’s social graph <span class="caps">API</span>&nbsp;here soon.</p> <p><a href="">Jeremy Keith</a> gave a <a href="">grandiose talk</a> to end the day, which wove together psychohistory from <a href="">Asimov’s Foundation Series</a> with <a href="">Critical Mass</a> and <a href="">The Wisdom of Crowds</a> to talk about how network effects and power law distributions cause some social software to explode in popularity while others wither, but that despite <a href="">The Tipping Point</a> being sold in business sections as a how-to book, it is fundamentally a retrospective and that predicting or engineering tipping points or network effects is notoriously hard. 
It was a great talk and the conclusion that social software is more of a lottery than a science is valid, but still: you have to be in it to win&nbsp;it.</p>On Lifecycles And Spimes2008-08-03T23:22:00+01:00Jim,2008-08-03:2008/08/03/lifecycles-and-spimes/<p>It was immensely satisfying to see <a href="">Bruce Sterling commenting on Carbon Goggles</a> in his <a href="">Beyond The Beyond blog for Wired</a> last week, not only because I’m a big admirer of his work, but because his 4-year-old Spime neologism came up in the original discussions about Carbon Goggles at EuroFOO 2&nbsp;years ago.</p> <p><a href="">Sterling’s original description of Spimes</a> makes them sound extremely sophisticated and active objects: recording and publishing information about their construction and ownership, usage, possible modifications, and alerting owners about the need for maintenance and so on. Our current reality is one of more passive objects which are annotated via the web: despite being able to run web servers, the possibilities for modifying <a href="">Linksys SLUGs</a> are independently published on the web; the lifecycles of passive books are tracked and determined using services like <a href="">bookcrossing</a> and <a href="">bookmooch</a>. Everything is web-enabled right now: a subject I <a href="">gave a talk on at BarCamp Brighton last year</a>.</p> <p>The current model relies on human identification and administration to grease the wheels of dumb objects. We see a Linksys <span class="caps">SLUG</span> and google to find out information about it. We enter the information added to a bookcrossed book to find out about its sequence of owners and route around the world. 
Augmented reality takes a step towards automating the process: objects are still passive, but <span class="caps">RFID</span> readers replace humans in the identification of objects, automated processes pull information on the object from external databases and augmented reality overlays display the information over the object in a way that gives us x-ray-like abilities. We can see the details of the construction, components, chemistry and recycling options for objects, able to make the invisible visible and to see the full picture beyond feature lists&nbsp;and prices.</p> <p>Sterling talks about the “need to document the life cycles of objects” and this was the original plan for Carbon Goggles. Everything from an apple to a supercomputer has an Id in Second Life and so the goal was to compare the carbon footprints of everything over its entire lifecycle. The apple is cultivated using machine tools, transported, refrigerated and stored and has a carbon footprint just like an object that actively emits carbon like a motorcycle. In many cases the carbon costs of creating and destroying objects can dwarf the carbon they emit in their use. It can be more carbon efficient to buy a second-hand car than to buy a new hybrid, despite the latter’s frugal emissions while&nbsp;in use.</p> <p>Unfortunately collecting data on entire life cycles is incredibly difficult. While people can measure the electricity usage of an appliance and add it to the <a href=""><span class="caps">AMEE</span></a> wiki, it’s much harder to find out the emissions produced by the entire chain of companies that have built, transported and assembled the myriad pieces that produced that object. When I last met Gavin in London he told me that the goal is for <span class="caps">AMEE</span> to provide this complete lifecycle picture of everything, but we’re a long way off. 
Environmental costs have been externalities outside company accounting for a long time. <span class="caps">AMEE</span> intends to add this missing accounting, but it’s tantamount to annexing every company’s accounting process and is likely to be just as complex as counting the pounds&nbsp;and pence.</p> <p>In the future we will be able to automatically see everything there is to know about everything around us and be fully aware of the impacts of our consumption. Right now services like bookmooch, freecycle and bookcrossing allow us to add Spime-like intelligence to objects if we’re prepared to do a lot of Spime wrangling. Experiments like <a href="">Carbon Goggles</a> give us a glimpse of what the future in the real world might look like. We’re going to be a lot more aware, which is lucky, because we’re going to need&nbsp;to be.</p>Jon Blow2008-08-01T00:20:00+01:00Jim,2008-08-01:2008/08/01/jon-blow/<p><a href="" title="Jon Blow by Jim Purbrick, on Flickr"><img src="" width="334" height="500" alt="Jon Blow" /></a></p> <p>At the recommendation of John and <a href="" title="Wonderland">Alice</a> I took a break from <a href="" title="Develop Online">Develop Online</a> to listen to <a href="" title="Jonathan Blow">Jon Blow</a>&#8217;s talk at <a href="" title="Games:Edu">Games:Edu</a> this week and was totally blown&nbsp;away.</p> <p>Jon talked about whether games are poised to enter a golden age similar to films in the &#8217;30s, when they transitioned from visual spectacle to an art form capable of touching people emotionally. Currently many games are broken by the conflicts between the gameplay rewards and the needs of the story. The canonical example is Metal Gear Solid, which pauses all interactivity to deliver exposition, but even more nuanced games suffer from the lack of control over the framing of the story. 
A narrative is likely to be much less powerful if the protagonist is jumping around while another character opens their heart. Equally the illusion of interactivity is completely broken by a character that refuses to acknowledge the player&#8217;s actions by simply reeling off scripted&nbsp;dialog.</p> <p>I wonder whether games too often sacrifice interactivity in the pursuit of realism. When you can simulate a city full of cars, the desire to populate it with people is almost overwhelming, but without solving the hard <span class="caps">AI</span> problem the only way to add people that say anything nuanced is to script them. The world seems more real, but adding scripted people to the center of the world compromises the interactivity that should be fundamental to a game. When we read a book we accept a lack of agency as we are empathizing with a character and following their journey through the narrative. When we&#8217;re in a game the story should be ours and the world should respond to our actions. There will be limits to our freedom, but placing scripted characters in the world rubs those limits in our face. Many forms of art touch us without having to realistically represent people. No one would mistake the people in <a href="">Guernica</a> for real people, but the work touches us and the image could be interpreted as a game environment without solving the hard <span class="caps">AI</span> problem. Maybe games should spend more time trying to be Guernica and less time trying to be <a href="">The Godfather</a>.</p>Sharing Carbon Goggles Visualisations2008-07-08T23:01:00+01:00Jim,2008-07-08:2008/07/08/sharing-carbon-goggles-visualisations/<p>Second Life has benefited greatly from growing in popularity alongside video sharing services. Many people’s first glimpse of Second Life or a particular Second Life experience is through the lens of a YouTube video. 
When promoting real-world brands in Second Life, videos of the Second Life experience that can be viewed by a wider audience on the web are often an important part of the campaign. Even for experienced residents like me, it’s often a video posted on <a href="" title="New World Notes">New World Notes</a> that inspires me to fire up the Second Life viewer to take a look at an amazing new build&nbsp;or experience.</p> <p>The goal of the Carbon Goggles <a href="" title="Carbon Goggles Demo Video">demo</a> and <a href="" title="Carbon Goggles Tutorial Video">tutorial</a> videos was to make it clear what Carbon Goggles do and how to use them, but videos are also a great way to make the Carbon Goggles visualisations themselves available to a wider audience on the web. As well as being an ambient augmented reality application that allows Second Life residents to passively learn about real-world carbon costs, Carbon Goggles can be used to quickly create images and videos that illustrate real-world&nbsp;emissions.</p> <p>If you annotate new objects with carbon emission data using Carbon Goggles, please consider recording some footage of the newly annotated objects and adding it to the Carbon Goggles <a href="" title="Carbon Goggles Vimeo Group">vimeo group</a>. I’ve added a vimeo badge to <a href=""></a> to show the newest videos. As well as allowing Carbon Goggles users to share the locations of annotated objects in Second Life, now shares visualisations of carbon emissions data to everyone on&nbsp;the web.</p>Mashed 08: T + 1 Week2008-07-03T21:15:00+01:00Jim,2008-07-03:2008/07/03/mashed-08-t-1-week/<p>There were a number of great projects at <a href="" title="Mashed">Mashed</a> that I wanted to blog about. Unfortunately, by the time I’d got round to setting up a blog I somewhat missed the boat. So, instead I’m going to revisit some of my favourite Mashed projects and see where they are 1 week on. 
<a href="" title="Carbon Goggles">Carbon Goggles</a> has gone live and grown an <a href="" title="Carbon Goggles Annotation">annotation interface</a> in the last week. Hopefully some of the other projects have also survived the journey home and gone on to&nbsp;greater things.</p> <p><a href="" title="Team Bob">Team Bob</a> was undoubtedly my favourite of the many <span class="caps">BBC</span>/subtitles/video/web mashups. A great visual gag rather than a useful service, I’m not sure this will go live, but there is now a video online that serves to show this technically impressive and very&nbsp;amusing hack.</p> <p><a href="" title="CurrentCost Live">CurrentCost Live</a> was the winner of the social responsibility prize at Mashed (somehow beating Carbon Goggles!) and was a great project. Starting with a CurrentCost meter, Dale from <span class="caps">IBM</span> and team published electricity usage data online, comparing and analysing it to award Xbox Live-style achievements and trophies for saving energy. It doesn’t look like the project is live at the moment, which is a shame as I have a <a href="" title="Linksys NSLU2">slug</a> I use for backups and got a CurrentCost meter free when I switched to hydroelectric power from <a href="" title="Southern Electric">Southern Electric</a>. It would be great to hook it all up and pwn people by turning my lights off while saving&nbsp;the planet.</p> <p>The project that struck me as the most useful was <a href="" title="Opening Times">Opening Times</a>, a service that takes your postcode and tells you the opening times for all your local shops. I was really hoping it would go live as a service as I could see myself using it all the time. 
Turns out I shouldn’t have worried: not only is Opening Times live and kicking, it looks like it has been&nbsp;since April…</p> <p>Mashed gives you a straight 24 hours to kick off a project and break the back of it with a team of great people, but it’s great when the development doesn’t stop when the <a href="" title="Simon And Nat With Bean Bag">beanbags leave the building</a>. Hopefully some of the projects that started in a crazy whirlwind of hacking, mashing and <a href="" title="Carbon Goggles Live">rocking</a> will continue to blossom over the&nbsp;coming weeks.</p>A Collaborative User Generated Ambient Augmented Virtual Reality Scientific Visualisation The Size Of Denmark2008-07-01T19:57:00+01:00Jim,2008-07-01:2008/07/01/collaborative-user-generated-ambient-augmented-virtual-reality-visualisation-size-denmark/<p>2 years ago at <a href="" title="Euro FOO 2006">Euro <span class="caps">FOO</span> 2006</a> I met a mass of great people and enjoyed a torrent of wonderful conversations, but 2 of them in particular stuck with me. The first was with <a href="" title="d::gen network">Gavin Starks</a> who commented that climate change would be much easier to deal with if we could see carbon dioxide. The second was with <a href="" title="">Claus Dahl</a> who observed that <a href="" title="Second Life">Second Life</a> is a great platform to prototype large-scale <a href="" title="Augmented Reality">augmented reality</a> applications as every object in Second Life has an Id and you can give away free augmented reality glasses in the form of heads-up displays&nbsp;(HUDs).</p> <p>A year later I started to experiment with the latter idea with <a href="" title="SLateIt">SLateIt</a>, an augmented reality application that can be used to find, tag and rate virtual objects in Second Life. 
Although I think tagging, rating and recommendation systems have a bright future in navigating the vast quantities of people, places and stuff in Second Life, SLateIt mostly came about as a way to demo augmented virtual reality in Second Life without a large data set to associate with objects in&nbsp;<span class="caps">SL</span>.</p> <p>Finally, last week, the awesome team of <a href="" title="Max Williams">Max Williams</a>, Ryan Alexander, Andrew Conway, <a href="">Simon Willison</a>, <a href="">Natalie Downe</a> and Chris Waigl helped me bring the two ideas together by mashing up SLateIt, Second Life and Gavin Starks&#8217; new <a href=""><span class="caps">AMEE</span></a> emissions database to create <a href="">Carbon Goggles</a>. Instead of mapping Second Life object Ids to tags and ratings, Carbon Goggles maps Second Life object Ids to <span class="caps">AMEE</span> URLs. The <span class="caps">HUD</span> queries for emissions data for nearby objects and, if found, overlays a sphere on the object with a volume corresponding to the monthly carbon emissions of the object. In 24 hours we managed to hack together a <a href="">working system to demo at Mashed</a> and 2 days later added an <a href="">annotation interface</a> that allows new objects to be annotated with emissions&nbsp;data.</p> <p>Carbon Goggles has had some <a href="">great coverage</a> over the last week, but I really hope the story doesn&#8217;t end there. The goal is to annotate objects across Second Life to produce a collaborative user generated ambient augmented virtual reality scientific visualisation the size of Denmark. Together we can add an extra layer of information to Second Life, allowing people to learn to make more informed decisions in real life while living their Second Life. 
If you&#8217;re part of a group in Second Life that would like to help annotate objects, host Carbon Goggles vendors in-world, create videos or images of Carbon Goggles visualisations or would like to help in any other way, please join the Carbon Goggles group in Second Life and get in&nbsp;touch.</p> <p><img alt="Carbon Goggles" src="" /></p>Hello World2008-07-01T09:21:00+01:00Jim,2008-07-01:2008/07/01/hello-world/<p>Well, not exactly. Having blogged previously on <a href="" title="Terra Nova">Terra Nova</a>, the original <a href="" title="The Creation Engine">Creation Engine</a> and currently on the <a href="" title="Official Second Life Blog">Official Second Life Blog</a>, I’m not exactly stumbling blinking into the blinding light of the blogosphere. Recently a number of things have come up that I’ve wanted to write more than <a href="" title="Twitter / JimPurbrick">140 words about</a>, but that wouldn’t fit on the Official Second Life Blog <a href="" title="Last Sound System">any</a> <a href="" title="SLorpedo">more</a>, so I’ve finally stopped mooching off other people and set up my&nbsp;own blog.</p> <p>One reason I hadn’t got around to it sooner is that I’ve been torn between platforms. Although it’s been tempting to throw up a <a href="" title="WordPress">WordPress</a> blog every time I’ve had something to talk about, I really wanted to build a blog in <a href="" title="Django">Django</a> that I could tinker and experiment with. Although it’s just a matter of <a href="" title="Where is Django’s blog application?">plugging bits together</a>, it still takes a few hours to get a basic Django blog up and running and longer to add all the bells and whistles. 
I finally managed to break the impasse last week when I came across this <a href="" title="Django Blog Engines">list of Django blog engines</a> and after some rooting around decided to go with <a href="" title="Byteflow">byteflow</a>, which has all the bells and whistles but is made of standard Django bits and is&nbsp;eminently tinkerable.</p> <p>So, that’s what you see here: a default byteflow blog running on <a href="" title="Django Trunk">Django trunk</a> running in mod_python as a virtual host (alongside the <a href="" title="SLateIt"></a> and <a href="" title="Carbon Goggles"></a> Django apps) inside <a href="" title="Apache HTTPD">apache2</a> running on <a href="" title="Ubuntu">ubuntu</a> dapper on a virtual machine hosted by <a href="" title="Bytemark hosting">bytemark</a>. It took long enough to get round to, but once I’d found byteflow it only took an hour to set up. I’ll be kicking the wheels and tinkering over the coming weeks, but if you find anything broken, please let&nbsp;me know.</p>