How we can build for our colleagues

It’s sometimes necessary for an organisation to develop software to support its internal operations. Doing this well is less straightforward than one might think. In this post, I examine some of the challenges faced by product teams building internal tools, and share some lessons learned from working on consumer products that can help in overcoming them.

 

The value that comes from using a tool is in how it improves a process. When an actor from a user story hacks around the current process to get their job done, it’s a good indicator that a new tool might be needed. Another step may be required in the workflow, for instance, if users frequently open another browser window to perform a particular task. There are also situations when a new feature is required for reasons other than improving the user experience. We may wish to gather data to train a machine learning algorithm which will ultimately allow us to automate a manual process.

 

Another reason to build our own tools is to avoid vendor lock-in – the situation where we become unable to switch our process from one product or service to another without substantial costs. However, it’s important to remember that the decision to adopt any technology, be it proprietary or open source, is a long-term commitment. While there are compelling reasons to choose an open source solution, we may incur large costs in adapting it to fit our process, or in simply learning how to use it well if the base technology and expertise don’t already exist in our stack.

 

How do we avoid reinventing an existing tool which already fits our purpose? Cast a wide net to find out whether or not a cost-effective solution is already available on the market. Don’t hesitate to open this investigation to the operations and engineering teams. Their involvement is important; although they may have a good understanding of the problem domain, they often lack the marketplace visibility and exposure to product demos or sales-driven trials that product managers or the business team have. How have stakeholders solved similar problems at previous organisations? Getting input from every player at this stage can eliminate a lot of uncertainty around the necessity of the work involved.

 

When there’s a genuine need for a bespoke solution because the marketplace doesn’t offer an essential feature, expectations may still be high because users will be familiar with similar well-established, high-quality software. We can manage these expectations by including metrics and benchmarking on the product roadmap, and by building them into the product as early as the size of the user base justifies the effort. This also gives us the confidence to abandon our developing solution for something better if it isn’t performing as we’d hoped. Involving users in the development cycle early can also help – users are more forgiving of work in progress when they are part of its inception and growth.

 

We can develop the best understanding of our customers’ pains by beginning the development cycle with an exploratory research phase. This allows us to get to the root of the problem and discourages us from rushing to a suboptimal solution. IDEO’s human-centred design framework provides some useful techniques for doing this, such as having customers map their journey through the process, or observing the journey directly, taking note of any unnecessary cognitive overhead and the behaviours of our “power users”.

 

The research phase may also take the form of a design sprint, where inexpensive prototype solutions are validated by observing how customers interact with them. Be sure to meet with every possible user at this stage. Not only will users at different levels in the workstream be concerned with different tasks; they may also have different working styles which the UX will need to accommodate. This can seem like a large upfront time investment, but it’s far less costly than waiting until after UAT to learn that the chosen solution doesn’t meet the customers’ needs.

 

What do we do when we don’t have the luxury of conducting a lengthy exploratory research phase? When pivoting, a startup or a product team needs to adapt its operations at short notice, sometimes resulting in the prioritisation of a completely new set of features. As an internal product team, our colleagues are our customers; we should therefore be well positioned to meet with them early and often. When we don’t, we develop false assumptions about where the process bottlenecks are. When gathering requirements, don’t be afraid of asking “why” too often. On first asking, our customers might tell us what they think we want to hear, suggesting “quick wins” or solutions they believe are easy to pull off, rather than revealing their greatest pains. Persistence in our questioning will pay dividends.

 

Feature requests are, in theory, better supported by an internal development team than an outsourced one, and straightforward for us to act on because we can easily seek clarification. In practice, we need to consider the long-term costs of maintaining these features. Even simple estimation exercises like Josh Pigford’s build vs. buy calculator can be of help. More often than we’d like, resource constraints may mean that we’re not able to balance the local needs of our internal customer with the overall needs of the business. When that’s the case, it’s important for the health of the relationship to communicate why the work can’t be done at this time. Shared understanding and goals reduce tension between the teams and encourage us to review and update these priorities continuously.
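The arithmetic behind a build-vs-buy calculator can be sketched in a few lines. The figures and function names below are entirely illustrative; a real estimate would also account for opportunity cost, onboarding and switching costs.

```javascript
// Hypothetical build-vs-buy comparison -- all figures are made up.
// Building carries a one-off development cost plus ongoing maintenance;
// buying is a recurring licence fee.
function totalBuildCost(initialDevCost, annualMaintenance, years) {
  return initialDevCost + annualMaintenance * years;
}

function totalBuyCost(annualLicence, years) {
  return annualLicence * years;
}

// Example: £40k to build and £8k/year to maintain, vs a £15k/year
// licence, over a three-year planning horizon.
const years = 3;
const build = totalBuildCost(40000, 8000, years); // 64000
const buy = totalBuyCost(15000, years);           // 45000
const recommendation = build < buy ? "build" : "buy"; // "buy"
```

Even a toy model like this makes the conversation with stakeholders concrete: the crossover point at which building becomes cheaper is easy to read off by varying the horizon.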

 

If our tool doesn’t require expertise to operate, then we’re able to easily dogfood our product across the organisation. This lets us find and form relationships with product-minded users who can identify problems which we may have become blind to when designing and building. Take advantage of this, remembering that the managers of most consumer products don’t have this luxury! Developing these relationships by holding “open office hours” increases the quality and quantity of feedback we receive.

 

Once the tool has been built, how do we ensure that product development continues smoothly? Having the development team focus early on the infrastructure necessary to support continuous delivery allows us to launch and begin gathering feedback as early as possible, and to keep a tight, iterative development cycle. When done well, we can reap the same benefits from practicing agile with our internal tool development as with our consumer products. MVPs are a great way to accelerate learning, but we shouldn’t be duped into thinking that it’s acceptable to produce sub-standard features, believing that they can be “improved incrementally” because we have only our colleagues’ expectations to manage. The launched product should consist of the minimum set of features required to deliver value, but each of those features needs to meet some previously agreed standards.

 

When planning, it’s important to be mindful of how our users will onboard. We’re familiar with the notion that “good design needs no instructions”, but even refined technical operational processes require some training. To save time and effort, training for our tools could take the form of a webinar which can be made available online for later access. Announcing the initial launch internally and continuing to meet frequently with customers can both help drive adoption, and announcing subsequent feature releases can help new workflows bed in. Make all of the feedback received easily accessible to engineers, for example through a dedicated Slack channel or integration. Above all, celebrate as a team when users are delighted.

 

In summary, it’s easy for us to become complacent or misguided when we’re designing for our colleagues. We know their organisation, its mission and its roadmap. We know their titles, respective roles and working environment. We may therefore assume that we know what’s best for them, and worse, we won’t make the time to validate those assumptions. Instead, if we do our internal customers the same courtesies as we would our flagship product users, but acknowledge when to treat them differently, we stand a much better chance of delivering the best possible outcome.


About skin colour authoring

Part of our MeModel development process involves skin colour matching. We have to match our 3D avatars to a photographic reference. We have attempted to do this automatically in the past, but as the lighting process became more complex, the results were no longer good and it required a lot of manual tweaking. In effect, we needed to manually author the skin colour, but writing parameters by hand and trying them out one at a time is a tedious process. That’s why we decided to create an interactive tool so we could see the result immediately and iterate quickly.

The first choice we made was the platform: the browser. If we wrote this tool for the web, then we could share it immediately with remote teams. It’s a zero-install process, and therefore painless for the user.

We wrote a prototype that would use a high-resolution 2D canvas, and transform all the pixels in simple for-each loops. However, this was far from interactive. For our images, it could take a couple of seconds per transform, which is not very pleasant when adjusting parameters with sliders. You could try to parallelise those pixel loops using Javascript workers, for a 2 or 3-fold speed increase. But the real beast for local parallel processing is your GPU, giving us in this case more than a 100-fold speed increase.
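A minimal sketch of that slow CPU-side approach (the function name and the tint operation are illustrative, not our actual transform): it walks the flat RGBA array of the kind returned by `getImageData`, touching every pixel in a loop.

```javascript
// CPU-side per-pixel transform over a flat RGBA array, like the one
// returned by canvas.getContext('2d').getImageData(...).data.
// For a high-resolution image this loop body runs millions of times
// on every slider change, which is why it felt so sluggish.
function tintPixels(pixels, rGain, gGain, bGain) {
  const out = new Uint8ClampedArray(pixels.length);
  for (let i = 0; i < pixels.length; i += 4) {
    out[i]     = pixels[i]     * rGain; // red
    out[i + 1] = pixels[i + 1] * gGain; // green
    out[i + 2] = pixels[i + 2] * bGain; // blue
    out[i + 3] = pixels[i + 3];         // alpha left unchanged
  }
  return out;
}
```

The GPU version performs the same arithmetic, but for all pixels in parallel inside a fragment shader.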

So we decided to make the canvas a WebGL canvas. WebGL gives you access to the GPU in your machine, and you can write small programs for it to manipulate all pixels of the image in parallel.

Quick introduction to rendering

Forward rendering

The traditional programmable rendering pipeline is something that in the computer graphics jargon is referred to as forward rendering. Here’s a visual summary,

Forward rendering pipeline

Before you can render anything, you need to prepare some data buffers with your vertex positions and any parameters you may need, which are referred to as uniforms. These buffers need to be in an area of memory that your GPU can access. Depending on your hardware, that area could be the same as the main memory, or a separate graphics memory. WebGL, which is based on the OpenGL ES 2.0 API, provides a series of functions to prepare this data.

Once you have the data ready, you have to provide two programs to the GPU: a vertex shader and a fragment shader. In OpenGL/WebGL, these programs are written in GLSL, and compiled at run time. Your vertex shader will compute the final position and colour of your vertices. The GPU will rasterize the vertices for you (this part is not programmable), which is the process of computing which pixels the given geometry will cover. Then, your fragment shader program will be used to decide the final pixel colour on screen. Notice that all the processing in both the vertex and pixel/fragment shaders is done in parallel, so we write programs that know how to handle one data point. There’s no need to write loops in your program to apply the same function to all the input data.
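For the curious, here is roughly what such a shader pair looks like in WebGL. This is a generic minimal example, not the shaders from our tool: the GLSL source lives in Javascript strings and is compiled at run time.

```javascript
// A minimal WebGL 1 (GLSL ES 1.00) shader pair, held as Javascript
// strings. Generic illustration only -- not our production shaders.
const vertexShaderSource = `
  attribute vec3 aPosition;
  uniform mat4 uModelViewProjection; // space transform, passed as a uniform
  void main() {
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
  }
`;

const fragmentShaderSource = `
  precision mediump float;
  uniform vec4 uColour; // set by the application per draw call
  void main() {
    gl_FragColor = uColour;
  }
`;

// Run-time compilation looks like this (gl is a WebGL context):
//   const vs = gl.createShader(gl.VERTEX_SHADER);
//   gl.shaderSource(vs, vertexShaderSource);
//   gl.compileShader(vs);
```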

A traditional vertex shader

There are basically two things that we compute in the vertex shader:

  • Space transforms. This is how we find the position of each pixel on screen. It’s just a series of matrix multiplications to change the coordinate system. We pass these matrices as uniforms.
  • Lighting computations. This is to figure out the colour of each vertex. If we are working in a linear colour space then, given two vertices, the interpolation of pixel colours that happens during rasterization is correct, because irradiance is additive.
A traditional vertex shader
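The space-transform step can be sketched on the CPU to make the arithmetic concrete. Matrices are written row-major below for readability (WebGL’s own uniform arrays are column-major); this is illustrative, not shader code.

```javascript
// Apply a 4x4 transform matrix (flat array of 16, row-major) to a
// homogeneous vertex [x, y, z, w]. A vertex shader chains several of
// these: model -> world -> view -> clip space.
function transformVertex(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[row * 4 + col] * v[col];
    }
  }
  return out;
}

// A translation by (2, 3, 4) applied to the origin:
const translate = [
  1, 0, 0, 2,
  0, 1, 0, 3,
  0, 0, 1, 4,
  0, 0, 0, 1,
];
const p = transformVertex(translate, [0, 0, 0, 1]); // [2, 3, 4, 1]
```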

Both the space transforms and lighting computations can be expensive, so we prefer doing them per vertex, not per pixel, because there are usually fewer vertices than pixels. The problem is that the more lights you try to render, the more expensive it gets. Also, there’s a limit on the number of uniforms you can send to the GPU. One solution to these issues is deferred rendering.

Deferred rendering

The idea of deferred rendering is simple: let’s defer the lighting & shading computation until a later stage. It can be summarized with this diagram,

Deferred rendering pipeline

Our vertex shader will still compute the final position of each vertex, but it won’t do any lighting computation. Instead, we will output any data that will be needed for lighting later on. That’s usually just the depth (distance from the camera) of each pixel, and the normal vectors. If necessary, we can reconstruct the full 3D position of each pixel in the image, given its depth and its screen coordinates.
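That reconstruction can be sketched for a simple symmetric perspective camera; the function and parameter names here are illustrative assumptions, not part of our pipeline.

```javascript
// Reconstruct a view-space position from a pixel's stored depth and its
// normalised device coordinates (ndcX, ndcY in [-1, 1]). Assumes a
// symmetric perspective projection; tanHalfFovY and aspect describe the
// camera, and viewDepth is the positive distance along the view axis.
function reconstructViewPosition(ndcX, ndcY, viewDepth, tanHalfFovY, aspect) {
  const x = ndcX * tanHalfFovY * aspect * viewDepth;
  const y = ndcY * tanHalfFovY * viewDepth;
  return [x, y, -viewDepth]; // the camera looks down -z in view space
}

// The screen centre at depth 5 sits straight ahead of the camera:
const centre = reconstructViewPosition(0, 0, 5, 0.5, 16 / 9); // [0, 0, -5]
```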

As I mentioned earlier, irradiance is additive. So now we can have a texture or a buffer in which to store the final irradiance value, and just loop through all the lights in the scene, accumulating each light’s contribution into that texture.
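The accumulation can be sketched per pixel in plain Javascript, using simple Lambert (diffuse) directional lights. The shading model and names are illustrative, not our production lighting.

```javascript
// Lambert diffuse term: cosine of the angle between the surface normal
// and the direction to the light (both unit vectors), clamped at zero.
function lambert(normal, lightDir) {
  const d = normal[0] * lightDir[0] + normal[1] * lightDir[1] + normal[2] * lightDir[2];
  return Math.max(0, d);
}

// Because irradiance is additive, we can loop over the lights and keep
// summing their contributions -- exactly what the deferred pass does
// for every pixel in parallel.
function shadePixel(normal, albedo, lights) {
  let irradiance = 0;
  for (const light of lights) {
    irradiance += light.intensity * lambert(normal, light.direction);
  }
  return albedo.map(channel => channel * irradiance);
}
```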

Skin colour authoring tool

If you’ve followed along so far, you may see where this is going. I introduced deferred rendering as the process of deferring lighting computation to a later stage. In fact, that later stage can happen on a different machine if you want. And that’s precisely what we have done. Our rendering server does all the vertex processing, and produces renders of the albedo, normals, and some other things that we’ll need for lighting. Those images are retrieved by our WebGL application, which does all the lighting in a pixel shader. The renders we generate look like this,

MeModel GBuffer

With these images generated by our server, the client needs to worry only about the lighting equations. A series of sliders connected directly to the uniforms we send to the shader gives us a very responsive, interactive tool to help us author the skin tones. Here’s a short video of the tool in action,

 

The tool is only about 1000 lines of pure Javascript, plus around 50 lines of shader code. There are some code details in the slides here:

(These slides were presented at the Cambridge Javascript meetup)

Summary

Javascript & WebGL are great for any graphics tool (not only 3D!): being on the web means zero-install, and using WebGL should give you interactive speeds. Also, to simplify your client code, remember that you don’t need to do all the rendering in the client. Just defer the things that need interaction (lighting, in our case).

 

Sometimes people move teams, and the reasons vary. It could be that a person’s skills are needed on a different set of work, or that to drive their personal development you need to provide other opportunities for them, or perhaps they are unhappy for some reason. Generally speaking, it’s pretty healthy for a business to move people about. But regardless of the reason, it’s going to affect your organisation: by changing someone’s line management and transferring them between teams, you are changing the dynamics of two teams. To help keep both the teams and the individual happy, you really need to understand what makes them tick.

If you’re the manager who is losing a person, you hopefully already have a working relationship with them but you may not have had the head space to figure out how this is going to affect your team. If you’re the manager that is receiving a new team member, you might already have a fair idea of what your team needs but you may not know the person that is joining you.

Depending on your organisation, the handover of line management may happen casually between two managers, with little to no structure. When I looked online, I couldn’t really find anything to help with this transition. At the very least, you should have a conversation with the line manager who is giving up or receiving the line report. Before you have the meeting, try to summarize (in writing) what the reasons for the move are. It’s good for everyone involved to be on the same page.

For the process of handing over itself, I came up with a few questions you can ask or answer to ease the movement of people around the organisation.

Happiness

  • What motivates / demotivates this person?
  • Are they happy at work?
  • Who are their friends at work?
  • Do they have any triggers for things that upset them or that they have particularly strong opinions about?
  • Are there areas where they’ve been particularly happy or excited to work on in the past?

Communication

  • What are they like on a 1:1 basis?
  • What are they like in a group setting?
  • Have they raised any concerns or needs over the move?

Work/Life

  • How well do they manage their work / life balance?
  • What do they do in their free time? Any hobbies?
  • How is their general well-being (both home and work)?
  • What are their regular working hours/days?
  • Do they have any “invisible” commitments outside work that we need to ensure they’re supported with?

Personal Development

  • What personal goals are they working towards?
  • Do they want or need training on anything?
  • What was their biggest success recently?
  • Have they struggled with anything recently?
  • What are they hoping to get from the change?
  • Do they have an existing buddy/mentor/coach – will that relationship change if they move?

Support / Management

  • How do they prefer to be supported/managed?
  • Are there any current issues or problems that need managing?
  • What did they like / dislike about their old team?
  • What are their strengths and weaknesses?
  • Do they have any preferences/strengths/issues working with particular technologies or environments?
  • When are they going to move and sit with the new team?

 

You might not have, or be able to get, all the answers to these questions, but finding out as much as you can will give you a head start in building rapport with your new line report. It’ll also help you settle them in and set them on the path to building new relationships with other people in the team. Try it out the next time you have people coming or going from your team.

 

For more on Metail’s culture and team, please visit www.metail.com/careers

While working in the games industry in Japan, I attended a seminar about brainstorming. The instructor, Professor Hidenori Satō, has written dozens of books on the subject. Unfortunately for many of us, his work does not seem to have been translated from Japanese, so here’s a brief introduction to his approach. I’ve translated the method he introduced to us as “Spark Ideas” (スパーク発想法).

At the beginning of his seminar, Prof. Satō led in with the following quote attributed to Thomas Edison: “Genius is one percent inspiration and ninety-nine percent perspiration”. I read this in two ways: first, even if you have ideas, they mean nothing if you don’t put in sufficient effort to realize them; second, you may have a sudden bright idea once in a while, but to generate ideas continuously you need to make an active effort – and probably use a tool like the one I describe here.

The brainstorming process

Often we try to think of ideas directly from a theme. Unless you’re in a moment of “inspiration” this is hard. For the “perspiration” moments, we need hints, like the one Newton got from an apple falling from a tree. The best way of getting these hints is by changing the Point of View. And that’s all you need to remember! (*^ω^*)

Brainstorming process: Spark Ideas

 

You can think of the Spark method as a “cheat sheet” with a series of keywords to help you get started with your brainstorming.

Points of View for Spark Ideas

Prof. Satō lists five basic Points of View (PoV) to get started with the exercise:

  1. State of affairs
  2. Point of view of the other
  3. Change character
  4. Change case
  5. Free of constraints

Working through these first five perspectives is usually enough, but there’s an extra list if you want to dig deeper:

  1. Triple ease
  2. Fun
  3. Positivation
  4. Indirection
  5. 3D expansion
  6. Similar case
  7. General case

I will describe all these points in detail later, but let’s jump first to how to do the exercise.

Brainstorming with the Spark Sheet

I recommend time-boxing the exercise. From experience, “State of affairs” is usually the most important PoV, so expect to spend at least twice as much time on that one as on the others. If you need tons of ideas, you may want to attempt all 12 PoVs, though it may take too long: even at only 10 minutes per PoV, it will take at least 2 hours to finish.

If you are doing the exercise with enough people you may choose to divide them into groups. You could assign a couple of PoVs per group, with one group dedicated solely to “State of Affairs”.

Once you have allocated time, and appropriately divided people into groups, you just need paper or a whiteboard. Write the PoV and the theme at the top, and draw 3 columns. The first column is for hints, which should come from the PoV. Write hints with as much detail as possible. The middle column is for direct ideas, coming straight from the hints. These too should be as detailed as possible.

The third column is for ideas from association, things related to an idea from the middle column. These can be something that follows on in order from the initial idea, is the exact opposite of the idea, or just things that go together. It helps to have a cheat sheet with keywords on one side. Your sheet of paper will look like this:

 

Spark Sheet about Pollution

 

Keywords to get started

I’ve tried to select a few keywords for each PoV so you can get started.

(1) State of affairs

  1. Status
    1. Where are we at? Contents, outlook, flow, related work, schedule, place
    2. Domain, level, quantity, season, important factors
  2. Target
    1. Characteristics, functionality, structure, processes, elements, type
    2. Materials, size, weight, color, design, definition
  3. Self
    1. Company values, our technology, our resources, strengths & weaknesses
    2. Budget, available developers, external opportunities
  4. Main point
    1. Reason for it, difficulties
    2. Essential conditions

(2) PoV of the other

  1. Target user(s), as detailed as possible
    1. Adult A, kid B, high-school girl C, athlete, married person, old lady from the neighbourhood, a person with 2 dogs, etc.
  2. Requests/needs, correct & detailed
    1. Can we ask/have we asked users?
    2. Person status, surroundings, circumstance, specialty, personality, real thinking, new thinking, needs, values, requests, dissatisfaction, worries, likes, opinions, feeling, goals, and conditions.

(3) Change character

  1. Think of another person and write down their name.
  2. How would they do things?
    1. Way of thinking, behaviour, performance, personality, strengths
    2. Ask them directly whenever possible!
  3. Examples
    1. Close person: colleague, boss, junior staff, from another team, related, from same industry, family member, friend, someone with similar/opposite interests, acquaintance, neighbour, professor, student of a higher/lower grade
    2. Famous/historical: Buddha, Jesus, Bono, Björk, John Lennon, Messi, Trump, Picasso, Tom Daley, Tom Adeyoola, Tom Jones, Tom & Jerry
    3. Role model: an expert, specialist, experienced person, aficionado, protagonist of a story/tale

(4) Change case

  1. Think of the theme and target, and find a similar topic
  2. Write down the contents (status, method, conditions) in detail
  3. Examples
    1. Direct method (visual): picture the theme in a broader sense, and give an example from intuition. E.g. “reduce stock” → “reduce ingredients”
    2. Indirect method (logical): think of the essence of the theme, and from it give another example. E.g. “reduce stock” → “reduce unnecessary stuff” (essence) → “reduce flavour additives”
    3. Close example (change from same class): e.g. “sell cameras” → “sell computers”
    4. Far example (different class): e.g. “sell cameras” → “become famous” (sell brand)

(5) Free of constraints

  1. Ideal
    1. What is the state we want to be in? (in detail)
    2. What’s the ideal, the best situation?
    3. Write down the “ideals” as Hints, and how to realize/get close to those as Ideas.
  2. Break norms
    1. Try to break the rules: odd techniques, silly things, nonsensical, fancy, dream, insane, not common sense, innovation, daring
    2. Write down as Ideas the way you’d get there.

Extra PoVs

  1. Triple ease
    1. Low-hanging fruit: do easy things first
    2. Divide-and-conquer: divide in several parts, and assign to different people/teams
    3. Reduction: reduce the quantity or targets. Make our lives easier.
  2. Fun
    1. Make it fun or interesting; add hobbies; gamify
  3. Positivation
    1. Turn upside-down; take the negatives and turn them into positives
    2. Find the positives and work on them
  4. Indirection
    1. Soften/cushion the blow;
    2. Make it indirect; mediation
  5. 3D Expansion: think of these 3 dimensions
    1. Space: expand the space, the area; change the place.
    2. Time: expand time. Think of the future and the past. Think in a longer span.
    3. Human: expand the human circle. Think of others. Get help from the crowd.
  6. Similar case
    1. Compare to similar cases
    2. Compare with cases that offer contrast
  7. General case
    1. Remove the particular case. Look at the forest, not at the tree.
    2. Think of the system

Rank the ideas

Once you’re done, you will end up with dozens of ideas. You may want to quickly eyeball the ones that lack detail or that are obviously flawed in some way and discard them to save time. Or you can focus on the ideas with many arrows coming to them. For the ideas you select to explore further, use a ranking mechanism, a simple example being the combination of impact and feasibility. For instance,

 

Theme: Do something about pollution — Best 3 ideas

  Idea                                                    Impact  Feasibility  Expectation (I×F)  Rank
  Bring leaflets to schools                                    2            4                  8     2
  Gather signatures in online petition about cigarettes        3            2                  6     3
  Create a game where each stage is about a pollutant          3            4                 12     1
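With dozens of ideas, this impact × feasibility ranking is easy to automate. A small sketch (the scoring objects and 1–5 scales are just the ones from the example above):

```javascript
// Rank ideas by expectation = impact * feasibility, highest first.
function rankIdeas(ideas) {
  return ideas
    .map(idea => ({ ...idea, expectation: idea.impact * idea.feasibility }))
    .sort((a, b) => b.expectation - a.expectation);
}

const ranked = rankIdeas([
  { name: "Bring leaflets to schools", impact: 2, feasibility: 4 },
  { name: "Online petition about cigarettes", impact: 3, feasibility: 2 },
  { name: "Game where each stage is a pollutant", impact: 3, feasibility: 4 },
]);
// ranked[0] is the game idea, with expectation 12
```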

 

Hopefully your Spark Sheets will be sufficiently detailed that they will also help you to start producing a plan for those selected ideas.

 

Conclusion

Generating novel ideas from nothing is a big challenge for some of us, but a bit of structure in a brainstorming session can make a huge difference. If you’re stuck, try using a tool like this and be surprised at the volume of ideas your brains can produce. And as in so many other things, the more you practice, the better you’ll get at it.

Metail is a UK fashion technology startup with offices in Cambridge and London. We use Clojure on both the front end and the back end, and currently have vacancies for both Clojure and ClojureScript developers in our Cambridge office. If you’re interested in functional programming and are keen to work with Clojure, we’d love to hear from you. You don’t need to be an expert: we’re a friendly company, and there are plenty of people here to help you learn and grow your skills.

Metail were early adopters of Clojure, with the first code going into production back in 2010. This was a Clojure implementation of our size recommendation algorithm. Back then we were using Java’s Spring Framework for server-side applications, with the Clojure code embedded into the Spring application as a Java class. Nowadays, our web services are implemented in Clojure using Pedestal and ring-swagger, and we are considering Lacinia for one of our newest applications. On the front end, we use ClojureScript with re-frame and a Material UI library. We also use Clojure to orchestrate cloud deployments (REPL-Driven DevOps) and for large-scale data processing on Amazon’s Elastic MapReduce clusters.

NonDysfunctional Programmers Meetup

William Byrd at Cambridge NonDysfunctional Programmers

Metail have long been supporters of the local tech community: I met CTO Jim Downing back in 2009, when he was running the local Clojure user group. I took over in 2013, and another Metailer, Rich Taylor, took up the reins this year. When Metail moved into a new city-centre office, we had space to host meet-ups ourselves, complete with data projector and excellent wi-fi. Now we are regular hosts of Cambridge NonDysfunctional Programmers, Data Insights Cambridge, Cambridge AWS User Group, DevOps Cambridge and Cambridge Gophers. As well as providing a free venue, Metail sponsors refreshments at many of these Meetups.

If you’d like to join this growing company and vibrant local tech community, check out our current vacancies. If you’re excited by the prospect of a Clojure career but don’t see your ideal job listed there, please drop us a line anyway – we’re always keen to hear from enthusiastic Clojure developers, and there may be an opening that hasn’t made it onto the website yet.

 

Most of the teams I’ve worked on have not been great at breaking down the work that lands on the development backlog. There are plenty of resources out there on what stories are, and multiple different ways of writing them. There are also articles about different story-splitting techniques. I couldn’t find anything out there about deliberately applying the theory, however, so I thought I’d write something.

Sometimes we just don't know how to begin breaking down work.

Before I dive in, let’s start with some definitions:

Epic – Also known as a “very big” story, that is unlikely to be completed in a single sprint or planning cycle. Would normally be broken down into several stories before being pulled onto a backlog. Epics can also be used for defining the main focus of a development team for a series of sprints.

Story – A smaller piece of work that can fit into a sprint or planning cycle, specifically aimed at providing value to the end user and/or the customer. It can be good to apply the INVEST criteria to any story that you’re writing, or at the very least include some acceptance criteria to define when a story is complete. Typically a story would be written in non-technical language to make it accessible for all interested parties to discuss. There are lots of different ways to write stories; here’s a link to some sample formats.

Task – Pieces of a story that describe how the story is going to be achieved. These are usually written by the people doing the work. They should generally be short lived and completed within the sprint or planning cycle.

Splitting patterns

When examining a story (or an epic), you’re going to need to break it down. This has already been written about in much more detail over here. To keep things simple, I’ve summarized some of the common patterns:

  • Most difficult bit first (What’s the hardest piece of the story to solve?)
  • Simple case first (What’s the simple solution to the story?)
  • Functional first (Make it work, worry about performance later.)
  • User flow (What’s the first thing the user does. What’s after that?)
  • Per use case (What does user A want to achieve? User B?)
  • Per operation (buy a subscription, change a subscription, cancel a subscription)
  • Spike (What questions do you need to answer in order to know more about the solution?)

This is great! We now have some lines of thought we can use to think about our stories. We need to practice using these deliberately, both to get used to applying them naturally and to ensure we don’t fall into the trap of using one or two of them over and over again.

Kata

Much in the same way as you’d practice different coding techniques in a coding kata, you can practice breaking down stories into tasks. There is a little preparation to do in advance of your task-breakdown kata.

Before you start, you’ll need to define some problems to split up. The problems should be large enough that they can be solved with multiple steps. Some examples might be:

  • Make a banana split
  • Go on holiday abroad
  • Buy something from an online store
  • Set up a new computer for a relative
  • Organize a party

Try to work out if your problem is an epic or a story. If you’ve picked an epic, can you split it and check each story against the INVEST mnemonic? You want to end up with a few stories that can be broken down during the task-breakdown kata.

Running the session

Split up into groups of 2-3. Participants should be anyone who needs practice breaking down stories into tasks. If you can, try to make sure there is a mix of disciplines breaking down the selected story.

Choose a splitting pattern from above or from elsewhere, then take 10-15 minutes to apply the pattern to break down one of the stories. After you’ve applied the pattern, try to think around the edges and work out what was missed. What else needs to be included to make the story “complete”?

If you have lots of participants, compare and contrast results with other groups in the kata. Once you’re done you can try splitting the same problem again using a different pattern, or use the same pattern on a different problem. Some problems will lend themselves better to one type of splitting pattern than others. Just keep practicing and you’ll get better at knowing which pattern to use for which kinds of problem.

Metail provides a yearly training budget for all employees, consisting of both time and money, but we found that many employees were not making the most of this opportunity. We decided to look into why this was and work on increasing the uptake. One idea we had was around hackathons – pairing people up for small hackathons sounds more fun than just reading a book by yourself!

One-to-ones help uncover trends

From my one-to-ones I found that the main reason people were not using the training days was that they weren’t sure what to do with them. If people were going to a conference or working toward a qualification or certification, it was easy to identify the time spent on that as ‘training’. But what if you’re already qualified, there isn’t a conference on this quarter, or you want to spend some time testing out new technology?

Crew Hackathons

I came up with the idea of running some small hackathons within the crew and suggested we could use training days for these. The idea is that people will pair for a couple of days to create something new. This aligns with our company values: being in this together, actively learning, trust to deliver, and making a difference. But I also wanted to push the joy/excitement axis up a bit as well (see previous post).

Because people never want an extra meeting, we decided to schedule this as a special retrospective session. We kept the happiness axis exercise and collected a few actions based on that, but we spent most of the hour running a hackathon proposal exercise outlined below:

  • Everyone tries to write down a couple of ideas for 2-day projects they would like to work on, and spends a couple of minutes getting others excited about them.

  • Vote on proposals. Everyone has 2 votes to pick a project (other than their own). Only projects with 2 or more votes survive.

The projects do not need to be directly related to work, but we should learn something from them. The idea is to spend one day together working out designs, and another day creating a prototype or something usable.
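The survival rule above can be sketched in a few lines of code. This is a hypothetical tally – the project names and voters are invented, and we assume self-votes were already disallowed when the votes were collected:

```python
from collections import Counter

def surviving_projects(votes, min_votes=2):
    """votes: list of (voter, project) pairs.
    A project survives with min_votes or more votes."""
    tally = Counter(project for _, project in votes)
    return {project for project, n in tally.items() if n >= min_votes}

votes = [
    ("ana", "chatbot"), ("ben", "chatbot"),
    ("cat", "game"), ("ben", "game"),
    ("ana", "wiki-bot"),  # only one vote: does not survive
]
print(surviving_projects(votes))  # {'chatbot', 'game'}
```

Grouping similar proposals together before the vote (as we ended up doing) just means merging their entries before tallying.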

I explained the exercise a week in advance, so people had time to think of projects before the meeting.

Deciding on projects

The exercise went well and everyone seemed quite excited. It turns out that a few people had similar ideas, so we grouped some projects together. We then drew a matrix so everyone could cast their votes. This is how the whiteboard looked:

Hackathon Matrix

The top row of the matrix has people’s initials, with the number of available training days written below.

We (a team of seven) decided to work on 3 projects. The projects with more votes will have a couple of hackathons associated with them – this is particularly useful if we can’t all get together at the same time. We can also start thinking at this stage about any materials, e.g. books, that we need to buy before we get started.

Scheduling the hackathons

We have the ideas, the people, and most importantly, the excitement, so now it’s just a matter of scheduling these hackathons. If a person is working for the full 10 days in a sprint, they instantly become candidates for any of the hackathons they showed interest in. If we can find someone else interested in the same project who has enough training days available, we pair them together and schedule it in the sprint.

Some of these projects have more than two people interested – in this case we have a 1-hour meeting with everyone interested in it, to come up with a plan and decide how we’ll split the work. For instance, if it was a project that involved four developers and two different platforms, one group could work on one platform one sprint, and the other group could do the other platform the following sprint.

Conclusion

Small hackathon exercises can be helpful for people who don’t know what to do with their training days. Other people can bring ideas that suddenly open the curiosity box, and we can turn the learning exercise into a shared experience. Just as it is, it’s a valuable experience, but some of the projects can even turn into something bigger that brings additional value to the company. I think it’s probably worth running this exercise every quarter, to disconnect from your main duties and refresh a bit. If you can’t find the time to run this, just pack it inside one of your retrospectives. You can always use the happiness axis for a swifter retrospective, and move straight away into finding topics for the hackathons.

Laptop and headphones: important tools for remote work

Fully remote working: my workstation for a whole week

In the first week of December I ran an experiment: our entire team worked remotely, away from the two main offices. The aim of the venture was for everyone to feel exactly what our remote employees feel every day. As a result, we hoped to improve communication, both within the team and external to it.

Our team is probably one of the most distributed engineering teams in Metail. While most of our engineers are in the Cambridge office, a few work remotely. We’re lucky enough that they are in the same time zone as headquarters. Nonetheless we still suffer a lot of the pains that distributed teams feel, especially when the rest of the company is more used to working between the two offices, based in Cambridge and London.

Our hypothesis was that we would probably miss out on a lot of incidental “water cooler” conversations. We also guessed that communication with the rest of the organisation would be somewhat difficult.

Before Kick off

Before we rolled out the experiment, I had to lay some groundwork. Firstly I checked with our crew director (we work in teams called ‘Crews’ at Metail) and the other engineering managers that this wouldn’t impact anything crucial. In the week before the start date, we communicated widely across multiple channels that our team would be entirely remote. I also spoke to the team to hear their concerns. It certainly helped to draw up a few guidelines. In summary, this is what we came up with:

  • We use Slack by default and Skype as a backup
    • We say when we are at our keyboards and when we’re not
    • Everyone is to use a headset and have their webcam turned on
  • In general we try to ensure that we are over-communicating
  • If there is a problem or someone can’t be reached, people are to come to me (the engineering manager) or our crew director.

There were a few practical things to take care of as well. We made sure our contact details were added to all the meeting rooms’ Skype accounts. We also checked we could all access internal resources via the VPN. Just to be sure, we ran a couple of trial calls to make sure Slack and Skype would work for us (they did!).

So how did it go?

We were able to anticipate most of the problems we hit; there wasn’t too much of the unexpected. It was much harder to run work past people on a casual, in-person basis. Attempting to do so required both parties to mic up and jump on a Slack call.

Meetings with the wider company were where we struggled the most. We noticed that people in Metail occasionally talk over one another, and because of this it was hard to participate in guilds and other group meetings. Usually it meant one person in the office would drown out another who was further away from the room mic. We also noticed that if there were multiple people in the office participating in a meeting, remote workers often ended up ignored. In some cases it was difficult to observe the body language that would normally cue a person to start talking. From time to time it was hard to hear people in the office. Sometimes this was because of problems with the audio equipment, other times it was because of background office noise.

We encountered a few minor technical issues as well. Some of these things were easy to fix, like tweaking rules on a firewall. Others were harder to diagnose, like why a developer was seeing Jenkins time out under load, preventing him from seeing when builds were finishing. A couple of times we had issues with Slack where one person in the group couldn’t see another, but these were easily fixed by leaving the call and re-entering it.

Generally speaking the engineers found it easier to focus on the work they were attempting to do. On the other hand it was pretty difficult for me and our crew director, being the main communications interface between the team and the rest of the company.

I also discovered that my house gets really cold during the day if I don’t put my heating on! I made a special effort to be a little more social, going out to dinner and to the pub for much-needed social interaction.

Conclusions

On the Monday following the experiment we ran a retrospective where we recorded our experiences. On the whole, the world didn’t end and the company kept working. We recognise that it was a pretty short experiment, lasting only a week, but we still found it valuable. One thing we noticed was that by announcing the experiment in advance, we certainly affected how the rest of the company interacted with us. I can now say I have a much better understanding of the pain our remote colleagues go through every day. I’m definitely going to be reminding people in the office about it in the future.

Learnings

If you engage with remote employees or are planning to in the future, here is what I’d recommend:

  • When you are having a meeting with remote people and it’s possible for everyone attending to have their own mic, make sure they do.
  • Let remote employees know if you are starting a meeting late.
  • Respect meeting etiquette and allow all attendees to fully express themselves. Don’t interrupt before they’ve finished speaking.

Scrum retrospectives are a great opportunity to sit down with your team and make everyone’s voice heard. It’s about collective process improvement, with everyone getting involved and owning part of that process; it’s also about feelings, and about empathizing with each other.

A typical scrum retrospective

If you have a formula that works for your team, it’s good to repeat it: your team members will know what to do without having to repeat the agenda every week. However, it can be beneficial to try different things from time to time.

The most important source of ideas is probably the one-to-one meetings. Some team members may actually find the retrospectives boring or not particularly useful, and they may have ideas to improve them. Try some of them, discard things that do not work, and keep the things that people get more involved with.

We started our retrospectives with classical good/bad clustering: we draw two axes, time on the horizontal and goodness-to-badness on the vertical, and people write down 2 positive things and 2 negative things, each scored from +5 to -5, and stick the post-its on the whiteboard. Every week, a different person tries to cluster the post-it notes into different categories. Sometimes the time scale is a good indicator of a cluster, but we usually re-cluster them into more meaningful categories. Then that person tries to explain what went well and what went badly during the sprint, asking the relevant people to explain their tickets. The important thing is trying to identify actions based on those notes, pretty much working out the start-stop-continue from that set. However, we don’t do this exhaustively. We focus on the immediately actionable items, the biggest wins and fails.

Some suggested we were wasting too much time on this, and we tried creating a thread on Slack for every sprint where people could write down thoughts as events happened during the sprint, and others would react with emoji. The thread died out after a few sprints, and we realized it was better to think retrospectively during the allocated time slot and get physically involved, i.e., standing up and writing things down.

Happiness axis

Our company wanted to measure happiness somehow. We discussed the option of sending out regular anonymous surveys to measure it, but many in the team were put off by having to fill in surveys online. So I decided to do something during the retrospective time and get people directly involved.

I selected 6 feelings, or axes: 3 positive ones juxtaposed with 3 negative ones. Humans are complicated and full of emotions, so I tried to pick things that I consider actionable in the work environment. This is our list:

  • Enjoyment (did I work on something I enjoy?) vs. Boredom (most of the stuff was tedious and/or boring)
  • Sense of accomplishment (I got that thing done!) vs. Despair (I’m getting nowhere)
  • Powered up (learned something useful!) vs. Powered down (I feel I’m losing my skills)

I think it’s important to keep it small, though. You don’t want to model the whole brain!

During the retrospective, we draw these axes on the whiteboard. Then everyone stands up and casts up to 3 votes on any of the axes:

  • You don’t need to use all the votes (abstentions are counted as well)

  • You can vote in opposite axes (half of the sprint was really fun, but the other half was boring)

  • Preferably, add equally-spaced ticks, so we can draw a spider graph in the end.
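The tallying behind the spider graph is trivial, but a small sketch makes the rules concrete. This is a hypothetical implementation – the axis names match our list, but the ballots are invented:

```python
from collections import Counter

AXES = ["enjoyment", "accomplishment", "powered up",
        "boredom", "despair", "powered down"]

def tally(ballots, max_votes=3):
    """ballots: one list of axis names per person, up to max_votes
    each; casting fewer votes counts as abstention. Returns the
    total votes per axis, including zeros, ready to plot."""
    totals = Counter({axis: 0 for axis in AXES})
    for ballot in ballots:
        assert len(ballot) <= max_votes, "too many votes"
        totals.update(ballot)
    return dict(totals)

ballots = [
    ["enjoyment", "enjoyment", "boredom"],  # opposite axes are fine
    ["accomplishment"],                     # two abstentions
]
print(tally(ballots))
```

The zero-filled totals matter: an axis nobody voted for still gets a point at the origin of the spider graph.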

And this is how it looks in the end:

Happiness Axis

Actions based on happiness axis

Here are some of the recipes we have for actions based on the result of the happiness axis exercise:

  • … if joy is low:

    • everyone should have at least one ticket they would enjoy working on in the next sprint;

  • … if boredom is high:

    • promote teamwork (e.g. pair programming), on the premise that conversation will make tedious tasks less painful;

  • … if not powering up:

    • plan for new things in the next sprint;

    • schedule training time;

  • … when powering down:

    • discuss during the retrospective and/or one-on-ones which abilities are not being put to use. Try to find a place for them;

    • reduce time spent in repetitive tasks;

  • … when there’s no sense of accomplishment:

    • create smaller tickets with a well-defined goal;

    • try a “Demo-Driven Development” approach (this is a name I came up with): small features that are always “demoable”;

  • … when people feel they are going nowhere:

    • align the tickets with the company/crew objectives, so the goal is well defined;

    • identify blockers and deal with them ASAP (e.g. build issues).

Simple data visualization

In order to track changes in the team mood over time, we also write the votes down in our Wiki. We keep 3 tables, one for each pair of opposite axes, where each data point is just the date, the value on the positive axis, and the value on the negative one. Confluence can conveniently plot these for you:

Happiness data
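A minimal sketch of those wiki tables, with invented dates and values; computing the net score per sprint (positive minus negative) is what makes the despair/accomplishment cycles easy to spot:

```python
# One table per pair of opposite axes; each row is
# (date, votes on the positive axis, votes on the negative axis).
happiness = {
    "enjoyment/boredom": [
        ("2016-09-05", 4, 1),
        ("2016-09-19", 2, 3),
    ],
    "accomplishment/despair": [
        ("2016-09-05", 1, 4),
        ("2016-09-19", 5, 0),
    ],
}

def net_scores(table):
    """Net mood per sprint: positive votes minus negative votes."""
    return [(date, pos - neg) for date, pos, neg in table]

print(net_scores(happiness["accomplishment/despair"]))
# [('2016-09-05', -3), ('2016-09-19', 5)]
```

In this made-up data, the swing from -3 to +5 is exactly the two-sprint-feature pattern described below.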

From the graphs we noticed things like cycles in despair and accomplishment, which we attributed to features that take a couple of sprints to complete: the first sprint is full of despair, but when the feature finally gets completed in the following sprint, the sense of accomplishment spikes up.

Written down in words, it seems like a complex exercise, but it’s something that can be done really quickly, so we’ve kept this as part of our retrospectives.

Conclusion

There is no “correct” way of running scrum retrospectives, but the important thing is that they are dynamic and not too long. Also, make sure that people get involved in them. You probably know more or less what people feel from one-to-ones, but it’s important that they share some of that with everyone else in the team. At the very least, try to record the actionable needs. The happiness axis exercise is quick; it takes the scare out of surveys and turns them into something a bit more fun. But if you feel stale, try doing something completely different from time to time, like brainstorming for ideas that people would like to work on with others. I’ll come back to that in a future post.

We welcomed back the Cambridge AWS User Group to the Cambridge office for its eighth Meetup. This one was focused on Big Data. This is something I spend a lot of my time working on here at Metail, and I was keen to give a talk. Having been put on the agenda, I was nervous when 65 people signed up – the office capacity!

We had an exciting line-up of speakers, if I do say so myself, with two talks about Redshift and one about building a big data solution on AWS. Peter Marriot gave the first talk, an introduction to Redshift demonstrating how to create a cluster, log into it, load some data and then run queries. Most of this was a live demo and it went very smoothly. He was very enthusiastic about Redshift and demonstrated its speed at querying large data sets. I think his enthusiasm for Redshift came across as well measured and not just ‘oo shiny new tool’, as he did a good job of relating this to his own experience of querying large data sets, highlighting trade-offs. The main one was that Redshift seems to have a constant minimum overhead of a second or two on queries, where MySQL/PostgreSQL would be sub-second. This makes it difficult to support scenarios where multiple users make lots of small queries and expect real-time results, because the queue becomes backlogged. The general belief is that the slow query response is because of the overhead of the leader node orchestrating the query; possibly a single-node cluster wouldn’t have the problem. Something to put on the experiment list 🙂

The train chaos mentioned in the first Tweet meant our speaker from AWS, David Elliot, arrived late but still in plenty of time for his talk. It reminded me of my own experiences trying to get to my AWS London Loft talk back in April! His talk was an excellent live demo on setting up a tracker and exploring the collected data. The exploration was done using Spark, which is a managed install on EMR, and also Redshift and QuickSight. This was pretty similar to the demo I went to at the AWS Loft. It is impressive how quickly all this can be set up and how much power is available through these tools. I liked the demo, and David had some good input on some of the questions asked of both me and Peter. We’ve blogged about this kind of setup and how it compares to our own here. We’ve changed our setup a little to be more event driven, using S3 notifications and SQS queues, but it’s still a good comparison. I see I blurred the lines a bit in my post about the use of Kinesis Firehose and Kinesis. The demo used Kinesis Firehose, which writes in batches; however, you have control over when the buffer is flushed, and David chose 60s to keep things flowing. You can use Kinesis streams, as David mentioned, if you want more of a streaming solution.

I was the final speaker on the agenda and my talk was titled “Why The ‘Like’ In ‘Postgres Like’ Matters”. I went through the decisions we’ve made when using Redshift and why. There were two main ones which I focused on. The first was whether to choose a cluster with a large amount of storage but limited compute, with the aim of storing all the data; or to have more CPU and less storage for faster querying but having to drop old data. We decided to keep all our data available in Redshift, and progressed through clusters made up of an increasing number of compute nodes until we had to switch to a cluster made up of a few dense storage nodes to keep costs under control. The second major decision was the schema design. Unfortunately, having never worked with columnar data stores, we went with a normalised schema layout which would have worked well on a row store such as PostgreSQL. We did use distribution and sort keys appropriate for the tables; however, the highly normalised data often had different sort orders or distribution keys per table, which made joins very slow. Since then we’ve done some more detailed research and more testing. Now that we have a much larger data set and less CPU, our tests highlight schema and query problems much more clearly, which has led to a much more efficient schema design. We have denormalised a lot of our data, and with common distribution and sort keys for the tables, joins no longer need to sort data nor pull data from elsewhere in the cluster. As David said, Redshift optimisation is all about the schema design.
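As a rough illustration of the co-location idea, here are two hypothetical DDL strings (table and column names are invented, not from our actual schema). Both tables distribute and sort on the same key, so a join on that key needs no redistribution or re-sorting:

```python
# Hypothetical Redshift DDL: both tables use user_id as DISTKEY
# and the leading SORTKEY column, so rows that join live on the
# same node, already in join order.
orders_ddl = """
CREATE TABLE orders (
    user_id    BIGINT,
    order_date DATE,
    total      DECIMAL(10, 2)
)
DISTKEY (user_id)
SORTKEY (user_id, order_date);
"""

users_ddl = """
CREATE TABLE users (
    user_id BIGINT,
    country VARCHAR(2)
)
DISTKEY (user_id)
SORTKEY (user_id);
"""

def distkey(ddl):
    """Crude extraction of the DISTKEY column from a DDL string."""
    return ddl.split("DISTKEY (")[1].split(")")[0]

# A co-located join requires the same distribution key on both sides.
assert distkey(orders_ddl) == distkey(users_ddl) == "user_id"
```

When the keys differ, Redshift has to redistribute or broadcast one side of the join across the cluster at query time, which is the slowdown we were seeing with the normalised layout.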

Overall we’ve found Redshift a very powerful tool, and like any tool it has a learning curve. As with all AWS services I’ve used, the features are in place to allow you to change your mind and hack around. Most of this is due to the ease with which you can take snapshots and restore them to different-shaped clusters.

Finally here’s me presenting:

It looks dark but it was still the hottest day of the year!

Thanks to @CambridgeAWS for the photos, to Peter and David for their talks, and to Jon and Stephen for organising the Meetup. We’re looking forward to seeing everyone at the ninth Meetup here at Metail on Tuesday 25th October.