Diversity in the Songkick technology team


“Respect each other and celebrate diversity” is one of Songkick’s core values. Groupthink can be the enemy of innovation, and there’s a ton of research [1, 2] into group dynamics and what a diverse set of experiences can bring to the table. So as well as creating a welcoming and supportive environment for everyone to work in, valuing diversity helps us to build better products, too.

The Pipeline Problem

You will probably have heard of this – for the uninitiated, it is a way of explaining the lack of diversity in the tech industry through problems earlier in the “pipeline”, e.g. gender biases which manifest themselves in schools and affect study choices, which in turn affect the number of applicants for jobs at companies like ours. This is a hard, widely acknowledged problem that Songkick can’t solve on its own (unfortunately!), although there are things we can do both to attract diverse candidates and to eliminate potentially biased treatment once they’re in our hiring pipeline.

Outside of our immediate hiring needs we can also contribute to a healthy debate on the issues and promote technology as a career choice to all.

So let’s start with widening the pipeline…

However good our hiring process is, it will make no difference if we don’t fix the top of the funnel. We can be honest about this: we don’t get many women candidates. There are some great organisations such as Women Who Code who have a strong presence in London. We have hosted WWC meetups in our office and are in talks to host more in the future, and we have also advertised on their job board. We have a hiring presence at conferences such as Lead Developer that care about diversity and have strong Codes of Conduct. Women are of course not the only minority group in tech, so there’s more we can do here.

We don’t have a preferred list of companies or universities to hire from, and indeed not all of our tech team studied at university.

A note on advertising

When reaching out to candidates, language is important. We run our job posts through linting tools such as joblint and Textio – you will never see Songkick hiring for a ninja or a wizard. These measures are quick and easy to implement, so there’s really no excuse for skipping them. Our job listings include only the essential experience and engineering concepts you need to do the job, rather than a wishlist of “nice to have” skills.
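As a rough illustration of what these linters do, here is a minimal sketch of a deny-list check. The word list and function name are illustrative only; this is not joblint’s actual rule set, which covers far more (tone, gendered language, clichés).

```go
package main

import (
	"fmt"
	"strings"
)

// flagTerms returns the deny-listed terms that appear in a job post.
// The deny list here is illustrative, not joblint's real rules.
func flagTerms(post string, denyList []string) []string {
	var flagged []string
	lower := strings.ToLower(post)
	for _, term := range denyList {
		if strings.Contains(lower, term) {
			flagged = append(flagged, term)
		}
	}
	return flagged
}

func main() {
	denyList := []string{"ninja", "rockstar", "wizard"}
	fmt.Println(flagTerms("We're hiring a JavaScript ninja with wizard-level skills!", denyList))
	// prints: [ninja wizard]
}
```

A check like this slots easily into CI, so a problematic job ad never reaches the board in the first place.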

We’ve got a more diverse set of candidates – now what?

Once we have made contact with candidates we have some extra safeguards in place to avoid bias where it is practical to do so. For example, the coding exercise will be anonymised by the hiring manager, redacting all personal information before forwarding on to someone else to mark. We maintain a flexible interview schedule to accommodate our candidates as best as possible.

We base the technical interview around scenarios similar to those you’d find day-to-day in your job at Songkick, rather than expecting you to recall an obscure section of a university computer science course or solve brain teasers [3, 4]. We’re looking for good all-round problem-solving ability and a willingness to ask questions.

Not everyone has the same preferences when it comes to ways of working, so during interview we’ll look to see if we can accommodate these rather than insisting that new hires conform to our existing style.

Day-to-day support

As in everything we do, we follow the principle of Continuous Improvement. In our Tech Team Offsite we had an action point to make our social activities less alcohol-based, allowing more people to take part and feel welcome if drinking was not their thing. We also wanted to schedule more of these activities in working hours so those with more commitments during the evening were not excluded.

We keep diversity and respect part of the conversation at work. We are always looking at what other companies are doing to tackle these issues, and encourage our team to call out any behaviour which is not respectful.

As we grow, we are developing a clear set of progression routes to support different interests within the technology team, from managerial to pure technical.

There’s still work to be done…

So what more could we do? We have a hiring council to bring more ideas to the table. One task is trying to find more diverse advertising channels. While there are many organisations for under-represented groups in technology, not all of them have corresponding job boards.

Our awesome team has been proactive in outreach, and we’ve had some success with random shout-outs on Slack channels. We are also looking at developing internships in the technology team and supporting graduates of intensive web development courses (not only universities).

I was hired from Silicon Milk Roundabout, and there was an element of chance to it. I knew who Songkick were but didn’t approach them because I didn’t know Ruby (which they had listed on their stand), but a proactive member of the team approached me and assured me this was ok, and here I am a year and a half later. We now no longer specify language requirements in our ads, and hopefully this has helped in attracting a more diverse set of candidates since I started.

I’m really proud to work in a team that values diversity. If you would like to be part of this, please get in touch.

References and further reading

1. How diversity makes us smarter by Katherine W. Phillips
2. Why diversity matters by McKinsey and Company
3. Why we don’t hire programmers based on puzzles, API quizzes, math riddles, or other parlor tricks by DHH
4. Interview with Laszlo Bock, senior vice president of people operations at Google, The New York Times

When to repeat yourself

As a developer, one of the first concepts you will be introduced to is DRY (Don’t Repeat Yourself): if logic is re-used around your codebase, it often makes sense to bring it into a central place where it can be standardised and easily maintained. Later in your career you might learn the hard way that there is also value in duplication and redundancy for the right reasons.

At Songkick we spent some time learning the hard way, so now the value of centralising common logic versus promoting weak coupling is something we actively explore and re-evaluate in our architectural decisions.

Tracking an artist versus tracking an event – be DRY on concepts, not code!

On the face of it, the concept of tracking an artist might look similar to tracking an event. For this reason we originally leveraged the same table and used a polymorphic association.

Tracking an artist vs tracking an event

Looking closer, though, these two things are conceptually quite different. Tracking an event implies attendance on a specific date, and has a concept of “interested in/might go”. There is no equivalent granularity for tracking an artist (though maybe there should be an “I would consider seeing them under the right circumstances” option – look out for that in the future!).

When we split our domain into services some years later, we had to run migrations on this table to separate our attendance data from our artist tracking data – and separating logic out is much harder than combining it. Code-wise, the complexity added to handle both tracking concepts as the use cases evolved outweighed any benefit of the early abstraction.
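The before-and-after shapes might look roughly like this. The field names are illustrative, not our actual schema; the point is that the polymorphic row carries fields that are meaningless for one of its target types, while the split models each capture exactly one concept.

```go
package main

import "fmt"

// Before the split: one polymorphic row type served both concepts,
// so event-only fields sat unused on artist rows.
// (Field names are illustrative, not our actual schema.)
type Tracking struct {
	UserID     int
	TargetType string // "Artist" or "Event"
	TargetID   int
	Attendance string // only meaningful when TargetType == "Event"
}

// After the split: each model captures exactly one concept.
type ArtistTracking struct {
	UserID   int
	ArtistID int
}

type EventAttendance struct {
	UserID  int
	EventID int
	Status  string // "going" or "interested"
}

// splitEventRow shows the kind of routing the migration had to do
// for each polymorphic row.
func splitEventRow(t Tracking) EventAttendance {
	return EventAttendance{UserID: t.UserID, EventID: t.TargetID, Status: t.Attendance}
}

func main() {
	old := Tracking{UserID: 1, TargetType: "Event", TargetID: 42, Attendance: "going"}
	fmt.Println(splitEventRow(old)) // prints: {1 42 going}
}
```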

See further reading for advice on avoiding early “optimisations” (such as abstractions).

Duplication of client models – DRY not necessarily suitable when weak coupling is required

Our front ends implement their own client models when reading data from a service, rather than making use of a client library.

Fewer dependencies make deployment easier

A client library that provided standard client models may well result in less code duplication but it creates a coupling between client and service. Upgrading the client library would require a new deployment of each frontend, even if only a single frontend benefitted from the change – with our approach, we can deploy our frontends without this dependency.

If you’ve read any of our other blog posts, you’ll know we don’t like restrictions (or directives) on when we deploy!

Easier to reason about

Each frontend uses only the resources it needs and we can track our data dependencies to a service endpoint easily without the client library abstraction sitting in the middle.
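One nice property of per-frontend client models is that each one can declare only the fields its pages render; anything extra in the service response is ignored. A sketch with encoding/json (the struct and field names are illustrative, not our real API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// UpcomingEvent is the kind of minimal client model a frontend might
// declare: only the fields that page actually renders. Extra fields in
// the service response are simply ignored by encoding/json.
// (Names are illustrative, not our real API.)
type UpcomingEvent struct {
	ID    int    `json:"id"`
	Title string `json:"title"`
}

func decodeEvent(body string) (UpcomingEvent, error) {
	var ev UpcomingEvent
	err := json.Unmarshal([]byte(body), &ev)
	return ev, err
}

func main() {
	// The service can add fields freely without breaking this frontend.
	body := `{"id": 7, "title": "The National at Brixton Academy", "venue_id": 99, "popularity": 0.83}`
	ev, _ := decodeEvent(body)
	fmt.Println(ev.ID, ev.Title)
}
```

Because the coupling is only to the fields a frontend actually uses, the service can evolve its responses without forcing a redeploy of every consumer.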

Duplication of components – DRY benefits can be negligible with fast rates of change

The shared “upcoming event” component, used on venue, metro area and user pages

The artist “upcoming event” component

We unashamedly copy code from one front-end HTML component to another, making each one self-contained with no dependency on other components. Front-end components are changed and iterated on quickly, and we usually want changes to affect a single component on a page. Any shared components are mapped out by the designer and PM so we know they will change together – if requirements differ, we create a new component.

Because we make our components as dumb and atomic as possible, copying is low-cost and low-risk, and we can avoid complicated branching logic.


Optimising for code re-use might not be the right approach – consider that use cases might change, that unnecessary dependencies can be created, and that at the line level code can become unreadable with an overly DRY mindset. Be DRY, but not too DRY.

Further reading

Let’s make the Android community better

Romain Piel and I decided to submit a talk about the Android community to several conferences, covering how much it has improved, the major problems it still has, and how we can all collaborate to make it better. At one of the conferences our submission was categorised as “weird”.

Although I am passionate about fighting the lack of diversity in the tech industry, talking about it at a conference scared the bejesus out of me. How can you call out the different issues there are in the industry without pointing fingers and making people feel defensive? And besides that, I’d never given a talk before. Does doing a non-technical talk as my first talk mark me as a bad developer? Will it affect my career in the future? Will I forever have a stamp on my face saying I can only talk about diversity and not about other technical problems?

I had to set all my fears aside to prepare this talk. But I have the feeling the Android community is not yet open to this type of talk. At Android conferences there are no talks about non-technical subjects: no talks about impostor syndrome, the lack of diversity, the lack of empathy, harassment, or the other problems the industry and community currently face.

So what I am trying to do with this blog post is state why we need to start talking about the biggest problems our community has, and start addressing them – and why I think we need to start educating each other on what causes these problems and how we can solve them together.

Because there is not just one solution. Everyone has had different experiences, encountered different difficulties, and will have different suggestions on how to approach and solve a problem.

So here’s why I think talks about our community should be supported and promoted, not seen as “weird”.

Because we need to acknowledge the problems

“The first step in solving any problem is recognizing there is one”. Will McAvoy, The Newsroom.

The first step to recovery is being honest and acknowledging there is a problem. That step is common to every problem we want to solve, even though it’s usually associated with twelve-step programmes such as AA.

Acknowledging the problem is the hardest part. Many will argue that things were way worse in the past and that we should just be satisfied. I agree that we have come very far, but the problems are not completely solved and there is still a long way to go. The best example of this is diversity in the industry. Yes, we are more diverse, but women still hold only 26 percent of all tech jobs, and Black and Latino people only 4 and 5 percent respectively.

Lena Reinhard – Works on my machine, or Problem exists between Keyboard and Chair

As we can see from the diagram, our community is not isolated from the rest of the world: we are not separate from society, from the tech industry, or from the companies we work for. All these different pieces have an impact on how our community behaves, how we act, how we make decisions, and what we consider wrong or normal.

So, unfortunately for us, we need to understand the problems each and every one of these pieces has, and understand how they impact us.

And the piece that brings all of it together is us. And sadly we are human, which means we are not perfect, at all. So we also need to acknowledge our own flaws, biases, privileges, etc.

If we don’t start acknowledging these problems, we won’t have any incentives to start fixing them, so they will remain unfixed, and they will become the new normal. And I am sure no one wants that.

We need to start acknowledging these problems to help the people impacted by them – so that we don’t leave them behind or push them out of the community, and so that they feel welcome and part of it.

Because we need to fix these problems

“Right now, most of the people who are already working on debugging this industry are members of underrepresented groups in tech. That’s a bit like telling the QA team in your company that they have to fix the bugs they find themselves, because you have better things to do”. Lena Reinhard – Works on my machine, or Problem exists between Keyboard and Chair

It is really easy to ignore a problem, in fact it is the easiest thing to do, to turn your back to it. But that is not going to make it go away, it will probably make it worse (believe me, trying to ignore a kitchen fire doesn’t help make it go away).

K.C. Green, “On Fire”

We need to start addressing the problems and start thinking about solutions. If we are all aware of what is happening, and what the issues are, we can work together towards solutions to fix them (you know, a thousand brains work better than one).

If we ignore the problem we are limiting ourselves to a small proportion of people, we are limiting our point of view, our understanding of the world, our ideas and solutions.

If we ignore the problem we are closing the community to new people with different backgrounds and experiences, with fresh and different ideas, who probably have more to contribute than we do. We would be preventing new ideas from coming along and making Android – and its community – better.

It is not going to be an easy process, nor a fast one, and it is ongoing. New problems will come along, and we need to be open to acknowledging and solving them.

Because we are a community

“Sense of community is a feeling that members have of belonging, a feeling that members matter to one another and to the group, and a shared faith that members’ needs will be met through their commitment to be together” (McMillan, 1976).

If we are truly a community we should be supporting each other to be the best we can be. We need to be aware of each other’s opinions and needs, be aware of what makes people leave the community or even the industry, make new members feel welcome and meet their expectations.

We need to be supportive of each other, and empower those who don’t have the confidence to speak up.

Because other communities are doing it

The truth of the matter is that, as a community, we are way behind other tech communities when it comes to talking about social problems in the industry, about the non-technical skills needed to become a better developer, or about psychology.

Ruby conferences, Python conferences, PHP conferences, lead developer conferences, open tech conferences, JavaScript conferences, and even iOS conferences! These are just a few of them; there are many more communities that openly talk about these issues at their conferences. Just check the reference list to get an idea.

So, if all these communities are doing it, why are we still so far behind?

References and good videos you should watch

Songkick from a Tester’s point of view

Earlier this year we wrote about how we move fast but still test the code.

This was recently followed by another post about Developer happiness at Songkick which also focuses on the processes we have in place, as they provide a means to a productive working environment.

How does this all look from a tester’s point of view?

I have been asked a few times what a typical day looks like for a tester at Songkick. This post is about the processes that enable us to move fast, seen from a tester’s point of view, and about how testing is integrated into our development lifecycle.

Organising our work

Teams at Songkick are organised around products and the process we follow is agile. Guided by the product manager and our team goals, we organise our sprints on a weekly basis with a prioritisation meeting. This allows us to update each other on the work in progress and determine the work that may get picked up during that week.

Prioritisation meetings also take into consideration things such as holidays and time spent doing other things (meetings, fire fighting, pairing).

On top of that we check our bug tracker, to see if any new bugs were raised that we need to act on.

Everyone in the company can raise bugs, enabling us to constantly make decisions on how to improve, not only our user facing products, but also our internal tools.

We also have daily stand ups at the beginning of each day, where we provide information on how we are getting on, and any blockers or other significant events that may impact our work positively or negatively.

Every two weeks we also hold a retrospective to assess how we are doing and what improvements we can make.


The kick-off

Sabina gave a great definition of the kick-off document here. Each feature or piece of work has a kick-off document. We try to always have a developer, product manager and tester in the conversation. More often than not we also include other developers, or experts, such as a member from tech ops or a frontline team. Frontline teams can be anyone using internal tools directly, members from our customer support team, or someone from the sales team.

Depending on the type of task – is it a technical task or a brand new feature? – we use a slightly different template. The reasoning behind this is that a technical, non-user-facing change will require a different conversation than a user-facing change.

But at the end of the day this is our source of truth, documenting, most importantly, the problem we are trying to solve, how we think we will do it, and any changes that we make to our initial plan along the way.

The kick-off conversation is where the tester can ask a tonne of questions. These range from questions about the technical implementation and potential performance issues, to what the risks are and what our testing strategy should be. Do we need to add a specific acceptance test for this feature, or are unit and integration tests enough?

A nice extra section in the document is the “Recurring bugs” section.

The recurring bugs section consists of questions to make sure we are not re-implementing something we have already solved, along with bugs we see time and time again. These can range from field lengths and timezones to nudges about considering how we order lists. What it doesn’t include is every bug we have ever seen. It is also not static: the section can evolve, with certain questions or notes removed and others added.

Having a recurring bugs section in a kick-off document is also great for on-boarding as you start to understand what previously has been an issue and you can ask why and what we do now to avoid it.

What’s next?

After the kick-off meeting, I personally tend to familiarise myself with where we are making the change.

For example, say we are adding a new address form to our check-out flow when you purchase tickets. I will perform a short exploratory test of this in our staging environment or on production. Any time we do exploratory testing, we tend to record it as a time-boxed test session in a lightweight format. This provides a nice record of the testing that was performed and may also lead to more questions for the kick-off document.

Once the developer(s) working on the feature have had a day or so, we do a test modelling session together.

Test Modelling

Similar to the kick-off this is an opportunity for the team to explore the new feature and how it may affect the rest of the system.

It consists of a short collaboration session, with at least a developer and a tester and, if applicable, the design lead and/or another expert, where we mind-map through test ideas, test data and scenarios.

We do this as it enables the developer to be testing early before releasing the product to a test/production environment, which in turn means we can deliver quality software and value sooner.

It is also a great way to share knowledge. Everyone who comes along brings different experiences and knowledge.

Test Model for one of our internal admin pages

The collaborators work together to discuss what needs checking and what risks need exploring further.

We might also uncover questions about the feature we’re building. Sharing this before we build the feature can help us build the right feature, and save time.

For example, we recently improved one of our admin tools. During the test modelling session, we discovered a handful of questions, including some around date formats, and also default settings. By clearing these questions up early, we not only ensure that we build the right thing, but also that we build it in the most valuable way for the end user.

In this particular example, it transpired that following a certain logic for setting defaults would not only save a lot of time, but also greatly reduce the likelihood of mistakes.

The team (mainly the developer) will use the resulting mind map for testing.

It becomes a record of test scenarios and cases we identified and covered as part of this bit of work.

As we mainly work in continuous deployment or delivery (depending on the project and the risk of the feature), testers often test in production using real data, so as not to block the deployment pipeline.

This has the advantage that the data is realistic (it is production data after all), there are no discrepancies in infrastructure, and performance can be adequately assessed.

Downsides can be that if we want to test purchases, we have to make actual purchases, which creates an overhead on the support team, as they will need to process refunds.

Testers and Bugs

Any issues we find during our testing on production or a staging environment (if we are doing continuous delivery), will be logged in our bug tracker and prioritised.

Some issues will be fixed straight away and others may be addressed at a later date.

As mentioned above, anyone at Songkick can raise issues.

If the issue relates to one of the products your team(s) are working on, you (as the tester on those teams) will be notified. It is often good to verify the issue as soon as possible, ask for more information, and assess whether it is blocking the person who reported it – or whether it is even an issue at all.

We do have guidelines saying not to bother logging blockers but to come to the team directly; that may not always happen, though, so as testers we always keep an eye on the bugs that are raised.

Want to know more?

In this post I described some of the common things testers at Songkick do.

Depending on the team and product there may also be other things, such as being involved in weekly performance tests, hands on mobile app testing, talking through A/B tests and coaching and educating the technology team and wider company on what testing is.

If any of that sounds interesting, we are always looking for testers. Just get in touch.

SlackMood – Analyse your teams happiness via Slack Emoji usage

We had a hack day in the office a few weeks back, and I decided I wanted to build something with Slack. Hack days give us a chance to work with people outside of our product teams, work with different and new technologies, as well as trying out fun ideas we’ve had.

Like any sensible company, we use Slack to help us collaborate and improve communication, but we also use it to share cat gifs (we have an entire channel) and a whole host of default, aliased and custom emojis. Based on this, I wondered if I could use our emoji use to gauge the average mood of the whole company. And so SlackMood was born.


SlackMood showing that 85% of our current Slack use is neutral or positive.

My first step was figuring out how to get a feed of messages across our whole Slack. I’d already decided to build it in Golang, and fortunately some clever person had already built a Golang library for Slack, saving me a huge amount of work. I registered a new bot on the Slack developer site and started hacking.

Unfortunately I quickly ran into an issue. I wanted to get the RTM (real-time message) feed of every channel, but it turns out bot accounts can’t join channels unless they’re invited. I could see 3 solutions to this:

  1. Create a real Slack user with an API key (I decided Finance wouldn’t be happy with this)
  2. Add my own API key alongside the bot, use the API to have me join all the channels, invite the bot and leave – annoying everyone in the company
  3. Use the message history APIs to periodically scrape the channels.

I decided to go with 3, as it seemed the simplest to implement.

The actual code for this was relatively simple:

for _, c := range channels {
  if c.IsArchived {
    continue // no point scraping archived channels
  }
  hp := api.NewHistoryParameters()
  hp.Count = 1000
  h, err := s.Api.GetChannelHistory(c.ID, hp)

  // (logging calls reconstructed: logrus-style WithFields)
  if err != nil {
    log.WithFields(log.Fields{
      "error":     err,
      "channelId": c.ID,
      "channel":   c,
    }).Warning("Could not fetch channel history")
  } else {
    log.WithFields(log.Fields{
      "channel":   c.Name,
      "channelId": c.ID,
      "messages":  len(h.Messages),
    }).Debug("Got channel history")
  }
}

It then passes the message object into a function that extracts the emoji counts.

func ParseEmoji(messages []api.Message) {
  re := regexp.MustCompile(`:([a-z0-9_\+\-]+):`)

  for _, m := range messages {
    msgId := fmt.Sprintf("%s-%s-%s", m.Timestamp, m.Channel, m.User)

    // Reactions get their own IDs, based on the user who posted them.
    for _, reaction := range m.Reactions {
      emojiList.AddEmoji(reaction.Name, m, fmt.Sprintf("%s-%s-%s", msgId, m.User, reaction.Name))
    }

    // Emoji typed inline in the message text.
    foundEmoji := re.FindAllStringSubmatch(m.Text, -1)
    for _, em := range foundEmoji {
      emojiList.AddEmoji(em[1], m, msgId)
    }
  }
}
It uses both a regular expression on the message text and iteration over the reactions.
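If you are curious how that capture group behaves, here is the pattern in isolation (the helper name is my own, not from the project):

```go
package main

import (
	"fmt"
	"regexp"
)

// extractEmoji pulls emoji names out of a Slack-style message,
// i.e. the text between pairs of colons like :tada:.
func extractEmoji(text string) []string {
	re := regexp.MustCompile(`:([a-z0-9_\+\-]+):`)
	var names []string
	for _, m := range re.FindAllStringSubmatch(text, -1) {
		names = append(names, m[1]) // m[1] is the capture group, without the colons
	}
	return names
}

func main() {
	fmt.Println(extractEmoji("great gig last night :tada: :guitar:"))
	// prints: [tada guitar]
}
```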

I’d decided to use BoltDB for the backend storage – maybe not the best idea, as I think a relational datastore like SQLite would have been better suited, but Bolt was a technology I’d never used before, so it seemed interesting. We generate a message ID from the base message, then the reactions all have their own IDs based on the user who posted them. These are all stored in BoltDB as message ID -> details, where details is a struct describing the emoji:

type Emoji struct {
  Name    string
  SeenAt  time.Time
  Channel string
  User    string
}

Now we’ve got a list of emojis and their timestamps, we can go through and assign each one a rating of either positive, negative or neutral. Fortunately, some of our team had already built a spreadsheet of emoji sentiment analysis for a previous hack project (turns out, we love emojis) with positive-to-negative rankings (1 to -1):


Our emoji rankings spreadsheet, obviously.

With our emoji ranks loaded into a struct array, we can go through and analyse the score of our current listed emoji.

func GetMood(emoji []*Emoji) Mood {
  m := Mood{}

  for _, e := range emoji {
    for _, r := range ranks.EmojiRanks {
      if r.Name == e.Name {
        switch r.Rank {
        case 1:
          m.PositiveCount += 1
        case 0:
          m.NeutralCount += 1
        case -1:
          m.NegativeCount += 1
        }
        m.TotalCount += 1
      }
    }
  }

  m.Positive = percentage(m.PositiveCount, m.TotalCount)
  m.Negative = percentage(m.NegativeCount, m.TotalCount)
  m.Neutral = percentage(m.NeutralCount, m.TotalCount)

  return m
}

(N.B. Looking back at this now, I realise a map of emoji name -> mood would have been much better than a double loop, but this was like six hours in and I was keen to get something working.)
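For what it’s worth, that map-based version might look something like this. It is a sketch with made-up rankings and my own helper name, not the real spreadsheet data; the point is the single map lookup per emoji.

```go
package main

import "fmt"

// scoreEmoji tallies positive/negative/neutral counts using one map
// lookup per emoji instead of scanning the rank list each time.
func scoreEmoji(seen []string, rankByName map[string]int) (pos, neg, neu int) {
	for _, name := range seen {
		rank, ok := rankByName[name]
		if !ok {
			continue // unranked emojis don't count towards the mood
		}
		switch rank {
		case 1:
			pos++
		case -1:
			neg++
		default:
			neu++
		}
	}
	return pos, neg, neu
}

func main() {
	// Illustrative rankings only; the real ones live in the spreadsheet.
	ranks := map[string]int{"tada": 1, "rage": -1, "thinking_face": 0}
	fmt.Println(scoreEmoji([]string{"tada", "tada", "rage", "thinking_face"}, ranks))
	// prints: 2 1 1
}
```

Building the map once up front turns the scoring pass from O(emoji × ranks) into O(emoji), which matters once you are scraping a whole workspace.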

Now we know the mood of all the emojis, calculating the graph just involves iterating through all the seen emojis and storing them in a map of date->mood. The GetMood function above works on a list of emojis, so we just bucket the emojis by the selected time period.

Due to storing all the emoji in Bolt and not being able to do proper filtering, we first filter by the time period we care about, then divide this up.

type Mood struct {
  Positive      float32
  Negative      float32
  Neutral       float32
  PositiveCount int32
  NegativeCount int32
  NeutralCount  int32
  TotalCount    int32
  Time          time.Time
  TimeString    string
}

func FilterEmoji(from time.Time, to time.Time, emoji []*Emoji) []*Emoji {
  var emj []*Emoji
  for _, e := range emoji {
    if e.SeenAt.After(from) && e.SeenAt.Before(to) {
      emj = append(emj, e)
    }
  }
  return emj
}

func GraphMood(over time.Duration, interval time.Duration) []Mood {
  var points []Mood

  now := time.Now().UTC()
  dataPointCount := int(over.Seconds() / interval.Seconds())
  // Truncate "now" down to the nearest interval boundary.
  endTime := time.Unix(int64(interval.Seconds())*(now.Unix()/int64(interval.Seconds())), 0)
  periodEmoji := FilterEmoji(endTime.Add(over*-1), endTime, AllEmoji())
  for i := 0; i < dataPointCount; i++ {
    offset := int(interval.Seconds()) * (dataPointCount - i)
    startTime := endTime.Add(time.Second * time.Duration(offset) * -1)

    m := GetMood(FilterEmoji(startTime, startTime.Add(interval), periodEmoji))
    m.Time = startTime
    m.TimeString = startTime.Format("Jan _2")
    points = append(points, m)
  }

  return points
}

GraphMood returns a struct array which we can just JSON encode and feed into Chart.JS to get the nice visualisation above.
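The encoding step really is that direct with encoding/json. A trimmed-down point struct (my own cut-down version, not the full Mood above) makes the JSON shape clear:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// MoodPoint is a cut-down version of the Mood struct, just to show
// the JSON shape that gets handed to the charting code.
type MoodPoint struct {
	Positive   float32
	Negative   float32
	Neutral    float32
	TimeString string
}

func encodePoints(points []MoodPoint) (string, error) {
	out, err := json.Marshal(points)
	return string(out), err
}

func main() {
	out, _ := encodePoints([]MoodPoint{
		{Positive: 64, Negative: 11, Neutral: 25, TimeString: "Jul 1"},
	})
	fmt.Println(out)
	// prints: [{"Positive":64,"Negative":11,"Neutral":25,"TimeString":"Jul 1"}]
}
```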

All in all, it was pretty fun, but the whole project contains a lot of terrible code. If you want, check it out on GitHub here.

Other stuff I would have liked to add:

  • Most positive/negative person
  • Most used emoji
  • Biggest winker ?

Maybe next hack day.

P.S. if you fancy working somewhere with regular hack days, in a team which has a pre-prepared spreadsheet with emoji sentiment analysis, Songkick are hiring a variety of technology roles at the moment. So come work with us, we have a 64% SlackMood happiness rating™.

Developer happiness at Songkick

Back in November 2014 I was on a plane back from Vancouver, where I’d left my job in the Visual Effects industry to return to my hometown, London, with the definite plan of trying something new and the vague idea that that thing would be working in a startup. In the year before that I’d developed an interest in lean, agile and the practice of experimentation and iteration as a way to navigate and progress through an increasingly complex world. Also, I really just thought it would be more fun to work on new stuff in a smaller company that cared about process and developer satisfaction. And I was right.

Songkick takes developer happiness very seriously. All the things that frustrated me working in my old team are age-old problems that have frustrated most developers at some point. Thankfully there are lots of leaders, resources and movements in this area that have sought to address this and at Songkick we are always looking to improve things to make working as fun and pain-free as possible.

I’m going to give you a run-down of some of the things that have increased my developer happiness – this is not an exhaustive list!

The kick-off document – the canonical source of truth!

The standard “As a user… I want to… So that…” user story that starts the kick-off really gives the motivation and the context of the feature we are trying to build. This document acts as a reference point throughout the development process. We map out the scope of the feature with the product manager and designer, and the tester gets involved to help get us thinking about possible bugs and risks early on in the process. Certain questions might be raised but not answered during the kick-off, so the document is updated throughout to reflect our learnings and any new decisions that have been made. Once we’ve kicked off, we can dive in and start building, even if there are still some unanswered questions.

It’s a very simple idea but you might be surprised how many companies don’t do this. In my previous jobs this had consisted of scribbling a vague idea of what a user wanted in a notebook, a degree of strategising as to how that might be achieved, and then, one long-running feature branch later, deploying to production test-free and hoping there was no comeback (there invariably was: most likely a bug, or a disagreement about what it was supposed to do in the first place).

Kick-offs ensure that we build the right thing, no more and no less.

Test modelling

For non-trivial features we will also schedule a test modelling session using mind-maps with the tester to think of all the possible failure scenarios and work out a test strategy. Some of these things will be common to all features of this type, others will require specific business or technical knowledge. For internal tools we invite members of the relevant operational team to get that extra context. Mind-mapping really takes you out of the low-level detail of the implementation and makes you think about the real-world impact of the feature you’re writing, and usefully it forces you to think about all the uncomfortable things that could go wrong ahead of time.

Written test coverage

We write tests at various levels of abstraction so that we can avoid bugs and articulate our business logic. This ensures we can spend the vast majority of our time developing features and not fixing bugs.


Pair programming

We use pair programming as a way of collaborating on features, sharing knowledge and, of course, onboarding new developers. The benefits and drawbacks of pairing are well documented, but in short it acts as a real-time code review and focusing aid, whilst making you tired quite quickly! We don’t pair on everything: it’s good to vary between this and some deep-thinking solo programming time.

Dean and me, clearly having fun.


Fast iterations and continuous deployment

Our continuous deployment pipeline means it’s a one-step process (and a matter of minutes) to deploy a change to production. Thanks to the test coverage we build as part of a feature (and previous coverage that act as regression tests), it’s also pretty safe – no sign-off required. It’s great to see your code out in the wild as soon as it’s built and to be able to act on feedback quickly. It also means you don’t lose context in the meantime.

Getting involved

Developers at Songkick are fully involved in shaping not only our products but also our processes and values. We have councils for, among other things, security, hiring strategy and API design, which anyone can join, and our tech values are workshopped by the whole team. You will often find us at conferences, attending or organising meetups, and writing blog posts such as this one.

Catalog: Increasing visibility for our Android UI tests

Getting automatic feedback from tests is extremely important when building any kind of software. At Songkick, our code is tested, validated, and reported through Jenkins CI.
The pipeline around our Android app includes static analysis, unit tests and instrumentation tests running on real devices and emulators.
Previously, we used square/spoon to run our instrumentation tests. It did a great job, with support for screenshots and LogCat recordings. But recently we had to drop it: it conflicted with another library, LogCat recording stopped working, and it took too long to run all of our tests (around 15 minutes for the entire suite).
So we moved to the official connected{Variant}AndroidTest tasks. Despite being much faster (around 8 minutes for the same suite), we were missing the logs: when a test failed, we couldn’t check the logs for more details. So we started re-running our tests and losing trust in them.

Introducing Catalog

Catalog is a Gradle plugin for Android. When added to your project, it runs with connected{Variant}AndroidTest tasks. At the end of the tests, it generates a report per device in app/build/outputs/androidTest-results/:

[Screenshot: example Catalog test report]

Why should I use it?

  • Catalog is built on top of the Android build tools; we are not introducing any new test tasks
  • It will give you more confidence in your tests
  • It is lightweight (basically 8 simple classes)
  • It is fast; it won’t add any significant overhead to your build time

Get started

To include the plugin in your project, just add these lines in your app/build.gradle:

buildscript {
    repositories {
        jcenter() // repository hosting the plugin (illustrative)
    }
    dependencies {
        classpath 'com.songkick:catalog:0.1.1'
    }
}

apply plugin: 'com.android.application'
apply plugin: 'com.songkick.catalog'

How does it work?

Catalog consists of two Gradle tasks:

  • recordConnected{Variant}AndroidTest: runs before connected{Variant}AndroidTest and connects to adb to record the LogCat output for the current application.
  • printConnected{Variant}AndroidTest: runs after connected{Variant}AndroidTest, gathers the recorded logs and writes a .txt and an .html report into app/build/outputs/androidTest-results/.

Going forward

We are starting small with Catalog, but we would love suggestions and feedback. If you like the plugin, please create a pull request or post an issue. We have a few ideas to make it even more awesome, like:

  • show the status of the test (failure/success/ignored)
  • generate an HTML file listing all devices
  • add support for screenshots

Anything is possible, feel free to contribute: https://github.com/songkick/catalog

Ingredients for a healthy Android codebase

Getting started in Android development is pretty straightforward: there are plenty of tutorials and plenty of documentation provided by Google. But Google will teach you to build a tent, not a solid, sustainable house. As it’s still a very young platform with a very young community, the Android world has been lacking direction on how to properly architect an app. Recently, some teams have started to take the problem more seriously, with the shiny tagline “Clean architecture for Android”.

At Songkick, we had the chance to rebuild the Android client from scratch 7 months ago. The previous version was working very well, but the codebase had not been touched for almost 3 years, which left us with old practices, old libraries, and Eclipse. We wanted to set off in a good direction, so we spent a week designing the general architecture of the app, trying to apply the following principles from Uncle Bob’s clean architecture:

Systems should be

  • Independent of Frameworks. The architecture does not depend on the existence of a particular library. This allows you to use such frameworks as tools, rather than having to design your system around their limited constraints.
  • Testable. The business rules can be tested without the UI, Database, Web Server, or any other external element.
  • Independent of UI. The UI can change easily, without changing the rest of the system. A Web UI could be replaced with a console UI, for example, without changing the business rules.
  • Independent of Database. You can swap out Oracle or SQL Server, for Mongo, BigTable, CouchDB, or something else. Your business rules are not bound to the database.
  • Independent of any external agency. In fact your business rules simply don’t know anything at all about the outside world.

…and this is what we ended up with:

[Screenshot: overview of the app’s layered architecture]


Data layer

The data layer acts as a mediator between data sources and the domain logic. It should be a pure Java layer. We divide the data layer into buckets following the repository pattern. In short, a repository is an abstraction that isolates business objects from the data sources.

[Screenshot: the repository pattern in the data layer]

For example, a repository can expose a searchArtist() method, but the domain layer will not (and should not) know where the data is coming from. One day we could swap the data source from a database to a web API and the domain layer wouldn’t notice the difference.

When the data source is the Songkick REST API, we usually follow the format of the endpoint to know where data access belongs. That way we have a UserRepository, an ArtistRepository, an EventRepository, and so on.
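The repository classes themselves aren’t shown in the post; a minimal, hypothetical sketch of an ArtistRepository in plain Java (any names beyond those in the text are mine, not Songkick’s) could look like this:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// The domain layer only ever sees this interface; it has no idea
// whether artists come from the Songkick REST API or a database.
interface ArtistRepository {
    List<String> searchArtist(String query);
}

// One possible data source. Swapping this for an API-backed
// implementation is invisible to the domain layer.
class InMemoryArtistRepository implements ArtistRepository {
    private final List<String> artists = Arrays.asList("Radiohead", "Daft Punk", "Blur");

    @Override
    public List<String> searchArtist(String query) {
        List<String> results = new ArrayList<>();
        for (String artist : artists) {
            // Simple case-insensitive substring match, purely illustrative.
            if (artist.toLowerCase().contains(query.toLowerCase())) {
                results.add(artist);
            }
        }
        return results;
    }
}
```

The point is the seam: the interface is the only thing the rest of the system depends on.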

Domain layer

The role of the domain layer is to orchestrate the flow of data and offer its services to the presentation layer. The domain layer is application-specific; this is where the core business logic belongs. It is divided into use cases. A use case should not be directly linked to any external agencies, and it should also be pure Java.

Presentation layer

At the top of the stack, we have the presentation layer which is responsible for displaying information to the user.

That’s where things get tricky because of this class:

[Screenshot: the Android Activity class]

When I started developing for Android, I found that an Activity is a very convenient place where everything can happen:

  • it’s tied to the view lifecycle
  • it can receive user inputs
  • it’s a Context so it gives access to many data sources (ContentResolver, SharedPreferences, …)

On top of that, most of the samples provided by Google have everything in an Activity, so what could go wrong? If you follow that pattern, I can guarantee that your Activity will end up huge and untestable.

We decided to treat our activities and fragments as views and to make them as dumb as possible. The view-related logic lives in presenters that communicate with the domain layer. Presenters should only contain simple logic related to the presentation of the data, not to the data itself.
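As an illustration of this split (the class and method names here are hypothetical, not from the Songkick codebase), a dumb view behind an interface with the logic in a plain-Java presenter might look like:

```java
// The Activity/Fragment would implement this and do nothing but render.
interface ArtistView {
    void showArtistName(String name);
    void showOnTourBadge(boolean visible);
}

// Pure Java: testable without Android, Robolectric or an emulator.
class ArtistPresenter {
    private final ArtistView view;

    ArtistPresenter(ArtistView view) {
        this.view = view;
    }

    // Only presentation logic lives here; fetching the artist
    // would be delegated to a use case in the domain layer.
    void present(String name, boolean onTour) {
        view.showArtistName(name);
        view.showOnTourBadge(onTour);
    }
}
```

Because ArtistView is an interface, the presenter can be unit-tested with a fake view, no emulator required.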

Models vs. View models

This architecture moves a lot of logic away from the presentation layer, but there is one last thing we haven’t considered: models. The models we get from the data sources are very rarely what we want to display to the user, and it’s very common to do some extra processing just before binding the data to the view. We’ve seen apps with 300 lines of code in onBindViewHolder(), resulting in very slow view recycling. This is unacceptable: why add overhead on the main thread when you could move it to the same background thread you used to fetch the data?

In the Songkick Android app, the presentation layer barely knows what the original model is; it only deals with view models. A view model is the view representation of the content the data layer fetched. In the domain layer, each use case has a transformer that converts models to view models. To respect the clean architecture rules, the presentation layer provides the transformer to the domain layer, and the domain layer uses it without really knowing what it does.

So say that you have the following Artist model:

[Screenshot: the Artist model]

If we just want to show the name and whether the artist is on tour, our ArtistViewModel is as follows:

[Screenshot: the ArtistViewModel]

So that we can efficiently bind it to our view:

[Screenshot: binding the view model to the view]
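The screenshots showing the Artist model, the ArtistViewModel and the binding code aren’t reproduced here. Under the names the text uses, and with assumed fields (upcomingEventCount is my invention, not Songkick’s), a sketch of the transformation might be:

```java
// Model as fetched by the data layer (fields are illustrative).
class Artist {
    final String name;
    final int upcomingEventCount;

    Artist(String name, int upcomingEventCount) {
        this.name = name;
        this.upcomingEventCount = upcomingEventCount;
    }
}

// View model: exactly what the view needs, computed off the main thread.
class ArtistViewModel {
    final String name;
    final boolean onTour;

    ArtistViewModel(String name, boolean onTour) {
        this.name = name;
        this.onTour = onTour;
    }
}

// The transformer that the presentation layer hands to the use case.
class ArtistTransformer {
    ArtistViewModel transform(Artist artist) {
        return new ArtistViewModel(artist.name, artist.upcomingEventCount > 0);
    }
}
```

onBindViewHolder() then only reads viewModel.name and viewModel.onTour; no per-frame computation happens on the main thread.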


To communicate between these layers, we use RxJava by:

  • exposing Observables in repositories
  • exposing methods to subscribe/unsubscribe to an Observable that emits ViewModels in the use case
  • subscribing/unsubscribing to the use case in the Presenter
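The RxJava wiring itself isn’t shown in the post. As a dependency-free sketch of the same flow, a hand-rolled MiniObservable stands in for rx.Observable here, and upper-casing stands in for the model-to-view-model transform:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

// Tiny stand-in for rx.Observable, just to show the flow between
// layers; the real app uses RxJava's Observable and Subscription.
class MiniObservable<T> {
    private final List<T> items;

    MiniObservable(List<T> items) { this.items = items; }

    void subscribe(Consumer<T> subscriber) {
        for (T item : items) subscriber.accept(item);
    }
}

// Use case: consumes the repository's stream of models, applies the
// transformer (upper-casing here), and re-emits view models for the
// presenter to subscribe to.
class SearchArtistUseCase {
    MiniObservable<String> execute(MiniObservable<String> repositoryStream) {
        List<String> viewModels = new ArrayList<>();
        repositoryStream.subscribe(model -> viewModels.add(model.toUpperCase()));
        return new MiniObservable<>(viewModels);
    }
}
```

The presenter then subscribes to the use case’s observable and forwards each view model to the view.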


To structure our app we are using Dagger in the following way:

[Screenshot: the Dagger component/module setup]

Repositories are unique per application as they should be stateless and shared across activities. Use cases and presenters are unique per Activity/Fragment. Presenters are stateful and should be linked to a unique Activity/Fragment.

We also try to follow this advice from Erich Gamma:

“Program to an interface, not an implementation”

  • It decouples the client from the implementation
  • It defines the vocabulary of the collaboration
  • It makes everything easier to test


Most of the pieces in this stack are pure Java classes, so they should be ready for unit testing without Robolectric. The only bits that need Robolectric are the Activities and Fragments.

We usually prefer testing the presentation layer with pure UI tests using Espresso. The good thing is that we can just mock the data layer to expose observables emitting entities from a JSON file and we’re good to go:

[Screenshot: stubbing the data layer in an Espresso test]

Of course there are drawbacks to only testing the domain and presentation layers without checking compliance with the external agencies, but we have generally found tests to be much more stable and accurate with this pattern. End-to-end tests are also valuable, and we could imagine adding a separate category running through some important user journeys by providing the default sources to our data layer.


We’ve now been running the new app for 4 months and it has proved very stable and very maintainable. We’re also in a great place with good test coverage from both unit and UI tests, and the codebase scales well when it comes to adding new features.

Although it works for us, we are not saying that everyone should go for this architecture. We’re just at the first iteration of “Clean architecture” for Android, and are looking forward to seeing what it will be in the future.

Here’s a link to the talk I gave about the same topic: https://youtu.be/-oZswd1j5H0 (slides: https://speakerdeck.com/romainpiel/ingredients-for-a-healthy-codebase)


Uncle Bob – The Clean Architecture
Fernando Cejas – Architecting Android… the clean way (http://fernandocejas.com/2014/09/03/architecting-android-the-clean-way)
Martin Fowler – The repository pattern
Erich Gamma – Design Principles from Design Patterns

Move fast, but test the code

At Songkick we believe code only starts adding value when it’s out in production, and being used by real users. Using Continuous Deployment helps us ship quickly and frequently. Code is pushed to Git, automatically built, checked, and if all appears well, deployed to production.

Automated pipelines make sure that every release goes through all of our defined steps. We don’t need to remember to trigger test suites, and we don’t need to merge features between branches. Our pipeline contains enough automated checks for us to be confident releasing the code to production.

However, our automated checks are not enough to confirm if a feature is actually working as it should be. For that we need to run through all our defined acceptance criteria and implicit requirements, and see the feature being used in the real world by real users.

In a previous life we used to try and perform all of our testing in the build/test/release pipeline. Not only was this slow, inefficient, and dependent on lots of different people being available at the same time, but we often found that features behaved very differently in production. Real users do unexpected things, and it’s difficult to create truly realistic test environments.

Our motivation to get features out to real users as quickly as possible drove our adoption of Continuous Deployment. Having manual acceptance testing within the release pipeline slowed us down and made the process unpredictable. It was hard to define a process that relied on so many different people. We treated everyday events, such as meetings and other work priorities, as exceptional, which made things even more delay-prone and frustrating.

Eventually we decided that the build and release pipeline must be fully automated. We wanted developers to be able to push code and know that if Jenkins passed the build, it was safe for them to deploy to production. Attempting to automate all testing is never going to be achievable, or desirable. Firstly, automated tests are expensive to build and maintain. Secondly, testing, as opposed to checking, is not something that can be automated.

When we check something we are comparing the system against a known outcome – for example, checking that a button launches the expected popup when clicked, or that a date displays in the specified format. Things like this can be, and should be, automated.
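The date-format check makes this concrete: because the expected output is fully known in advance, it can be asserted automatically. A minimal sketch (the pattern and locale are illustrative, not Songkick’s actual format):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

class DateChecks {
    // A check compares output against a known expected value,
    // so it can (and should) live in an automated suite.
    static String displayDate(LocalDate date) {
        return date.format(DateTimeFormatter.ofPattern("d MMM yyyy", Locale.UK));
    }
}
```

Testing, by contrast, has no single expected value to compare against, which is why it stays a human activity.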

Testing is more involved and relies on a human making a judgement. It involves exploring the system in creative ways in order to discover the things that you forgot about, the things that are unexpected or difficult to completely define. It’s hard to predict how time and specific data combinations will affect computer systems; testing is a good way to uncover what actually happens. Removing the constraint of needing fully defined expected outcomes allows us to explore the system as a user might.

In practical terms this means running automated checks in our release pipeline and performing testing before code is committed, and post release. Taking testing out of the release pipeline removes the time pressures and allows us freedom to test everything as deeply as we require.

Songkick's Test and Release Process


Small, informal meetings called kick-offs help involve everyone in defining and designing the feature. We discuss what we’re building and why, plan how to test and release the code, and consider ways to measure success. Anything more complicated than a simple bug fix gets a kick-off before we start writing code. Understanding the context is important for helping us do the right thing. If we know that there are deadlines or business risks attached, then we’re likely to act differently than we would in a situation with technical risks.

Coming out of the kick-off meeting we know how risky we consider the feature to be. We will have decided on the best approach to testing and releasing the code. As part of developing the feature we’ll also write or update our automated checks to make sure we don’t break the feature further down the line. Our process is intentionally flexible to allow us to treat each change appropriately depending on risk and need to ship.

Consider a recently released feature to store promoter details against ticket allocations as an example. The feature kick-off meeting identified risks and we discussed what and how to test. We identified ways to break down the work into smaller pieces that could be developed and released independently, each hidden behind a feature flipper to keep it invisible to real users.
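The post doesn’t show the feature flipper itself; as a minimal sketch of the idea (the class name and flag names here are hypothetical), it is just a guard that keeps unfinished work invisible:

```java
import java.util.HashSet;
import java.util.Set;

// Minimal feature flipper: code ships to production continuously,
// but the feature stays invisible until its flag is enabled.
class FeatureFlipper {
    private final Set<String> enabled = new HashSet<>();

    void enable(String feature) { enabled.add(feature); }

    boolean isEnabled(String feature) { return enabled.contains(feature); }
}
```

Each small release of, say, the promoter-details work would then guard its UI behind an isEnabled(…) check until the whole feature is ready to switch on.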

Developers and testers paired together to decide on specific areas to test. The tester’s testing expertise and the developer’s deep understanding of the code feed into an informal collection of test ideas based on risk. Usually these are represented in a visual mind map for easy reference.

The developers, guided by the mind map, tested the feature and added automated unit and integration tests as they went. Front-end changes were overseen by a designer working closely with one of the developers to come up with the best, feasible, design. Once we had all the pieces of the feature the whole team jumped in to do some testing, and update our automated acceptance tests.

The feature required a bit of data backfilling so the development team were able to use the functionality in production, in ways we expect real users to use it. Of course we found some bugs but by working with small releases we were able to quickly locate the source of the problem. Fast release pipelines allow fixes to be deployed within minutes, making the cost of most bugs tolerably low.

Once the feature had been fully released and switched on for all users we used monitoring to check for unexpected issues. Reviewing features after a week or two of real world usage allows us to make informed decisions about the technical implementation and user experience. Taking the time to review how much value features are adding allows us to quickly spot and respond to problems.

Testing a feature involves many experts. Testers must be on hand to aid the developers in their testing, often by creating a mindmap of test ideas to guide testing. We try to use our previous experience of releasing similar features to focus the testing on areas that are typically complex or easy to break. Designers and UX people get involved to make sure the UX works as hoped, and the design looks good on all our supported devices and browsers. Product managers make sure the features actually do what they want them to do. High risk features have additional deep testing from the test team and in certain cases we throw in some focused performance or security testing.

Most of our bugs come from forgetting use cases or not understanding existing functionality in the system. Testing gives us a chance to use the system in an investigative way to hopefully find these bugs. Moving testing outside of our release pipeline gives us space to perform enough testing for each feature whilst maintaining a fully automated, and fast, release pipeline.

How we do product discovery

A few weeks ago, I gave a talk at the Future of Web Apps conference on how we do product discovery at Songkick. I had such an overwhelming response to it that I thought it might be useful to share it with the rest of the world.

Apologies for the whack formatting, but SlideShare doesn’t support Keynote files and I didn’t have time to redo the slides in PowerPoint to include the notes in a better way.

I’d love to hear how you guys go about product discovery and any tips / tricks on how to do it better.