Started From The Model Now We’re Here: A Swift 3 Migration Diary

This is a romanticised activity log of Swift 3 migration work that happened in November last year.

For context, Songkick’s iOS app consists of both Objective-C and Swift code. Before migrating, we had converted 40% of our Objective-C classes to Swift [1]. These Swift files needed to be upgraded to Swift 3 to comply with current and future Xcode releases.

We hope that you find this story useful. If you are still working on the migration, best of luck! First days are surely tough, but if you persevere through then you will reach the finish line sooner than you think.

Day 0

“Xcode 8.2 is the last release that will support Swift 2.3.”
— Xcode 8.2’s release notes

Sigh, I think this is it. I understand Apple’s aggressiveness, but I was a bit surprised that it was being forced this quickly. I guess I will raise this requirement with my Product Manager (PM) so we can prioritise migration work this week.

I tried migrating once before, when Swift 3 first came out. Xcode’s Swift Migrator tool converted many things, but still left many errors. Every time I fixed 3 errors, 10 new ones appeared. We ended up going with the easier route: migrating to Swift 2.3. It worked well at the time, but now Swift 2.3 will no longer be supported.

I guess I have to deal with those endless errors and try my best. Luckily, I saw Rob Napier’s tweet just before starting the work.

Hmmm, compared to my previous approach, his is better in terms of predictability and control. Great, I will remove all the Swift files from our main app target and call it a day.

Day 1 & Day 2

Day 1 starts with the PM’s blessing to begin the migration. Great, now I can take my time and strategise my approach.

Based on stories from other developers, this will take days if not weeks. This project won’t compile for days and that is fine. The reason I can make peace with an uncompilable project is this: Errors are fine as long as they are in Objective-C files. More on this later.

The strategy starts from the simplest, least dependent, and most testable objects: the models. I add them to the app target and run the migration tool, one or two files at a time. Once all models are included, I repeat the same process with the network classes, since they only depend on models. Eventually, I manage to convert all of our networking code, including request objects and API caller objects.

At this point, errors in Objective-C files are fine because they reference classes written in Swift that are missing from the target. There will always be errors until all Swift files are included back in the app target. Our approach focuses on making all included Swift files error-free, so errors in Objective-C are acceptable and will be resolved later.

Day 2 is downhill compared to Day 1. Most of the work is handled by the Swift Migrator. Various classes are migrated in this order: view models, extension classes, views, and view controllers. In total, 170 Swift files in the main app target are successfully migrated.

The migrator tool really helps this process, but some conversions still need to be done manually. Below are the notes from the first two days.

AnyObject -> Any

Swift 3 imports Objective-C’s id as Any, not AnyObject as Swift 2 did. This affects most of our models, because they deal a lot with JSON dictionaries.

For example, a model struct that produced a [String: AnyObject] dictionary in Swift 2 is converted to produce [String: Any] in Swift 3. In Swift 3, AnyObject applies only to class instances (NSObject subclasses); the migrator inserts casts to AnyObject where needed because Int and String are Swift structs, not Objective-C objects. Most of the fixes here amount to manually renaming AnyObject to Any.
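As a hedged illustration (the model and its fields are hypothetical, not our actual code), the change typically looks like this:

```swift
// Swift 2 would have typed this dictionary [String: AnyObject],
// relying on implicit bridging of String/Int to NSString/NSNumber.
struct Venue {
    let name: String
    let capacity: Int

    // Swift 3: id imports as Any, so JSON-style dictionaries become
    // [String: Any]. Int and String are structs, so Any (not AnyObject)
    // is the correct element type.
    func toDictionary() -> [String: Any] {
        return ["name": name, "capacity": capacity]
    }
}
```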

New Access Controls

Swift 3 introduces new access controls: private, fileprivate, internal, public, open. The visual below explains clearly the differences between each access control.

The Swift Migrator converts all private access to fileprivate. We manually check each fileprivate and change it back to private wherever possible. public and open are not used because we only have one main target, the app target. This will be revisited once we start to modularise the app into frameworks shared between targets (e.g. main app, extensions, unit tests, and UI tests).
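A minimal sketch of the distinction (the class names are hypothetical):

```swift
// In Swift 3, `private` restricts access to the enclosing declaration,
// while `fileprivate` allows access from anywhere in the same file.
class EventCell {
    private var reuseCount = 0          // visible only inside EventCell
    fileprivate var debugLabel = "cell" // visible to the whole file

    func prepareForReuse() {
        reuseCount += 1
    }
}

// Same file, different scope: `debugLabel` is readable here,
// but `reuseCount` is not.
func describe(_ cell: EventCell) -> String {
    return cell.debugLabel
}
```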

Closures passed as parameters are non-escaping by default

Closures passed as arguments are now non-escaping by default. An escaping closure is one that is invoked after the function it was passed to returns. The migration tool misses some of our API calls; the fix is to add the @escaping annotation manually.
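A hedged sketch of the pattern (the API class is illustrative, not our real networking code):

```swift
// The completion closure outlives the call to fetchEvents -- it is
// stored and invoked later -- so Swift 3 requires @escaping.
final class EventAPI {
    private var pendingCompletions: [([String]) -> Void] = []

    func fetchEvents(completion: @escaping ([String]) -> Void) {
        // Storing the closure: a compile error without @escaping.
        pendingCompletions.append(completion)
    }

    func finishAllRequests(with events: [String]) {
        pendingCompletions.forEach { $0(events) }
        pendingCompletions.removeAll()
    }
}
```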

Status: Main target cannot be compiled, but all Swift files are error-free

Day 3

Good thing that we write tests for most of our Swift code. These tests have proven to be crucial for the migration process.

Migrating test files is done in a similar fashion: test files are added to the test target and run through the migrator incrementally. Failing tests are ignored for now, as we are aiming only for successful compilation. Once all tests are included in the test target, the failing tests are fixed. In total, 70 test files are migrated.

Status: Test target can be compiled

Day 4

Day 4 is all about cleaning up: resolving warnings and renaming methods to better comply with the Swift 3 API design guidelines.

Lowercased enums

Most of Foundation’s and UIKit’s Swift enums were converted by the migrator. To stay consistent with the rest of our codebase, all of our own enum cases are manually lowercased.
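For example (a hypothetical enum of ours):

```swift
// Swift 2 style:
// enum TicketStatus { case Available, SoldOut }

// Swift 3 style: cases are lowerCamelCase, matching Foundation/UIKit.
enum TicketStatus {
    case available
    case soldOut
}

let status: TicketStatus = .soldOut
```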

NSDate categories

In Swift 3, Date is a struct, not a class. Helper methods written in a Date extension therefore do not translate into Objective-C categories, because Objective-C cannot see Swift structs. To fix this, the same method is added in an NSDate extension, calling through to the Date implementation.
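A hedged sketch of the pattern (the helper method is hypothetical):

```swift
import Foundation

// The helper lives on the Swift struct...
extension Date {
    func isSameDay(as other: Date) -> Bool {
        return Calendar.current.isDate(self, inSameDayAs: other)
    }
}

// ...and NSDate, the class Objective-C actually sees, gets a thin
// wrapper that calls through to the Date implementation.
extension NSDate {
    func isSameDay(as other: NSDate) -> Bool {
        return (self as Date).isSameDay(as: other as Date)
    }
}
```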

Warning: “incompatible Objective-C category definitions”

The last weird warning is “incompatible Objective-C category definitions”. This is triggered when a class has a computed property in an extension. Objective-C seemed unhappy about it (although it was translated fine). To remove the warning, use a method instead of a computed property.
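For instance (Artist here is a hypothetical class), replacing the computed property with a method silences the warning:

```swift
import Foundation

class Artist: NSObject {
    let name: String
    init(name: String) { self.name = name }
}

// A computed property in the extension triggered the
// "incompatible Objective-C category definitions" warning:
// extension Artist {
//     var displayName: String { return name.uppercased() }
// }

// Exposing a method instead keeps the generated header happy.
extension Artist {
    func displayName() -> String {
        return name.uppercased()
    }
}
```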

Status: Main target can be compiled

Started From The Model Now We’re Here

It took almost five working days to migrate all Swift files to version 3.0. In total 203 files were migrated: 133 in the main target and 70 in the test target.

Let’s hope for a better Swift 4.0 migration process. The Swift team is aiming for source compatibility moving forward, so hopefully there will be less manual work and fewer aggressive changes.

Other useful links on Swift 3 Migration

1. Check our previous post for more details on how we approach and track our code conversion from Objective-C to Swift

Posted in iOS

Diversity in the Songkick technology team


“Respect each other and celebrate diversity” is one of Songkick’s core values. Groupthink can be the enemy of innovation, and there’s a ton of research [1][2] on group dynamics and what a diverse set of experiences can bring to the table. So as well as creating a welcoming and supportive environment for everyone to work in, it helps us build better products, too.

The Pipeline Problem

You will have probably heard of this – for the uninitiated, it is a way to explain the lack of diversity in the tech industry through problems earlier in the “pipeline”, e.g. gender biases which manifest themselves in schools and affect study choices, which in turn affect the number of applicants for jobs at companies like ours. This is a hard and acknowledged problem, and Songkick can’t solve it (unfortunately!), although there are things we can do both to attract diverse candidates and to eliminate potentially biased treatment once they’re in our hiring pipeline.

Outside of our immediate hiring needs we can also contribute to a healthy debate on the issues and promote technology as a career choice to all.

So let’s start with widening the pipeline…

However good our hiring process, this will make no difference if we don’t fix the top of the funnel. We can be honest about this – we don’t get many women candidates. There are some great organisations such as Women Who Code who have a strong presence in London. We have hosted WWC meetups in our office and are in talks to host more in the future, and have also advertised on their job board. We have a hiring presence at conferences such as Lead Developer that care about diversity and have strong Codes of Conduct. Women are of course not the only minority group in tech, so there’s more we can do here.

We don’t have a preferred list of companies or universities to hire from, and indeed not all of our tech team studied at university.

A note on advertising

When reaching out to candidates, language is important. We run our job posts through linting tools such as joblint and Textio. You will never see Songkick hiring for a ninja or a wizard. These measures are quick and easy to implement; there’s really no excuse for not doing them. Our job listings only list the essential experience and engineering concepts you need to do the job, rather than a wishlist of “nice to have” skills.

We’ve got a more diverse set of candidates – now what?

Once we have made contact with candidates we have some extra safeguards in place to avoid bias where it is practical to do so. For example, the coding exercise will be anonymised by the hiring manager, redacting all personal information before forwarding on to someone else to mark. We maintain a flexible interview schedule to accommodate our candidates as best as possible.

We base the technical interview around scenarios similar to those you’d find day-to-day in your job at Songkick, rather than expecting you to recall an obscure section of a university computer science course or solve brain teasers [3][4]. We’re looking for good all-round problem-solving ability and a willingness to ask questions.

Not everyone has the same preferences when it comes to ways of working, so during interview we’ll look to see if we can accommodate these rather than insisting that new hires conform to our existing style.

Day-to-day support

As in everything we do, we follow the principle of Continuous Improvement. In our Tech Team Offsite we had an action point to make our social activities less alcohol-based, allowing more people to take part and feel welcome if drinking was not their thing. We also wanted to schedule more of these activities in working hours so those with more commitments during the evening were not excluded.

We keep diversity and respect part of the conversation at work. We are always looking at what other companies are doing to tackle these issues, and encourage our team to call out any behaviour which is not respectful.

As we grow, we are developing a clear set of progression routes to support different interests within the technology team, from managerial to pure technical.

There’s still work to be done…

So what more could we do? We have a hiring council to bring more ideas to the table. One task is trying to find more diverse advertising channels. While there are many organisations for under-represented groups in technology, not all of them have corresponding job boards.

Our awesome team has been proactive in outreach, and we’ve had some success with random shout-outs in Slack channels. We are also looking at developing internships in the technology team and supporting graduates from intensive web development courses (not only university).

I was hired from Silicon Milk Roundabout, and there was an element of chance to it. I knew who Songkick were but didn’t approach them because I didn’t know Ruby (which they had listed on their stand), but a proactive member of the team approached me and assured me this was ok, and here I am a year and a half later. We now no longer specify language requirements in our ads, and hopefully this has helped in attracting a more diverse set of candidates since I started.

I’m really proud to work in a team that values diversity. If you would like to be part of this, please get in touch.

References and further reading

1. How diversity makes us smarter by Katherine W. Phillips
2. Why diversity matters by McKinsey and Company
3. Why we don’t hire programmers based on puzzles, API quizzes, math riddles, or other parlor tricks by DHH
4. Interview with Laszlo Bock, senior vice president of people operations at Google, The New York Times

Compare your Objective-C and Swift code through time with Swoop

At Songkick, we’re busy converting our iOS app Objective-C codebase to Swift, and we built a tool called Swoop to help us track our progress.

We started using Swift in our iOS app in November last year. That means new features and new tests are written in Swift. But what about our existing Objective-C code? We approached the conversion from Objective-C to Swift carefully. We have a small team and wanted to keep shipping new features, so we could not afford the risk of major code changes. Instead, we started with smaller changes to the most problematic Objective-C code.

Our models and networking code were the first two areas we actively converted to Swift, mainly because we used an old and unsupported networking library. In early 2016 we pushed quite hard on these conversions and made excellent progress.

At the time, I was curious to understand the velocity of our progress. Maybe seeing it as a graph would be cool. This idea was then realised as a Ruby gem I named Swoop.

Swift and Objective-C comparison reporter

Swoop compares and reports on your Swift and Objective-C code through time. It goes through your git history, reads your Xcode project, compares the two, and presents the information in a digestible form.

To use Swoop, install the gem with 'gem install swoop_report', then run the command with two required parameters:

  • The path to your Xcode project, and
  • The directory you are interested in (the directory inside the Xcode project)

Call the swoop command from your terminal like so:

$ swoop --path ~/your_project/project.xcodeproj --dir Classes

By default, it presents a table of the last eight tags of your project, similar to the table below.

How it works

The diagram below explains how Swoop’s main classes work together.


  1. It creates a Project using the path parameter.
  2. TimeMachine uses the project, and then figures out which git commits should be used based on the options provided.
  3. Once TimeMachine has the list of commits, it checks out each one and starts the comparison process, which is broken down into:
    1. Selects the files that are inside the specified directory.
    2. EntityParser parses the filtered Swift and Objective-C files and counts their classes, structs, and extensions.
    3. Collates file information into a Report.
  4. All of the Reports are rendered by a chosen subclass of Renderer.

Swoop’s main program simply wires these pieces together.
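The flow can be sketched roughly like this (a simplified illustration of the pipeline described above, not Swoop’s actual source; class and method names are indicative only):

```ruby
# Count type declarations in a source file's text.
class EntityParser
  ENTITY_PATTERN = /\b(?:class|struct|extension)\s+\w+/

  def self.count_entities(source)
    source.scan(ENTITY_PATTERN).length
  end
end

# One report per tag/commit, comparing Swift vs Objective-C entity counts.
Report = Struct.new(:tag, :swift_count, :objc_count) do
  def swift_ratio
    total = swift_count + objc_count
    total.zero? ? 0.0 : swift_count.to_f / total
  end
end

swift_source = "struct Venue {}\nextension Venue {}"
report = Report.new("v1.2.0", EntityParser.count_entities(swift_source), 8)
puts format("%s swift share: %.0f%%", report.tag, report.swift_ratio * 100)
```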


This is what our iOS app’s comparison report looks like:


So far, our Swift code constitutes roughly 35% of our whole codebase. From the graph, we can see that the number improved greatly thanks to the work done between February and March, when we were actively converting code to Swift. Over the past three months it has stagnated a bit, because we changed our team goals and shifted our focus to other projects.

After it worked for our iOS app, I ran Swoop on two other open source projects: Artsy’s eigen and WordPress’ iOS app.

Artsy’s Eigen

Last 8 minor versions of eigen:
$ swoop --path eigen/Artsy.xcodeproj --dir Artsy --filter_tag '\d\.\d\.\d(-0)?$' --render chart


WordPress for iOS

Last 12 major versions of WordPress for iOS:
$ swoop --path WordPress-iOS/WordPress/WordPress.xcodeproj --dir Classes --tags 12 --filter_tag '^\d.\d+$' --render chart


All in all, it works pretty well for our app and we plan to incorporate this into our continuous integration pipeline.


We still need to test Swoop on more Xcode projects, because it sometimes fails for projects that have directory changes in their git history. We are also aiming for 100% test coverage in the near future.

Any form of contribution is welcome! Let us know if it doesn’t work for your project (even better if the project is publicly accessible). For more information on how to use and improve Swoop, please visit:

Posted in iOS

When to repeat yourself

As a developer, one of the first concepts you will be introduced to is DRY (Don’t Repeat Yourself): if logic is re-used around your codebase, it often makes sense to bring it into a central place to be standardised and easily maintained. Later on in your career you might learn the hard way that there is value in duplication and redundancy for the right reasons.

At Songkick we spent some time learning the hard way, so now the value of centralising common logic versus promoting weak coupling is something we actively explore and re-evaluate in our architectural decisions.

Tracking an artist versus tracking an event – be DRY on concepts, not code!

On the face of it, the concept of tracking an artist might look similar to tracking an event. For this reason we originally leveraged the same table and used a polymorphic association.

Tracking an artist vs tracking an event


Looking closer, these two things are actually conceptually quite different. Tracking an event implies attendance on a specific date, and has a concept of “interested in/might go”. There is no equivalent granularity for tracking an artist (though maybe there should be an “I would consider seeing them under the right circumstances” option – look out for that in the future!).

When we split our domain into services some years later, we had to run migrations on this table to separate our attendance data from our artist tracking data – and separating logic out is much harder than combining it. Code-wise, the complexity added to handle both tracking concepts as the use cases evolved outweighed any benefit of the early abstraction.
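A hedged sketch, in plain Ruby, of the shape the shared design takes (record and field names are hypothetical): one polymorphic trackings record serves both concepts, so event-only fields become nil-prone baggage for artist trackings.

```ruby
# A single record shape shared via a polymorphic association:
# trackable_type is "Artist" or "Event".
Tracking = Struct.new(:user_id, :trackable_type, :trackable_id, :attendance,
                      keyword_init: true)

artist_tracking = Tracking.new(user_id: 1, trackable_type: "Artist",
                               trackable_id: 42)
event_tracking  = Tracking.new(user_id: 1, trackable_type: "Event",
                               trackable_id: 7, attendance: "might_go")

# attendance makes no sense for an artist, so every consumer of this
# table ends up special-casing one of the two concepts.
```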

See further reading for advice on avoiding early “optimisations” (such as abstractions).

Duplication of client models – DRY not necessarily suitable when weak coupling is required

Our front ends implement their own client models when reading data from a service, rather than making use of a client library.

Fewer dependencies gives us ease of deployment

A client library that provided standard client models may well result in less code duplication but it creates a coupling between client and service. Upgrading the client library would require a new deployment of each frontend, even if only a single frontend benefitted from the change – with our approach, we can deploy our frontends without this dependency.

If you’ve read any of our other blog posts, you’ll know we don’t like restrictions (or directives) on when we deploy!

Easier to reason about

Each frontend uses only the resources it needs and we can track our data dependencies to a service endpoint easily without the client library abstraction sitting in the middle.

Duplication of components – DRY benefits can be negligible with fast rates of change


The shared “upcoming event” component, used on venue, metro area and user pages


The artist “upcoming event” component

We unashamedly copy code from one front-end HTML component to another, making each one self-contained, with no dependency on other components. Front-end components are changed and iterated on quickly, and we usually want changes to affect a single component on a page. Any shared components are mapped out by the designer and PM, so we know they will change together – if requirements differ, we create a new component.

Because we make our components as dumb and atomic as possible, copying is low-cost and low-risk, and we can avoid complicated branching logic.


Optimising for code re-use might not be the right approach – consider that use cases might change, that unnecessary dependencies can be created, and that at the line level, code can become unreadable with an overly DRY mindset. Be DRY, but not too DRY.

Further reading

Let’s make the Android community better

Romain Piel and I decided to submit a talk about the Android community to several conferences: on how much it has improved, the major problems it still has, and how we can all collaborate to make it better. At one of the conferences our submission was categorised as “weird”.

Although I am passionate about fighting the lack of diversity in the tech industry, talking about it at a conference scared the bejesus out of me. How can you call out the different issues in the industry without pointing fingers and making people feel defensive? And besides that, I had never given a talk before. Does giving a non-technical talk as my first talk mark me as a bad developer? Will it affect my career in the future? Will I forever have a stamp on my face saying I can only talk about diversity and not about other technical problems?

I had to set all my fears aside to prepare this talk. But I have the feeling the Android community is not open, yet, to this type of talk. At Android conferences, there are no talks about non-technical subjects: no talks about impostor syndrome, the lack of diversity, the lack of empathy, harassment, or the other problems the industry and community currently face.

So what I am trying to do with this blog post is state why we need to start talking about the biggest problems our community has, and start addressing them.

Why I think we need to start educating each other on what causes these problems, and how we can solve them together.

Because there is not just one solution. Everyone has had different experiences, encountered different difficulties, and will have different suggestions on how to approach and solve a problem.

So here’s why I think talks about our community should be supported and promoted, and not seen as “weird”.

Because we need to acknowledge the problems

“The first step in solving any problem is recognizing there is one.” – Will McAvoy, The Newsroom

The first step to recovery is being honest and acknowledging there is a problem. That step is common to every problem we want to solve, even though it’s usually associated with 12-step programmes such as AA.

Acknowledging the problem is the hardest part. Many will argue that things were way worse in the past and that we should just be satisfied. I agree that we have come very far, but the problems are not completely solved and there is still a long way to go. The best example of this is diversity in the industry. Yes, we are more diverse, but women still hold only 26 percent of all tech jobs, and Black and Latino people only 4 and 5 percent respectively.

Lena Reinhard – Works on my machine, or Problem exists between Keyboard and Chair

As we can see from the diagram, our community is not isolated from the rest of the world, we are not separated from society, from the tech industry, or from the companies we work for. All these different pieces have an impact on how our community behaves, how we act, how we make decisions, or what we consider wrong or normal.

So unfortunately for us, we need to understand the problems each and every one of these pieces has, and understand how they impact us.

And the piece that brings all of it together is us. And sadly we are human, which means we are not perfect, at all. So we also need to acknowledge our own flaws, biases, privileges, etc.

If we don’t start acknowledging these problems, we won’t have any incentives to start fixing them, so they will remain unfixed, and they will become the new normal. And I am sure no one wants that.

We need to start acknowledging these problems to help the people impacted by them: so that we don’t leave them behind or push them out of the community, and so that they feel welcome and part of it.

Because we need to fix these problems

“Right now, most of the people who are already working on debugging this industry are members of underrepresented groups in tech. That’s a bit like telling the QA team in your company that they have to fix the bugs they find themselves, because you have better things to do.” – Lena Reinhard, Works on my machine, or Problem exists between Keyboard and Chair

It is really easy to ignore a problem – in fact, turning your back on it is the easiest thing to do. But that is not going to make it go away; it will probably make it worse (believe me, ignoring a kitchen fire doesn’t make it go away).

K.C. Green, “On Fire”

We need to start addressing the problems and start thinking about solutions. If we are all aware of what is happening, and what the issues are, we can work together towards solutions to fix them (you know, a thousand brains work better than one).

If we ignore the problem we are limiting ourselves to a small proportion of people, we are limiting our point of view, our understanding of the world, our ideas and solutions.

If we ignore the problem, we are closing the community to new people with different backgrounds and experiences, with fresh and different ideas, who probably have more to contribute than us. We would be preventing new ideas from coming along and making Android – including its community – better.

It is not going to be an easy or fast process, and it is an ongoing one. New problems will come along, and we need to be open to acknowledging and solving them.

Because we are a community

“Sense of community is a feeling that members have of belonging, a feeling that members matter to one another and to the group, and a shared faith that members’ needs will be met through their commitment to be together” (McMillan, 1976).

If we are truly a community we should be supporting each other to be the best we can be. We need to be aware of each other’s opinions and needs, be aware of what makes people leave the community or even the industry, make new members feel welcome and meet their expectations.

We need to be supportive of each other, and empower those who don’t have the confidence to speak up.

Because other communities are doing it

The truth of the matter is that, as a community, we are way behind other tech communities when it comes to talking about social problems in the industry, about the non-technical skills needed to become a better developer, or about psychology.

Ruby conferences, Python conferences, PHP conferences, lead-developer conferences, open tech conferences, JavaScript conferences, and even iOS conferences! These are just a few of them; there are many more communities that openly talk about these issues at their conferences. Just check the reference list to get an idea.

So, if all these communities are doing it, why are we still so far behind?

References and good videos you should watch

Songkick from a Tester’s point of view

Earlier this year we wrote about how we move fast but still test the code.

This was recently followed by another post about Developer happiness at Songkick which also focuses on the processes we have in place, as they provide a means to a productive working environment.

How does this all look from a tester’s point of view?

I have been asked a few times what a typical day looks like for a tester at Songkick. This post is about the processes that enable us to move fast, from a tester’s point of view, and how testing is integrated into our development lifecycle.

Organising our work

Teams at Songkick are organised around products and the process we follow is agile. Guided by the product manager and our team goals, we organise our sprints on a weekly basis with a prioritisation meeting. This allows us to update each other on the work in progress and determine the work that may get picked up during that week.

Prioritisation meetings also take into consideration things such as holidays and time spent doing other things (meetings, fire fighting, pairing).

On top of that we check our bug tracker, to see if any new bugs were raised that we need to act on.

Everyone in the company can raise bugs, enabling us to constantly make decisions on how to improve, not only our user facing products, but also our internal tools.

We also have daily stand ups at the beginning of each day, where we provide information on how we are getting on, and any blockers or other significant events that may impact our work positively or negatively.

Every two weeks we also have a retrospective to assess how we are doing and what improvements we can make.


The kick-off

Sabina gave a great definition of the kick-off document here. Each feature or piece of work has a kick-off document. We try to always have a developer, product manager and tester in the conversation. More often than not we also include other developers, or experts, such as a member from tech ops or a frontline team. Frontline teams can be anyone using internal tools directly, members from our customer support team, or someone from the sales team.

Depending on the type of task (is it a technical task or a brand new feature?), we use a slightly different template. The reasoning behind this is that a technical, non-user-facing change will require a different conversation than a user-facing change.

But at the end of the day this is our source of truth, documenting, most importantly, the problem we are trying to solve, how we think we will do it, and any changes that we make to our initial plan along the way.

The kick-off conversation is where the tester can ask a tonne of questions. These range from anything about the technical implementation, potential performance issues, to what are the risks and what should our testing strategy be? Do we need to add a specific acceptance test for this feature, or are unit and integration tests enough?

A nice extra section in the document is the “Recurring bugs” section.

The recurring bugs consist of questions to make sure we are not implementing something we may have already solved and also bugs we see time and time again. These can range from field lengths and timezones, to nudges about considering how we order lists. What it doesn’t include is every bug we have ever seen. It is also not static and the section can evolve, removing certain questions or notes and adding others.

Having a recurring bugs section in a kick-off document is also great for on-boarding as you start to understand what previously has been an issue and you can ask why and what we do now to avoid it.

What’s next?

After the kick-off meeting, I personally tend to familiarise myself with where we are making the change.

For example, say we are adding a new address form to our check-out flow for purchasing tickets. I will perform a short exploratory test of this in our staging environment or on production. Anytime we do exploratory testing, we tend to record it as a time-boxed test session in a lightweight format. This provides a nice record of the testing that was performed, and may also lead to more questions for the kick-off document.

Once the developer(s) working on the feature have had a day or so, we do a test modelling session together.

Test Modelling

Similar to the kick-off this is an opportunity for the team to explore the new feature and how it may affect the rest of the system.

It consists of a short collaboration session, with at least a developer, tester and if applicable the design lead and/or other expert, where we mind map through test ideas, test data and scenarios.

We do this because it enables the developer to test early, before releasing to a test or production environment, which in turn means we can deliver quality software and value sooner.

It is also a great way to share knowledge. Everyone who comes along brings different experiences and knowledge.

Test Model for one of our internal admin pages


The collaborators work together to discuss what needs checking and what risks need exploring further.

We might also uncover questions about the feature we’re building. Sharing this before we build the feature can help us build the right feature, and save time.

For example, we recently improved one of our admin tools. During the test modelling session, we discovered a handful of questions, including some around date formats, and also default settings. By clearing these questions up early, we not only ensure that we build the right thing, but also that we build it in the most valuable way for the end user.

In this particular example, it transpired that following a certain logic for setting defaults would not only save a lot of time, but also greatly reduce the likelihood of mistakes.

The team (mainly the developer) will use the resulting mind map for testing.

It becomes a record of test scenarios and cases we identified and covered as part of this bit of work.

As we mainly work in continuous deployment or delivery (depending on the project and the risk of the feature), testers often test in production using real data, so as not to block the deployment pipeline.

This has the advantage that the data is realistic (it is production data after all), there are no discrepancies in infrastructure, and performance can be adequately assessed.

One downside is that if we want to test purchases, we have to make actual purchases, which creates overhead for the support team, as they will need to process refunds.

Testers and Bugs

Any issues we find during our testing on production or a staging environment (if we are doing continuous delivery), will be logged in our bug tracker and prioritised.

Some issues will be fixed straight away and others may be addressed at a later date.

As mentioned above, anyone at Songkick can raise issues.

If an issue relates to one of the products your team is working on, you (as the tester on that team) will be notified. It is good to verify the issue as soon as possible, ask for more information, and assess whether it is blocking the person who reported it, or indeed whether it is an issue at all.

We do have guidelines saying not to bother logging blockers but to come to the team directly; however, this may not always happen, so as testers we always keep an eye on the bugs that are raised.

Want to know more?

In this post I described some of the common things testers at Songkick do.

Depending on the team and product there may also be other things, such as being involved in weekly performance tests, hands-on mobile app testing, talking through A/B tests, and coaching and educating the technology team and the wider company on what testing is.

If any of that sounds interesting, we are always looking for testers. Just get in touch.

SlackMood – Analyse your team's happiness via Slack Emoji usage

We had a hack day in the office a few weeks back, and I decided I wanted to build something with Slack. Hack days give us a chance to work with people outside of our product teams, work with different and new technologies, as well as trying out fun ideas we’ve had.

Like any sensible company, we use Slack to help us collaborate and improve communication, but we also use it to share cat gifs (we have an entire channel) and a whole host of default, aliased and custom emojis. Based on this, I wondered if I could use our emoji use to gauge the average mood of the whole company. And so SlackMood was born.


SlackMood showing that 85% of our current Slack use is neutral or positive.

My first step was figuring out how to get a feed of messages across our whole Slack. I’d already decided to build it in Golang, and fortunately some clever person had already built a Golang library for Slack, saving me a huge amount of work. I registered a new bot on the Slack developer site and started hacking.

Unfortunately I quickly ran into an issue. I wanted to get the RTM (real-time message) feed of every channel, but it turns out bot accounts can't join channels unless they're invited. I could see three solutions to this:

  1. Create a real Slack user with an API key (I decided Finance wouldn’t be happy with this)
  2. Add my own API key alongside the bot, use the API to have me join all the channels, invite the bot and leave – annoying everyone in the company
  3. Use the message history APIs to periodically scrape the channels.

I decided to go with 3, as it seemed the simplest to implement.

The actual code for this was relatively simple:
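The snippet was originally embedded and hasn't survived, but it was roughly this shape, with the Slack client reduced to a plain fetch function so everything here is illustrative rather than the real library API:

```go
package main

import "fmt"

// Message is a stand-in for the fields of the Slack message object we need.
type Message struct {
	Text string
}

// scrapeChannels walks every channel and hands each message in its history
// to process. The fetch function is a stand-in for the history call in the
// Golang Slack library; in the real app this runs periodically.
func scrapeChannels(fetch func(channelID string) ([]Message, error), channelIDs []string, process func(Message)) error {
	for _, id := range channelIDs {
		msgs, err := fetch(id)
		if err != nil {
			return fmt.Errorf("fetching history for %s: %v", id, err)
		}
		for _, m := range msgs {
			process(m)
		}
	}
	return nil
}

func main() {
	fetch := func(id string) ([]Message, error) {
		return []Message{{Text: "hello from " + id + " :wave:"}}, nil
	}
	scrapeChannels(fetch, []string{"C1", "C2"}, func(m Message) { fmt.Println(m.Text) })
}
```

Keeping the fetcher as a function also made it trivial to exercise the loop without hitting Slack at all.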

It then passes each message object into a function that extracts the emoji counts, using both a regular expression on the message text and iteration over the reactions.
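A minimal sketch of that extraction, with Message and Reaction as stand-ins for the real Slack types (the field names are assumptions):

```go
package main

import (
	"fmt"
	"regexp"
)

// Message and Reaction are stand-ins for the real Slack types.
type Message struct {
	Text      string
	Reactions []Reaction
}

type Reaction struct {
	Name  string
	Count int
}

// emojiPattern matches inline emoji such as :heart: or :party_parrot:
var emojiPattern = regexp.MustCompile(`:([a-z0-9_+-]+):`)

// extractEmoji counts every emoji in a message: inline occurrences in the
// text, plus each reaction weighted by how many users added it.
func extractEmoji(m Message) map[string]int {
	counts := map[string]int{}
	for _, match := range emojiPattern.FindAllStringSubmatch(m.Text, -1) {
		counts[match[1]]++
	}
	for _, r := range m.Reactions {
		counts[r.Name] += r.Count
	}
	return counts
}

func main() {
	m := Message{Text: "shipped it :tada: :tada:", Reactions: []Reaction{{Name: "heart", Count: 3}}}
	fmt.Println(extractEmoji(m))
}
```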

I'd decided to use BoltDB for the backend storage. Maybe not the best idea, as a relational datastore like SQLite would have been much better suited, but Bolt was a technology I'd never used before, so it seemed interesting. We generate a message ID from the base message, and the reactions each get their own ID based on the user who posted them. These are all stored in BoltDB as message ID -> details, where details is a struct describing the emoji:

Now that we've got a list of emojis and their timestamps, we can go through and assign each one a rating of either positive, negative or neutral. Fortunately, some of our team had already built a spreadsheet of emoji sentiment analysis for a previous hack project (turns out, we love emojis) with positive-to-negative rankings (1 to -1):

Our emoji rankings spreadsheet, obviously.

With our emoji ranks loaded into a struct array, we can go through and analyse the score of each emoji we've seen.

(N.B. looking back at this now, I realise a map of emojiname -> mood would have been much better rather than a double-loop, but this was like 6 hours in and I was keen to get something working).
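For the curious, the double-loop version looks something like this (all names are illustrative):

```go
package main

import "fmt"

// emojiRank is one row of the sentiment spreadsheet: 1 positive,
// -1 negative, 0 neutral. Names are illustrative.
type emojiRank struct {
	Name string
	Mood float64
}

// GetMood averages the sentiment of the emoji names we saw against the
// loaded ranks. This is the double-loop mentioned above; a
// map[string]float64 keyed on name would do it in a single pass.
func GetMood(seen []string, ranks []emojiRank) float64 {
	if len(seen) == 0 {
		return 0
	}
	var total float64
	for _, name := range seen {
		for _, r := range ranks {
			if r.Name == name {
				total += r.Mood
				break
			}
		}
	}
	return total / float64(len(seen))
}

func main() {
	ranks := []emojiRank{{"heart", 1}, {"rage", -1}, {"thinking_face", 0}}
	fmt.Println(GetMood([]string{"heart", "heart", "rage"}, ranks))
}
```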

Now we know the mood of all the emojis, calculating the graph just involves iterating through all the seen emojis and storing them in a map of date->mood. The GetMood function above works on a list of emojis, so we just bucket the emojis by the selected time period.

Because we store all the emoji in Bolt and can't do proper filtering there, we first filter by the time period we care about, then divide this up.

GraphMood returns a struct array which we can just JSON encode and feed into Chart.JS to get the nice visualisation above.

All in all, it was pretty fun, but the whole project contains a lot of terrible code. If you want, check it out on GitHub here.

Other stuff I would have liked to add:

  • Most positive/negative person
  • Most used emoji
  • Biggest winker 😉

Maybe next hack day.

P.S. if you fancy working somewhere with regular hack days, in a team which has a pre-prepared spreadsheet with emoji sentiment analysis, Songkick are hiring for a variety of technology roles at the moment. So come work with us, we have a 64% SlackMood happiness rating™.

Developer happiness at Songkick

Back in November 2014 I was on a plane back from Vancouver where I'd left my job in the Visual Effects industry to return to my hometown, London, with the definite plan of trying something new and the vague idea of that thing being working in a startup. In the year before that, I'd developed an interest in lean, agile and the practice of experimentation and iteration as a way to navigate and progress through an increasingly complex world. Also, I really just thought it would be more fun to work on new stuff in a smaller company that cared about process and developer satisfaction. And I was right.

Songkick takes developer happiness very seriously. All the things that frustrated me working in my old team are age-old problems that have frustrated most developers at some point. Thankfully there are lots of leaders, resources and movements in this area that have sought to address this and at Songkick we are always looking to improve things to make working as fun and pain-free as possible.

I’m going to give you a run-down of some of the things that have increased my developer happiness – this is not an exhaustive list!

The kick-off document – the canonical source of truth!

The standard “As a user… I want to… So that…” user story that starts the kick-off really gives the motivation and the context of the feature we are trying to build. This document acts as a reference point throughout the development process. We map out the scope of the feature with the product manager and designer, and the tester gets involved to help get us thinking of possible bugs and risks early on in the process. Certain questions might be raised but not answered during the kick-off, so it’s updated throughout to reflect our learnings and any new decisions that have been made. Once we are kicked off we can dive in and start building, even if there are still some unanswered questions.

It’s a very simple idea but you might be surprised how many companies don’t do this. In my previous jobs this had consisted of some scribbling down in a notebook a vague idea of what a user wanted, a degree of strategising as to how that might be achieved and then one long-running feature branch later, deploying to production test-free and hoping there was no comeback (there invariably was – most likely a bug, or a disagreement on what it was supposed to do in the first place).

Kick-offs ensure that we build the right thing, no more and no less.

Test modelling

For non-trivial features we will also schedule a test modelling session using mind-maps with the tester to think of all the possible failure scenarios and work out a test strategy. Some of these things will be common to all features of this type, others will require specific business or technical knowledge. For internal tools we invite members of the relevant operational team to get that extra context. Mind-mapping really takes you out of the low-level detail of the implementation and makes you think about the real-world impact of the feature you’re writing, and usefully it forces you to think about all the uncomfortable things that could go wrong ahead of time.

Written test coverage

We write tests at various levels of abstraction so that we can avoid bugs and articulate our business logic. This ensures we can spend the vast majority of our time developing features and not fixing bugs.


Pair programming

We use pair programming as a way of collaborating on features, knowledge sharing and of course onboarding new developers. The benefits and drawbacks of pairing are well documented, but in short it acts as a real-time code review and focusing aid whilst making you tired quite quickly! We don’t pair on everything – it’s good to vary between this and some deep-thinking solo programming time.

Dean and me, clearly having fun.

Fast iterations and continuous deployment

Our continuous deployment pipeline means it’s a one-step process (and a matter of minutes) to deploy a change to production. Thanks to the test coverage we build as part of a feature (and previous coverage that acts as regression tests), it’s also pretty safe – no sign-off required. It’s great to see your code out in the wild as soon as it’s built and to be able to act on feedback quickly. It also means you don’t lose context in the meantime.

Getting involved

Developers at Songkick are fully involved in shaping not only our products but also our processes and values. We have councils for, among other things, security, hiring strategy and API design that anyone can join, and our tech values are workshopped by the whole team. You will often find us at conferences, attending/organising meetups and writing blog posts such as this one.

Catalog: Increasing visibility for our Android UI tests

Getting automatic feedback from tests is extremely important when building any kind of software. At Songkick, our code is tested, validated, and reported through Jenkins CI.
The pipeline around our Android app includes static analysis, unit tests and instrumentation tests running on real devices and emulators.
Previously, we used square/spoon to run our instrumentation tests. It did a great job, with support for screenshots and LogCat recordings. But recently we had to drop it: it conflicted with another library, LogCat recording stopped working, and it was taking too long to run all of our tests (around 15 minutes for our entire test suite).
So we moved to the official connected{Variant}AndroidTest tasks. They are much faster (around 8 minutes for the same test suite), but we were missing the logs: when a test failed, we couldn’t check the logs for more details. So we started re-running our tests and losing trust in them.

Introducing Catalog

Catalog is a Gradle plugin for Android. When added to your project, it runs alongside the connected{Variant}AndroidTest tasks. At the end of the tests, it generates a report per device in app/build/outputs/androidTest-results/.

Why should I use it?

  • Catalog is built on top of the Android build tools; we are not introducing any new test tasks
  • It will give you more confidence in your tests
  • It is lightweight (basically 8 simple classes)
  • It is fast and won’t add any significant overhead to your build time

Get started

To include the plugin in your project, just add these lines in your app/build.gradle:
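The original snippet is no longer embedded. It would be something along these lines, though the coordinates below are placeholders; check the Catalog README for the real ones:

```groovy
// Hypothetical coordinates: see the Catalog README for the published ones
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.songkick:catalog:1.0.0'
    }
}

apply plugin: 'com.songkick.catalog'
```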

How does it work?

Catalog consists of two gradle tasks:

  • recordConnected{Variant}AndroidTest: runs before connected{Variant}AndroidTest and connects to adb to record the LogCat output for the current application.
  • printConnected{Variant}AndroidTest: runs after connected{Variant}AndroidTest, gathers the recorded logs and writes a .txt and an .html file into app/build/outputs/androidTest-results/.

Going forward

We are starting small with Catalog, but we would love suggestions and feedback. If you like the plugin, please create a pull request or post an issue. We have a few ideas to make it even more awesome, like:

  • show the status of the test (failure/success/ignored)
  • generate a html file listing all devices
  • add support for screenshots

Anything is possible, so feel free to contribute.

How Docker is changing the way we develop, test & ship apps at Songkick

We’re really excited to have shipped our first app that uses Docker throughout our entire release cycle; from development, through to running tests on our CI server, and finally to our production environment. This article explains a bit about why we came to choose Docker, how we’re using it, and the benefits it brings.

Since Songkick and Crowdsurge merged last year we’ve had a mix of infrastructures, and in a long-term quest to consolidate platforms we’ve been looking at how to create a great development experience that would work cross-platform. We started by asking what a great development environment looks like, and came up with the following requirements:

  • Isolate dependencies (trying to run two different versions of a language or database on the same machine isn’t fun!)
  • Match production accurately
  • Fast to set up, and fast to work with day-to-day
  • Simple to use (think make run)
  • Easy for developers to change

We’ve aspired to create a development environment that gets out of the way and allows developers to focus on building great products. We believe that if you want a happy, productive development team it’s essential to get this right, and with the right decisions and a bit of work Docker is a great tool to achieve that.

We’ve broken down some advice and examples of how we’re using Docker for one of our new internal apps.

Install the Docker Toolbox

The Docker Toolbox provides you with all the right tools to work with Docker on Mac or Windows.

A few of us have also been playing with Docker for Mac, which provides a more native experience. It’s still in beta but it’s a fantastic step forwards compared to the Docker Toolbox and docker-machine.

Use VMware Fusion instead of VirtualBox

Although the Docker Toolbox comes with VirtualBox included, we chose to use VMware Fusion instead. File change notifications are significantly better using VMware Fusion, allowing features like Rails auto-reloading to work properly.

Creating a different Docker machine is simple:
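Something like the following, where the machine name is arbitrary:

```shell
# Create a machine backed by VMware Fusion instead of VirtualBox
docker-machine create --driver vmwarefusion vmware

# Point your shell's docker client at the new machine
eval "$(docker-machine env vmware)"
```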

Use existing services where possible

In development we connect directly to our staging database, removing a set of dependencies (running a local database, seeding structure and data) and giving us a useful, rich dataset to develop against.

Having a production-like set of data to develop and test against is really important, helping us catch bugs, edge-cases and data-related UX problems early.

Test in isolation

For testing we use docker-compose to run the tests against an ephemeral local database, making our tests fast and reliable.

Because you may not want to run your entire test suite each time, we also have a test shell ideal for running specific sets of tests:
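For example (the "test" service name is an assumption about the docker-compose.yml):

```shell
# Start an interactive shell in the test container, linked against the
# ephemeral database defined in docker-compose.yml
docker-compose run --rm test bash

# ...then, inside the container, run just the specs you care about:
bundle exec rspec spec/models/event_spec.rb
```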

Proper development tooling

As well as running the Ruby web server through Docker, we also provide a development shell container, aliased for convenience. This is great for trying out commands in the Rails console or installing new gems without needing Ruby or other dependencies on your Mac.

Use separate Dockerfiles for development and production

We build our development and production images slightly differently. They both declare the same system dependencies but differ in how they install gems and handle assets. Let’s run through each one and see how they work:
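The Dockerfiles themselves were embedded snippets; here is a sketch of the development one's shape (the base image and paths are assumptions):

```dockerfile
# Development image: base image and paths are illustrative
FROM ruby:2.3

WORKDIR /app

# Copy the gem manifests and cached gems first, so the `bundle install`
# layer is only rebuilt when the gems themselves change
COPY Gemfile Gemfile.lock ./
COPY vendor/cache ./vendor/cache
RUN bundle install --local

# Copying the application code last keeps day-to-day builds fast
COPY . .

CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```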

Here we deliberately copy the Gemfile, corresponding lock file and the vendor/cache directory first, then run bundle install.

When steps in the Dockerfile change, Docker only re-runs that step and steps after. This means we only run bundle install when there’s a change to the Gemfile or the cached gems, but when other files in the app change we can skip this step, significantly speeding up build time.

We deliberately chose to cache the gems rather than install them afresh each time, for three reasons. First, it removes a deployment dependency: when you’re deploying several times a day, it’s not great having to rely on more external services than necessary. Second, it means we don’t have to authenticate to install private or Git-based gems from inside containers. Finally, it’s also much faster to install gems from the filesystem, using the --local flag to avoid hitting Rubygems altogether.

For production we install our gems differently, skipping test and development groups and precompiling assets into the image.
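A sketch of the production variant (again, the base image and paths are assumptions):

```dockerfile
# Production image: same dependency caching, but no dev/test gems,
# and assets baked into the image (base image and paths are illustrative)
FROM ruby:2.3

WORKDIR /app

COPY Gemfile Gemfile.lock ./
COPY vendor/cache ./vendor/cache
RUN bundle install --local --without development test

COPY . .
RUN RAILS_ENV=production bundle exec rake assets:precompile

CMD ["bundle", "exec", "rails", "server", "-e", "production", "-b", "0.0.0.0"]
```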


To release this image we tag it as the latest version, as well as the git SHA. This is then pushed to our private ECR.
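The release commands look something like this, where the image name, account ID and region are placeholders:

```shell
# Build, then tag as both latest and the current git SHA
docker build -t internal-app -f Dockerfile.production .

SHA=$(git rev-parse --short HEAD)
docker tag internal-app 123456789012.dkr.ecr.eu-west-1.amazonaws.com/internal-app:$SHA
docker tag internal-app 123456789012.dkr.ecr.eu-west-1.amazonaws.com/internal-app:latest

docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/internal-app:$SHA
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/internal-app:latest
```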

We deliberately deploy that specific version of the image, meaning rolling back is as simple as re-deploying a previous version from Jenkins.

Running in production

For running containers in production, we’re doing the simplest possible thing: using Docker to solve a dependency management problem only.

We’re running one container per node, using host networking and managing the process using upstart. When deploying we simply tell the upstart service to restart, which pulls the relevant image from the registry, stops the existing container and starts the new one.

This isn’t the most scalable or resource-efficient way of running containers but for a low-traffic internal app it’s a great balance of simplicity and effectiveness.

Next steps

One thing we’re still missing on production is zero-downtime deploys. Amazon’s ECS handles this automatically (by spinning up a new pool of containers before automatically swapping them out in the load balancer) so we’re looking to move towards using that instead.

We’re still learning a lot about using Docker but so far it’s been a powerful, reliable and enjoyable tool to use for both developers and ops.