Let’s make the Android community better

Romain Piel and I decided to submit a talk about the Android community to several conferences: how much it has improved, the major problems it still has, and how we can all collaborate to make it better. At one of the conferences our submission was categorised as “weird”.

Although I am passionate about fighting the lack of diversity in the tech industry, talking about it at a conference scared the bejesus out of me. How can you call out the different issues in the industry without pointing fingers and making people feel defensive? And besides that, I’ve never given a talk before. Does doing a non-technical talk as my first talk mark me as a bad developer? Will it affect my career in the future? Will I forever have a stamp on my face saying I can only talk about diversity and not about other technical problems?

I had to set all my fears aside to prepare this talk. But I have the feeling the Android community is not yet open to this type of talk. At Android conferences, there are no talks about non-technical subjects: no talks about impostor syndrome, the lack of diversity, the lack of empathy, harassment, or the other problems the industry and community currently face.

So what I am trying to do with this blog post is state why we need to start talking about the biggest problems our community has, and start addressing them.

Why I think we need to start educating each other on what causes these problems, and how we can solve them together.

Because there is not just one solution. Everyone has had different experiences, encountered different difficulties, and will have different suggestions on how to approach and solve a problem.

So here’s why I think talks on our community should be supported and promoted, not seen as “weird”.

Because we need to acknowledge the problems

“The first step in solving any problem is recognizing there is one”. Will McAvoy, The Newsroom.

The first step to recovery is being honest and acknowledging there is a problem. That step is common to every problem we want to solve, even though it’s usually associated with twelve-step programs like AA’s.

Acknowledging the problem is the hardest part. Many will argue that things were way worse in the past and that we should just be satisfied. I agree that we have come very far, but the problems are not completely solved and there is still a long way to go. The best example of this is diversity in the industry. Yes, we are more diverse, but women still hold only 26 percent of all tech jobs, and Black and Latino people only 4 and 5 percent respectively.

Lena Reinhard – Works on my machine, or Problem exists between Keyboard and Chair

As we can see from the diagram, our community is not isolated from the rest of the world, we are not separated from society, from the tech industry, or from the companies we work for. All these different pieces have an impact on how our community behaves, how we act, how we make decisions, or what we consider wrong or normal.

So unfortunately for us, we need to understand the problems each and every one of these pieces has, and understand how they impact us.

And the piece that brings all of it together is us. And sadly we are human, which means we are not perfect, at all. So we also need to acknowledge our own flaws, biases, privileges, etc.

If we don’t start acknowledging these problems, we won’t have any incentives to start fixing them, so they will remain unfixed, and they will become the new normal. And I am sure no one wants that.

We need to start acknowledging these problems to help the people impacted by them, so that instead of leaving them behind or pushing them out of the community, we make them feel welcome and part of it.

Because we need to fix these problems

“Right now, most of the people who are already working on debugging this industry are members of underrepresented groups in tech. That’s a bit like telling the QA team in your company that they have to fix the bugs they find themselves, because you have better things to do”. Lena Reinhard – Works on my machine, or Problem exists between Keyboard and Chair

It is really easy to ignore a problem; in fact, turning your back on it is the easiest thing to do. But that is not going to make it go away, and will probably make it worse (believe me, ignoring a kitchen fire doesn’t make it go away).

K.C. Green, “On Fire”

We need to start addressing the problems and start thinking about solutions. If we are all aware of what is happening, and what the issues are, we can work together towards solutions to fix them (you know, a thousand brains work better than one).

If we ignore the problem we are limiting ourselves to a small proportion of people, we are limiting our point of view, our understanding of the world, our ideas and solutions.

If we ignore the problem we are closing the community to new people with different backgrounds and experiences, with fresh and different ideas, who probably have more to contribute than we do. We would be preventing new ideas from coming along and making Android, and its community, better.

It is not going to be an easy or fast process, and it is ongoing. New problems will come along, and we need to be open to acknowledging and solving them.

Because we are a community

“Sense of community is a feeling that members have of belonging, a feeling that members matter to one another and to the group, and a shared faith that members’ needs will be met through their commitment to be together” (McMillan, 1976).

If we are truly a community we should be supporting each other to be the best we can be. We need to be aware of each other’s opinions and needs, be aware of what makes people leave the community or even the industry, make new members feel welcome and meet their expectations.

We need to be supportive of each other, and empower those who don’t have the confidence to speak up.

Because other communities are doing it

The truth of the matter is that as a community, we are way behind other tech communities when it comes to talking about social problems in the industry, about non-technical skills that make you a better developer, or about the psychology of our work.

Ruby conferences, Python conferences, PHP conferences, lead developer conferences, open tech conferences, JavaScript conferences, and even iOS conferences! These are just a few of them; many more communities openly talk about these issues at their conferences.

So, if all these communities are doing it, why are we still so far behind?



Songkick from a Tester’s point of view

Earlier this year we wrote about how we move fast but still test the code.

This was recently followed by another post about Developer happiness at Songkick which also focuses on the processes we have in place, as they provide a means to a productive working environment.

How does this all look from a tester’s point of view?

I have been asked a few times what a typical day looks like for a tester at Songkick. This post is about the processes that enable us to move fast, from a tester’s point of view, and how testing is integrated into our development lifecycle.

Organising our work

Teams at Songkick are organised around products and the process we follow is agile. Guided by the product manager and our team goals, we organise our sprints on a weekly basis with a prioritisation meeting. This allows us to update each other on the work in progress and determine the work that may get picked up during that week.

Prioritisation meetings also take into consideration things such as holidays and time spent doing other things (meetings, fire fighting, pairing).

On top of that we check our bug tracker, to see if any new bugs were raised that we need to act on.

Everyone in the company can raise bugs, enabling us to constantly make decisions on how to improve not only our user-facing products, but also our internal tools.

We also have daily stand ups at the beginning of each day, where we provide information on how we are getting on, and any blockers or other significant events that may impact our work positively or negatively.

Every two weeks we also have a retrospective to assess how we are doing and what improvements we can make.

Retrospectives

The kick-off

Sabina gave a great definition of the kick-off document here. Each feature or piece of work has a kick-off document. We try to always have a developer, product manager and tester in the conversation. More often than not we also include other developers, or experts, such as a member from tech ops or a frontline team. Frontline teams can be anyone using internal tools directly, members from our customer support team, or someone from the sales team.

Depending on the type of task (is it a technical task or a brand-new feature?) we use a slightly different template. The reasoning behind this is that a technical, non-user-facing change will require a different conversation than a user-facing change.

But at the end of the day this is our source of truth, documenting, most importantly, the problem we are trying to solve, how we think we will do it, and any changes that we make to our initial plan along the way.

The kick-off conversation is where the tester can ask a tonne of questions. These range from the technical implementation and potential performance issues to what the risks are and what our testing strategy should be. Do we need to add a specific acceptance test for this feature, or are unit and integration tests enough?

A nice extra section in the document is the “Recurring bugs” section.

The recurring bugs consist of questions to make sure we are not implementing something we may have already solved and also bugs we see time and time again. These can range from field lengths and timezones, to nudges about considering how we order lists. What it doesn’t include is every bug we have ever seen. It is also not static and the section can evolve, removing certain questions or notes and adding others.

Having a recurring bugs section in a kick-off document is also great for on-boarding, as you start to understand what has previously been an issue, and you can ask why and what we do now to avoid it.

What’s next?

After the kick-off meeting, I personally tend to familiarise myself with where we are making the change.

For example, say we are adding a new address form to our check-out flow when you purchase tickets. I will perform a short exploratory test of this in our staging environment or on production. Anytime we do exploratory testing, we tend to record these as time-boxed test sessions in a lightweight format. This provides a nice record of the testing that was performed and may also lead to more questions for the kick-off document.

Once the developer(s) working on the feature have had a day or so, we do a test modelling session together.

Test Modelling

Similar to the kick-off this is an opportunity for the team to explore the new feature and how it may affect the rest of the system.

It consists of a short collaboration session, with at least a developer and a tester, plus, where applicable, the design lead and/or another expert, where we mind-map through test ideas, test data and scenarios.

We do this because it enables the developer to test early, before releasing the product to a test/production environment, which in turn means we can deliver quality software and value sooner.

It is also a great way to share knowledge. Everyone who comes along brings different experiences and knowledge.

Test Model for one of our internal admin pages


The collaborators work together to discuss what needs checking and what risks need exploring further.

We might also uncover questions about the feature we’re building. Sharing this before we build the feature can help us build the right feature, and save time.

For example, we recently improved one of our admin tools. During the test modelling session, we discovered a handful of questions, including some around date formats, and also default settings. By clearing these questions up early, we not only ensure that we build the right thing, but also that we build it in the most valuable way for the end user.

In this particular example, it transpired that following a certain logic for setting defaults would not only save a lot of time, but also greatly reduce the likelihood of mistakes.

The team (mainly the developer) will use the resulting mind map for testing.

It becomes a record of test scenarios and cases we identified and covered as part of this bit of work.

As we mainly work in continuous deployment or delivery (depending on the project and the risk of the feature), testers often test in production using real data, so as not to block the deployment pipeline.

This has the advantage that the data is realistic (it is production data after all), there are no discrepancies in infrastructure, and performance can be adequately assessed.

Downsides can be that if we want to test purchases, we have to make actual purchases, which creates an overhead on the support team, as they will need to process refunds.

Testers and Bugs

Any issues we find during our testing on production or a staging environment (if we are doing continuous delivery), will be logged in our bug tracker and prioritised.

Some issues will be fixed straight away and others may be addressed at a later date.

As mentioned above, anyone at Songkick can raise issues.

If an issue relates to one of the products your teams are working on, you (as the tester on those teams) will be notified. It is often good to verify the issue as soon as possible, ask for more information, and assess whether it is blocking the person who reported it, or whether it is an issue at all.

We do have guidelines saying not to bother logging blockers but to come to the team directly; however, this may not always happen, so as testers we always keep an eye on the bugs that are raised.

Want to know more?

In this post I described some of the common things testers at Songkick do.

Depending on the team and product there may also be other things, such as being involved in weekly performance tests, hands on mobile app testing, talking through A/B tests and coaching and educating the technology team and wider company on what testing is.

If any of that sounds interesting, we are always looking for testers. Just get in touch.

SlackMood – Analyse your teams happiness via Slack Emoji usage

We had a hack day in the office a few weeks back, and I decided I wanted to build something with Slack. Hack days give us a chance to work with people outside of our product teams, to work with different and new technologies, and to try out fun ideas we’ve had.

Like any sensible company, we use Slack to help us collaborate and improve communication, but we also use it to share cat gifs (we have an entire channel) and a whole host of default, aliased and custom emojis. Based on this, I wondered if I could use our emoji use to gauge the average mood of the whole company. And so SlackMood was born.


SlackMood showing that 85% of our current Slack use is neutral or positive.

My first step was figuring out how to get a feed of messages across our whole Slack. I’d already decided to build it in Golang, and fortunately some clever person had already built a Golang library for Slack, saving me a huge amount of work. I registered a new bot on the Slack developer site and started hacking.

Unfortunately I quickly ran into an issue. I wanted to get the RTM (real-time message) feed of every channel, but it turns out bot accounts can’t join channels unless they’re invited. I could see 3 solutions to this:

  1. Create a real Slack user with an API key (I decided Finance wouldn’t be happy with this)
  2. Add my own API key alongside the bot, use the API to have me join all the channels, invite the bot and leave – annoying everyone in the company
  3. Use the message history APIs to periodically scrape the channels.

I decided to go with 3, as it seemed the simplest to implement.

The actual code for this was relatively simple: a loop that periodically fetches the message history of each channel through the API.

It then passes each message object into a function that extracts the emoji counts. That function uses both a regular expression on the message text and an iteration over the reactions.
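That extraction function isn’t reproduced here, but it can be sketched roughly like this (the struct shapes below are simplified stand-ins for the Slack library’s types, not the real ones):

```go
package main

import "regexp"

// Simplified stand-ins for the Slack library's message and reaction types.
type Reaction struct {
	Name  string // emoji name, e.g. "thumbsup"
	Count int
}

type Message struct {
	Text      string
	Reactions []Reaction
}

// emojiPattern matches inline emoji like :cat: in message text.
var emojiPattern = regexp.MustCompile(`:([a-z0-9_+-]+):`)

// ExtractEmoji counts emoji both written inline in the text and
// added as reactions, returning a name -> count map.
func ExtractEmoji(m Message) map[string]int {
	counts := map[string]int{}
	for _, match := range emojiPattern.FindAllStringSubmatch(m.Text, -1) {
		counts[match[1]]++
	}
	for _, r := range m.Reactions {
		counts[r.Name] += r.Count
	}
	return counts
}
```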

I’d decided to use BoltDB for the backend storage. Maybe not the best idea, as I think a relational datastore like SQLite would have been much better suited, but Bolt was a technology I’d never used before, so it seemed interesting. We generate a message ID from the base message, then the reactions all have their own IDs based on the user who posted them. These are all stored in BoltDB as message ID -> details, where details is a struct describing the emoji:
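The original struct isn’t shown; something along these lines (the field names here are illustrative guesses, not the real ones):

```go
package main

import "time"

// EmojiDetails is a hypothetical shape for the details struct stored
// against each message/reaction ID in BoltDB.
type EmojiDetails struct {
	Name      string    // emoji name, e.g. "cat"
	User      string    // Slack user ID of whoever posted it
	Timestamp time.Time // when the message or reaction was posted
}
```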

Now we’ve got a list of emojis and their timestamps, we can go through and assign each one a rating of either positive, negative or neutral. Fortunately, some of our team had already built a spreadsheet of emoji sentiment analysis for a previous hack project (turns out, we love emojis) with positive-to-negative rankings (1 to -1):


Our emoji rankings spreadsheet, obviously.

With our emoji ranks loaded into a struct array, we can go through and calculate the score of each emoji we have seen.

(N.B. looking back at this now, I realise a map of emoji name -> mood would have been much better than a double loop, but this was about 6 hours in and I was keen to get something working.)
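A sketch of what that double loop might have looked like (names and shapes are illustrative, not the actual SlackMood code):

```go
package main

// EmojiRank pairs an emoji name with a sentiment score from -1
// (negative) to 1 (positive), mirroring the spreadsheet above.
type EmojiRank struct {
	Name string
	Mood float64
}

// GetMood averages the sentiment of a list of seen emoji names,
// using the double loop mentioned above (a name -> mood map
// would indeed be better).
func GetMood(seen []string, ranks []EmojiRank) float64 {
	total := 0.0
	matched := 0
	for _, name := range seen {
		for _, rank := range ranks {
			if rank.Name == name {
				total += rank.Mood
				matched++
				break
			}
		}
	}
	if matched == 0 {
		return 0
	}
	return total / float64(matched)
}
```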

Now we know the mood of all the emojis, calculating the graph just involves iterating through all the seen emojis and storing them in a map of date->mood. The GetMood function above works on a list of emojis, so we just bucket the emojis by the selected time period.

Due to storing all the emoji in Bolt and not being able to do proper filtering, we first filter by the time period we care about, then divide this up.

GraphMood returns a struct array which we can just JSON-encode and feed into Chart.js to get the nice visualisation above.

All in all, it was pretty fun, but the whole project contains a lot of terrible code. If you want, check it out on GitHub here.

Other stuff I would have liked to add:

  • Most positive/negative person
  • Most used emoji
  • Biggest winker 😜

Maybe next hack day.


P.S. if you fancy working somewhere with regular hack days, in a team which has a pre-prepared spreadsheet with emoji sentiment analysis, Songkick are hiring a variety of technology roles at the moment. So come work with us, we have a 64% SlackMood happiness rating™.

Developer happiness at Songkick

Back in November 2014 I was on a plane back from Vancouver, where I’d left my job in the Visual Effects industry to return to my hometown, London, with the definite plan of trying something new and the vague idea of that thing being working in a startup. In the year before that I’d developed an interest in lean, agile and the practice of experimentation and iteration as a way to navigate and progress through an increasingly complex world. Also, I really just thought it would be more fun to work on new stuff in a smaller company that cared about process and developer satisfaction. And I was right.

Songkick takes developer happiness very seriously. All the things that frustrated me working in my old team are age-old problems that have frustrated most developers at some point. Thankfully there are lots of leaders, resources and movements in this area that have sought to address this and at Songkick we are always looking to improve things to make working as fun and pain-free as possible.

I’m going to give you a run-down of some of the things that have increased my developer happiness – this is not an exhaustive list!

The kick-off document – the canonical source of truth!

The standard “As a user.. I want to… So that…” user story that starts the kick-off really gives the motivation and the context of the feature we are trying to build. This document acts as a reference point throughout the development process. We map out the scope of the feature with the product manager and designer, and the tester gets involved to help get us thinking of possible bugs and risks early on in the process. Certain questions might be raised but not answered during the kick-off, so it’s updated throughout to reflect our learnings and any new decisions that have been made. Once we are kicked-off we can dive in and start building, even if there are still some unanswered questions.

It’s a very simple idea, but you might be surprised how many companies don’t do this. In my previous jobs this consisted of scribbling down in a notebook a vague idea of what a user wanted, a degree of strategising as to how that might be achieved, and then, one long-running feature branch later, deploying to production test-free and hoping there was no comeback (there invariably was: most likely a bug, or a disagreement on what it was supposed to do in the first place).

Kick-offs ensure that we build the right thing, no more and no less.

Test modelling

For non-trivial features we will also schedule a test modelling session using mind-maps with the tester to think of all the possible failure scenarios and work out a test strategy. Some of these things will be common to all features of this type, others will require specific business or technical knowledge. For internal tools we invite members of the relevant operational team to get that extra context. Mind-mapping really takes you out of the low-level detail of the implementation and makes you think about the real-world impact of the feature you’re writing, and usefully it forces you to think about all the uncomfortable things that could go wrong ahead of time.

Written test coverage

We write tests at various levels of abstraction so that we can avoid bugs and articulate our business logic. This ensures we can spend the vast majority of our time developing features and not fixing bugs.

Pairing

We use pair programming as a way of collaborating on features, knowledge sharing and of course onboarding new developers. The benefits and drawbacks of pairing are well documented, but in short it acts as a real-time code review and focussing-aid whilst making you tired quite quickly! We don’t pair on everything – it’s good to vary between this and some deep-thinking solo programming time.

Dean and me, clearly having fun.


Fast iterations and continuous deployment

Our continuous deployment pipeline means it’s a one-step process (and a matter of minutes) to deploy a change to production. Thanks to the test coverage we build as part of a feature (and previous coverage that acts as regression tests), it’s also pretty safe: no sign-off required. It’s great to see your code out in the wild as soon as it’s built and to be able to act on feedback quickly. It also means you don’t lose context in the meantime.

Getting involved

Developers at Songkick are fully involved in shaping not only our products but also our processes and values. We have councils for, among other things, security, hiring strategy and API design, which anyone can join, and our tech values are workshopped by the whole team. You will often find us at conferences, attending/organising meetups and writing blog posts such as this one.

Catalog: Increasing visibility for our Android UI tests

Getting automatic feedback from tests is extremely important when building any kind of software. At Songkick, our code is tested, validated, and reported through Jenkins CI.
The pipeline around our Android app includes static analysis, unit tests and instrumentation tests running on real devices and emulators.
Previously, we used square/spoon to run our instrumentation tests. It did a great job, with support for screenshots and LogCat recordings. But recently we had to drop it: it conflicted with another library, LogCat recording stopped working, and it was taking too long to run all of our tests (around 15 minutes for our entire test suite).
So we moved to the official connected{Variant}AndroidTest tasks. Despite being much faster (around 8 minutes for the same test suite), we were missing the logs. When a test was failing, we couldn’t check the logs for more details. So we started re-running our tests and losing trust in them.

Introducing Catalog

Catalog is a Gradle plugin for Android. When added to your project, it runs with connected{Variant}AndroidTest tasks. At the end of the tests, it generates a report per device in app/build/outputs/androidTest-results/:

(Example Catalog report)

Why should I use it?

  • Catalog is built on top of the Android build tools; we are not introducing any new test tasks
  • It will give you more confidence in your tests
  • It is lightweight (basically 8 simple classes)
  • It is fast, it won’t add any significant overhead to your build time

Get started

To include the plugin in your project, just add these lines in your app/build.gradle:
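The lines themselves aren’t shown above; roughly, it would be something like this (the artifact coordinates and plugin id here are guesses, check the Catalog README for the real ones):

```groovy
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        // hypothetical coordinates, see the project README
        classpath 'com.songkick:catalog:1.0'
    }
}

apply plugin: 'com.songkick.catalog'
```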

How does it work?

Catalog consists of two Gradle tasks:

  • recordConnected{Variant}AndroidTest: runs before connected{Variant}AndroidTest and connects to adb to record the LogCat for the current application.
  • printConnected{Variant}AndroidTest: runs after connected{Variant}AndroidTest, gathers the recorded logs, and writes a .txt and an .html file into app/build/outputs/androidTest-results/.

Going forward

We are starting small with Catalog, but we would love suggestions and feedback. If you like the plugin, please create a pull request or post an issue. We have a few ideas to make it even more awesome, like:

  • show the status of the test (failure/success/ignored)
  • generate a html file listing all devices
  • add support for screenshots

Anything is possible, feel free to contribute: https://github.com/songkick/catalog

How Docker is changing the way we develop, test & ship apps at Songkick

We’re really excited to have shipped our first app that uses Docker throughout our entire release cycle; from development, through to running tests on our CI server, and finally to our production environment. This article explains a bit about why we came to choose Docker, how we’re using it, and the benefits it brings.

Since Songkick and Crowdsurge merged last year we’ve had a mix of infrastructures, and in a long-term quest to consolidate platforms we’ve been looking at how to create a great development experience that would work cross-platform. We started by asking what a great development environment looks like, and came up with the following requirements:

  • Isolate dependencies (trying to run two different versions of a language or database on the same machine isn’t fun!)
  • Match production accurately
  • Fast to set up, and fast to work with day-to-day
  • Simple to use (think make run)
  • Easy for developers to change

We’ve aspired to create a development environment that gets out of the way and allows developers to focus on building great products. We believe that if you want a happy, productive development team it’s essential to get this right, and with the right decisions and a bit of work Docker is a great tool to achieve that.

We’ve broken down some advice and examples of how we’re using Docker for one of our new internal apps.

Install the Docker Toolbox

The Docker Toolbox provides you with all the right tools to work with Docker on Mac or Windows.

A few of us have also been playing with Docker for Mac, which provides a more native experience. It’s still in beta, but it’s a fantastic step forward compared to the Docker Toolbox and docker-machine.

Use VMware Fusion instead of VirtualBox

Although the Docker Toolbox comes with VirtualBox included, we chose to use VMware Fusion instead. File-change notifications are significantly better under VMware Fusion, allowing features like Rails auto-reloading to work properly.

Creating a different Docker machine is simple:
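The command isn’t shown above; with docker-machine it looks something like this (the machine name `dev` is arbitrary):

```shell
# Create a new machine backed by VMware Fusion rather than VirtualBox
docker-machine create --driver vmwarefusion dev

# Point the docker CLI at the new machine
eval $(docker-machine env dev)
```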

Use existing services where possible

In development we connect directly to our staging database, removing a set of dependencies (running a local database, seeding structure and data) and giving us a useful, rich dataset to develop against.

Having a production-like set of data to develop and test against is really important, helping us catch bugs, edge-cases and data-related UX problems early.

Test in isolation

For testing we use docker-compose to run the tests against an ephemeral local database, making our tests fast and reliable.

Because you may not want to run your entire test suite each time, we also have a test shell ideal for running specific sets of tests:
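The exact commands aren’t shown above, but with docker-compose this might look like the following (the service name `test` is an assumption):

```shell
# Run the whole suite against the ephemeral database
docker-compose run --rm test

# Or open a shell in the test container to run a specific set of specs
docker-compose run --rm test bash -c "bundle exec rspec spec/models"
```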

Proper development tooling

As well as running the Ruby web server through Docker, we also provide a development shell container, aliased for convenience. This is great for trying out commands in the Rails console or installing new gems without needing Ruby or other dependencies on your Mac.
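Such an alias might look like this (the service name `web` and alias name are illustrative):

```shell
# Open a throwaway container with the app's Ruby environment,
# e.g. for the Rails console or installing gems
alias dev-shell='docker-compose run --rm web bash'
```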

Use separate Dockerfiles for development and production

We build our development and production images slightly differently. They both declare the same system dependencies but differ in how they install gems and handle assets. Let’s run through each one and see how they work:

Dockerfile.dev
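The original file isn’t reproduced here; a minimal sketch of a development Dockerfile taking this approach might look like the following (the base image and commands are assumptions for a Rails app):

```dockerfile
FROM ruby:2.3

WORKDIR /app

# Copy only the files that affect gem installation first, so Docker's
# layer cache can skip `bundle install` when only app code changes
COPY Gemfile Gemfile.lock ./
COPY vendor/cache ./vendor/cache
RUN bundle install --local

# Then copy the rest of the application
COPY . .

CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```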

Here we deliberately copy the Gemfile, corresponding lock file and the vendor/cache directory first, then run bundle install.

When steps in the Dockerfile change, Docker only re-runs that step and steps after. This means we only run bundle install when there’s a change to the Gemfile or the cached gems, but when other files in the app change we can skip this step, significantly speeding up build time.

We deliberately chose to cache the gems rather than install them afresh from Rubygems.org each time, for three reasons. First, it removes a deployment dependency: when you’re deploying several times a day it’s not great having to rely on more external services than necessary. Second, it means we don’t have to authenticate to install private or Git-based gems from inside containers. Finally, it’s much faster installing gems from the filesystem, using the --local flag to avoid hitting Rubygems altogether.

Dockerfile.prod
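Again a sketch rather than the real file, differing from the development one in the gem groups installed and in asset precompilation:

```dockerfile
FROM ruby:2.3

WORKDIR /app

COPY Gemfile Gemfile.lock ./
COPY vendor/cache ./vendor/cache
# Skip gems only needed in development and test
RUN bundle install --local --without development test

COPY . .

# Bake precompiled assets into the image
RUN bundle exec rake assets:precompile

CMD ["bundle", "exec", "rails", "server", "-e", "production"]
```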

For production we install our gems differently, skipping test and development groups and precompiling assets into the image.

Deployment

To release this image we tag it as the latest version, as well as the git SHA. This is then pushed to our private ECR.

We deliberately deploy that specific version of the image, meaning rolling back is as simple as re-deploying a previous version from Jenkins.
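The release step can be sketched as follows (the image name and registry URL are placeholders, not real values):

```shell
SHA=$(git rev-parse --short HEAD)
docker build -f Dockerfile.prod -t myapp:$SHA .

# Tag as both the git SHA and latest, then push to the private registry
docker tag myapp:$SHA 123456789.dkr.ecr.eu-west-1.amazonaws.com/myapp:$SHA
docker tag myapp:$SHA 123456789.dkr.ecr.eu-west-1.amazonaws.com/myapp:latest
docker push 123456789.dkr.ecr.eu-west-1.amazonaws.com/myapp:$SHA
docker push 123456789.dkr.ecr.eu-west-1.amazonaws.com/myapp:latest
```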

Running in production

For running containers in production, we’re doing the simplest possible thing: using Docker to solve a dependency-management problem only.

We’re running one container per node, using host networking and managing the process using upstart. When deploying we simply tell the upstart service to restart, which pulls the relevant image from the registry, stops the existing container and starts the new one.

This isn’t the most scalable or resource-efficient way of running containers but for a low-traffic internal app it’s a great balance of simplicity and effectiveness.
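As an illustration, such an upstart job might look something like this (the paths, names and registry URL are invented for the example):

```
# /etc/init/myapp.conf
description "myapp container"
respawn

script
  docker pull 123456789.dkr.ecr.eu-west-1.amazonaws.com/myapp:latest
  docker rm -f myapp || true
  exec docker run --name myapp --net=host \
    123456789.dkr.ecr.eu-west-1.amazonaws.com/myapp:latest
end script
```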

Next steps

One thing we’re still missing in production is zero-downtime deploys. Amazon’s ECS handles this automatically (by spinning up a new pool of containers before swapping them out in the load balancer), so we’re looking to move towards using that instead.

We’re still learning a lot about using Docker but so far it’s been a powerful, reliable and enjoyable tool to use for both developers and ops.

Ingredients for a healthy Android codebase

Getting started in Android development is pretty straightforward, there are plenty of tutorials and documentation provided by Google. But Google will teach you to build a tent, not a solid sustainable house. As it’s still a very young platform with a very young community, the Android world has been lacking some direction on how to properly architect an app. Recently, some teams have started to take the problem more seriously, with the shiny tagline “Clean architecture for Android”.

At Songkick, we had the chance to rebuild the Android client from scratch 7 months ago. The previous version was working very well, but the codebase had not been touched for almost 3 years, which left us with old practices, old libraries, and Eclipse. We wanted to start off in the right direction, so we spent a week designing the general architecture of the app, trying to apply the following principles from Uncle Bob's clean architecture:

Systems should be

  • Independent of Frameworks. The architecture does not depend on the existence of a particular library. This allows you to use such frameworks as tools, rather than having to design your system around their limited constraints.
  • Testable. The business rules can be tested without the UI, Database, Web Server, or any other external element.
  • Independent of UI. The UI can change easily, without changing the rest of the system. A Web UI could be replaced with a console UI, for example, without changing the business rules.
  • Independent of Database. You can swap out Oracle or SQL Server, for Mongo, BigTable, CouchDB, or something else. Your business rules are not bound to the database.
  • Independent of any external agency. In fact your business rules simply don’t know anything at all about the outside world.

…and this is what we ended up with:

[Image: the resulting layered architecture]

Layers

Data layer

The data layer acts as a mediator between data sources and the domain logic. It should be a pure Java layer. We divide the data layer into different buckets following the repository pattern. In short, a repository is an abstraction layer that isolates business objects from the data sources.

[Image: the repository pattern]

For example, a repository can expose a searchArtist() method, but the domain layer will not (and should not) know where the data comes from. In fact, one day we could swap the data source from a database to a web API and the domain layer would not see the difference.

When the data source is the Songkick REST API, we usually follow the format of the endpoint to know where data access belongs. That way we have a UserRepository, an ArtistRepository, an EventRepository, and so on.
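As a sketch of what such a repository might look like (all names here are hypothetical, not Songkick's actual code), a pure-Java interface lets any source, in-memory, database, or REST, be swapped in behind it:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Domain-facing contract: callers don't know where artists come from.
interface ArtistRepository {
    List<Artist> searchArtist(String query);
}

// Simple model object (fields assumed for illustration).
class Artist {
    final String name;
    final boolean onTour;
    Artist(String name, boolean onTour) {
        this.name = name;
        this.onTour = onTour;
    }
}

// One possible source: an in-memory store. A REST- or database-backed
// implementation could replace this without the domain layer noticing.
class InMemoryArtistRepository implements ArtistRepository {
    private final List<Artist> artists =
            Arrays.asList(new Artist("Mogwai", true), new Artist("Slowdive", false));

    @Override
    public List<Artist> searchArtist(String query) {
        List<Artist> results = new ArrayList<>();
        for (Artist a : artists) {
            if (a.name.toLowerCase().contains(query.toLowerCase())) {
                results.add(a);
            }
        }
        return results;
    }
}
```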

Domain layer

The role of the domain layer is to orchestrate the flow of data and offer its services to the presentation layer. The domain layer is application-specific; this is where the core business logic belongs. It is divided into use cases. A use case should not be directly linked to any external agency, and it should also be a pure Java layer.

Presentation layer

At the top of the stack, we have the presentation layer which is responsible for displaying information to the user.

That’s where things get tricky because of this class:

[Image: android.app.Activity]

When I started developing for Android, I found that an Activity is a very convenient place where everything can happen:

  • it’s tied to the view lifecycle
  • it can receive user inputs
  • it’s a Context so it gives access to many data sources (ContentResolver, SharedPreferences, …)

On top of that, most of the samples provided by Google put everything in an Activity, so what could go wrong? If you follow that pattern, I can guarantee that your Activity will become huge and untestable.

We took the decision to consider our activities/fragments as views and make them as dumb as possible. The view related logic lives in presenters that communicate with the domain layer. Presenters should only have simple logic related to presentation of the data, not to the data itself.

Models vs. View models

This architecture moves a lot of logic away from the presentation layer, but there is one last thing we haven't considered: models. The models we get from the data sources are very rarely what we want to display to the user. It's very common to do some extra processing just before binding the data to the view. We've seen apps with 300 lines of code in onBindViewHolder(), resulting in very slow view recycling. This is unacceptable: why would you want to add extra overhead on the main thread? Why not move that work to the same background thread you used to fetch the data?

In the Songkick Android app, the presentation layer barely knows what the original model is. It only deals with view models. A view model is the view representation of the content the data layer fetched. In the domain layer, each use case has a transformer that converts models to view models. To respect the clean architecture rules, the presentation layer provides the transformer to the domain layer, and the domain layer uses it without really knowing what it does.

So say that you have the following Artist model:

[Image: the Artist model]

If we just want to show the name and whether the artist is on tour, our ArtistViewModel is as follows:

[Image: the ArtistViewModel]

So that we can efficiently bind it to our view:

[Image: binding the view model to the view]
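The original post shares those snippets as screenshots; as a hedged pure-Java sketch of what the prose describes (class and field names assumed, not the real Songkick code), the model, view model, and transformer trio might look like:

```java
// Data-layer model, as fetched from a source (fields assumed from the prose).
class Artist {
    final String name;
    final boolean onTour;
    Artist(String name, boolean onTour) {
        this.name = name;
        this.onTour = onTour;
    }
}

// View model: only what the view needs, precomputed ahead of binding.
class ArtistViewModel {
    final String name;
    final boolean showOnTourBadge;
    ArtistViewModel(String name, boolean showOnTourBadge) {
        this.name = name;
        this.showOnTourBadge = showOnTourBadge;
    }
}

// Transformer supplied by the presentation layer and run by the domain
// layer, on the same background thread that fetched the data.
class ArtistTransformer {
    ArtistViewModel transform(Artist artist) {
        return new ArtistViewModel(artist.name, artist.onTour);
    }
}
```

Because the transformation happens off the main thread, onBindViewHolder() is left with nothing to do but copy precomputed values into views.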

Communication

To communicate between these layers, we use RxJava by:

  • exposing Observables in repositories
  • exposing methods to subscribe/unsubscribe to an Observable that emits ViewModels in the use case
  • subscribing/unsubscribing to the use case in the Presenter

Structure

To structure our app we are using Dagger in the following way:

[Image: how our Dagger modules are scoped]

Repositories are unique per application as they should be stateless and shared across activities. Use cases and presenters are unique per Activity/Fragment. Presenters are stateful and should be linked to a unique Activity/Fragment.

We are also trying to follow the quote by Erich Gamma:

“Program to an interface, not an implementation”

  • It decouples the client from the implementation
  • It defines the vocabulary of the collaboration
  • It makes everything easier to test

Testing

Most of the pieces in this stack are pure Java classes, so they can be unit tested without Robolectric. The only bits that need Robolectric are the Activities and Fragments.
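As an illustration of that pure-Java testability (all names hypothetical, not from the Songkick codebase), a presenter backed by a fake data source can be exercised with plain JVM assertions, no emulator or Robolectric required:

```java
import java.util.List;

// Minimal view contract the presenter talks to.
interface ArtistView {
    void showArtists(List<String> names);
}

// Hypothetical presenter: pure Java, so it runs in a plain JVM unit test.
class ArtistPresenter {
    interface NamesSource {
        List<String> fetchArtistNames();
    }

    private final NamesSource source;

    ArtistPresenter(NamesSource source) {
        this.source = source;
    }

    void loadArtists(ArtistView view) {
        view.showArtists(source.fetchArtistNames());
    }
}

// A fake view that records what it was told to display.
class RecordingView implements ArtistView {
    List<String> shown;
    @Override
    public void showArtists(List<String> names) {
        shown = names;
    }
}
```

A test substitutes a lambda for the source and asserts on what the fake view recorded.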

We usually prefer testing the presentation layer with pure UI tests using Espresso. The good thing is that we can just mock the data layer to expose observables emitting entities from a JSON file and we’re good to go:

[Image: mocking the data layer for an Espresso test]

Of course there are drawbacks to only testing the domain and presentation layers without checking compliance with the external agencies, but we have generally found tests to be much more stable and accurate with this pattern. End-to-end tests are also valuable, and we could imagine adding a separate category that runs through some important user journeys by providing the default sources to our data layer.

Conclusion

We've now run the new app for 4 months and it has proved very stable and very maintainable. We're also in a great place with good test coverage from both unit and UI tests. The codebase has scaled well as we add new features.

Although it works for us, we are not saying that everyone should go for this architecture. We’re just at the first iteration of “Clean architecture” for Android, and are looking forward to seeing what it will be in the future.

Here’s a link to the talk I gave about the same topic: https://youtu.be/-oZswd1j5H0 (slides: https://speakerdeck.com/romainpiel/ingredients-for-a-healthy-codebase)

References

Uncle Bob's clean architecture: http://fernandocejas.com/2014/09/03/architecting-android-the-clean-way
https://github.com/android10/Android-CleanArchitecture
Martin Fowler – The repository pattern
Erich Gamma – Design Principles from Design Patterns

Move fast, but test the code

At Songkick we believe code only starts adding value when it’s out in production, and being used by real users. Using Continuous Deployment helps us ship quickly and frequently. Code is pushed to Git, automatically built, checked, and if all appears well, deployed to production.

Automated pipelines make sure that every release goes through all of our defined steps. We don’t need to remember to trigger test suites, and we don’t need to merge features between branches. Our pipeline contains enough automated checks for us to be confident releasing the code to production.

However, our automated checks are not enough to confirm if a feature is actually working as it should be. For that we need to run through all our defined acceptance criteria and implicit requirements, and see the feature being used in the real world by real users.

In a previous life we used to try to perform all of our testing in the build/test/release pipeline. Not only was this slow and inefficient, depending on lots of different people being available at the same time, but we often found that features behaved very differently in production. Real users do unexpected things, and it's difficult to create truly realistic test environments.

Our motivation to get features out to real users as quickly as possible drove our adoption of Continuous Deployment. Having manual acceptance testing within the release pipeline slowed us down and made processes unpredictable. It was hard to define a process that relied on so many different people. We treated everyday events such as meetings and other work priorities as exceptional events which made things even more delay-prone and frustrating.

Eventually we decided that the build and release pipeline must be fully automated. We wanted developers to be able to push code and know that if Jenkins passed the build, it was safe for them to deploy to production. Attempting to automate all testing is never going to be achievable, nor desirable. Firstly, automated tests are expensive to build and maintain. Secondly, testing, as opposed to checking, is not something that can be automated.

When we check something, we are comparing the system against a known outcome: for example, checking that a button launches the expected popup when clicked, or that a date displays in the specified format. Things like this can, and should, be automated.
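A check like the date-format example is automatable precisely because the expected outcome is fully defined in advance. A minimal sketch (illustrative Java, not Songkick's actual stack or format):

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.Locale;

// An automated check: format a known date and compare it against the exact
// string the spec requires. No human judgement is involved, so a machine
// can run this on every build.
class DateFormatCheck {
    static String displayDate(Date date) {
        // Hypothetical display format, e.g. "25 February 2016".
        return new SimpleDateFormat("d MMMM yyyy", Locale.UK).format(date);
    }
}
```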

Testing is more involved and relies on a human making a judgement. Testing involves exploring the system in creative ways in order to discover the things that you forgot about, the things that are unexpected or difficult to completely define. It's hard to predict how time and specific data combinations will affect computer systems; testing is a good way to uncover what actually happens. Removing the constraint of needing fully defined expected outcomes allows us to explore the system as a user might.

In practical terms this means running automated checks in our release pipeline and performing testing before code is committed, and post release. Taking testing out of the release pipeline removes the time pressures and allows us freedom to test everything as deeply as we require.

[Image: Songkick's Test and Release Process]

Small, informal meetings called kick-offs help involve everyone in defining and designing the feature. We discuss what we're building and why, plan how to test and release the code, and consider ways to measure success. Anything more complicated than a simple bug fix gets a kick-off before we start writing code. Understanding the context is important for helping us do the right thing. If we know that there are deadlines or business risks attached, then we're likely to act differently than in a situation that has purely technical risks.

Coming out of the kick-off meeting we know how risky we consider the feature to be. We will have decided on the best approach to testing and releasing the code. As part of developing the feature we’ll also write or update our automated checks to make sure we don’t break the feature further down the line. Our process is intentionally flexible to allow us to treat each change appropriately depending on risk and need to ship.

Consider a recently released feature to store promoter details against ticket allocations as an example. The feature kick-off meeting identified risks and we discussed what and how to test the feature. We identified ways to break down the work into smaller pieces that could be developed and released independently; each hidden behind a feature flipper to keep it invisible from real users.

Developers and testers paired together to decide on specific areas to test. The tester’s testing expertise, and the developer’s deep understanding of the code feed into an informal collection of test ideas based on risk. Usually these are represented in a visual mind map for easy reference.

The developers, guided by the mind map, tested the feature and added automated unit and integration tests as they went. Front-end changes were overseen by a designer working closely with one of the developers to come up with the best feasible design. Once we had all the pieces of the feature, the whole team jumped in to do some testing and update our automated acceptance tests.

The feature required a bit of data backfilling so the development team were able to use the functionality in production, in ways we expect real users to use it. Of course we found some bugs but by working with small releases we were able to quickly locate the source of the problem. Fast release pipelines allow fixes to be deployed within minutes, making the cost of most bugs tolerably low.

Once the feature had been fully released and switched on for all users we used monitoring to check for unexpected issues. Reviewing features after a week or two of real world usage allows us to make informed decisions about the technical implementation and user experience. Taking the time to review how much value features are adding allows us to quickly spot and respond to problems.

Testing a feature involves many experts. Testers must be on hand to aid the developers in their testing, often by creating a mind map of test ideas to guide testing. We try to use our previous experience of releasing similar features to focus the testing on areas that are typically complex or easy to break. Designers and UX people get involved to make sure the UX works as hoped, and the design looks good on all our supported devices and browsers. Product managers make sure the features actually do what they want them to do. High risk features have additional deep testing from the test team, and in certain cases we throw in some focused performance or security testing.

Most of our bugs come from forgetting use cases or not understanding existing functionality in the system. Testing gives us a chance to use the system in an investigative way to hopefully find these bugs. Moving testing outside of our release pipeline gives us space to perform enough testing for each feature whilst maintaining a fully automated, and fast, release pipeline.

Apple tvOS Tech Talks, London 2016

Apple tvOS Tech Talks
London 2016
by Michael May

[Image: opening slide]

As part of Apple’s plan to get more apps onto the Apple TV platform they instigated one of their irregular Tech Talks World Tours. It came to London on January 11th 2016 and I got a golden ticket to attend the one day event.

The agenda for the day was

Apple TV Tech Talks Kickoff
Designing for Apple TV
Focus Driven Interfaces with UIKit
Break
Siri Remote & Game Controllers
On-Demand Resources & Data Storage
Lunch
Media Playback
Leveraging TVML for Media Apps
Best Practices for Designing tvOS Apps
Break
Tuning Your tvOS App
Making the Most Out of the Top Shelf
App Store Distribution
Reception

All sample code was in Swift, as you might expect, but they made a point of saying that you can develop tvOS apps in Objective-C, C++, and C too. I think these are especially important for the gaming community where frameworks such as Unity are so important (despite Metal and SpriteKit).

I won't go through each session, as I don't think that really serves any useful purpose (the videos will be released, so I am told). Instead I'll expand on some of my notes from the day, picking out the points I found most interesting.

The day started with a brief intro session that included a pre-amble about how TV is so entrenched in our lives and yet so behind the times. This led into a slide that simply said…

[Image: "The Future of TV is Apps" slide]

“The Future of TV is Apps”

That’s probably the most bullish statement of intent that I’ve heard from Apple, so far, about their shiny new little black box. I think that if we can change user behaviour in the coming months and years then I might agree (see my piece at the end).

Then they pointed out that, as this is the very first iteration of this product, there are no permutations to worry about – the baseline for your iOS app might be an iPhone 4S running iOS 8 but for tvOS it’s just the latest and greatest – one box, one OS.

This is a device for which you can assume

  • It is always connected (most of the time)
  • It has a high speed connection (most of the time)
  • It has a fast dual-core processor
  • It has a decent amount of memory
  • It has a decent amount of storage (and mechanisms for maintaining that)

They then went on to explain that the principles for a television app are somewhat different from a phone app. Apple specifically called out three principles that you should consider when designing your app.

  • Connected
    Your users must feel connected to the content of your app. As your app is likely some distance from the user, with no direct contact between finger and content, this is a different experience from touching the glass of an iPhone UI.
  • Clear
    Your app should be legible and the user should never get lost in the user interface. If the user leaves the room for a moment then comes back, can they pick up where they left off?
  • Immersive
    Just like watching a movie or TV series, your app should be wholly immersive whilst on-screen.

If you had said these things to me casually, I would probably have said, "well, yeah, obviously", but when you have it spelled out to you, it gives you pause for thought:

“If I did port my app, how would I make an experience that works with the new remote and also makes sense on everything from a small flat-screen in a studio flat to an insanely big projector in a penthouse?”

Add to that the fact that the TV is a shared experience – from watching content together to different users using your app at different times – and it's not the intimate experience we have learned to facilitate on iOS. It should still be personal, but it's not personal to the same person all the time. Think of Netflix with its user picker at startup, or the tvOS AirBnB app with its avatar picker at the bottom of the screen.

Next was the Siri Remote and interactions via it. This is one complex device packed in a deceptively small form factor – from the microphone to the trackpad, gyroscope and accelerometer, this is not your usual television remote. We can now touch, swipe, swing, shake, click and talk to our media centre. The exciting thing for us as app developers is that almost all of this is open for us to use, either out of the box (for apps) or as custom interactions from raw event streams (particularly useful for games).

As you might expect from Apple, they were keen to stress that there are expectations for certain buttons that you should respect, specifically the menu and play/pause buttons. I like that they are encouraging conformity – it's very much what people expect from Apple – but I found it a bit silly when they demonstrated how one might use the remote in landscape as a controller for a racing game. This, to me, felt like dogma. If you want this to become a great gaming device, accept the natural limitations of the remote and push game controllers as the right choice here. Instead they kept going on about the remote and controllers being first class citizens in all circumstances.

Speaking to an indie game developer friend about the potential of the device, he said that he would like at least three things from Apple before hopping on board:

  • Stats on Apple TV sales to evaluate the size of the market
  • A games pack style version that comes with two controllers to put the device on a par with the consoles
  • Removal of the requirement to support the remote as an option in games. Trying to design a game that must also work with the remote is just too limiting and hopefully Apple will realise this as they talk to more games companies.

A key component of the new way of interacting with tvOS (versus iOS) is the inability to set the focus for the user. Instead you guide the “focus engine” as it changes the focus for the user, in response to their gestures. This gives uniformity, again, and also means that apps cannot become bad citizens and switch the focus under the user. One could imagine the temptation to do this being hard to resist for some kinds of apps – breaking news or the latest posts in a social stream, perhaps.

Instead you use invisible focus guides between views and focusable properties on views to help the engine know what the right thing to do is. At one point in the presentations the speaker said

“Some people think they need a cursor on the Apple TV…they are wrong”

It seems clear to me that the focus engine is designed specifically to overcome this kind of hack, and is a much better solution. If you've ever tried to use the cursor remote on some "Smart" TVs then you'll know how that feels. If not, imagine a mouse with a low battery after one too many happy hour cocktails.

With the expansive but still limited resources of the Apple TV hardware, there will be times when there simply is not enough storage for everything the user wants to install. The same, in fact, already holds true for iOS. Putting aside my rant about how cheap memory and storage are, and how much Apple cashes in on both by making them premium features, their solution is On-Demand Resources (ODR).

With ODR you mark resources as being one of three types, which determine when, and if, they are downloaded, and how they may be purged under low-resource conditions. Apple want you to bundle your resources (images, videos, data, etc, but not code) into resource packs and to tag them. You tag them as one of:

  • Install
  • Prefetch
  • Download only on demand

Install resources come bundled with the app itself (splash screen, on-boarding, first levels, etc.). Prefetch resources are downloaded automatically, but only after the app has launched. On-demand resources are, as you might expect, fetched on demand by the app; they can be purged using heuristics about how likely their removal is to affect the user or the app, such as last-accessed date and priority flags.

Although not talked about that much as far as I can tell, to me TVML is one of the big stories of tvOS. Apple have realised that writing a full blown native app is both expensive and overkill for some. If you’re all about content then you probably need little more than a grid of content to navigate, a single content drill down view and some play/pause of that streaming content. TVML gives you an XML markup language, powered by a JavaScript engine, that vends native components in a native app. It can interact with your custom app code too, through bridges between the JavaScript DOM and the native wrapper. This makes a lot of sense if you are Netflix, Amazon Prime Video, Mubi, Spotify or, as they pointed out, Apple Music and the tvOS App Store.

It's highly specific, but it's specific to exactly the type of content provider Apple so desperately need to woo, and who are likely wondering whether they can afford to commit time and effort to an untested platform. As we've seen with watchOS 2, developers are feeling somewhat wary of investing a lot of time in new platforms when they also have to maintain their existing ones, start moving to Swift, adopt the latest iOS 9 features, and so on.

I think this is a big deal because what Apple are providing is what so many third parties have been offering for years, to differing degrees of success. This is their Cordova, their PhoneGap or, perhaps most closely, their React Native. This is a fully Apple approved, and Apple supported, hybrid app development solution that your tranche of web developers are going to be able to use. If this ever comes to iOS it could open up apps to developers and businesses that just cannot afford a native app team, or the services of an app agency (assuming your business is all about vending content and you can live with a template look and feel). I think this could be really big in the future and in typical Apple fashion they are keeping it very low key for now.

They kept teasing that we were all there to find out how to get featured (certainly people were taking more photos there than anywhere else), but before that they spoke about tuning your apps for the TV. This ranged from useful tricks and tips for the well documented frustrations of trying to enter text with the tvOS remote (make sure to mark email fields as such – Apple will offer a recently used email list if you do) to examples of using built-in technologies to share data instead of asking the user to do work.

To the delight of my friends who work there, they demonstrated the Not On The High Street app and its use of Bonjour to discover the user's iPhone/iPad and push the product they want to buy into the basket of the app on that platform. From there the user can complete their purchase very quickly – something that would be fiddly to do on the TV (slow keyboard, no Apple Pay, no credit card scanner).

Next came another feature that I think could hint at new directions for iOS in the future – the top shelf. If the user chooses to put your app in the top row of apps then, when it's selected, that app gets to run a top shelf extension that populates the shelf with static or dynamic image content. This is the closest thing to a Windows Phone live tile experience that we've seen so far and, as I say, I think it could signpost a future "live" experience for iOS too. A blend of a Today Widget and a Top Shelf Widget could be very interesting.

Finally came the session they were promising; App Store Distribution. The key take-aways for me were

  • Don’t forget other markets (after the US the biggest app stores are Japan, China, UK, Australia, Canada and Germany)
  • Keep your app title short (typing is hard on tvOS)
  • Spend time getting your keywords right (and avoid wasting space with things like plurals)
  • Let Apple know 3-4 weeks before a major release of your app (appstorepromotion@apple.com)
  • Make your app the very best it can be and mindful of the tvOS platform

[Image: top App Store markets]

Then it was on to a reception with some delicious canapés and a selection of drinks. That wasn't what made it great, though. What made it great were all the Apple people in the room, giving their time to everyone who wanted it. This was not the Apple of old, and it was all the better for it. The more of this kind of interaction they can facilitate, the stronger their platform will be for us.

The Future of TV is Apps?

I think the future of consumer electronics is a multi-screen ecosystem where the user interface and, of course the form factor itself, follows the function to which it is in service.

Clearly, the television could become a critical screen in this future. I believe that, even as we get new immersive entertainment and story-telling options (virtual reality, 3D, and who knows what else), the passive television experience will persist. Sometimes all you want to do is just sit back and be entertained with nothing more taxing than the pause button.

A TV with apps allows this but also, perhaps, makes this more complex. When all I want to do is binge on Archer, a system with apps might not be what I want to navigate. That being said, if all I want to do is binge on Archer, and this can be done with a simple “Hey Siri, play Archer from my last unplayed episode”, then it’s a step ahead of my passive TV of old. It had better know I use Netflix and it had better not log me out of Netflix every few weeks like the Fire TV Stick does.

If I then get a notification (that hunts for my attention from watch to phone to television to who knows what else) that reminds me I have to be in town in an hour and that there are problems on the Northern Line so I should leave extra time, I might hit pause, grab my stuff and head out. As I sit on the tube with 20 minutes to kill, I might then say “Hey Siri, continue playing Archer”.

Just as I get to my appointment, I learn via a push notification that my home has noticed a lack of people and gone into low power mode. If I want, I can quickly reply with my expected arrival home time, so that it can put on the heating in time and also be on high alert for anyone else in my house during that period.

I suspect most of these transactions are being powered by apps, not the OS itself, but I do not expect to interact with the apps in most cases anymore. Apps will become simply the containers for the means of serving me these micro-interactions as I need/want them.

One only has to look at the Media Player shelf, Notification Actions, Today Widgets, Watch Apps, Glances, Complications, 3D Touch Quick Actions, and now the tvOS Top Shelf to see that this is already happening, and it will only increase as time goes on. Your app will power multiple screen experiences and be tailored for each, with multiple view types and multiple interactions. Sometimes these will be immersive and last for minutes or hours (games, movie watching, book reading, etc.) but other times these will be micro-interactions of seconds at most (reply to a tweet, check the weather, plan a journey, start a music stream, buy a ticket, complete a checkout). Apps must evolve or die.

That situation is probably a few years off yet, but in the more immediate term, if we want the future of TV to be apps (beyond simply streaming content) then users will need to be persuaded that their TV can be a portal to a connected world.

From playing games to checking the weather to getting a travel report, these are all things for which an app-powered TV could be very useful. It's frequently on, always connected, and has a nice big screen on which to view what you want to know. Whether users find this easier than picking up their iPhone or iPad remains to be seen.

I think Apple see the Apple TV as a Trojan horse. Many years ago, Steve Jobs introduced the iMac as the centre of your digital world; a hub into which you plugged things. I think the Apple TV is the new incarnation of that idea – except the cables have now gone (replaced with the likes of HomeKit, AirPlay and Bonjour), the storage is iCloud and the customisation is through small, focused apps, and not the fully fledged applications of old.

It’s early days and if the iPhone has taught us anything it’s that the early model will rapidly change and improve. Where it actually goes is hard to say, but where it could go is starting to become clear.

Is the future of the TV apps? Probably so, but probably not in the way we think of apps right now. The app is dying, long live the app.

[Image: tour pass]

 


Recent talks on Songkick Engineering

Since I joined Songkick a little over four years ago, our development team has done some amazing things. Our technology, process and culture have improved an enormous amount.

We’ve always been eager to share our progress on this blog and elsewhere, and we often talk about what we’ve learned and where we are still trying to improve.

Here are some recent talks given by members of our team discussing various aspects of how we work.