Ingredients for a healthy Android codebase

Getting started in Android development is pretty straightforward: there are plenty of tutorials and plenty of documentation provided by Google. But Google will teach you how to build a tent, not a solid, sustainable house. As it is still a very young platform with a very young community, the Android world has been lacking direction on how to properly architect an app. Recently, some teams have started to take the problem more seriously, under the shiny tagline “Clean architecture for Android”.

At Songkick, we had the chance to rebuild the Android client from scratch 7 months ago. The previous version worked very well, but the codebase had not been touched for almost 3 years, which left us with old practices, old libraries, and Eclipse. We wanted to set off in the right direction, so we spent a week designing the general architecture of the app, trying to apply the following principles from Uncle Bob’s clean architecture:

Systems should be

  • Independent of Frameworks. The architecture does not depend on the existence of a particular library. This allows you to use such frameworks as tools, rather than having to design your system around their limited constraints.
  • Testable. The business rules can be tested without the UI, Database, Web Server, or any other external element.
  • Independent of UI. The UI can change easily, without changing the rest of the system. A Web UI could be replaced with a console UI, for example, without changing the business rules.
  • Independent of Database. You can swap out Oracle or SQL Server, for Mongo, BigTable, CouchDB, or something else. Your business rules are not bound to the database.
  • Independent of any external agency. In fact your business rules simply don’t know anything at all about the outside world.

…and this is what we ended up with:

[Diagram: the overall architecture of the app]

Layers

Data layer

The data layer acts as a mediator between the data sources and the domain logic, and it should be a pure Java layer. We divide the data layer into different buckets following the repository pattern. In short, a repository is an abstraction layer that isolates business objects from the data sources.

[Diagram: the repository pattern, isolating data sources from the domain layer]

For example, a repository can expose a searchArtist() method, but the domain layer will not (and should not) know where the data comes from. One day we could swap the data source from a database to a web API, and the domain layer would not see the difference.

When the data source is the Songkick REST API, we usually follow the shape of the endpoints to decide where each piece of data access belongs. That way we have a UserRepository, an ArtistRepository, an EventRepository, and so on.
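As a concrete illustration, a repository interface might look like the following sketch (the names and the RxJava return type are assumptions based on how the layers communicate, described below; this is not Songkick's actual code):

    import java.util.List;

    import rx.Observable;

    // The domain layer sees only this abstraction; whether the artists
    // come from a database or the Songkick web API is invisible to it.
    public interface ArtistRepository {
        Observable<List<Artist>> searchArtist(String query);
    }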

Domain layer

The role of the domain layer is to orchestrate the flow of data and offer its services to the presentation layer. The domain layer is application-specific: this is where the core business logic belongs. It is divided into use cases. A use case should not be directly linked to any external agency, and it should also be pure Java.
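A minimal use case might look like this sketch (illustrative names again):

    import java.util.List;

    import rx.Observable;

    // Pure Java: orchestrates repositories, knows nothing about Android.
    public class SearchArtistUseCase {

        private final ArtistRepository artistRepository;

        public SearchArtistUseCase(ArtistRepository artistRepository) {
            this.artistRepository = artistRepository;
        }

        public Observable<List<Artist>> execute(String query) {
            return artistRepository.searchArtist(query);
        }
    }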

Presentation layer

At the top of the stack, we have the presentation layer which is responsible for displaying information to the user.

That’s where things get tricky because of this class:

[Screenshot: the Activity class]

When I started developing for Android, I found that an Activity is a very convenient place where everything can happen:

  • it’s tied to the view lifecycle
  • it can receive user inputs
  • it’s a Context so it gives access to many data sources (ContentResolver, SharedPreferences, …)

On top of that, most of the samples provided by Google have everything in an Activity. What could go wrong? Follow that pattern and I can guarantee that your Activity will end up huge and untestable.

We took the decision to treat our activities and fragments as views, and to make them as dumb as possible. The view-related logic lives in presenters that communicate with the domain layer. Presenters should only contain simple logic related to the presentation of the data, not to the data itself.
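In practice, the Activity or Fragment implements a small view interface and does nothing but render. A sketch (ArtistViewModel is covered in the next section; all names here are illustrative):

    import java.util.List;

    // Implemented by an Activity or Fragment. The presenter drives it;
    // the view only renders what it is given.
    public interface ArtistSearchView {
        void showArtists(List<ArtistViewModel> viewModels);
        void showError();
    }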

Models vs. View models

This architecture moves a lot of logic away from the presentation layer, but there is one last thing we haven’t considered: models. The models we get from the data sources are very rarely what we want to display to the user, and it’s very common to do some extra processing just before binding the data to the view. We’ve seen apps with 300 lines of code in onBindViewHolder(), resulting in very slow view recycling. This is unacceptable: why add that overhead on the main thread when you could move it to the same background thread you used to fetch the data?

In the Songkick Android app, the presentation layer barely knows what the original model is: it only deals with view models. A view model is the view’s representation of the content the data layer fetched. In the domain layer, each use case has a transformer that converts models to view models. To respect the clean architecture rules, the presentation layer provides the transformer to the domain layer, and the domain layer uses it without knowing what it does.
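In code, a transformer can be as small as an RxJava Func1. Here is a sketch built around the Artist and ArtistViewModel shapes shown below (all names are illustrative):

    import java.util.ArrayList;
    import java.util.List;

    import rx.functions.Func1;

    // Maps data-layer models to view models. Because it runs inside the
    // use case's Observable chain, it executes on the same background
    // thread that fetched the data.
    public class ArtistTransformer implements Func1<List<Artist>, List<ArtistViewModel>> {

        @Override
        public List<ArtistViewModel> call(List<Artist> artists) {
            List<ArtistViewModel> viewModels = new ArrayList<>(artists.size());
            for (Artist artist : artists) {
                viewModels.add(new ArtistViewModel(
                        artist.getDisplayName(),
                        artist.getOnTourUntil() != null));
            }
            return viewModels;
        }
    }

The use case can then accept the transformer from the presentation layer and apply it blindly:

    public Observable<List<ArtistViewModel>> execute(
            String query, Func1<List<Artist>, List<ArtistViewModel>> transformer) {
        return artistRepository.searchArtist(query).map(transformer);
    }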

So say that you have the following Artist model:

[Code screenshot: the Artist model]
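Reconstructed as a sketch (the real model has more fields; these names are assumptions):

    import java.util.Date;

    // Plain data-layer model, exactly as fetched.
    public class Artist {

        private final long id;
        private final String displayName;
        private final Date onTourUntil;

        public Artist(long id, String displayName, Date onTourUntil) {
            this.id = id;
            this.displayName = displayName;
            this.onTourUntil = onTourUntil;
        }

        public long getId() { return id; }
        public String getDisplayName() { return displayName; }
        public Date getOnTourUntil() { return onTourUntil; }
    }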

If we just want to show the name and whether the artist is on tour, our ArtistViewModel is as follows:

[Code screenshot: the ArtistViewModel]
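Again as a sketch:

    // Holds exactly what the view needs and nothing more.
    public class ArtistViewModel {

        private final String name;
        private final boolean onTour;

        public ArtistViewModel(String name, boolean onTour) {
            this.name = name;
            this.onTour = onTour;
        }

        public String getName() { return name; }
        public boolean isOnTour() { return onTour; }
    }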

So that we can efficiently bind it to our view:

[Code screenshot: binding the view model in onBindViewHolder()]
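With the transformation done upfront on a background thread, binding collapses into plain assignments, along these lines (the adapter and view holder names are hypothetical):

    @Override
    public void onBindViewHolder(ArtistViewHolder holder, int position) {
        ArtistViewModel artist = viewModels.get(position);
        holder.name.setText(artist.getName());
        holder.onTourBadge.setVisibility(
                artist.isOnTour() ? View.VISIBLE : View.GONE);
    }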

Communication

To communicate between these layers, we use RxJava by:

  • exposing Observables in repositories
  • exposing methods to subscribe/unsubscribe to an Observable that emits ViewModels in the use case
  • subscribing/unsubscribing to the use case in the Presenter
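Put together, a presenter might wire the chain up like this (an RxJava 1.x sketch with illustrative names, reusing the transformer from earlier):

    import java.util.List;

    import rx.Subscription;
    import rx.android.schedulers.AndroidSchedulers;
    import rx.functions.Action1;
    import rx.schedulers.Schedulers;
    import rx.subscriptions.Subscriptions;

    public class ArtistSearchPresenter {

        private final SearchArtistUseCase useCase;
        private final ArtistSearchView view;
        private Subscription subscription = Subscriptions.empty();

        public ArtistSearchPresenter(SearchArtistUseCase useCase, ArtistSearchView view) {
            this.useCase = useCase;
            this.view = view;
        }

        public void search(String query) {
            subscription = useCase.execute(query, new ArtistTransformer())
                    // fetch and transform off the main thread
                    .subscribeOn(Schedulers.io())
                    // deliver ready-to-render view models on the main thread
                    .observeOn(AndroidSchedulers.mainThread())
                    .subscribe(new Action1<List<ArtistViewModel>>() {
                        @Override
                        public void call(List<ArtistViewModel> viewModels) {
                            view.showArtists(viewModels);
                        }
                    }, new Action1<Throwable>() {
                        @Override
                        public void call(Throwable error) {
                            view.showError();
                        }
                    });
        }

        public void detach() {
            subscription.unsubscribe();
        }
    }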

Structure

To structure our app we are using Dagger in the following way:

[Diagram: how the Dagger graph is structured]

Repositories are unique per application, as they should be stateless and shared across activities. Use cases and presenters are unique per Activity/Fragment. Presenters are stateful and should be tied to a single Activity/Fragment.
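As a sketch of that wiring (Dagger 2-style modules, components omitted, and all names hypothetical):

    import javax.inject.Singleton;

    import dagger.Module;
    import dagger.Provides;

    // Installed in the application-scoped component: one stateless
    // repository instance shared by every screen.
    @Module
    public class DataModule {

        @Provides
        @Singleton
        ArtistRepository provideArtistRepository() {
            return new ApiArtistRepository(); // hypothetical implementation
        }
    }

    // Installed in an Activity-scoped component: a fresh use case
    // (and presenter) per screen.
    @Module
    public class ArtistSearchModule {

        @Provides
        SearchArtistUseCase provideSearchArtistUseCase(ArtistRepository repository) {
            return new SearchArtistUseCase(repository);
        }
    }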

We also try to follow this advice from Erich Gamma:

“Program to an interface, not an implementation”

  • It decouples the client from the implementation
  • It defines the vocabulary of the collaboration
  • It makes everything easier to test

Testing

Most of the pieces in this stack are pure Java classes, so they can be unit tested without Robolectric. The only bits that need Robolectric are the Activity/Fragment.

We usually prefer testing the presentation layer with pure UI tests using Espresso. The good thing is that we can just mock the data layer to expose observables emitting entities from a JSON file and we’re good to go:

[Code screenshot: a mock repository used in Espresso tests]
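Such a stub can be tiny. A sketch, assuming Gson and a JSON fixture on the test classpath (the fixture path and names are illustrative):

    import java.io.InputStreamReader;
    import java.lang.reflect.Type;
    import java.util.List;

    import com.google.gson.Gson;
    import com.google.gson.reflect.TypeToken;

    import rx.Observable;

    // Fake data layer for Espresso tests: same interface, canned data.
    public class FakeArtistRepository implements ArtistRepository {

        @Override
        public Observable<List<Artist>> searchArtist(String query) {
            InputStreamReader json = new InputStreamReader(
                    getClass().getResourceAsStream("/fixtures/artists.json"));
            Type listType = new TypeToken<List<Artist>>() {}.getType();
            List<Artist> artists = new Gson().fromJson(json, listType);
            return Observable.just(artists);
        }
    }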

Of course there are drawbacks to only testing the domain and presentation layers without checking that they comply with the external agencies, but we have generally found tests written this way to be much more stable and accurate. End-to-end tests are also valuable, and we could imagine adding a separate category that runs through some important user journeys by providing the default sources to our data layer.

Conclusion

We’ve now been running the new app for 4 months, and it has proved very stable and very maintainable. We’re also in a great place with good test coverage across both unit and UI tests. And the codebase scales well when it comes to adding new features.

Although it works for us, we are not saying that everyone should go for this architecture. We’re just at the first iteration of “Clean architecture” for Android, and are looking forward to seeing what it will be in the future.

Here’s a link to the talk I gave about the same topic: https://youtu.be/-oZswd1j5H0 (slides: https://speakerdeck.com/romainpiel/ingredients-for-a-healthy-codebase)

References

  • Uncle Bob’s clean architecture
  • http://fernandocejas.com/2014/09/03/architecting-android-the-clean-way
  • https://github.com/android10/Android-CleanArchitecture
  • Martin Fowler – The repository pattern
  • Erich Gamma – Design Principles from Design Patterns

Move fast, but test the code

At Songkick we believe code only starts adding value when it’s out in production, and being used by real users. Using Continuous Deployment helps us ship quickly and frequently. Code is pushed to Git, automatically built, checked, and if all appears well, deployed to production.

Automated pipelines make sure that every release goes through all of our defined steps. We don’t need to remember to trigger test suites, and we don’t need to merge features between branches. Our pipeline contains enough automated checks for us to be confident releasing the code to production.

However, our automated checks are not enough to confirm that a feature is actually working as it should. For that we need to run through all our defined acceptance criteria and implicit requirements, and see the feature being used in the real world by real users.

In a previous life we used to try to perform all of our testing in the build/test/release pipeline. Not only was this slow and inefficient, dependent on lots of different people being available at the same time, but we often found that features behaved very differently in production. Real users do unexpected things, and it’s difficult to create truly realistic test environments.

Our motivation to get features out to real users as quickly as possible drove our adoption of Continuous Deployment. Having manual acceptance testing within the release pipeline slowed us down and made the process unpredictable. It was hard to define a process that relied on so many different people, and we treated everyday events such as meetings and other work priorities as exceptional, which made things even more delay-prone and frustrating.

Eventually we decided that the build and release pipeline must be fully automated. We wanted developers to be able to push code and know that if Jenkins passed the build, it was safe to deploy to production. Attempting to automate all testing, however, is never going to be achievable, or even desirable. Firstly, automated tests are expensive to build and maintain. Secondly, testing, as opposed to checking, is not something that can be automated.

When we check something, we are comparing the system against a known outcome: for example, checking that a button launches the expected popup when clicked, or that a date displays in the specified format. Things like this can, and should, be automated.

Testing is more involved and relies on a human making a judgement. It involves exploring the system in creative ways in order to discover the things you forgot about, the things that are unexpected or difficult to completely define. It’s hard to predict how time and specific data combinations will affect computer systems; testing is a good way to uncover what actually happens. Removing the constraint of needing fully defined expected outcomes allows us to explore the system as a user might.

In practical terms this means running automated checks in our release pipeline and performing testing before code is committed, and post release. Taking testing out of the release pipeline removes the time pressures and allows us freedom to test everything as deeply as we require.

[Diagram: Songkick’s test and release process]

Small, informal meetings called kick-offs help involve everyone in defining and designing a feature. We discuss what we’re building and why, plan how to test and release the code, and consider ways to measure success. Anything more complicated than a simple bug fix gets a kick-off before we start writing code. Understanding the context is important in helping us do the right thing: if we know that there are deadlines or business risks attached, we’re likely to act differently than in a situation that has technical risks.

Coming out of the kick-off meeting we know how risky we consider the feature to be. We will have decided on the best approach to testing and releasing the code. As part of developing the feature we’ll also write or update our automated checks to make sure we don’t break the feature further down the line. Our process is intentionally flexible to allow us to treat each change appropriately depending on risk and need to ship.

Consider, as an example, a recently released feature to store promoter details against ticket allocations. The feature kick-off meeting identified risks, and we discussed what to test and how to test it. We identified ways to break the work down into smaller pieces that could be developed and released independently, each hidden behind a feature flipper to keep it invisible to real users (sketched below).
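Conceptually, a flipper is nothing more than a guard around the new code path; the API and names here are hypothetical, just to illustrate the idea:

    // The new UI stays dark in production until the flag is enabled,
    // so partial work can be deployed safely.
    if (features.isEnabled("promoter-details")) {
        renderPromoterDetails(allocation);
    }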

Developers and testers paired together to decide on specific areas to test. The tester’s testing expertise and the developer’s deep understanding of the code fed into an informal collection of test ideas based on risk. Usually these are represented in a visual mind map for easy reference.

The developers, guided by the mind map, tested the feature and added automated unit and integration tests as they went. Front-end changes were overseen by a designer working closely with one of the developers to come up with the best feasible design. Once we had all the pieces of the feature, the whole team jumped in to do some testing and to update our automated acceptance tests.

The feature required a bit of data backfilling, so the development team were able to use the functionality in production in the ways we expect real users to use it. Of course we found some bugs, but by working with small releases we were able to quickly locate the source of each problem. A fast release pipeline allows fixes to be deployed within minutes, making the cost of most bugs tolerably low.

Once the feature had been fully released and switched on for all users we used monitoring to check for unexpected issues. Reviewing features after a week or two of real world usage allows us to make informed decisions about the technical implementation and user experience. Taking the time to review how much value features are adding allows us to quickly spot and respond to problems.

Testing a feature involves many experts. Testers must be on hand to aid the developers in their testing, often by creating a mind map of test ideas to guide it. We try to use our previous experience of releasing similar features to focus the testing on areas that are typically complex or easy to break. Designers and UX people get involved to make sure the UX works as hoped and the design looks good on all our supported devices and browsers. Product managers make sure the features actually do what they want them to do. High-risk features get additional deep testing from the test team, and in certain cases we throw in some focused performance or security testing.

Most of our bugs come from forgetting use cases or not understanding existing functionality in the system. Testing gives us a chance to use the system in an investigative way to hopefully find these bugs. Moving testing outside of our release pipeline gives us space to perform enough testing for each feature whilst maintaining a fully automated, and fast, release pipeline.

Apple tvOS Tech Talks, London 2016

by Michael May


As part of Apple’s plan to get more apps onto the Apple TV platform, they instigated one of their irregular Tech Talks world tours. It came to London on January 11th 2016, and I got a golden ticket to attend the one-day event.

The agenda for the day was:

Apple TV Tech Talks Kickoff
Designing for Apple TV
Focus Driven Interfaces with UIKit
Break
Siri Remote & Game Controllers
On-Demand Resources & Data Storage
Lunch
Media Playback
Leveraging TVML for Media Apps
Best Practices for Designing tvOS Apps
Break
Tuning Your tvOS App
Making the Most Out of the Top Shelf
App Store Distribution
Reception

All sample code was in Swift, as you might expect, but they made a point of saying that you can develop tvOS apps in Objective-C, C++, and C too. I think those matter especially for the gaming community, where frameworks such as Unity are so important (despite Metal and SpriteKit).

I won’t go through each session, as I don’t think that would serve any useful purpose (the videos will be released, so I am told). Instead I’ll expand on some of my notes from the day, the points I found most interesting.

The day started with a brief intro session that included a preamble about how TV is so entrenched in our lives and yet so behind the times. This led into a slide that simply said…


“The Future of TV is Apps”

That’s probably the most bullish statement of intent that I’ve heard from Apple, so far, about their shiny new little black box. I think that if we can change user behaviour in the coming months and years then I might agree (see my piece at the end).

Then they pointed out that, as this is the very first iteration of the product, there are no permutations to worry about. The baseline for your iOS app might be an iPhone 4S running iOS 8, but for tvOS it’s just the latest and greatest: one box, one OS.

This is a device for which you can assume

  • It is always connected (most of the time)
  • It has a high speed connection (most of the time)
  • It has a fast dual-core processor
  • It has a decent amount of memory
  • It has a decent amount of storage (and mechanisms for maintaining that)

They then went on to explain that the principles for a television app are somewhat different from those for a phone app. Apple specifically called out three principles that you should consider when designing your app.

  • Connected
    Your users must feel connected to the content of your app. As your app is likely some distance from the user, with no direct contact between finger and content, this is a different experience from touching the glass of an iPhone UI.
  • Clear
    Your app should be legible and the user should never get lost in the user interface. If the user leaves the room for a moment then comes back, can they pick up where they left off?
  • Immersive
    Just like watching a movie or TV series, your app should be wholly immersive whilst on-screen.

If you had said these things to me casually, I would probably have said, “well, yeah, obviously”, but having them spelled out gives you pause for thought:

“If I did port my app, how would I make an experience that works with the new remote and also makes sense on everything from a small flat-screen in a studio flat to an insanely big projector in a penthouse?”

Add to that the fact that the TV is a shared experience – from watching content together to simply allowing different users to use your app at different times – and it’s clear this is not the intimate experience we have learned to facilitate on iOS. It should still be personal, but it’s not personal to the same person all the time. Think of Netflix with its user picker at startup, or the tvOS AirBnB app with its avatar picker at the bottom of the screen.

Next came the Siri Remote and interactions via it. This is one complex device packed into a deceptively small form factor: from the microphone to the trackpad, gyroscope and accelerometer, this is not your usual television remote. We can now touch, swipe, swing, shake, click and talk to our media centre. The exciting thing for us as app developers is that almost all of this is open for us to use, either out of the box (for apps) or as custom interactions built from raw event streams (particularly useful for games).

As you might expect from Apple, they were keen to stress that there are expectations around certain buttons that you should respect, specifically the menu and play/pause buttons. I like that they are encouraging conformity (it’s very much what people expect from Apple), but I found it a bit silly when they demonstrated how one might use the remote in landscape as a controller for a racing game. This, to me, felt like dogma. If you want this to become a great gaming device, accept the natural limitations of the remote and push game controllers as the right choice here. Instead they kept insisting that the remote and controllers should be first-class citizens in all circumstances.

Speaking to an indie game developer friend about the potential of the device, he said that he would really like at least three things from Apple before hopping on board:

  • Stats on Apple TV sales to evaluate the size of the market
  • A games pack style version that comes with two controllers to put the device on a par with the consoles
  • Removal of the requirement to support the remote as an option in games. Trying to design a game that must also work with the remote is just too limiting and hopefully Apple will realise this as they talk to more games companies.

A key difference in interacting with tvOS (versus iOS) is that you cannot set the focus for the user. Instead, you guide the “focus engine” as it changes the focus in response to the user’s gestures. This gives uniformity, again, and also means that apps cannot become bad citizens and switch the focus from under the user. One could imagine some kinds of apps finding that temptation hard to resist: breaking news, or the latest posts in a social stream, perhaps.

Instead, you use invisible focus guides between views, and focusable properties on views, to help the engine work out the right thing to do. At one point in the presentations the speaker said:

“Some people think they need a cursor on the Apple TV…they are wrong”

It seems clear to me that the focus engine is designed specifically to overcome this kind of hack, and that it is a much better solution. If you’ve ever tried to use the cursor remote on some “Smart” TVs then you’ll know how that feels. If not, imagine a mouse with a low battery after one too many happy-hour cocktails.

With the expansive but still limited resources of the Apple TV hardware, there will be times when there simply is not enough storage for everything the user wants to install. The same, in fact, already holds true for iOS. Putting aside my rant about how cheap memory and storage are, and how much Apple cashes in on both by making them premium features, their solution is On-Demand Resources (ODR).

With ODR you mark resources as being one of three types, which changes when, and if, they are downloaded, and how they may be purged under low-resource conditions. Apple want you to bundle your resources (images, videos, data, etc, but not code) into resource packs and to tag each pack as one of:

  • Install
  • Prefetch
  • Download only on demand

Install resources come bundled with the app itself (splash screen, on-boarding, first levels, etc). Prefetch resources are downloaded automatically, but only after the app has launched. On-demand resources are, as you might expect, downloaded on demand by the app. On-demand resources can also be purged, using various heuristics as to how likely their removal is to affect the user or the app, such as last-accessed dates and priority flags.

Although it has not been talked about that much, as far as I can tell, TVML is to me one of the big stories of tvOS. Apple have realised that writing a full-blown native app is both expensive and overkill for some. If you’re all about content then you probably need little more than a grid of content to navigate, a single content drill-down view, and play/pause for that streaming content. TVML gives you an XML markup language, powered by a JavaScript engine, that vends native components in a native app. It can interact with your custom app code too, through bridges between the JavaScript DOM and the native wrapper. This makes a lot of sense if you are Netflix, Amazon Prime Video, Mubi, Spotify or, as they pointed out, Apple Music and the tvOS App Store.

It’s highly specific, but it’s specific to exactly the kind of content provider Apple so desperately need to woo, providers who are likely wondering if they can afford to commit time and effort to an untested platform. As we’ve seen with watchOS 2, developers are feeling somewhat wary of investing a lot of time in new platforms when they also have to maintain their existing ones, start moving to Swift, adopt the latest iOS 9 features, and so on.

I think this is a big deal because what Apple are providing is what so many third parties have been offering for years, with differing degrees of success. This is their Cordova, their PhoneGap or, perhaps most closely, their React Native. This is a fully Apple-approved, and Apple-supported, hybrid app development solution that your tranche of web developers will be able to use. If it ever comes to iOS, it could open up apps to developers and businesses that just cannot afford a native app team or the services of an app agency (assuming your business is all about vending content and you can live with a template look and feel). I think this could be really big in the future, and in typical Apple fashion they are keeping it very low-key for now.

They kept teasing that we were all there to find out how to get featured (certainly people were taking more photos there than anywhere else), but before that they spoke about tuning your apps for the TV. This ranged from useful tricks and tips for the well-documented frustrations of trying to enter text with the tvOS remote (make sure to mark email fields as such: Apple will offer a recently used email list if you do) to examples of using built-in technologies to share data instead of asking the user to do work.

To the delight of my friends who work there, they demonstrated the Not On The High Street app and its use of Bonjour to discover the user’s iPhone/iPad and push the product the user wants to buy into the basket of the app on that platform. From there the user can complete their purchase very quickly – something that would be fiddly to do on the TV (slow keyboard, no Apple Pay, no credit card scanner).

Next came another feature that I think could hint at new directions for iOS in the future: the top shelf. If the user chooses to put your app in the top row of apps then, when it’s selected, that app gets to run a top shelf extension that populates the shelf with static or dynamic image content. This is the closest thing to a Windows Phone live tile experience that we’ve seen so far and, as I say, I think it could signpost a future “live” experience for iOS too. A blend of a Today widget and a top shelf widget could be very interesting.

Finally came the session they had been promising: App Store Distribution. The key takeaways for me were:

  • Don’t forget other markets (after the US the biggest app stores are Japan, China, UK, Australia, Canada and Germany)
  • Keep your app title short (typing is hard on tvOS)
  • Spend time getting your keywords right (and avoid wasting space with things like plurals)
  • Let Apple know 3-4 weeks before a major release of your app (appstorepromotion@apple.com)
  • Make your app the very best it can be, and be mindful of the tvOS platform

[Slide: the biggest App Store markets]

Then it was on to a reception with some delicious canapés and a selection of drinks. That wasn’t what made it great, though. What made it great were all the Apple people in the room, giving time to everyone who wanted it. This was not the Apple of old, and it was all the better for it. The more of this kind of interaction they can facilitate, the stronger their platform will be for us.

The Future of TV is Apps?

I think the future of consumer electronics is a multi-screen ecosystem where the user interface, and of course the form factor itself, follow the function they serve.

Clearly, the television could become a critical screen in this future. I believe that, even as we get new immersive entertainment and story-telling options (virtual reality, 3D, and who knows what else), the passive television experience will persist. Sometimes all you want to do is just sit back and be entertained with nothing more taxing than the pause button.

A TV with apps allows this but also, perhaps, makes it more complex. When all I want to do is binge on Archer, a system of apps might not be what I want to navigate. That said, if all I want to do is binge on Archer and I can do it with a simple “Hey Siri, play Archer from my last unplayed episode”, then it’s a step ahead of my passive TV of old. It had better know I use Netflix, though, and it had better not log me out of Netflix every few weeks like the Fire TV Stick does.

If I then get a notification (that hunts for my attention from watch to phone to television to who knows what else) that reminds me I have to be in town in an hour and that there are problems on the Northern Line so I should leave extra time, I might hit pause, grab my stuff and head out. As I sit on the tube with 20 minutes to kill, I might then say “Hey Siri, continue playing Archer”.

Just as I get to my appointment, a push notification tells me my home has noticed a lack of people and gone into low-power mode. If I want, I can quickly reply with the time I expect to arrive home, so that it can put the heating on in time and also be on high alert for anyone else in my house during that period.

I suspect most of these transactions will be powered by apps, not the OS itself, but I do not expect to interact with the apps themselves in most cases anymore. Apps will simply become the containers for serving me these micro-interactions as I need or want them.

One only has to look at the Media Player shelf, Notification Actions, Today Widgets, Watch Apps, Glances, Complications, 3D Touch Quick Actions, and now the tvOS Top Shelf to see that this is already happening, and it will only increase as time goes on. Your app will power multiple screen experiences and be tailored for each, with multiple view types and multiple interactions. Sometimes these will be immersive and last for minutes or hours (games, movie watching, book reading, etc) but other times they will be micro-interactions of seconds at most (reply to a tweet, check the weather, plan a journey, start a music stream, buy a ticket, complete a checkout). Apps must evolve or die.

That situation is probably a few years off yet, but in the more immediate term, if we want the future of TV to be apps (beyond simply streaming content) then users will need to be persuaded that their TV can be a portal to a connected world.

From playing games to checking the weather to getting a travel report, these are all things for which an app-powered TV could be very useful. It’s frequently on, always connected, and has a nice big screen on which to view what you want to know. Whether users find this easier than picking up their iPhone or iPad remains to be seen.

I think Apple see the Apple TV as a Trojan horse. Many years ago, Steve Jobs introduced the iMac as the centre of your digital world; a hub into which you plugged things. I think the Apple TV is the new incarnation of that idea – except the cables have now gone (replaced with the likes of HomeKit, AirPlay and Bonjour), the storage is iCloud and the customisation is through small, focused apps, and not the fully fledged applications of old.

It’s early days and if the iPhone has taught us anything it’s that the early model will rapidly change and improve. Where it actually goes is hard to say, but where it could go is starting to become clear.

Is the future of the TV apps? Probably so, but probably not in the way we think of apps right now. The app is dying, long live the app.
