Started From The Model Now We’re Here: A Swift 3 Migration Diary

This is a romanticised activity log of Swift 3 migration work that happened in November last year.

For context, Songkick’s iOS app consists of both Objective-C and Swift code. Before migrating, we had converted 40% of our Objective-C classes to Swift.1 These Swift files needed to be upgraded to Swift 3 to comply with current and future Xcode releases.

We hope that you find this story useful. If you are still working on the migration, best of luck! The first days are tough, but if you persevere you will reach the finish line sooner than you think.


Day 0

“Xcode 8.2 is the last release that will support Swift 2.3.”
— Xcode 8.2’s release notes

Sigh, I think this is it. I understand Apple’s aggressiveness, but I was a bit surprised that it was forced this quickly. I guess I will raise this requirement to my Product Manager (PM) so we can prioritise migration work this week.

I tried migrating once before, when Swift 3 first came out. Xcode’s Swift Migrator tool converted many things, but still left many errors. Whenever I fixed 3 errors, 10 new ones appeared. We ended up going with the easier route: migrating to Swift 2.3. It worked well at the time, but now Swift 2.3 will no longer be supported.

I guess I have to deal with those endless errors and try my best. Luckily, I saw Rob Napier’s tweet just before starting the work.

Hmmm, compared to my previous attempt, his approach of removing every Swift file from the app target and migrating them back in incrementally is better in terms of predictability and control. Great, I will remove all the Swift files from our main app target and call it a day.

Day 1 & Day 2

Day 1 starts with the PM’s blessing to start the migration. Great, now I can take my time and strategise my approach.

Based on stories from other developers, this will take days, if not weeks. The project won’t compile for days, and that is fine. The reason I can make peace with an uncompilable project is this: errors are fine as long as they are in Objective-C files. More on this later.

The strategy starts from the simplest, least dependent, and most testable objects: the models. I add them to the app target and run the migration tool, one or two files at a time. Once all the models are included, I repeat the same process for the network classes, since they only depend on models. Eventually, I manage to convert all of our networking code, including request objects and API caller objects.

At this point, errors in Objective-C files are fine because they are missing classes written in Swift. There will always be errors until all Swift files are included back in the app target. Our approach focuses on making all included Swift files error-free, so errors in Objective-C are acceptable and will be resolved later.

Day 2 is all downhill compared to day 1. Most of the work is handled by the Swift Migrator. Various classes were migrated in this order: view models, extension classes, views, and view controllers. In total, 170 Swift files in the main app target are successfully migrated.

The Migrator tool really helps this process, but some manual conversions still need to be done. Below are the notes from the first two days.

AnyObject -> Any

Swift 3 bridges Objective-C’s id to Any, not AnyObject as Swift 2 did. This affects most of our models because they deal a lot with JSON dictionaries.

For example, look at this struct in Swift 2:

struct Event {
  let id: Int
  let type: String

  var analyticsProperties: [String: AnyObject] {
    return [
      "id": id,
      "type": type
    ]
  }
}

The migrator converts this to Swift 3 as:

// The migrator's wrong direction
var analyticsProperties: [String: AnyObject] {
  return [
    "id": id as AnyObject,
    "type": type as AnyObject
  ]
}

In Swift 3, AnyObject only applies to class instances. The migrator inserts casts to AnyObject because Int and String are Swift structs, not Objective-C objects. In most cases, the fix is simply to rename AnyObject to Any by hand.

// The manual fix
var analyticsProperties: [String: Any] {
  return [
    "id": id,
    "type": type
  ]
}

New Access Controls

Swift 3 introduces new access control levels: private, fileprivate, internal, public, and open. Roughly: open and public are visible across module boundaries (open additionally allows subclassing and overriding from other modules), internal is the default and limits access to the defining module, fileprivate limits access to the defining file, and private limits access to the enclosing declaration.

The Swift Migrator converts all private access to fileprivate. All fileprivate access are manually checked and changed back to private whenever possible. public and open were not used because we only have one main target, which is the app target. This will be revisited in the future once we start to modularise the app into frameworks for sharing between targets (e.g. main app, extensions, unit tests, and UI tests).
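
For illustration, here is a minimal sketch (the class is hypothetical) of the distinction that mattered most to us: in Swift 3, private is limited to the enclosing declaration, while fileprivate is visible anywhere in the same file.

class EventCell {
  private func configureLabels() { /* ... */ }  // visible only inside EventCell itself
  fileprivate func applyTheme() { /* ... */ }   // visible anywhere in this .swift file
}

extension EventCell {
  func refresh() {
    applyTheme()         // OK: fileprivate reaches same-file extensions
    // configureLabels() // error in Swift 3: private does not reach extensions
  }
}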

Closures as parameters are non-escaping by default

Closures passed as arguments are now non-escaping by default. An escaping closure is one that is invoked after the function it was passed to returns. The migration tool misses some of our API calls; the fix is to add the @escaping annotation manually.

// from
func getArtistDetails(for artistId: Int, success: (ArtistDetailsResponse) -> Void, failure: (NSError) -> Void) -> Request {}

// to
func getArtistDetails(for artistId: Int, success: @escaping (ArtistDetailsResponse) -> Void, failure: @escaping (NSError) -> Void) -> Request {}

Status: Main target cannot be compiled, but all Swift files are error-free

Day 3

It is a good thing that we write tests for most of our Swift code. These tests proved crucial for the migration process.

Migrating test files is done in a similar fashion: test files are added to the test target and run incrementally. Failing tests are ignored for now, as we are aiming only for successful compilation. Once all tests are included in the test target, all the failing tests are fixed. In total, 70 test files are migrated.

Status: Test target can be compiled

Day 4

Day 4 is all about cleaning up: resolving warnings and renaming methods to better comply with the Swift 3 naming guidelines.

Lowercased enums

Most of Foundation’s and UIKit’s Swift enums were converted by the migrator. To keep our code base consistent, the cases of our own Swift enums are manually lowercased.
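
As a sketch (the enum is hypothetical), the change looks like this:

// Swift 2
enum TicketAvailability {
  case Available
  case SoldOut
}

// Swift 3: lowerCamelCase cases
enum TicketAvailability {
  case available
  case soldOut
}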

NSDate categories

Swift 3’s Date is a struct, not a class (the class remains NSDate). Have a look at this Date helper method.

extension Date {
  func apiFormat() -> String { /* ... */ }
}

Because Date is a struct, this does not automatically translate into an Objective-C category. To fix this, the method is added to an NSDate extension that calls the appropriate Date method.

extension NSDate {
    func apiFormat() -> String {
        return (self as Date).apiFormat()
    }
}

Warning: “incompatible Objective-C category definitions”

The last weird warning is “incompatible Objective-C category definitions”. It is triggered when a class extension contains a computed class property. Objective-C seems unhappy about it (although it is translated fine). To remove the warning, use a class method instead of a computed property.

extension Bundle {
  
  // This produces the warning
  class var appVersion: String { /* ... */ }
  
  // Changing the computed property to a method resolves the warning
  class func appVersion() -> String { /* ... */ }

}

Status: Main target can be compiled

Started From The Model Now We’re Here

It took almost 5 working days to migrate all Swift files to version 3.0. In total, 203 files were migrated: 133 files in the main target and 70 files in the test target.

Let’s hope for a better Swift 4.0 migration process. The Swift team is aiming for source compatibility moving forward, so here’s hoping for less manual work and fewer aggressive changes.


1. Check our previous post for more details on how we approach and track our code conversion from Objective-C to Swift.


Compare your Objective-C and Swift code through time with Swoop

At Songkick, we’re busy converting our iOS app Objective-C codebase to Swift, and we built a tool called Swoop to help us track our progress.

We have been using Swift in our iOS app since November last year. That means new features and new tests are written in Swift. But what about our existing Objective-C code? We approached the conversion from Objective-C to Swift carefully. We have a small team and wanted to keep shipping new features, so we could not afford the risk of major code changes. Instead, we started with smaller changes to the most problematic Objective-C code.

Our models and networking code were the first two areas that we actively converted to Swift, mainly because we used an old and unsupported network library. In early 2016, we pushed quite hard on these conversions and made excellent progress.

At that time, I was curious to understand the velocity of our progress. Maybe seeing it as a graph would be cool. This idea was then realised as a Ruby gem I named Swoop.

Swift and Objective-C comparison reporter

Swoop compares and reports your Swift and Objective-C code through time. It goes through your git history, reads your Xcode project, compares them, and presents the information in a digestible representation.

To use Swoop, just install the gem using `gem install swoop_report`, and run the command with 2 required parameters:

  • Path to your Xcode project, and
  • The directory you are interested in (the directory inside the Xcode project)

Call the swoop command from your terminal like so:

`$ swoop --path ~/your_project/project.xcodeproj --dir Classes`

By default, it will present you with a table of the last eight tags of your project.

How it works

The diagram below explains how Swoop’s main classes work together.

[Diagram: how Swoop’s main classes work together]

  1. It creates a `Project` using the path parameter.
  2. `TimeMachine` uses the project, and then figures out which git commits should be used based on the options provided.
  3. Once `TimeMachine` has the list of commits, it checks out each one and starts the comparison process, which is broken down into:
    1. Selects the files that are inside the specified directory.
    2. `EntityParser` parses the filtered Swift or Objective-C files and counts their classes, structs and extensions.
    3. Collates file information into a `Report`.
  4. All of the `Report`s are rendered by a chosen subclass of `Renderer`.

Below is a snippet of Swoop’s main program:

# 1) create project with path
project = Project.new(@project_path, @dir_path)

# 2) create time machine with project and options
delorean = TimeMachine.new(project, @time_machine_options)

# 3) time machine checkouts for each commit
reports = []
delorean.travel do |proj, name, date|
  # 3.a) filter interested files
  files = proj.filepaths
  
  # 3.b) parse information from files
  entities = EntityParser.parse_files(files)
  
  # 3.c) put information in a report
  reports << Report.new(entities, name, date)
end

# 4) render array of reports as a table
renderer = TableRenderer.new(reports, "Swoop Report : '#{@dir_path}'")
renderer.render

Result

This is what our iOS app’s comparison report looks like:

[Chart: Swift vs Objective-C over time in the Songkick iOS app]

As of now, our Swift code constitutes roughly 35% of our whole codebase. From the graph, we can see that the number improved vastly because of the work done from around February until March. At that time, we were actively converting code to Swift. Then, over the past three months, it stagnated a bit because our team goals changed and our focus moved to other projects.

After it worked for our iOS app, I ran Swoop on two other open source projects: Artsy’s eigen and WordPress’ iOS app.

Artsy’s Eigen

Last 8 minor versions of eigen:
`$ swoop --path eigen/Artsy.xcodeproj --dir Artsy --filter_tag '\d\.\d\.\d(-0)?$' --render chart`

[Chart: Swift vs Objective-C over time in Artsy’s eigen]

WordPress for iOS

Last 12 major versions of WordPress for iOS:
`$ swoop --path WordPress-iOS/WordPress/WordPress.xcodeproj --dir Classes --tags 12 --filter_tag '^\d.\d+$' --render chart`

[Chart: Swift vs Objective-C over time in WordPress for iOS]

All in all, it works pretty well for our app and we plan to incorporate this into our continuous integration pipeline.

Onwards

We will need to test Swoop on more Xcode projects, because it sometimes fails for projects that have directory changes in their git history. We will also aim for 100% test coverage in the near future.

Any form of contribution is welcome! Let us know if it doesn’t work for your project (it’s better if the project is publicly accessible). For more information on how to use and improve Swoop, please visit https://github.com/ikhsan/swoop.


Apple tvOS Tech Talks, London 2016

by Michael May

[Photo: the opening slide]

As part of Apple’s plan to get more apps onto the Apple TV platform, they instigated one of their irregular Tech Talks World Tours. It came to London on January 11th 2016, and I got a golden ticket to attend the one-day event.

The agenda for the day was:

Apple TV Tech Talks Kickoff
Designing for Apple TV
Focus Driven Interfaces with UIKit
Break
Siri Remote & Game Controllers
On-Demand Resources & Data Storage
Lunch
Media Playback
Leveraging TVML for Media Apps
Best Practices for Designing tvOS Apps
Break
Tuning Your tvOS App
Making the Most Out of the Top Shelf
App Store Distribution
Reception

All sample code was in Swift, as you might expect, but they made a point of saying that you can develop tvOS apps in Objective-C, C++, and C too. I think these are especially relevant for the gaming community, where frameworks such as Unity are so important (despite Metal and SpriteKit).

I won’t go through each session, as I don’t think that really serves any useful purpose (the videos will be released, so I am told). Instead I’ll expand on some of my notes from the day, as they were the points I thought were interesting.

The day started with a brief intro session that included a pre-amble about how TV is so entrenched in our lives and yet so behind the times. This led into a slide that simply said…

“The Future of TV is Apps”

That’s probably the most bullish statement of intent that I’ve heard from Apple, so far, about their shiny new little black box. I think that if we can change user behaviour in the coming months and years then I might agree (see my piece at the end).

Then they pointed out that, as this is the very first iteration of this product, there are no permutations to worry about – the baseline for your iOS app might be an iPhone 4S running iOS 8 but for tvOS it’s just the latest and greatest – one box, one OS.

This is a device for which you can assume

  • It is always connected (most of the time)
  • It has a high speed connection (most of the time)
  • It has a fast dual-core processor
  • It has a decent amount of memory
  • It has a decent amount of storage (and mechanisms for maintaining that)

They then went on to explain that the principles for a television app are somewhat different from a phone app. Apple specifically called out three principles that you should consider when designing your app.

  • Connected
    Your users must feel connected to the content of your app. As your app is likely some distance from the user, with no direct contact between finger and content, this is a different experience from touching the glass of an iPhone UI.
  • Clear
    Your app should be legible and the user should never get lost in the user interface. If the user leaves the room for a moment then comes back, can they pick up where they left off?
  • Immersive
    Just like watching a movie or TV series, your app should be wholly immersive whilst on-screen.

If you had said these things to me casually, I would have probably said, “well, yeah, obviously”, but when you have it spelled out for you, it gives you pause for thought:

“If I did port my app, how would I make an experience that works with the new remote and also makes sense on everything from a small flat-screen in a studio flat to an insanely big projector in a penthouse?”

Add to that the fact that the TV is a shared experience – from watching content together to just allowing different users to use your app at different times – and it’s not the intimate experience we have learned to facilitate on iOS. It should still be personal, but it’s not personal to the same person all the time. Think of Netflix with its user picker at startup, or the tvOS AirBnB app with its avatar picker at the bottom of the screen.

Next was the Siri Remote and interactions via it. This is one complex device packed in a deceptively small form factor – from the microphone to the trackpad, gyroscope and accelerometer, this is not your usual television remote. We can now touch, swipe, swing, shake, click and talk to our media centre. The exciting thing for us as app developers is that almost all of this is open for us to use, either out of the box (for apps) or as custom interactions from raw event streams (particularly useful for games).

As you might expect from Apple, they were keen to stress that there are expectations for certain buttons that you should respect, specifically the menu and play/pause buttons. I like that they are encouraging conformity – it’s very much what people expect from Apple – but I found it a bit silly when they demonstrated how one might use the remote in landscape as a controller for a racing game. This, to me, felt a bit like dogma. If you want this to become a great gaming device, accept the natural limitations of the remote and push game controllers as the right choice here. Instead they kept going on about the remote and controllers being first-class citizens in all circumstances.

Speaking to an indie game developer friend about the potential of the device, he said that he would really like at least three things from Apple before hopping on board:

  • Stats on Apple TV sales to evaluate the size of the market
  • A games pack style version that comes with two controllers to put the device on a par with the consoles
  • Removal of the requirement to support the remote as an option in games. Trying to design a game that must also work with the remote is just too limiting and hopefully Apple will realise this as they talk to more games companies.

A key component of the new way of interacting with tvOS (versus iOS) is the inability to set the focus for the user. Instead you guide the “focus engine” as it changes the focus for the user, in response to their gestures. This gives uniformity, again, and also means that apps cannot become bad citizens and switch the focus under the user. One could imagine the temptation to do this being hard to resist for some kinds of apps – breaking news or the latest posts in a social stream, perhaps.

Instead you use invisible focus guides between views and focusable properties on views to help the engine know what the right thing to do is. At one point in the presentations the speaker said

“Some people think they need a cursor on the Apple TV…they are wrong”

It seems clear to me that the focus engine is designed specifically to overcome this kind of hack, and is a much better solution. If you’ve ever tried to use the cursor remote on some “Smart” TVs then you’ll know how that feels. If not, imagine a mouse with a low battery after one too many happy hour cocktails.
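
To give a flavour of guiding rather than setting focus, here is a minimal sketch (the view controller and buttons are hypothetical) that bridges the gap between two focusable views with a UIFocusGuide:

import UIKit

class MenuViewController: UIViewController {
  let leftButton = UIButton(type: .system)
  let rightButton = UIButton(type: .system)

  override func viewDidLoad() {
    super.viewDidLoad()
    // (the buttons are added and laid out elsewhere)

    // An invisible guide pinned into the gap between the two buttons
    let guide = UIFocusGuide()
    view.addLayoutGuide(guide)
    guide.leadingAnchor.constraint(equalTo: leftButton.trailingAnchor).isActive = true
    guide.trailingAnchor.constraint(equalTo: rightButton.leadingAnchor).isActive = true
    guide.topAnchor.constraint(equalTo: leftButton.topAnchor).isActive = true
    guide.bottomAnchor.constraint(equalTo: leftButton.bottomAnchor).isActive = true

    // When focus lands on the guide, suggest where it should really go
    guide.preferredFocusedView = rightButton
  }
}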

With the expansive, but still limited, resources of the Apple TV hardware, there will be times when there simply is not enough storage for everything that the user wants to install. The same, in fact, already holds true for iOS. Putting aside my rant about how cheap memory and storage are, and how much Apple cashes in on both by making them premium features, their solution is On-Demand Resources (ODR).

With ODR you can mark resources as being one of three types, which change when, and if, they are downloaded, and how they may be purged under low-resource conditions. Apple want you to bundle up your resources (images, videos, data, etc., but not code) into resource packs and to tag them. You tag them as one of:

  • Install
  • Prefetch
  • Download only on demand

Install resources come bundled with the app itself (splash screen, on-boarding, first levels, etc.). Prefetch resources are downloaded automatically, but only after the app is launched. Download-only-on-demand resources are, as you might expect, fetched on demand by the app. On-demand resources can be purged using various heuristics as to how likely they are to affect the user/app – things like last accessed date and priority flags.
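
In code, on-demand packs are fetched by tag through NSBundleResourceRequest. A minimal sketch, assuming a hypothetical “level-2” tag and loading function:

import Foundation

func loadLevelTwo() { /* hypothetical: start using the downloaded assets */ }

// Ask for everything tagged "level-2"
let request = NSBundleResourceRequest(tags: ["level-2"])
request.conditionallyBeginAccessingResources { alreadyAvailable in
  if alreadyAvailable {
    // The resources are on the device and are now marked as in use
    loadLevelTwo()
  } else {
    request.beginAccessingResources { error in
      if let error = error {
        print("Download failed: \(error)")
      } else {
        loadLevelTwo()
      }
    }
  }
}
// Calling request.endAccessingResources() later tells the system they may be purged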

Although not talked about that much as far as I can tell, to me TVML is one of the big stories of tvOS. Apple have realised that writing a full blown native app is both expensive and overkill for some. If you’re all about content then you probably need little more than a grid of content to navigate, a single content drill down view and some play/pause of that streaming content. TVML gives you an XML markup language, powered by a JavaScript engine, that vends native components in a native app. It can interact with your custom app code too, through bridges between the JavaScript DOM and the native wrapper. This makes a lot of sense if you are Netflix, Amazon Prime Video, Mubi, Spotify or, as they pointed out, Apple Music and the tvOS App Store.
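
The native shell for such an app can be tiny. Here is a minimal sketch (the JavaScript URL is hypothetical) of handing control over to a TVML/JavaScript application via TVMLKit:

import UIKit
import TVMLKit

class AppDelegate: UIResponder, UIApplicationDelegate, TVApplicationControllerDelegate {
  var window: UIWindow?
  var appController: TVApplicationController?

  func application(_ application: UIApplication,
                   didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
    window = UIWindow(frame: UIScreen.main.bounds)

    // Point the controller at the JavaScript app, which vends native UI from TVML documents
    let context = TVApplicationControllerContext()
    context.javaScriptApplicationURL = URL(string: "https://example.com/tv/application.js")!
    appController = TVApplicationController(context: context, window: window, delegate: self)
    return true
  }
}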

TVML is highly specific, but it is highly specific to exactly the type of content provider Apple so desperately needs to woo, and who are likely wondering if they can afford to commit time and effort to an untested platform. As we’ve seen with watchOS 2, developers are feeling somewhat wary of investing a lot of time in new platforms when they also have to maintain their existing ones, start moving to Swift, adopt the latest iOS 9 features, and so on.

I think this is a big deal because what Apple are providing is what so many third parties have been offering for years, to differing degrees of success. This is their Cordova, their PhoneGap or, perhaps most closely, their React Native. This is a fully Apple approved, and Apple supported, hybrid app development solution that your tranche of web developers are going to be able to use. If this ever comes to iOS it could open up apps to developers and businesses that just cannot afford a native app team, or the services of an app agency (assuming your business is all about vending content and you can live with a template look and feel). I think this could be really big in the future and in typical Apple fashion they are keeping it very low key for now.

They kept teasing that we were all there to find out how to get featured (certainly people were taking more photos there than anywhere else), but before that they spoke about tuning your apps for the TV. This ranged from useful tricks and tips for the well-documented frustrations of trying to enter text with the tvOS remote (make sure to mark email fields as such – Apple will offer a recently used email list if you do) to examples of using built-in technologies to share data instead of asking the user to do work.
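
As a minimal sketch of that email tip (the field itself is hypothetical), marking the keyboard type is all it takes:

import UIKit

let emailField = UITextField()
// Marking the field as an email field is what lets tvOS offer the recent-addresses list
emailField.keyboardType = .emailAddress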

To the delight of my friends who work there, they demonstrated the Not On The High Street app and its use of Bonjour to discover the user’s iPhone/iPad and push the product they want to sell into the basket of the app on that platform. From there the user can complete their purchase very quickly – something that would be fiddly to do on the TV (slow keyboard, no Apple Pay, no credit card scanner).

Next came another feature that I think could hint at new directions for iOS in the future: the top shelf. If the user chooses to put your app in the top row of apps then, when it’s selected, that app gets to run a top shelf extension that populates the shelf with static or dynamic image content. This is the closest thing to a Windows Phone live tile experience that we’ve seen so far and, as I say, I think it could signpost a future “live” experience for iOS too. A blend of a Today Widget and a Top Shelf Widget could be very interesting.
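
A top shelf extension is a small embedded TVServices target whose principal class vends the content items. A minimal sketch, assuming a hypothetical identifier and artwork URL:

import TVServices

class ServiceProvider: NSObject, TVTopShelfProvider {

  // How the shelf renders the items (inset = full-bleed images)
  var topShelfStyle: TVTopShelfContentStyle {
    return .inset
  }

  // Called by the system when the app sits in the top row and is selected
  var topShelfItems: [TVContentItem] {
    guard let identifier = TVContentIdentifier(identifier: "upcoming-events", container: nil),
          let item = TVContentItem(contentIdentifier: identifier) else { return [] }
    item.title = "Upcoming events"
    item.imageURL = URL(string: "https://example.com/top-shelf/upcoming.png")
    return [item]
  }
}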

Finally came the session they had been promising: App Store Distribution. The key take-aways for me were:

  • Don’t forget other markets (after the US the biggest app stores are Japan, China, UK, Australia, Canada and Germany)
  • Keep your app title short (typing is hard on tvOS)
  • Spend time getting your keywords right (and avoid wasting space with things like plurals)
  • Let Apple know 3-4 weeks before a major release of your app (appstorepromotion@apple.com)
  • Make your app the very best it can be, mindful of the tvOS platform

[Slide: the top App Store markets]

Then it was on to a reception with some delicious canapés and a selection of drinks. That wasn’t what made it great, though. What made it great were all the Apple people in the room, giving time to everyone who wanted it. This was not the Apple of old, and it was all the better for it. The more of this kind of interaction they can facilitate, the stronger their platform will be for us.

The Future of TV is Apps?

I think the future of consumer electronics is a multi-screen ecosystem where the user interface and, of course the form factor itself, follows the function to which it is in service.

Clearly, the television could become a critical screen in this future. I believe that, even as we get new immersive entertainment and story-telling options (virtual reality, 3D, and who knows what else), the passive television experience will persist. Sometimes all you want to do is just sit back and be entertained with nothing more taxing than the pause button.

A TV with apps allows this but also, perhaps, makes this more complex. When all I want to do is binge on Archer, a system with apps might not be what I want to navigate. That being said, if all I want to do is binge on Archer, and this can be done with a simple “Hey Siri, play Archer from my last unplayed episode”, then it’s a step ahead of my passive TV of old. It had better know I use Netflix and it had better not log me out of Netflix every few weeks like the Fire TV Stick does.

If I then get a notification (that hunts for my attention from watch to phone to television to who knows what else) that reminds me I have to be in town in an hour and that there are problems on the Northern Line so I should leave extra time, I might hit pause, grab my stuff and head out. As I sit on the tube with 20 minutes to kill, I might then say “Hey Siri, continue playing Archer”.

Just as I get to my appointment I find my home has noticed a lack of people and gone into low power mode, via a push notification. If I want, I can quickly reply with my expected arrival home time, so that it can put on the heating in time and also be on high alert for anyone else in my house during that period.

I suspect most of these transactions are being powered by apps, not the OS itself, but I do not expect to interact with the apps in most cases anymore. Apps will become simply the containers for the means of serving me these micro-interactions as I need/want them.

One only has to look at the Media Player shelf, Notification Actions, Today Widgets, Watch Apps, Glances, Complications, 3D Touch Quick Actions, and now the tvOS Top Shelf to see that this is already happening, and it will only increase as time goes on. Your app will power multiple screen experiences and be tailored for each, with multiple view types and multiple interactions. Sometimes these will be immersive and last for minutes or hours (games, movie watching, book reading, etc.) but other times these will be micro-interactions of seconds at most (reply to a tweet, check the weather, plan a journey, start a music stream, buy a ticket, complete a checkout). Apps must evolve or die.

That situation is probably a few years off yet, but in the more immediate term, if we want the future of TV to be apps (beyond simply streaming content) then users will need to be persuaded that their TV can be a portal to a connected world.

From playing games to checking the weather to getting a travel report, these are all things for which an app-powered TV could be very useful. It’s frequently on, always connected, and has a nice big screen on which to view what you want to know. Whether users find this easier than picking up their iPhone or iPad remains to be seen.

I think Apple see the Apple TV as a Trojan horse. Many years ago, Steve Jobs introduced the iMac as the centre of your digital world; a hub into which you plugged things. I think the Apple TV is the new incarnation of that idea – except the cables have now gone (replaced with the likes of HomeKit, AirPlay and Bonjour), the storage is iCloud and the customisation is through small, focused apps, and not the fully fledged applications of old.

It’s early days and if the iPhone has taught us anything it’s that the early model will rapidly change and improve. Where it actually goes is hard to say, but where it could go is starting to become clear.

Is the future of the TV apps? Probably so, but probably not in the way we think of apps right now. The app is dying, long live the app.

[Photo: Tech Talks tour pass]


Testing iOS apps

We recently released an update to our iPhone app. The app was originally developed by a third party, so releasing an update required bringing the app development and testing in-house. We develop our projects in a continuous build environment, with automated builds, unit tests and acceptance tests. This allows us to develop fast and release often, and we wanted the iPhone project to work in the same way.

This article covers some of the tools we used to handle building and testing the app.

Build Automation

We use Git for our version control system, and Jenkins for our continuous integration server. Automating the project build (i.e. building the project to check for compilation errors) seemed like a basic step and a good place to start.

A prerequisite to this was to create a Mac Jenkins build slave, which is outside the scope of this blog post (but if you’re interested, I followed the “master launches slave agent via SSH” instructions on the Jenkins site).

A quick search of the Jenkins plugins page revealed an Xcode plugin which allows for building Objective-C applications. Setting up the plugin was a snap: search for and install the “XCode integration” plugin from the Jenkins server plugin page, point the plugin to your project directory on the build slave, enable keychain access, and save.

Now for every commit I made to the project, this task would automatically run, and send me a rude email if project compilation failed. In practice I found that this was an excellent way of reminding me of any files I had forgotten to check in to Git; the project would compile on my laptop but fail on the CI server due to missing classes, images, etc.

Unit testing

I looked briefly into the unit testing framework Apple provides, which ships with Xcode. I added a unit test project to the Songkick app, and looked into creating mocks using OCMock, an Objective-C implementation of mock objects.

We already have fairly extensive API tests to test for specific iPhone-related user-flows (such as signing up, tracking an artist, etc), and due to time constraints we opted to concentrate on building acceptance tests, and revisit unit tests if we had time.

Acceptance Testing

There are a bunch of acceptance testing applications available for iOS apps. Here are a few of the tools I looked into in detail:

Frank

Frank is an iOS acceptance testing application which supports a Cucumber-style test syntax. I was interested in Frank as we already make use of Cucumber to test our Ruby projects, so the familiarity of the domain-specific language would have been a benefit.

I downloaded the project and got a sample test up-and-running fairly quickly. Frank ships with some useful tools, including a web inspector (“Symbiote”) which allows for inspecting app UI elements using the browser, and a “Frank console” for running ad-hoc commands against an iPhone simulator from the command line.

Frank seems to be a pretty feature rich application. The drawbacks for me were that Frank could not be run on real hardware (as of March 2013, this appears to now be possible), and Frank also requires recompiling your application to make a special “Frankified” version to work with the testing framework.

Instruments

Apple provides an application called Instruments to handle testing, profiling and analysis of applications written with Xcode. Instruments allows for recording and editing UIAutomation scripts – runnable JavaScript test files for use against a simulated iOS app or a real hardware install.

[Screenshot: recording a UIAutomation test in Instruments]

Being able to launch your app with Instruments, perform some actions from within the app, and have those actions automatically converted into a runnable test script was a really quick and easy way of defining tests. Instruments also supports running scripts via the command line.

The drawback of test scripts created with Instruments is that they can be particularly verbose, and Instruments does not provide a convenient way of formatting and defining individual test files (outside of a single UIAutomation script per unique action).

Tuneup_js

Designed to be used as an accompaniment to UIAutomation scripts created using Instruments, Tuneup_js is a JavaScript library that helps to ease the pain of working with the long-winded UIAutomation syntax.

It provides a basic test structure for organising test steps, and a bunch of user-friendly assertions built on top of the standard ones supported by Instruments.

[Screenshot: a Tuneup_js test]

I found that recording tests in Instruments, and then converting them into the Tuneup_js test syntax was a really quick way of building acceptance tests for iOS apps. These tests could then be run using a script provided with the Tuneup_js package.

Scenarios

I settled on using Instruments and Tuneup_js to handle acceptance testing. Instruments because of the ability to quickly record acceptance test steps, and Tuneup_js because it could be used to wrap recorded test steps into repeatable tests and allowed for a nicer test syntax than offered out-of-the-box with UIAutomation. What was missing with these applications was a way to handle running the test files in an easily repeatable fashion, and against the iOS simulator as well as hardware devices.

I couldn’t find an existing application to do this, so I wrote Scenarios (Scenar-iOS, see what I did there?) to handle this task. Scenarios is a simple console Ruby app that performs the following steps:

  • Cleans any previous app installs from the target test device
  • Builds the latest version of the app
  • Installs the app on the target test device
  • Runs Tuneup_js-formatted tests against the installed app
  • Reports the test results

Scenarios accepts command-line parameters, such as the option to target the simulator or a hardware device (with the option of auto-detecting the hardware, or supplying a device ID). Scenarios also adds a couple of extra functions on top of the UIAutomation library:

  • withTimeout – Can be used for potentially long-running calls (e.g. a button click to login, where the API call may be slow):
    withTimeout(function(){
      app.mainWindow().buttons()["Login"].tap();
    });
  • slowTap – Allows for slowing-down the speed at which taps are executed. Instruments can run test steps very fast, and sometimes it helps to slow down tests to see what they are doing, and help create a more realistic simulated user experience:
    app.toolbar().buttons()["Delete"].slowTap();

Scenarios ships with a sample project (app and tests) that can be run using the simulator or hardware. Here’s a video of the sample running on a simulator:

[Video: the Scenarios sample app running on the iOS simulator]

Jenkins Pipeline

Now I had build and acceptance tests in place, it was time to hook the tests up to Jenkins. I created the following Jenkins projects:

  • “ios-app” – runs the build automation
  • “ios-app-acceptance-tests-simulator” – runs the app (via Scenarios) on a simulator
  • “ios-app-acceptance-tests-iPhone3GS” – runs the app (via Scenarios) on an iPhone 3GS

[Screenshot: the Jenkins pipeline]

Committing a code change to the iOS app Git repo caused the projects in the Jenkins pipeline to build the app, run the acceptance tests against the simulator, and finally run the acceptance tests on an iPhone 3GS. If any stage of the pipeline failed, I received an email informing me I had broken something.

[Photo: acceptance tests running on an iPhone]

Manual testing with TestFlight

As well as an automated setup, we also made use of the excellent TestFlight service, which enables over-the-air distribution of apps to testers. We had 12 users and 16 devices set up in TestFlight, and I was releasing builds (often daily) over-the-air. It enabled us to get some real-user feedback on the app, something that build and acceptance tests cannot replace.

Jenkins also has a TestFlight plugin, which enables you to automatically deploy a build to TestFlight as part of the pipeline. Very cool, but as we were committing code changes often throughout the day (and only wanted to release to TestFlight once a day), we decided to skip this step for the time being.

Overall, I think that the tools (both open-source and proprietary) available today for automated testing of iOS apps are feature rich (even if some are still in their infancy), and I’m pretty happy with our development setup at Songkick.