Earlier this year we wrote about how we move fast but still test the code.
This was recently followed by another post about Developer happiness at Songkick, which also focuses on the processes we have in place, as they underpin a productive working environment.
How does this all look from a tester’s point of view?
I have been asked a few times what a typical day looks like for a tester at Songkick. This post is about the processes that enable us to move fast, seen from a tester’s point of view, and how testing is integrated into our development lifecycle.
Organising our work
Teams at Songkick are organised around products and the process we follow is agile. Guided by the product manager and our team goals, we organise our sprints on a weekly basis with a prioritisation meeting. This allows us to update each other on the work in progress and determine the work that may get picked up during that week.
Prioritisation meetings also take into consideration things such as holidays and time spent doing other things (meetings, fire fighting, pairing).
On top of that we check our bug tracker to see if any new bugs have been raised that we need to act on.
Everyone in the company can raise bugs, enabling us to constantly make decisions on how to improve, not only our user facing products, but also our internal tools.
We also have daily stand ups at the beginning of each day, where we provide information on how we are getting on, and any blockers or other significant events that may impact our work positively or negatively.
Every 2 weeks we also hold a retrospective to assess how we are doing and what improvements we can make.
Sabina gave a great definition of the kick-off document here. Each feature or piece of work has a kick-off document. We try to always have a developer, product manager and tester in the conversation. More often than not we also include other developers, or experts, such as a member from tech ops or a frontline team. Frontline teams can be anyone using internal tools directly, members from our customer support team, or someone from the sales team.
Depending on the type of task (is it a technical task or a brand new feature?) we use a slightly different template. The reasoning behind this is that a technical, non-user-facing change will require a different conversation than a user-facing change.
But at the end of the day this document is our source of truth: it records, most importantly, the problem we are trying to solve, how we think we will solve it, and any changes we make to our initial plan along the way.
The kick-off conversation is where the tester can ask a tonne of questions. These range from the technical implementation and potential performance issues to the risks involved and what our testing strategy should be. Do we need to add a specific acceptance test for this feature, or are unit and integration tests enough?
A nice extra section in the document is the “Recurring bugs” section.
The recurring bugs consist of questions to make sure we are not implementing something we have already solved, as well as bugs we see time and time again. These can range from field lengths and timezones to nudges about considering how we order lists. What it doesn’t include is every bug we have ever seen. The section is not static either: it evolves as certain questions or notes are removed and others added.
Having a recurring bugs section in a kick-off document is also great for on-boarding, as you start to understand what has previously been an issue and can ask why, and what we do now to avoid it.
After the kick-off meeting, I personally tend to familiarise myself with where we are making the change.
For example, say we are adding a new address form to our check-out flow when you purchase tickets. I will perform a short exploratory test of this in our staging environment or on production. Anytime we do exploratory testing, we tend to record it as a time-boxed test session in a lightweight format. This provides a nice record of the testing that was performed and may also lead to more questions for the kick-off document.
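To give a flavour of what such a lightweight session record might contain, here is a purely hypothetical sketch for the address form example (the exact format and headings vary from team to team):

```text
Charter:     Explore the new address form in the check-out flow,
             looking for validation and layout issues.
Tester:      <name>
Time box:    45 minutes
Environment: staging

Notes:
- Tried long street names, special characters, missing postcode
- Checked behaviour on a mobile-sized viewport

Bugs / questions:
- What should happen if the country changes after a postcode
  has already been entered? (added to the kick-off document)
```

The value is less in the template itself and more in having a time-boxed, written record that can feed questions back into the kick-off document.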
Once the developer(s) working on the feature have had a day or so, we do a test modelling session together.
Similar to the kick-off this is an opportunity for the team to explore the new feature and how it may affect the rest of the system.
It consists of a short collaboration session, with at least a developer and a tester, plus, where applicable, the design lead and/or another expert, where we mind map through test ideas, test data and scenarios.
We do this because it enables the developer to test early, before releasing the product to a test/production environment, which in turn means we can deliver quality software and value sooner.
It is also a great way to share knowledge. Everyone who comes along brings different experiences and knowledge.
The collaborators work together to discuss what needs checking and what risks need exploring further.
We might also uncover questions about the feature we’re building. Sharing this before we build the feature can help us build the right feature, and save time.
For example, we recently improved one of our admin tools. During the test modelling session, we discovered a handful of questions, including some around date formats, and also default settings. By clearing these questions up early, we not only ensure that we build the right thing, but also that we build it in the most valuable way for the end user.
In this particular example, it transpired that following a certain logic for setting defaults would not only save a lot of time, but also greatly reduce the likelihood of mistakes.
The team (mainly the developer) will use the resulting mind map for testing.
It becomes a record of test scenarios and cases we identified and covered as part of this bit of work.
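To illustrate what such a mind map might capture, here is a hypothetical sketch for the address form example, written as an indented outline (the real artefact is a visual mind map, and its branches depend entirely on the feature at hand):

```text
Address form
├── Inputs
│   ├── required vs optional fields
│   ├── field lengths and special characters
│   └── international formats (postcodes, states)
├── Behaviour
│   ├── validation messages
│   └── pre-filled defaults for returning users
├── Risks
│   ├── breaking the existing check-out flow
│   └── performance of address lookups
└── Test data
    └── accounts with and without saved addresses
```

Branches can then be ticked off or annotated as the developer and tester work through them, turning the map into the record of coverage described above.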
As we mainly work in continuous deployment or delivery (depending on project and risk of the feature), testers often test in production using real data, to not block the deployment pipeline.
This has the advantage that the data is realistic (it is production data after all), there are no discrepancies in infrastructure, and performance can be adequately assessed.
Downsides can be that if we want to test purchases, we have to make actual purchases, which creates overhead for the support team, as they will need to process refunds.
Testers and Bugs
Any issues we find during our testing on production or a staging environment (if we are doing continuous delivery), will be logged in our bug tracker and prioritised.
Some issues will be fixed straight away and others may be addressed at a later date.
As mentioned above, anyone at Songkick can raise issues.
If an issue relates to one of the products your team is working on, you (as the tester on the team) will be notified. It is good to verify the issue as soon as possible: ask for more information, assess whether it is blocking the person who reported it, and establish whether it is even an issue at all.
We do have guidelines saying not to bother logging blockers but to come to the team directly; however, this may not always be possible, so as testers we always keep an eye on the bugs that are raised.
Want to know more?
In this post I described some of the common things testers at Songkick do.
Depending on the team and product there may also be other things, such as being involved in weekly performance tests, hands-on mobile app testing, talking through A/B tests, and coaching and educating the technology team and the wider company on what testing is.
If any of that sounds interesting, we are always looking for testers. Just get in touch.