Songkick from a Tester’s point of view

Earlier this year we wrote about how we move fast but still test the code.

This was recently followed by another post about Developer happiness at Songkick, which also focuses on the processes we have in place, as they are key to a productive working environment.

How does this all look from a tester’s point of view?

I have been asked a few times what a typical day looks like for a tester at Songkick. This post is about the processes that enable us to move fast, seen from a tester’s point of view, and about how testing is integrated into our development lifecycle.

Organising our work

Teams at Songkick are organised around products and the process we follow is agile. Guided by the product manager and our team goals, we organise our sprints on a weekly basis with a prioritisation meeting. This allows us to update each other on the work in progress and determine the work that may get picked up during that week.

Prioritisation meetings also take into consideration things such as holidays and time spent elsewhere (meetings, firefighting, pairing).

On top of that we check our bug tracker to see if any new bugs have been raised that we need to act on.

Everyone in the company can raise bugs, enabling us to constantly make decisions on how to improve not only our user-facing products but also our internal tools.

We also have daily stand-ups at the beginning of each day, where we share how we are getting on and flag any blockers or other significant events that may impact our work positively or negatively.

Every two weeks we also hold a retrospective to assess how we are doing and what improvements we can make.

The kick-off

Sabina gave a great definition of the kick-off document here. Each feature or piece of work has a kick-off document. We try to always have a developer, product manager and tester in the conversation. More often than not we also include other developers, or experts, such as a member from tech ops or a frontline team. Frontline teams can be anyone using internal tools directly, members from our customer support team, or someone from the sales team.

Depending on the type of task (is it a technical task or a brand new feature?), we use a slightly different template. The reasoning behind this is that a technical, non-user-facing change will require a different conversation than a user-facing one.

But at the end of the day this document is our source of truth. It records, most importantly, the problem we are trying to solve, how we think we will solve it, and any changes we make to our initial plan along the way.

The kick-off conversation is where the tester can ask a tonne of questions. These range from the technical implementation and potential performance issues to the risks involved and what our testing strategy should be. Do we need to add a specific acceptance test for this feature, or are unit and integration tests enough?

A nice extra in the document is the “Recurring bugs” section.

The recurring bugs section consists of questions to make sure we are not implementing something we have already solved, along with bugs we see time and time again. These can range from field lengths and timezones to nudges about considering how we order lists. What it doesn’t include is every bug we have ever seen. It is not static either: the section evolves, with some questions or notes removed and others added.

Having a recurring bugs section in a kick-off document is also great for on-boarding, as you start to understand what has previously been an issue, and you can ask why and what we do now to avoid it.

What’s next?

After the kick-off meeting, I personally tend to familiarise myself with where we are making the change.

For example, say we are adding a new address form to our check-out flow when you purchase tickets. I will perform a short exploratory test of this in our staging environment or on production. Any time we do exploratory testing, we tend to record it as a time-boxed test session in a lightweight format. This provides a nice record of the testing that was performed and may also lead to more questions for the kick-off document.

Once the developer(s) working on the feature have had a day or so, we do a test modelling session together.

Test Modelling

Similar to the kick-off, this is an opportunity for the team to explore the new feature and how it may affect the rest of the system.

It consists of a short collaboration session, with at least a developer, a tester and, if applicable, the design lead and/or another expert, where we mind map our way through test ideas, test data and scenarios.

We do this because it enables the developer to test early, before releasing the product to a test or production environment, which in turn means we can deliver quality software and value sooner.

It is also a great way to share knowledge. Everyone who comes along brings different experiences and knowledge.

Test Model for one of our internal admin pages

The collaborators work together to discuss what needs checking and what risks need exploring further.

We might also uncover questions about the feature we’re building. Sharing this before we build the feature can help us build the right feature, and save time.

For example, we recently improved one of our admin tools. During the test modelling session, we discovered a handful of questions, including some around date formats, and also default settings. By clearing these questions up early, we not only ensure that we build the right thing, but also that we build it in the most valuable way for the end user.

In this particular example, it transpired that following a certain logic for setting defaults would not only save a lot of time, but also greatly reduce the likelihood of mistakes.

The team (mainly the developer) will use the resulting mind map for testing.

It becomes a record of test scenarios and cases we identified and covered as part of this bit of work.

As we mainly work in continuous deployment or delivery (depending on the project and the risk of the feature), testers often test in production using real data, so as not to block the deployment pipeline.

This has the advantage that the data is realistic (it is production data after all), there are no discrepancies in infrastructure, and performance can be adequately assessed.

One downside is that if we want to test purchases, we have to make actual purchases, which creates an overhead for the support team, as they will need to process refunds.

Testers and Bugs

Any issues we find during our testing on production or on a staging environment (if we are doing continuous delivery) will be logged in our bug tracker and prioritised.

Some issues will be fixed straight away and others may be addressed at a later date.

As mentioned above, anyone at Songkick can raise issues.

If an issue relates to one of the products your team is working on, you (as the tester on that team) will be notified. It is good to verify the issue as soon as possible: ask for more information, assess whether it is blocking the person who reported it, and work out whether it is even an issue at all.

We do have guidelines to not bother logging blockers but to come to the team directly; that may not always be possible, though, so as testers we always keep an eye on the bugs that are raised.

Want to know more?

In this post I described some of the common things testers at Songkick do.

Depending on the team and product there may also be other things, such as being involved in weekly performance tests, hands-on mobile app testing, talking through A/B tests, and coaching and educating the technology team and the wider company on what testing is.

If any of that sounds interesting, we are always looking for testers. Just get in touch.

SlackMood – Analyse your team’s happiness via Slack Emoji usage

We had a hack day in the office a few weeks back, and I decided I wanted to build something with Slack. Hack days give us a chance to work with people outside of our product teams, to work with different and new technologies, and to try out fun ideas we’ve had.

Like any sensible company, we use Slack to help us collaborate and improve communication, but we also use it to share cat gifs (we have an entire channel) and a whole host of default, aliased and custom emojis. Based on this, I wondered if I could use our emoji usage to gauge the average mood of the whole company. And so SlackMood was born.

SlackMood showing that 85% of our current Slack use is neutral or positive.

My first step was figuring out how to get a feed of messages across our whole Slack. I’d already decided to build it in Golang, and fortunately some clever person had already built a Golang library for Slack, saving me a huge amount of work. I registered a new bot on the Slack developer site and started hacking.
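
For context, getting a client and a list of channels with that library only takes a couple of lines. Here is a minimal sketch, assuming the nlopes/slack package (the post doesn’t name the library it used) and an illustrative SLACK_TOKEN environment variable:

import (
  "os"

  "github.com/nlopes/slack"
)

// connect builds a Slack client and lists the team's channels.
// The token source here is illustrative, not the project's actual config.
func connect() ([]slack.Channel, error) {
  api := slack.New(os.Getenv("SLACK_TOKEN"))

  // true excludes archived channels from the listing.
  return api.GetChannels(true)
}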

Unfortunately I quickly ran into an issue. I wanted to get the RTM (real-time message) feed of every channel, but it turns out bot accounts can’t join channels unless they’re invited. I could see 3 solutions to this:

  1. Create a real Slack user with an API key (I decided Finance wouldn’t be happy with this)
  2. Add my own API key alongside the bot, use the API to have me join all the channels, invite the bot and leave (annoying everyone in the company)
  3. Use the message history APIs to periodically scrape the channels

I decided to go with 3, as it seemed the simplest to implement.

The actual code for this was relatively simple:

for _, c := range channels {
  // Skip archived channels; nobody is posting in them.
  if c.IsArchived {
    continue
  }

  // Fetch up to 1,000 messages of history for this channel.
  hp := api.NewHistoryParameters()
  hp.Count = 1000
  h, err := s.Api.GetChannelHistory(c.ID, hp)

  if err != nil {
    log.WithFields(log.Fields{
      "error":     err,
      "channelId": c.ID,
      "channel":   c,
    }).Warning("Could not fetch channel history")
  } else {
    models.ParseEmoji(h.Messages)

    log.WithFields(log.Fields{
      "channel":   c.Name,
      "channelId": c.ID,
      "messages":  len(h.Messages),
    }).Debug("Got channel history")
  }
}

It then passes the messages into a function that extracts the emoji counts.

func ParseEmoji(messages []api.Message) {
  // Matches emoji short codes such as :smile: or :+1: in message text.
  re := regexp.MustCompile(`:([a-z0-9_\+\-]+):`)

  for _, m := range messages {
    // Derive a unique-ish ID for the message so the same emoji isn't
    // counted twice when a channel is scraped again.
    msgId := fmt.Sprintf("%s-%s-%s", m.Timestamp, m.Channel, m.User)

    // Emoji attached to the message as reactions.
    for _, reaction := range m.Reactions {
      emojiList.AddEmoji(reaction.Name, m, fmt.Sprintf("%s-%s-%s", msgId, m.User, m.Name))
    }

    // Emoji written inline in the message text.
    foundEmoji := re.FindAllStringSubmatch(m.Text, -1)
    for _, em := range foundEmoji {
      emojiList.AddEmoji(em[1], m, msgId)
    }
  }
}

It uses both a regular expression over the message text and a loop over the message’s reactions.

I’d decided to use BoltDB for the backend storage. Maybe not the best idea, as I think a relational datastore like SQLite would have been much better suited, but Bolt was a technology I’d never used before, so it seemed interesting. We generate a message ID from the base message, then the reactions each get their own IDs based on the user who posted them. These are all stored in BoltDB as message ID -> details, where details is a struct describing the emoji:

type Emoji struct {
  Name    string
  SeenAt  time.Time
  Channel string
  User    string
}
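
As an illustration, persisting one of these to Bolt might look like the sketch below. The "emoji" bucket name and the SaveEmoji helper are mine for illustration, not necessarily what the project does:

import (
  "encoding/json"

  "github.com/boltdb/bolt"
)

// SaveEmoji JSON-encodes an Emoji and stores it under its message ID
// in a single "emoji" bucket (an assumed schema).
func SaveEmoji(db *bolt.DB, id string, e Emoji) error {
  return db.Update(func(tx *bolt.Tx) error {
    b, err := tx.CreateBucketIfNotExists([]byte("emoji"))
    if err != nil {
      return err
    }
    buf, err := json.Marshal(e)
    if err != nil {
      return err
    }
    return b.Put([]byte(id), buf)
  })
}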

Now we’ve got a list of emojis and their timestamps, we can go through and assign each one a rating of either positive, negative or neutral. Fortunately, some of our team had already built a spreadsheet of emoji sentiment analysis for a previous hack project (turns out, we love emojis) with positive to negative rankings (1 to -1):

Our emoji rankings spreadsheet, obviously.

With our emoji ranks loaded into a struct array, we can go through and score each emoji we’ve seen.

func GetMood(emoji []*Emoji) Mood {
  m := Mood{}

  for _, e := range emoji {
    // Linear scan of the rankings for every emoji (see the N.B. below).
    for _, r := range ranks.EmojiRanks {
      if r.Name == e.Name {
        switch r.Rank {
        case 1:
          m.PositiveCount++
        case 0:
          m.NeutralCount++
        case -1:
          m.NegativeCount++
        }
        m.TotalCount++
        break
      }
    }
  }

  m.Positive = percentage(m.PositiveCount, m.TotalCount)
  m.Negative = percentage(m.NegativeCount, m.TotalCount)
  m.Neutral = percentage(m.NeutralCount, m.TotalCount)

  return m
}
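
The percentage helper isn’t shown in the post; a minimal version might look like this (an assumed implementation, guarding against an empty bucket):

// percentage returns count's share of total as a value from 0 to 100.
func percentage(count, total int32) float32 {
  if total == 0 {
    return 0 // avoid dividing by zero for buckets with no ranked emoji
  }
  return float32(count) / float32(total) * 100
}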

(N.B. looking back at this now, I realise a map of emoji name -> mood would have been much better than a double loop, but this was like 6 hours in and I was keen to get something working.)
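
For the curious, that map-based version might look something like the sketch below (GetMoodFast is my name for it, working against the same ranks.EmojiRanks data as above):

// GetMoodFast builds the name -> rank lookup once, then scores each
// emoji in constant time instead of scanning every rank.
func GetMoodFast(emoji []*Emoji) Mood {
  m := Mood{}

  rankByName := make(map[string]int, len(ranks.EmojiRanks))
  for _, r := range ranks.EmojiRanks {
    rankByName[r.Name] = r.Rank
  }

  for _, e := range emoji {
    rank, ok := rankByName[e.Name]
    if !ok {
      continue // unranked emoji are ignored, as in the original
    }
    switch rank {
    case 1:
      m.PositiveCount++
    case 0:
      m.NeutralCount++
    case -1:
      m.NegativeCount++
    }
    m.TotalCount++
  }

  m.Positive = percentage(m.PositiveCount, m.TotalCount)
  m.Negative = percentage(m.NegativeCount, m.TotalCount)
  m.Neutral = percentage(m.NeutralCount, m.TotalCount)

  return m
}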

Now we know the mood of all the emojis, calculating the graph just involves iterating through all the seen emojis and producing a mood for each date. The GetMood function above works on a list of emojis, so we just bucket the emojis by the selected time period.

Because all the emoji are stored in Bolt and we can’t do proper filtering there, we first filter down to the time period we care about, then divide that up.

type Mood struct {
  Positive      float32
  Negative      float32
  Neutral       float32
  PositiveCount int32
  NegativeCount int32
  NeutralCount  int32
  TotalCount    int32
  Time          time.Time
  TimeString    string
}

// FilterEmoji returns only the emoji seen within the given time range.
func FilterEmoji(from time.Time, to time.Time, emoji []*Emoji) []*Emoji {
  var emj []*Emoji
  for _, e := range emoji {
    if e.SeenAt.After(from) && e.SeenAt.Before(to) {
      emj = append(emj, e)
    }
  }

  return emj
}

func GraphMood(over time.Duration, interval time.Duration) []Mood {
  var points []Mood

  now := time.Now().UTC()
  dataPointCount := int(over.Seconds() / interval.Seconds())

  // Round the current time down to the nearest interval boundary so
  // the buckets line up.
  endTime := time.Unix(int64(interval.Seconds())*int64(now.Unix()/int64(interval.Seconds())), 0)

  // Fetch everything in the overall period once, then slice it up.
  periodEmoji := FilterEmoji(endTime.Add(over*-1), endTime, AllEmoji())

  for i := 0; i < dataPointCount; i++ {
    // Walk backwards from the end time, one interval per data point.
    offset := int(interval.Seconds()) * (dataPointCount - i)
    startTime := endTime.Add(time.Second * time.Duration(offset) * -1)

    m := GetMood(FilterEmoji(startTime, startTime.Add(interval), periodEmoji))
    m.Time = startTime
    m.TimeString = startTime.Format("Jan _2")
    points = append(points, m)
  }

  return points
}

GraphMood returns a struct array which we can just JSON encode and feed into Chart.js to get the nice visualisation above.
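
Serving that to the chart might look something like this minimal sketch (the /mood.json route and the week/day durations are illustrative, not the project’s actual ones):

import (
  "encoding/json"
  "net/http"
  "time"
)

// moodHandler returns one Mood data point per day over the last week,
// JSON-encoded for the chart to consume.
func moodHandler(w http.ResponseWriter, r *http.Request) {
  w.Header().Set("Content-Type", "application/json")
  json.NewEncoder(w).Encode(GraphMood(7*24*time.Hour, 24*time.Hour))
}

func main() {
  http.HandleFunc("/mood.json", moodHandler)
  http.ListenAndServe(":8080", nil)
}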

All in all, it was pretty fun, but the whole project contains a lot of terrible code. If you want, check it out on GitHub here.

Other stuff I would have liked to add:

  • Most positive/negative person
  • Most used emoji
  • Biggest winker 😉

Maybe next hack day.


P.S. If you fancy working somewhere with regular hack days, in a team which has a pre-prepared spreadsheet of emoji sentiment analysis, Songkick are hiring for a variety of technology roles at the moment. So come work with us, we have a 64% SlackMood happiness rating™.