
Monday 20 March 2017

Giving Testers time to Grow

One of the saddest exchanges I've ever had with a fellow tester went something like this:

Me: Hey! We missed you at the C# fundamentals meeting yesterday, I know you really wanted to go, everything alright?

Them: Ugh, I'm so sad I missed it, but it was impossible to get there, I have so much pod work to do!

Of all the reasons to miss a training session you really want to attend, this is the one I dislike the most: the feeling that time spent away from pod work is something to feel guilty about, and the idea that it's somehow less valuable.

It turns out that a lot of testers felt like this.

I think it's inherent in organisations where delivery is considered king. And delivery is important; it's what brings in the money that pays people to keep working. But I would put forward the idea that delivery should never be everything. Equally important is feeling like you are learning, moving forward, making yourself better. That is what drives you to engage with your work.

This by no means applies just to testing either; anyone in a team should be given time to spend on self-learning and group learning. This time should be set aside and considered sacrosanct. Only a genuine emergency should be able to pull you away from it.

That last bit was an important rule my own team learned. We spend a morning every week mob-programming together, and it's a great learning tool for all of us. But it was easy to start letting other things get in the way; after all, how often do you really have an entire team available for a discussion without booking weeks in advance? We noticed that we were letting ourselves, and others, co-opt that time for other (still valuable) discussions, and we were missing out on an important team learning time. We actually felt it. So we resolved to leave mob programming time for just that.

But when it comes to testing, I think we can feel a bit more of a niggle of guilt than most developers would. Especially in Agile teams, the ratio is usually one of us to several developers and a few other team members. We often feel directly responsible for things being delivered on time, just because from the outside we look like a natural bottleneck.

This is why it's even more important to ensure that testers have dedicated time away from pod work to do their own learning. Some ideas are:

  • Give learning its own story in the backlog, story-point the time, and account for it in your sprint work. 
  • Or give your teams jog days in between sprints where they can do the things they think are important. 
  • Set aside afternoons in your team calendar each week for learning time.
  • Give your entire tech teams a few hours on a Friday afternoon where training and interesting talks can be organised.
  • Do all of the above if you can!

The point is that this improves not just the morale of testers and their teams; the gains you make in the quality of the work will show directly in the all-important push to delivery. Give a little and gain a lot.

Learning and challenged testers are happy testers.
Make your testers happy.

Have a lovely day nerds!
<3

Tuesday 7 March 2017

The Lazy world of Minimum Viable Testing

I actually wrote this for a testing magazine, so in a way, you could call this shameless self-promotion... I'm at peace with that (if you can't shamelessly self-promote on your own blog, where can you?).

http://www.testingtrapezemagazine.com/wp-content/uploads/2017/02/TestingTrapeze-2017-February.pdf

Have a lovely day nerds!

Tuesday 31 January 2017

The future is green and full of fields.

Happy New Year to all and sundry! Yes, yes, it's February, I get it, but we haven't had anything to chat about in a while.

Now we do.

Do you know what a Greenfields project is? It's a project that is completely new in environment and in code, and has absolutely no reliance on legacy within a system. It's basically when you decide, "bugger it, we're going to just make something completely new". One of the amazing things about this type of project is that you start essentially from scratch. What systems will you use? What environment? What kind of testing DO YOU WANT? You start with an idea of the ideal world, the paradise of a project that you want to work on, and you go for it.

My team and I started on a project like this about 6 months ago. We picked our new environment on AWS, Docker with Linux and .Net Core and a completely separate deployment system so we could do what we want, when we want. Nothing we would do could impact anyone else (aside from the goodness our project would bring to the wider teams once done). Good, that's exactly what we wanted.

But what about testing? Oh, we had the basics sorted out: we'd all code using TDD, so there'd be unit tests; integration tests, of course (we're a service API); and performance tests were easy enough to get into place (all running as part of our environment spin-up in Docker, thank you, thank you). But that's not enough for something that will be as fundamental to the wider development team as our service will be. All future projects will rely on ours being correct.
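For a flavour of what that baseline looks like, a test at this level is just a plain xUnit test that spins the service up in-process and pokes it over HTTP. This is only a rough sketch: it assumes ASP.NET Core's TestServer, and the Startup class and /api/status endpoint are invented for illustration.

    // Minimal sketch only: Startup and /api/status are invented for illustration,
    // and this assumes the Microsoft.AspNetCore.TestHost and xUnit packages.
    using System.Net;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.TestHost;
    using Xunit;

    public class StatusEndpointTests
    {
        [Fact]
        public async Task Status_endpoint_responds_with_200()
        {
            // Spin the whole service up in-process; no deployed environment needed.
            var server = new TestServer(new WebHostBuilder().UseStartup<Startup>());
            var client = server.CreateClient();

            var response = await client.GetAsync("/api/status");

            Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        }
    }

Tests like that tell us the code does what we wrote it to do. They say nothing about whether the service behaves the way its future (and still imaginary) consumers will need it to.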

I had some thinking to do.

There would never be a UI; other services that hook into ours would take care of that. I would have no knowledge of what would actually be using our service, as those consumers didn't exist yet. Oh, we had hopes, but hopes aren't enough to build a test on. So my testing (automated or otherwise) would need to take into account that I would have nothing physical to work with. Boo-urns.

This is a problem that more and more of us are starting to discover as we move faster and faster in the world of technology, with new tech springing up as soon as we've barely got to grips with the last thing. How do we test things when we don't really know what they're going to look like in the end, or whether they'll even have something you can look at?

For us there'd be no mindmaps, no exploratory testing, no usability tests, no SpecFlow automation; basically everything I normally do went out the window.

I've found it immensely important to go back to absolute basics and think very carefully about what I do know and what I actually have to work with:


  • No UI? That's fine; Selenium and SpecFlow automation are out, I'll just need to find something else. 
  • Don't know how the service is going to be used? Cool, I won't worry about that and will just pay attention to what we know our service can do. 
  • If unit, integration and performance tests are done but aren't enough, that's fine, I'll just go up one level to acceptance tests.


So my research has moved to acceptance tests, based on what we know our service can do, that won't need a UI to work with.
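To make that concrete, here's roughly the shape an acceptance test like that can take. Again, this is a sketch rather than our real suite: the base address and the /api/widgets endpoint are made up, and the point is simply that the "UI" is the HTTP contract of the service itself.

    // A rough sketch: the base address and /api/widgets endpoint are invented.
    // An acceptance test can exercise the service purely over HTTP; no UI required.
    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;
    using Xunit;

    public class WidgetAcceptanceTests
    {
        // Wherever the docker-hosted service happens to be listening during a test run.
        private static readonly HttpClient Client =
            new HttpClient { BaseAddress = new Uri("http://localhost:5000") };

        [Fact]
        public async Task Creating_a_widget_makes_it_retrievable()
        {
            // Act on the service the way a (future, unknown) consumer would.
            var body = new StringContent("{\"name\":\"example\"}", Encoding.UTF8, "application/json");
            var create = await Client.PutAsync("/api/widgets/example", body);
            create.EnsureSuccessStatusCode();

            // Assert on observable behaviour, not on any UI.
            var fetch = await Client.GetAsync("/api/widgets/example");
            fetch.EnsureSuccessStatusCode();
            Assert.Contains("example", await fetch.Content.ReadAsStringAsync());
        }
    }

The assertions live entirely at the level of the service's behaviour, which is exactly the level we actually know something about.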

Right, we've got our picture. From here it's just a matter of finding a way to make it work.

The specific tools I use aren't really important here, but just in case anyone is interested, I've started using Storyteller (http://storyteller.github.io/). With a bit of work (just a bit ^_^) those tests will run alongside the integration tests as part of building our Docker containers and release pipelines.

I also spend a lot of time watching, and being a part of, the code as it's developed (mob programming is a great move when you're all in new territory); writing docs, which are hugely important not just so we have an accurate view of what we're doing but so others can actually use our service when it's ready (something like Apiary is a must for situations like this); and just generally being nosy.

There is always a way to test something, and there's always a way around any problems you have. If you find yourself in the green fields of the future and everything you would usually do has already been ripped from your grasping hands, don't worry about what you don't know - just sit down (take a calming breath), write a list of what you do know and the picture of what you'll likely do will spring up around it. The future may be indistinct, but it's there and it's definitely testable.

Have the best week nerds!