Monday 20 March 2017

Giving Testers time to Grow

One of the saddest exchanges I've ever had with a fellow tester went something like this:

Me: Hey! We missed you at the C# fundamentals meeting yesterday, I know you really wanted to go, everything alright?

Them: Ugh, I'm so sad I missed it, but it was impossible to get there, I have so much pod work to do!

Of all the reasons to miss a training session you really want to attend, this is the one I dislike the most: the feeling that time spent not doing pod work is something to feel guilty about, and the idea that it's somehow less valuable.

It turns out that a lot of testers felt like this.

I think it's inherent in organisations where delivery is considered king. And delivery is important; it's what makes the money that pays people to keep working. But I would put forward the idea that delivery should never be everything. Equally important is feeling that you are learning, moving forward, making yourself better. That's what drives you to engage with your work.

This by no means applies just to testing either; anyone in a team should be given time to spend on self-learning and group learning. This time should be set aside and considered sacrosanct. Only a genuine emergency should be able to pull you away from it.

This last bit was an important rule my own team learned. We spend a morning every week mob-programming together, and it's a great learning tool for all of us. But it was easy to start letting other things get in the way. After all, how often do you really have an entire team available for a discussion without booking weeks in advance? We noticed we were letting ourselves, and others, co-opt that time for other (still valuable) discussions, and we were missing out on important team learning time. We actually felt it. So we resolved to keep mob programming time for just that.

But when it comes to testing, I think we can feel a bit more of a niggle of guilt than most developers would. Especially in Agile teams, the ratio is usually one of us to several developers and a few other team members. We often feel directly responsible for things being delivered on time, because from the outside we look like a natural bottleneck.

This is why it's even more important to ensure that testers have dedicated time away from pod work to do their own learning. Some ideas are:

  • Give learning a story in a backlog, story point the time and account for it in your sprint work. 
  • Or give your teams jog days in between sprints where they can do the things they think are important. 
  • Set aside afternoons in your team calendar each week for learning time.
  • Give your entire tech teams a few hours on a Friday afternoon where training and interesting talks can be organised.
  • Do all of the above if you can!

The point is that this improves not just the morale of testers and their teams; the gains you make in the quality of the work will show directly in the all-important push to delivery. Give a little and gain a lot.

Learning and challenged testers are happy testers.
Make your testers happy.

Have a lovely day nerds!
<3

Tuesday 7 March 2017

The Lazy world of Minimum Viable Testing

I actually wrote this for a testing magazine, so in a way, you could call this shameless self-promotion... I'm at peace with that (if you can't shamelessly self-promote on your own blog, where can you?).

http://www.testingtrapezemagazine.com/wp-content/uploads/2017/02/TestingTrapeze-2017-February.pdf

Have a lovely day nerds!

Tuesday 31 January 2017

The future is green and full of fields.

Happy New Year to all and sundry! Yes, yes, it's February, I get it, but we haven't had anything to chat about in a while.

Now we do.

Do you know what a greenfields project is? It's a project that is completely new in environment and in code, with absolutely no reliance on legacy within a system. It's basically when you decide "bugger it, we're going to make something completely new." One of the amazing things about this type of project is that you start essentially from scratch. What systems will you use? What environment? What kind of testing DO YOU WANT? You start with an idea of the ideal world, the paradise of a project that you want to work on, and you go for it.

My team and I started on a project like this about six months ago. We picked our new environment: AWS, Docker with Linux and .NET Core, and a completely separate deployment system so we could do what we want, when we want. Nothing we did could impact anyone else (aside from the goodness our project would bring to the wider teams once done). Good, that's exactly what we wanted.

But what about testing? Oh, we had the basics sorted out: we'd all code in TDD so there'd be unit tests, integration tests of course (we're a service API), and performance tests were easy enough to get into place (all running as part of our environment spin-up in Docker, thank you, thank you). But that's not enough for something as fundamental as our service will be to the wider development team. All future projects will rely on ours being correct.
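
To give a concrete flavour of that Docker-driven layer, here's a minimal sketch of the kind of integration test that runs once the environment is up. It's an illustration only: the endpoints, the port and the environment variable here are hypothetical stand-ins, not our real setup.

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class ServiceApiIntegrationTests
{
    // The Docker spin-up exposes the service locally; in reality the base
    // address would come from environment config rather than a literal.
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri(
            Environment.GetEnvironmentVariable("SERVICE_BASE_URL") ?? "http://localhost:5000")
    };

    [Fact]
    public async Task Health_endpoint_reports_ok()
    {
        // A hypothetical health endpoint: proves the container came up at all.
        var response = await Client.GetAsync("/health");
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }

    [Fact]
    public async Task Unknown_resource_returns_not_found()
    {
        // The service should fail predictably with a 404, not fall over with a 500.
        var response = await Client.GetAsync("/widgets/does-not-exist");
        Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
    }
}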

I had some thinking to do.

There would never be a UI; the other services that hook into ours would take care of those. I would have no knowledge of what would actually be using our service, because those consumers didn't exist yet. Oh, we had hopes, but hopes aren't enough to build a test on. So my testing (automated or otherwise) would need to take into account that I would have nothing physical to work with. Boo-urns.

This is a problem that more and more of us are discovering as we move faster and faster in the world of technology, where new tech springs up before we've properly digested the last thing. How do we test things when we don't really know what they're going to look like in the end, or whether they'll even have something you can look at?

For us there'd be no mindmaps, no exploratory testing, no usability tests, no SpecFlow automation; basically everything I normally do went out the window.

I've found it immensely important to go back to absolute basics and think very carefully about what I do know and what I actually have to work with:

  • No UI? That's fine, Selenium and SpecFlow automation are out, I'll just need to find something else.
  • Don't know how the service is going to be used? Cool, I won't worry about this and just pay attention to what we know our service can do.
  • If unit, integration and performance tests are done but not enough, that's fine, I'll just go up one level to acceptance tests.


So my research has moved to acceptance tests, based on what we know our service can do, that won't need a UI to work with.

Right, we've got our picture. From here it's just a matter of finding a way to make it work.

The tools I use aren't actually important here, but just in case anyone is interested, I've started using Storyteller (http://storyteller.github.io/). With a bit of work (just a bit ^_^) these tests will run alongside the integration tests as part of building our Docker containers and release pipelines.
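
To make "acceptance tests with no UI" concrete, here's a rough sketch of the idea, written as a plain xUnit test rather than in Storyteller's own fixture format (their docs cover that). It describes a behaviour the service promises, exercised purely through the API; the endpoint, payload and port are made up for illustration.

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Xunit;

public class OrderAcceptanceTests
{
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri("http://localhost:5000") // hypothetical local spin-up
    };

    [Fact]
    public async Task A_created_order_can_be_read_back()
    {
        // Given: a new order submitted through the public API
        var create = await Client.PostAsJsonAsync("/orders", new { Item = "widget", Quantity = 2 });
        create.EnsureSuccessStatusCode();
        var created = await create.Content.ReadFromJsonAsync<OrderDto>();

        // When: we fetch it back by its id
        var fetched = await Client.GetFromJsonAsync<OrderDto>($"/orders/{created!.Id}");

        // Then: the service returns exactly what we put in
        Assert.Equal("widget", fetched!.Item);
        Assert.Equal(2, fetched.Quantity);
    }

    private record OrderDto(Guid Id, string Item, int Quantity);
}

The point isn't the tooling; it's that each test reads as a statement about what the service does, no UI required.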

I also spend a lot of time watching, and being a part of, the code being developed (mob programming is a great move when you're all in new territory). Docs are hugely important too, not just so we have an accurate view of what we're doing, but so others can actually use our service when it's ready (something like Apiary is a must for situations like this). And just generally being nosy helps.

There is always a way to test something, and there's always a way around any problems you have. If you find yourself in the green fields of the future and everything you would usually do has already been ripped from your grasping hands, don't worry about what you don't know - just sit down (take a calming breath), write a list of what you do know and the picture of what you'll likely do will spring up around it. The future may be indistinct, but it's there and it's definitely testable.

Have the best week nerds!




Tuesday 20 December 2016

Building a Tester Community of Practice

I know, I know. It's been a while since my last blog post. But it's going to be a good one, promise, I've been saving up for it.

Last time we talked a little bit about the trouble we'd been having with our QA practice; as in, there wasn't one. We were missing the halcyon days of having a practice lead who did everything for us. In short, we were feeling sorry for ourselves.

Over the past 4 weeks, we've been trying something new and although it doesn't solve all the issues we raised during our marathon QA Retro-flective, it has gone a long way to making us feel like a community again.

It started with a Community of Practice for a Community of Practice (INCEPTION!).

You see, part of the problem with how we were doing things was this old idea of supporting each other. Now, we all like to support each other, but we were expecting that to extend to spending time every two weeks potentially doing something of zero interest to you. That's a hard ask for someone who's busy trying to get team work done. No one minded spending the time; they minded spending it on something that wasn't for them.

Enter the Community of Practice.

We decided that no matter how we did it the old way, someone was missing out. That wasn't necessarily the problem though; the problem was expectations. Expecting 30 testers to show up and getting 5 is disheartening. But what if you knew it was going to be 5? And you were all passionate about your topic and ready to get talking? Or doing? That's a different feeling, right?

You see, a community is based around a participation model.

[Image: the Community of Practice participation model, concentric circles of core, active, occasional and peripheral members]


  1. At the center you have your core members, they're the ones actively involved with the doing. They're organising talks and meetings, workshops, whatever is needed.
  2. Then you have the active members, these are your regulars, always willing and wanting to attend.
  3. Next are the occasionals, they drop in and out depending on a wide range of factors including interest.
  4. Then you have the peripherals, they're not really interested in participating but may pop in to see how it's going.


It's important to note that people move in and out of circles as time goes by. A core member may need time off and become an active member. A peripheral may become more engaged and move into the occasionals. The point of a CoP is that everything is fine; anything goes, as long as people are interested.

So our mini CoP group got together and discussed all of the above, and how we would get others involved in the idea. What would failure look like? What would success look like? What did we want people to take away?
We arrived at the idea of a four-week trial: every Friday we would put two hours aside to be used however people wanted. Three weeks would be spent trialling the actual community, and the fourth would be a review (at the pub) to discuss whether we wanted to keep going.
We would keep a Trello board where anyone could add ideas they wanted to talk about that week. Everyone would vote for the topics they wanted to see, and one person would act as champion of each topic, basically just organising everyone in that group. Without a champion, no matter how many people voted, a topic wouldn't go ahead. A champion didn't need to organise a talk or slides or anything; they could just be a facilitator if they wanted. Talks lasted as long as the groups liked. The idea was that if something needed 5 minutes, that's what it would get; other conversations might need the full two hours.

One topic had only 2 people; they went off and had a great time talking about it. Another had 13. The point was that everyone was going where they saw value.

I'm not going to lie, I was pretty surprised by the response we got to this. It went really well. People loved it. We quickly saw the core and active members emerge, and even the makings of an occasional. It was pretty awesome to watch people just put themselves forward to be champions (that was honestly our biggest fear; we'd made a rule that those of us in the mini CoP wouldn't act as champions during the trial period).

We've now had our pub review and the consensus is a resounding "Keep going!" So we're all set up to continue next year after the break.

It's been wonderful to watch my fellow testers just pull themselves back together, and all it needed was a few people to get an idea going. It's been a tough year for us, but things are really starting to look up, and I'm glad we got to finish on such a high note.

Happy Holidays nerds!


P.S. I hope you've all gone to see Rogue One already. If you haven't, go, go now. Stop reading and go. Star Wars is more important.


Wednesday 16 November 2016

Shaking your Agile self to a better testing community.

Well, it's been a shaky few days here (quite literally), not to mention the flooding and gale force winds that apparently come along with an earthquake nowadays. But we're on the up and up again!

<3

Our testers have just done something super agile together in the past few weeks. We've all been feeling a bit down after some big structural changes left us without the single point of leadership we were used to. After months of this, we decided to get together and work something out once and for all: did we actually need a named lead? Or were we relying on them for things we could do ourselves?

Agile to the rescue!

In a fit of inspiration (*cough* and with the help of one of our Agile coaches), we decided it was time to have a QA Retro to actually talk about how we're all feeling and what we want the future to look like.

We used Post-its to write our points down beforehand, then one by one we put them up on the board and silently grouped them into topic areas.

One of the things that really struck us was that we all had some very similar concerns about how things were going. With that done, we moved to the lean coffee format and timed a discussion on each topic group. Now came the fun bit. We talked, and I mean really talked, with a no-interruptions rule and a mindfulness about whether you were contributing perhaps a bit too much (so that others got a chance to talk too). Most importantly, this wasn't about solutions; solutions were forbidden.

We worked a lot of stuff out. The most important thing, though, was that we could be doing a lot of what we had been relying on a QA lead for. Our sense of community and learning, career progression, internal and external tester perceptions: all of these were things we could tackle ourselves. In fact, in hindsight it seems strange to rely on someone else for them. No one can make you have a community; no one can make you learn. These are things we need to push for ourselves.

We did see that we definitely needed someone to represent testers and testing at higher levels, but that person didn't need to be a "lead", just someone with our interests at heart.

We all left our retro feeling a mixture of hopefulness, optimism and apprehension. This is the kind of thing it's easy to forget about, well, to forget to do anything about. But all you really need is a few really passionate people to push things forward. Luckily we have those passionate people, willing to take the time to put some ideas together.

Agile can be used for so much more than just pod-level things, and never be afraid to just get everyone in a room to talk things through. We forget that we're not alone when things are feeling a bit down. It's so easy to retreat into yourself and your own work, never realising that there are many others feeling the same way, and that together you can really make some changes.

Have a great week nerds!
Helena<3



Monday 7 November 2016

Why you need to market testing and yourself.

I remember the very first time I found out that testers weren't always seen as the valuable people that I always thought them to be. As someone who started in support, testers were my buddies. They would help me find problems, and put them to developers who would fix them for us and our customers. I spent a lot of time talking and interacting with testers, which is probably a big part of why I ended up here.

But alas, the rose-coloured glasses had to be slapped off my face at some point. It was maybe four months into my new testing career that I read the fatal words. On Facebook no less, ouch. A developer friend of mine started what was no doubt an innocent, wide-eyed fun thread about the worst things in development. "Javascript!" went one, poor Javascript. "The customers!" went another, har har, they're always getting in the way, those crazy customers.

Then it happened: "F&$*%ing testers!" went the next. My heart started beating faster and I just stared at my screen. "Why?" I thought. "What have I ever done to you?" My mind made it very personal, very quickly. It didn't matter to me that maybe this guy had just come across a tester who'd been a bit of a douche; he'd put us all in the same sweary basket. There's no way that dev took his tester, or the work they do, seriously. I wrote a long, detailed response reprimanding this developer, stared at it for about 5 minutes, then deleted it. I find that sort of thing cathartic; I don't actually need to post things to get them off my chest.

But this kind of anger is something a lot of us no doubt have to deal with: the perception that testers are there to literally ruin your day. For those of us who don't experience outright hostility, what we deal with is a little more insidious. We deal with apathy and a general lack of understanding of what we do. The two kind of go hand in hand.

I work in a great team, full of great people, who care about the quality of their work. I asked them a little while ago, just at a stand-up, what they thought I did. Everyone stared around a bit, looked a bit awkward, then one hazarded a guess of "You...test...things?" Huh. I guess she was sort of right, but I've always thought I was doing my job wrong if I was spending more than maybe 15% of my time physically testing something. What did they think I did with the rest of my time? Did they think I was just sitting there shopping online? Well, to be honest, yes, but I have 3 screens and I'm great at switching my focus between them.

After my little pop quiz, though, I started to ask around. I know my team is pretty up there when it comes to what I'm going to call "enlightened individuals". For some of the other testers, it was even worse. Not only did no one know what they did beyond the physical act of testing, but a lot told me they weren't even expected to do any more than that. Instead of Agile, they were working in mini-waterfall, and the cracks showed: more rollbacks than you'd think was alright, and testers who felt dismissed, under-appreciated and just generally sad. Not only does this lead to poor software, it leads to retention problems. You can lose some great people because of a lack of care.

So something needed to change: we needed to start marketing ourselves. We needed not just our teams to care about quality and testing; we needed our CEO, our managers, everyone, to get involved. And it started with something Kate Falanga of the previously ironically named Huge Inc (it's not ironic anymore, they really are huge) gave us a head start on when she took us through her "Teaching testing to non-testers" workshop.

Getting everyone involved in what testing really is means that they start to understand, and with understanding comes care and enthusiasm. Re-engage your testers by re-engaging everyone with testing. You need to market who we are and what we do, or else we remain that other thing that's always raining on everyone's parade.


P.S - Did anyone watch BlizzCon? Oh my nerd, the recreation of D1 in D3 may be my favourite thing this year. It may not be a Warcraft 4, but it'll tide me over ^_^


Sunday 30 October 2016

On automation and learning the hard way

I promised we’d talk about automation, and don’t worry, we’ll get into some of the nitty-gritty stuff soon enough, but this is mostly the second half of my origin story. That’s right, in my head I’m basically a superhero. I’m also dressed as Harley Quinn as I write this, so be prepared for some comic references.

Automation was pretty damn new when I started testing a few years ago; in fact, our framework had only been in existence for around 6 months, and all the people working on it (all 6 of them) were previously manual testers who’d been doing some Pluralsight courses on C# and Selenium. This was the big change time: manual testing was out, and we really needed to get up to speed on this automation business. Regression was taking around 2 weeks, and anyone can see that that’s just not sustainable. At least, not when you’re releasing every 3 weeks. So we all went hard out.

While at the beginning, as we talked about last time, I had no real clue what to do with the manual side of testing, the automation side came to me quite easily. I may not have coded before (well, except for a fairy website I set up in HTML on GeoCities when I was about 13), but I was technical, and coding made sense. We were still learning how to put it all together in a way that worked for automated testing (and by that I mean learning the hard way that you can’t code everything), but we were getting good at it. At least, that’s what we thought at the time.

If anyone here has ever coded, in any language, you already know what I learned: that looking back at code you wrote 3 months ago, code you were really proud of at the time, is a pretty humbling experience. “Ouch” was the most often-heard thing in my head whenever I needed to go back and fix something.

There are two main lessons I’ve learned about automated testing over my testing time (ha! "my testing time" - just call me old man Helena).

The first is that when you focus either entirely on automation or entirely on manual, you’ll fail. There is a time and place for both, there must be balance to the force (I know, I know. It’s not a comic reference, but meh, this is my blog and I’ll do what I want!). We dropped all focus on manual testing when we moved to automation, and we lost something important because of it, we lost the humanity in our testing. Equally, we wouldn’t be where we are now (using a Continuous Integration release process) without all the effort we put into automation. It was hard at the time, and it’s painful to look back on, but we needed to do what we did. We’re aware now though, and we’re redressing the balance. That’s all you can hope for.

The second big lesson was that automation isn’t a second-class coding citizen. It’s not the Marvel Squadron Supreme to the DC Justice League (got one!). Automation is coding; it’s proper, normal coding, and the same theories and standards apply as usual. One of our biggest challenges was getting developer help with our automation. We should have asked for it sooner, we should have pushed for it sooner. For every dev that said to me “Oh, I don’t know how automation works”, there was one who said “It’s just normal code, here, let me show you.” We’d been learning from online courses; they had years of experience. Eventually that experience became our saviour, and our framework is in pretty good nick, all things considered. Nowadays we even have dev college graduates do a testing rotation, where they do both automation and manual testing. It took time to get to this mindset though.
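
To show what that lesson looks like in practice, here's a tiny sketch of the same refactoring instinct a developer would apply anywhere: pulling page details out of the tests and into one place. It assumes a Selenium/C# stack like ours, and the page, its element ids and the class name are all hypothetical.

using OpenQA.Selenium;

// A simple page object: the tests say WHAT they're doing ("log in as Pat"),
// and only this class knows HOW the page is wired together.
public class LoginPage
{
    private readonly IWebDriver _driver;

    public LoginPage(IWebDriver driver) => _driver = driver;

    // Locators live in one place, so a UI change is a one-line fix
    // instead of a hunt through dozens of tests.
    private IWebElement UserField => _driver.FindElement(By.Id("username"));
    private IWebElement PasswordField => _driver.FindElement(By.Id("password"));
    private IWebElement SubmitButton => _driver.FindElement(By.Id("login"));

    public void LogInAs(string user, string password)
    {
        UserField.SendKeys(user);
        PasswordField.SendKeys(password);
        SubmitButton.Click();
    }
}

Nothing automation-specific about it; it's just the ordinary "don't repeat yourself" rule, which is exactly the point the devs kept making.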

So that’s the high-level overview of my origin story. I may not be an actual superhero (or, to be more honest, a supervillain), but everyone deserves an origin ^_^

Happy Halloween little nerds!
Helena <3