1 00:00:07,021 --> 00:00:09,560 - We're not going to lie, succeeding with automation 2 00:00:09,560 --> 00:00:10,979 can be hard. 3 00:00:10,979 --> 00:00:14,083 Take time to experiment and find the approach 4 00:00:14,083 --> 00:00:16,621 that works best for your team. 5 00:00:16,621 --> 00:00:20,040 A smart strategy gets you on the right path. 6 00:00:20,040 --> 00:00:22,179 Sometimes you get the feeling that you're running 7 00:00:22,179 --> 00:00:23,560 on that hamster wheel. 8 00:00:23,560 --> 00:00:26,276 It's just never-ending fixing and testing, 9 00:00:26,276 --> 00:00:28,163 where there's no time to automate 10 00:00:28,163 --> 00:00:32,268 because you're so busy manually testing and retesting. 11 00:00:32,268 --> 00:00:34,055 We do need a strategy. 12 00:00:34,055 --> 00:00:37,012 We need a strategy to get off that hamster wheel, 13 00:00:37,012 --> 00:00:39,678 but we do not need a heavyweight strategy. 14 00:00:39,678 --> 00:00:41,929 We will share some models and tools 15 00:00:41,929 --> 00:00:44,590 to help you take small steps. 16 00:00:44,590 --> 00:00:48,153 Let's get you off that hamster wheel. 17 00:00:48,153 --> 00:00:50,209 Does this upside down pyramid 18 00:00:50,209 --> 00:00:53,082 reflect how your automated tests look? 19 00:00:53,082 --> 00:00:56,004 Teams succumb to the tool vendor sales pitch 20 00:00:56,004 --> 00:01:00,506 which says non-coders can create automated tests. 21 00:01:00,506 --> 00:01:03,673 All you have to do is record and play. 22 00:01:05,673 --> 00:01:08,009 You end up with very few unit tests. 23 00:01:08,009 --> 00:01:10,897 Maybe a couple of API level tests, 24 00:01:10,897 --> 00:01:14,064 but mostly shaky tests through the UI. 25 00:01:15,389 --> 00:01:19,007 We've known teams with suites of unmaintainable 26 00:01:19,007 --> 00:01:22,409 test code sitting wasted on the shelf. 27 00:01:22,409 --> 00:01:26,409 Mike Cohn's simple pyramid model flips it right side up. 
28 00:01:26,409 --> 00:01:29,229 We talk about different tests for different goals. 29 00:01:29,229 --> 00:01:31,550 This model reminds us to push the automated tests 30 00:01:31,550 --> 00:01:33,550 down to the lowest level. 31 00:01:33,550 --> 00:01:35,927 The lowest level being the unit test. 32 00:01:35,927 --> 00:01:38,553 That's where the best return on investment happens. 33 00:01:38,553 --> 00:01:41,273 Mostly because it's fast feedback. 34 00:01:41,273 --> 00:01:45,287 No touching the database, no tests through the UI. 35 00:01:45,287 --> 00:01:46,865 The user interface. 36 00:01:46,865 --> 00:01:49,867 Automating in the middle layer, that API service layer, 37 00:01:49,867 --> 00:01:52,646 usually has many speed benefits too, 38 00:01:52,646 --> 00:01:55,252 because it does not go through the user interface. 39 00:01:55,252 --> 00:01:57,972 The workflow tests, which we usually associate 40 00:01:57,972 --> 00:01:59,812 with the user interface, 41 00:01:59,812 --> 00:02:01,993 are the slowest and most expensive. 42 00:02:01,993 --> 00:02:04,169 Even with good automation frameworks 43 00:02:04,169 --> 00:02:05,550 and drivers today, 44 00:02:05,550 --> 00:02:07,790 they tend to be brittle (they don't have to be, 45 00:02:07,790 --> 00:02:10,969 but they tend to be), and they are definitely slow. 46 00:02:10,969 --> 00:02:13,968 Think about the levels of precision: features. 47 00:02:13,968 --> 00:02:16,009 Features have many stories. 48 00:02:16,009 --> 00:02:17,649 Stories have many tasks. 49 00:02:17,649 --> 00:02:20,169 So let's automate those workflow tests 50 00:02:20,169 --> 00:02:23,369 through the UI at the feature level. 51 00:02:23,369 --> 00:02:26,009 That's when the code is stable 52 00:02:26,009 --> 00:02:28,873 and there are no more changes to the user interface, 53 00:02:28,873 --> 00:02:31,268 so you don't have to repeat things 54 00:02:31,268 --> 00:02:34,372 while changes are still being made during the iteration. 
55 00:02:34,372 --> 00:02:38,313 The API level, that's where we automate at the story level. 56 00:02:38,313 --> 00:02:41,929 We are taking full advantage of collaborative tools 57 00:02:41,929 --> 00:02:43,609 when we test at this level, 58 00:02:43,609 --> 00:02:46,360 and of course the unit tests are done at the task level 59 00:02:46,360 --> 00:02:49,012 while the programmers are working on the story 60 00:02:49,012 --> 00:02:50,746 and doing their coding. 61 00:02:50,746 --> 00:02:53,913 This is a model; it doesn't apply to every situation, 62 00:02:53,913 --> 00:02:57,289 but for most teams it's a good starting place. 63 00:02:57,289 --> 00:02:59,327 In our book, More Agile Testing, 64 00:02:59,327 --> 00:03:02,848 we have a chapter with different adaptations of the pyramid. 65 00:03:02,848 --> 00:03:05,993 So you might wanna look at that and see what works for you. 66 00:03:05,993 --> 00:03:09,513 Many teams are less familiar with that middle layer: 67 00:03:09,513 --> 00:03:13,753 testing at the API level, underneath the user interface. 68 00:03:13,753 --> 00:03:16,110 We'll focus on those over the next little bit 69 00:03:16,110 --> 00:03:19,609 and talk about some of the tools for that. 70 00:03:19,609 --> 00:03:21,033 Unit tests. 71 00:03:21,033 --> 00:03:23,929 Generally done using TDD, test-driven development. 72 00:03:23,929 --> 00:03:25,689 Lisa explained that before. 73 00:03:25,689 --> 00:03:28,907 It's a core agile development practice that really helps 74 00:03:28,907 --> 00:03:32,291 the programmers understand how their little parts 75 00:03:32,291 --> 00:03:33,829 of code should behave. 76 00:03:33,829 --> 00:03:37,849 It's a design activity, not necessarily a testing activity, 77 00:03:37,849 --> 00:03:41,171 and it's not designed to be read by non-coders. 78 00:03:41,171 --> 00:03:44,153 A test might look something like this. 79 00:03:44,153 --> 00:03:47,812 Assert true, options selected not equal null. 
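Spelled out in code, that spoken assertion might look like the following. This is a minimal Python sketch; the class name, the test name, and the `select_options` stand-in for production code are all hypothetical, not from a real codebase.

```python
import unittest

# Hypothetical stand-in for the production code under test.
def select_options():
    return ["full clip"]

class GroomingOptionsTest(unittest.TestCase):
    def test_options_selected_not_null(self):
        options_selected = select_options()
        # The spoken example: assertTrue(optionsSelected != null)
        self.assertTrue(options_selected is not None)
```

Perfectly readable to a programmer, but as the narration says, it was never written for a business audience.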
80 00:03:47,812 --> 00:03:51,310 That makes no sense to me if I were looking at it 81 00:03:51,310 --> 00:03:53,449 from a business perspective, 82 00:03:53,449 --> 00:03:56,350 but it's important for the programmers to have. 83 00:03:56,350 --> 00:03:58,532 Now let's move up to the higher level. 84 00:03:58,532 --> 00:04:02,713 The API or the service level without the user interface. 85 00:04:02,713 --> 00:04:06,073 We want to touch the code at that level. 86 00:04:06,073 --> 00:04:08,632 This is a great opportunity for tester and 87 00:04:08,632 --> 00:04:10,489 coder collaboration. 88 00:04:10,489 --> 00:04:13,390 Tests are based on business rules or examples 89 00:04:13,390 --> 00:04:14,633 from the business. 90 00:04:14,633 --> 00:04:17,150 We get those examples during the planning meetings. 91 00:04:17,150 --> 00:04:21,369 Some tools that use this layer very effectively 92 00:04:21,369 --> 00:04:25,430 are FitNesse, Cucumber, Robot Framework, 93 00:04:25,430 --> 00:04:27,273 and there are many more. 94 00:04:27,273 --> 00:04:29,683 But they all encourage collaboration. 95 00:04:29,683 --> 00:04:33,870 Testers understand rules, testers see the big picture. 96 00:04:33,870 --> 00:04:38,868 The programmers understand the technical implementation. 97 00:04:38,868 --> 00:04:42,040 When you're collaborating and working together, 98 00:04:42,040 --> 00:04:46,270 watch that a tester or a product owner doesn't go off 99 00:04:46,270 --> 00:04:49,490 specifying the tests in isolation. 100 00:04:49,490 --> 00:04:53,089 You really need to be collaborating so that the team, 101 00:04:53,089 --> 00:04:56,244 programmers and testers, and the product owner 102 00:04:56,244 --> 00:04:57,609 all understand. 103 00:04:57,609 --> 00:05:01,230 But let's look a little closer at what this really means. 
104 00:05:01,230 --> 00:05:03,550 If you look at this model, 105 00:05:03,550 --> 00:05:05,812 it could apply to all levels of testing, 106 00:05:05,812 --> 00:05:09,430 but we're gonna focus on the API level. 107 00:05:09,430 --> 00:05:12,430 We see the tester and programmer collaboration 108 00:05:12,430 --> 00:05:15,028 at its best at the API level. 109 00:05:15,028 --> 00:05:16,807 So that's what we're gonna talk about. 110 00:05:16,807 --> 00:05:19,593 The top layer in this picture is the tests. 111 00:05:19,593 --> 00:05:21,412 The examples that we write. 112 00:05:21,412 --> 00:05:23,508 The middle layer, the test method, 113 00:05:23,508 --> 00:05:28,469 some tools might call it a fixture, step definition, or keyword. 114 00:05:28,469 --> 00:05:31,113 This is executable code. 115 00:05:31,113 --> 00:05:33,608 There might be multiple layers in that code, 116 00:05:33,608 --> 00:05:36,965 helper methods or classes and libraries. 117 00:05:36,965 --> 00:05:39,172 The bottom layer is the production code. 118 00:05:39,172 --> 00:05:41,407 The system under test. 119 00:05:41,407 --> 00:05:43,433 That's what the programmers write. 120 00:05:43,433 --> 00:05:44,953 That's what we see. 121 00:05:44,953 --> 00:05:48,030 The test inputs from that top layer, the examples, 122 00:05:48,030 --> 00:05:50,590 they're passed through to the test method, 123 00:05:50,590 --> 00:05:53,391 which then passes them through to the production code. 124 00:05:53,391 --> 00:05:55,972 There's no logic in those test methods. 125 00:05:55,972 --> 00:05:57,892 They're just passing 126 00:05:57,892 --> 00:05:59,790 inputs and expected outputs. 127 00:05:59,790 --> 00:06:03,385 The test framework along the side interacts 128 00:06:03,385 --> 00:06:04,851 with all those layers. 129 00:06:04,851 --> 00:06:08,585 It compares the expected results to the actual results. 130 00:06:08,585 --> 00:06:10,533 Gives you a pass or fail. 
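Those three layers can be sketched in a few lines of plain Python. Everything here is hypothetical: the booking function stands in for production code, the step function for a test method, and a bare loop for the framework's compare-and-report job.

```python
# Bottom layer: production code, the system under test (hypothetical).
def book_appointment(day, start_hour):
    open_days = {"Mon", "Tue", "Wed", "Thu", "Fri", "Sat"}
    return day in open_days and 9 <= start_hour <= 16

# Middle layer: the test method (fixture / step definition / keyword).
# Note: no logic here -- it only passes inputs through to production code.
def step_book_appointment(day, start_hour):
    return book_appointment(day, start_hour)

# Top layer: the examples -- inputs and expected outputs.
examples = [
    ("Tue", 10, True),   # within opening hours
    ("Sun", 10, False),  # closed on Sundays
]

# The framework's job: compare expected to actual, give a pass or fail.
for day, start_hour, expected in examples:
    actual = step_book_appointment(day, start_hour)
    assert actual == expected, f"{day} {start_hour}: expected {expected}"
```

The point of the sketch is the pass-through: if the middle layer ever grows its own logic, that logic is untested.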
131 00:06:10,533 --> 00:06:12,800 The magic in this type of framework 132 00:06:12,800 --> 00:06:16,282 is when a tester and programmer discuss 133 00:06:16,282 --> 00:06:18,635 how the tests will interact. 134 00:06:18,635 --> 00:06:20,694 How should we describe the tests, 135 00:06:20,694 --> 00:06:23,221 how can they actually be automated? 136 00:06:23,221 --> 00:06:26,922 So testers need some of that technical understanding, 137 00:06:26,922 --> 00:06:30,080 that technical awareness to be able to converse 138 00:06:30,080 --> 00:06:31,415 with a programmer. 139 00:06:31,415 --> 00:06:34,240 It doesn't have to be a deep understanding, 140 00:06:34,240 --> 00:06:36,478 but enough to be able to say, 141 00:06:36,478 --> 00:06:39,018 what is this input, what are the variations, 142 00:06:39,018 --> 00:06:41,440 what might the output look like? 143 00:06:41,440 --> 00:06:43,334 It can be a little complicated, 144 00:06:43,334 --> 00:06:45,621 but when the whole team works together 145 00:06:45,621 --> 00:06:50,388 to understand that automation strategy and its components, 146 00:06:50,388 --> 00:06:52,960 then the whole team can work together 147 00:06:52,960 --> 00:06:54,885 to share that information. 148 00:06:54,885 --> 00:06:57,541 We'll look at some more examples in a bit. 149 00:06:57,541 --> 00:06:59,322 We encourage teams to figure out 150 00:06:59,322 --> 00:07:02,858 the types of tests that make the most sense for them. 151 00:07:02,858 --> 00:07:06,501 We gave an example earlier of how we used 152 00:07:06,501 --> 00:07:08,296 spreadsheet-style tests. 153 00:07:08,296 --> 00:07:10,522 Some of the frameworks support a tabular format, 154 00:07:10,522 --> 00:07:11,802 like spreadsheets. 155 00:07:11,802 --> 00:07:14,181 Others cater to the given-when-then style. 
156 00:07:14,181 --> 00:07:16,661 Others use their own syntax, 157 00:07:16,661 --> 00:07:19,562 but it's important for you and your team 158 00:07:19,562 --> 00:07:21,520 to understand what you need, 159 00:07:21,520 --> 00:07:25,687 what your tests should look like, before you choose the tool. 160 00:07:26,774 --> 00:07:28,858 How do you want your tests to look? 161 00:07:28,858 --> 00:07:30,682 Who needs to read them? 162 00:07:30,682 --> 00:07:34,298 So take time to research and try out different approaches. 163 00:07:34,298 --> 00:07:37,516 We like the tools that encourage collaboration 164 00:07:37,516 --> 00:07:42,278 between the technical and the non-technical team members. 165 00:07:42,278 --> 00:07:44,503 Sometimes there's a framework that will work 166 00:07:44,503 --> 00:07:48,170 for the API layer and also for the UI layer 167 00:07:49,621 --> 00:07:53,418 by adding appropriate libraries or drivers. 168 00:07:53,418 --> 00:07:57,322 Experiment to find the approach that works for you. 169 00:07:57,322 --> 00:08:01,018 Let's go back to our Debbie the dog owner persona. 170 00:08:01,018 --> 00:08:02,378 Here's a story. 171 00:08:02,378 --> 00:08:05,722 As Debbie the dog owner, I can book an available time 172 00:08:05,722 --> 00:08:08,419 to get my dog groomed so I can plan my day. 173 00:08:08,419 --> 00:08:11,620 Let's look at some examples of how we can specify 174 00:08:11,620 --> 00:08:14,522 executable tests for this story. 175 00:08:14,522 --> 00:08:17,127 Using this given-when-then style, 176 00:08:17,127 --> 00:08:20,538 you can see you have a scenario, the expected behavior. 177 00:08:20,538 --> 00:08:22,362 The time slot is available. 178 00:08:22,362 --> 00:08:24,181 What you may not have realized 179 00:08:24,181 --> 00:08:26,341 is that this can be automated. 180 00:08:26,341 --> 00:08:29,141 These are English-readable tests. 
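As a rough illustration of how such a scenario can actually execute, here is a hand-rolled sketch in Python. A real tool such as Cucumber matches steps by pattern rather than exact text, and every name, time slot, and step phrase below is hypothetical.

```python
scenario = """\
Given the Tuesday schedule has an open time slot
When Debbie selects an available time slot
Then the booking is confirmed"""

# Minimal system under test and step definitions (all hypothetical).
schedule = {"Tue": ["10:00", "11:30"]}
booking = {}

def given_open_slot():
    assert schedule["Tue"], "no open slots"

def when_select_slot():
    booking["slot"] = schedule["Tue"].pop(0)

def then_booking_confirmed():
    assert booking.get("slot") == "10:00"

# A toy step mapper: each plain-English line runs its step definition.
steps = {
    "Given the Tuesday schedule has an open time slot": given_open_slot,
    "When Debbie selects an available time slot": when_select_slot,
    "Then the booking is confirmed": then_booking_confirmed,
}

for line in scenario.splitlines():
    steps[line]()
```

The scenario text stays readable by anyone on the team; only the step definitions underneath are code.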
181 00:08:29,141 --> 00:08:32,181 The test method or step definition 182 00:08:32,181 --> 00:08:34,797 might be what's in italics: 183 00:08:34,797 --> 00:08:37,162 "Select an available time slot." 184 00:08:37,162 --> 00:08:40,995 Behind that phrase we can create a test method 185 00:08:41,861 --> 00:08:43,402 that does all the work. 186 00:08:43,402 --> 00:08:46,560 This allows non-coders to specify tests, 187 00:08:46,560 --> 00:08:50,260 and the programmers support that test method, 188 00:08:50,260 --> 00:08:54,427 which translates this plain English language into test code, 189 00:08:55,363 --> 00:08:58,820 sending the inputs to the production code. 190 00:08:58,820 --> 00:09:01,620 The easy readability of these tests 191 00:09:01,620 --> 00:09:04,960 makes them excellent living documentation. 192 00:09:04,960 --> 00:09:08,936 If we look at the style for the misbehavior, 193 00:09:08,936 --> 00:09:10,602 it's a negative test. 194 00:09:10,602 --> 00:09:12,298 A "what if" test. 195 00:09:12,298 --> 00:09:14,661 If the story is coded for testability, 196 00:09:14,661 --> 00:09:17,338 we should be able to create automation 197 00:09:17,338 --> 00:09:19,578 that is maintainable as well. 198 00:09:19,578 --> 00:09:22,042 For example, the "select a time slot" step. 199 00:09:22,042 --> 00:09:24,741 Maybe we can reuse it for this test 200 00:09:24,741 --> 00:09:27,381 and just have different parameters. 201 00:09:27,381 --> 00:09:30,122 You want to think about keeping things 202 00:09:30,122 --> 00:09:31,898 as simple as possible. 203 00:09:31,898 --> 00:09:35,120 Testability, maintainability. 204 00:09:35,120 --> 00:09:37,781 If we look at the tabular style, 205 00:09:37,781 --> 00:09:40,183 at the top here we have two business rules. 206 00:09:40,183 --> 00:09:41,737 BR stands for business rule. 207 00:09:41,737 --> 00:09:45,140 Appointment hours, nine o'clock to four o'clock, 208 00:09:45,140 --> 00:09:46,958 Monday through Saturday. 
209 00:09:46,958 --> 00:09:49,460 Business rule, full clip, 90 minutes for toy 210 00:09:49,460 --> 00:09:50,778 and miniature sizes. 211 00:09:50,778 --> 00:09:52,581 So those are two business rules. 212 00:09:52,581 --> 00:09:54,400 What we want to think about 213 00:09:54,400 --> 00:09:57,221 is what a test method for this might look like. 214 00:09:57,221 --> 00:09:59,720 We have four inputs that we've identified: 215 00:09:59,720 --> 00:10:03,137 day, time slot, option, and size. 216 00:10:04,442 --> 00:10:07,119 If we can coordinate with our programmers 217 00:10:07,119 --> 00:10:09,080 and start talking about, 218 00:10:09,080 --> 00:10:11,061 is this what it should look like? 219 00:10:11,061 --> 00:10:13,578 Perhaps the programmer might come back and say, 220 00:10:13,578 --> 00:10:17,732 this isn't a great way to look at the time slot. 221 00:10:17,732 --> 00:10:22,218 We need a starting time, plus something else. 222 00:10:22,218 --> 00:10:25,335 You want to have that conversation. 223 00:10:25,335 --> 00:10:29,480 In this case, "can it be booked" is one of the outputs. 224 00:10:29,480 --> 00:10:31,760 It's an expected result. 225 00:10:31,760 --> 00:10:34,203 We also have what the message might be. 226 00:10:34,203 --> 00:10:39,098 Real examples like this help us think of more questions. 227 00:10:39,098 --> 00:10:42,549 Do we have enough information for the test? 228 00:10:42,549 --> 00:10:47,178 Perhaps we should have a calendar date versus just the day. 229 00:10:47,178 --> 00:10:49,040 Is that part of the story? 230 00:10:49,040 --> 00:10:50,362 What about holidays? 231 00:10:50,362 --> 00:10:52,137 Do we even consider that? 
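One way to sketch that table as executable, parameterized checks in plain Python. The rules encoded here, 9:00 to 4:00 Monday through Saturday and a 90-minute full clip for toy and miniature sizes, come from the transcript; the function name, the messages, and the fallback durations are all hypothetical assumptions.

```python
# Hypothetical production code enforcing the two business rules.
OPEN_DAYS = {"Mon", "Tue", "Wed", "Thu", "Fri", "Sat"}
FULL_CLIP_MINUTES = {"toy": 90, "miniature": 90}  # BR: 90-minute full clip

def can_book(day, start_hour, option, size):
    if day not in OPEN_DAYS:
        return False, "We are closed on that day"
    # Assumed durations for anything the business rules don't specify.
    minutes = FULL_CLIP_MINUTES.get(size, 120) if option == "full clip" else 60
    if start_hour < 9 or start_hour + minutes / 60 > 16:
        return False, "Outside appointment hours"
    return True, "Booked"

# The tabular tests: day, start, option, size -> can it be booked, message.
table = [
    ("Mon", 9,  "full clip", "toy",       True,  "Booked"),
    ("Sun", 10, "full clip", "toy",       False, "We are closed on that day"),
    ("Sat", 14, "full clip", "miniature", True,  "Booked"),
    ("Sat", 15, "full clip", "miniature", False, "Outside appointment hours"),
]
for day, start, option, size, ok, msg in table:
    assert can_book(day, start, option, size) == (ok, msg)
```

Notice how writing the rows forces exactly the questions from the narration: is a 3:00 start bookable if the clip runs past closing? What about Sundays?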
232 00:10:52,137 --> 00:10:55,920 So the conversation is where the magic happens, 233 00:10:55,920 --> 00:11:00,087 because this is how we build testability into our stories, 234 00:11:01,414 --> 00:11:03,728 but really watch for scope creep, 235 00:11:03,728 --> 00:11:05,882 because it's very easy to start adding 236 00:11:05,882 --> 00:11:07,482 and adding and adding. 237 00:11:07,482 --> 00:11:11,578 So think about it; use it as a base for testing. 238 00:11:11,578 --> 00:11:13,482 So remembering our pyramid. 239 00:11:13,482 --> 00:11:15,059 We've talked about unit tests, 240 00:11:15,059 --> 00:11:17,381 we've talked about tests through the API. 241 00:11:17,381 --> 00:11:20,298 Now let's go back to tests through the user interface. 242 00:11:20,298 --> 00:11:23,061 When we think about tests through the user interface, 243 00:11:23,061 --> 00:11:26,581 we usually mean workflow or end-to-end tests. 244 00:11:26,581 --> 00:11:29,840 These can be brittle because of all the moving parts. 245 00:11:29,840 --> 00:11:32,173 This example shows one test, 246 00:11:33,178 --> 00:11:35,781 but it could be two separate tests. 247 00:11:35,781 --> 00:11:37,760 There is no one right way. 248 00:11:37,760 --> 00:11:39,858 We aim for minimal coverage, 249 00:11:39,858 --> 00:11:41,760 so what we're trying to do 250 00:11:41,760 --> 00:11:44,880 is keep it to as few tests as possible. 251 00:11:44,880 --> 00:11:47,242 We check that the error message works 252 00:11:47,242 --> 00:11:50,716 and then we go back and check that the happy path works. 253 00:11:50,716 --> 00:11:54,240 Because we've tested all of those business rules 254 00:11:54,240 --> 00:11:55,999 at the API layer, 255 00:11:55,999 --> 00:11:58,420 we do not need to retest them here. 256 00:11:58,420 --> 00:12:01,882 We want to make sure the workflow works. 
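A minimal sketch of that thin workflow coverage, with a stub page object standing in for a real browser driver. The class, the slots, and both messages are hypothetical.

```python
# Stub page object; a real suite would drive a browser through a WebDriver.
class BookingPage:
    def __init__(self, taken=()):
        self.booked = set(taken)

    def book(self, slot):
        if slot in self.booked:
            return "Time slot no longer available"  # error-path message
        self.booked.add(slot)
        return "Your appointment is confirmed"      # happy-path message

# One workflow test: check the error message, then the happy path.
# Business-rule variations were already covered at the API layer,
# so the UI test only proves the end-to-end flow hangs together.
page = BookingPage(taken=["10:00"])
assert page.book("10:00") == "Time slot no longer available"
assert page.book("11:30") == "Your appointment is confirmed"
```

Keeping the assertions to one error path and one happy path is the "minimal coverage" the narration asks for.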
257 00:12:01,882 --> 00:12:04,843 Remember when automating end-to-end tests 258 00:12:04,843 --> 00:12:09,325 that the tests through the user interface are slow. 259 00:12:09,325 --> 00:12:12,840 Tests that touch the database are slow. 260 00:12:12,840 --> 00:12:16,960 So think about minimum feature coverage. 261 00:12:16,960 --> 00:12:20,202 But there are other types of tests 262 00:12:20,202 --> 00:12:22,421 that happen at the UI level. 263 00:12:22,421 --> 00:12:25,998 For example, how the page renders, 264 00:12:25,998 --> 00:12:30,101 maybe the visuals, are the icons placed correctly? 265 00:12:30,101 --> 00:12:33,807 Do the footers overlay the text? 266 00:12:33,807 --> 00:12:37,322 You can use visual diffing tools. 267 00:12:37,322 --> 00:12:40,485 They can let you see the changes more easily, 268 00:12:40,485 --> 00:12:42,666 but you also might want to test them 269 00:12:42,666 --> 00:12:44,725 using exploratory testing methods, 270 00:12:44,725 --> 00:12:48,528 using visual ways of looking at the screen 271 00:12:48,528 --> 00:12:51,063 instead of trying to automate. 272 00:12:51,063 --> 00:12:54,225 Be selective about what you're automating.