So let's take a look at that wait for task pattern in a much more practical way, around Goroutine pooling. Now I do want you to hesitate when creating pools of Goroutines, because as I told you, the Go scheduler is very intelligent, and those Ps are kind of like pools of Goroutines already. So we don't have to worry about maintaining our own pools of Goroutines, though there may be times when you have a limited resource and you need to limit access to it through a pool. All right, so let's go through this code here. Right out of the box on line 95, we're saying we want to have guarantees and we're signaling with string data. Pooling, the whole idea, is Goroutines waiting for work to do. So the work is gonna be string based, and we want guarantees. You absolutely want guarantees with pooling, because later on we want to be able to apply deadlines or timeouts when a pool is, let's say, under load and not responding fast enough. You can't do that with buffered channels. Now look at what we do here: I've got a loop of two, and we're gonna create two Goroutines in our pool.
So what we can imagine is, I'm gonna create this pool of Goroutines right here. There it is, and we're gonna end up with two paths of execution in the pool, Goroutine 1 and Goroutine 2, and they're both gonna be sitting here in this waiting state. What makes them wait? What makes them wait is the for range. I want you to look at what we're doing: we are ranging over the channel. When you range over a channel, you are basically in a channel receive. So I've got two Goroutines blocked on the same ch channel. We're using closures there, by the way, and we are now blocked in a channel receive. Now order better not matter, because once data comes into the channel, the scheduler can choose any Goroutine that it wants to do the work. Okay, how does the for range terminate? Well, that's gonna be through a signaling change: we're gonna go from open to closed, and that will terminate the loop. I'll show you that. So look what I've got here: two Goroutines in my pool. They're both blocked on the channel receive right here, and now, with the pool in place, we go off right there on line 108.
So here we are, right? We're on line 108 right here. Here's our main Goroutine. This is where we were, this is where we created that pool: we created the pool of two Goroutines right there on line 98, all on line 98 right there, and now we come down and we get to line 108. Now on line 108, we end up in this work loop, where what we're gonna do is pass work into the pool, ten pieces of work, and that's why you see on line 109 that we're sending the work into the channel. Remember, a channel send looks like this: channel, send that data right here. Now, in order for the send to complete, we need a corresponding receive, and again, the scheduler now has to choose a Goroutine. Which Goroutine is the scheduler gonna choose? I absolutely have no idea; when all things are equal, it is non-deterministic. So let's just say on the first iteration, this send right here ends up binding to that receive. Now I've got a send and a receive coming together. This Goroutine gets the data and starts doing work. Then what happens?
Then we iterate and we do it again. Maybe on the next channel send of data, because that first Goroutine is busy, the scheduler says, okay, I'm gonna allow this one to bind to this receive. Now that work is getting done too, right? Then, on the third piece of work, here we go again: we've got the data, but this time I don't have a single Goroutine to perform the receive. We have guarantees, so now this send is blocked. How long is it going to block? It's unknown. With the guarantees, we have to wait for one of these Goroutines to finish its work, come back in, and say, I'm ready to do more work. Then the scheduler can come in and go, okay, brilliant, I finally have a Goroutine for you. Let's get going. So in this particular case, we're gonna try to pass ten pieces of work to the pool, but with only two Goroutines, this main Goroutine is going to have some latency. There's definitely gonna be some signaling, sending latency here, depending on how fast these Goroutines complete the work.
If we want to reduce that latency, we've gotta add more Goroutines to the pool, but you can see here what we're doing. The reason we want the guarantee is that later on, in code, we could put a timeout here. We might say, hey, you've got one second, that's it, one second for this send to complete, and if not, we're gonna move on. That's a way of being able to say, hey, you know what, we might be under load and we don't want to wait. When you're running multi-threaded software, you've got to deal with back pressure and latency, and one way to deal with back pressure and latency is timeouts. Timeouts are everything, and so we may eventually want to put timeouts on these channel sends to make sure that we're not blocking here forever, depending on what happens in the pool. So this is how you'd go in to do that. On line 113, I'm simulating a program shutting down. If you notice, on this channel we're signaling both with data and without data. While the program is running, we're signaling with data, pushing data into the pool.
Eventually we want to shut the program down. We call close on line 113, and that causes these receives that are based on the for range to terminate; then these Goroutines can eventually terminate, and we can shut our program down. So, a very cool pattern for pooling. This is your base pattern mechanics. You may want to add to this one day, adding some sort of program counters around metrics, around back pressure; you can do that kind of stuff. This is the base pooling pattern leveraging the wait for task.