- Now let's review an introduction to Amazon Elastic Container Service for Kubernetes, or EKS. With Amazon EKS, we gain a fully managed control plane for Kubernetes, and so AWS will create and manage the master nodes that make up that control plane. Keep in mind that with Kubernetes, you have all of the Kubernetes-specific software, and you also have etcd. The Kubernetes control plane relies on etcd, which is a distributed key-value store, and so it's really important to maintain high availability of that service and durability of the data, because the values in etcd are backed to disk. And so, while AWS creates and manages those master nodes, ensuring constant durability and high availability for the Kubernetes control plane, we, the customer, will manage the worker nodes. We will take a look at a diagram on this here in just a moment. EKS is certified to be Kubernetes conformant, so it is compatible with applications designed for standard Kubernetes. It essentially is standard Kubernetes, except that AWS is managing the master nodes for us.
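As a concrete illustration of that split in responsibility, here is a minimal eksctl cluster configuration sketch; the cluster name, region, and node group values are placeholder assumptions. Notice that only the worker nodes are described, because the control plane, including etcd, is created and managed by AWS:

```yaml
# Minimal eksctl ClusterConfig sketch (name, region, and sizes are placeholders).
# Only the worker node group is declared here; the control plane that these
# nodes register with is created and managed entirely by AWS.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-cluster        # hypothetical cluster name
  region: us-east-1         # hypothetical region

nodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 3
```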
So, let's take a look here at a diagram showing the architecture. What you can notice here is that as the customer, we have our own VPC. We create our own VPC, and in that VPC, we are then responsible for the worker nodes. AWS is creating another VPC specifically for the master nodes. AWS is managing those master nodes; we know that they are there, but they are pretty well opaque to us. We don't really get to see exactly what's going on behind the scenes; we're just trusting that AWS is managing the Kubernetes master nodes in a highly available, fault-tolerant, secure, and durable way. And as far as the worker nodes go, these instances here are up to us to launch and put in place.
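To make the two-VPC picture concrete, here is a sketch of how existing customer subnets could be supplied in an eksctl configuration; the subnet IDs and names are hypothetical placeholders. The worker nodes land in these customer-owned subnets, while the master nodes live in an AWS-owned VPC we never see:

```yaml
# Sketch of placing worker nodes into an existing customer VPC
# (subnet IDs below are hypothetical). The master nodes run in a
# separate, AWS-managed VPC that is opaque to the customer.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-cluster
  region: us-east-1

vpc:
  subnets:
    private:
      us-east-1a: { id: subnet-0aaa1111aaaa1111a }   # placeholder ID
      us-east-1b: { id: subnet-0bbb2222bbbb2222b }   # placeholder ID

nodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 2
    privateNetworking: true
```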
AWS does provide a machine image that includes the Kubernetes software and allows us to connect the worker nodes to the master nodes. And as you can see here, these worker nodes will all communicate with the master nodes once they are up and running, but it is our responsibility to ensure that we're using the right machine image for the worker nodes and that we are using the right security groups. And then of course, we do have complete freedom to use just about any EC2 instance we feel is appropriate. We could use c5.larges, we could use m5.8xlarges, we could use r5 machines, and we could put a blend of different machines in as worker nodes, if our applications have the ability to run on different types of machines. So what we do with our worker nodes is totally up to us, but they are our responsibility.
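The blend of instance types just mentioned could be sketched as multiple node groups in the same cluster configuration; the types, sizes, and counts here are illustrative assumptions, not recommendations:

```yaml
# Sketch of mixing instance types across worker node groups
# (types and counts are illustrative, not recommendations).
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-cluster
  region: us-east-1

nodeGroups:
  - name: general-workers
    instanceType: c5.large
    desiredCapacity: 2
  - name: large-workers
    instanceType: m5.8xlarge
    desiredCapacity: 1
  - name: memory-workers
    instanceType: r5.large
    desiredCapacity: 2
```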
And then of course, as we deploy containers to Kubernetes, those containers will be launched and run here on the worker nodes. And then, if we are running any kind of application that's accessible over the internet, such as APIs or WebSocket applications, our users will come in through the internet gateway and through a load balancer, and the load balancer can load balance to the containers running on these worker nodes. Right, so that's EKS, the Elastic Container Service for Kubernetes, giving us a fully managed Kubernetes control plane.
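The traffic flow described here, users coming in through a load balancer to containers on the worker nodes, can be sketched with standard Kubernetes manifests; the names and image below are hypothetical:

```yaml
# Sketch of exposing containers on the worker nodes through a
# load balancer (names and image are hypothetical).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
        - name: web-api
          image: example/web-api:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-api
spec:
  type: LoadBalancer   # fronts the pods with a cloud load balancer
  selector:
    app: web-api
  ports:
    - port: 80
      targetPort: 8080
```

On EKS, a Service of type LoadBalancer is typically fulfilled by an AWS load balancer that forwards traffic to the matching pods on the worker nodes.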