When you specify a Pod, you can optionally specify how much CPU and memory (RAM) each container needs. When containers have resource requests specified, the scheduler can make better decisions about which nodes to place Pods on. And when containers have their limits specified, the kubelet can enforce those limits so that containers don't exhaust a node's resources. Let's start by getting a list of Pods, and then open the file resource-pod.yaml. And there we go, the file looks larger than the previous Pod YAMLs that we have used. But don't worry: instead of one container, we have two this time. One is a MySQL database container, whereas the other is a frontend WordPress container. The Pod's name is frontend. First of all, let's go through the obvious things, like the names of the containers, the images being used, the environment variable setup, and the metadata of the Pod. Once all of those are out of the way, we have the resources field in both of the containers. This field is used to specify, per container, limits and requests for two resources: memory and CPU.
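The manifest itself isn't reproduced in this transcript; here is a minimal sketch of what resource-pod.yaml might look like, assuming hypothetical image tags and a placeholder environment value:

```yaml
# Hypothetical sketch of resource-pod.yaml: a two-container Pod
# with memory requests and limits set on each container.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql:8.0              # image tag is an assumption
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "changeme"           # placeholder value
    resources:
      requests:
        memory: "64Mi"            # request: 64 megabytes
      limits:
        memory: "128Mi"           # limit: 128 megabytes
  - name: wp
    image: wordpress:latest       # image tag is an assumption
    resources:
      requests:
        memory: "64Mi"
      limits:
        memory: "128Mi"
```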
As you can see, we have provided a fairly small amount of resources to both of the containers: the memory limit is 128 megabytes, and the memory request is just 64 megabytes. Let's see what happens when we try to create such a Pod. Let's save and exit this file. As usual, run the kubectl create -f command followed by the file name, and the Pod is created. Let's list the Pods. Hmm, it seems like the Pod is still in the ContainerCreating state. Let's give it a bit of time. Well, it seems like the containers are still being created, or in other words, they have not been created yet. Why is that? Let's take a look at the description. Alright, so the Pod is not ready because the containers are still being created. As you can see, our Pod is following the resource limitations quite strictly. Let's list the Pods again. Hmm, only one out of two containers is ready, and the Pod is in CrashLoopBackOff status. Let's see what the problem is here. When we run the kubectl describe command again, we can clearly see that the state of the database container is Terminated, and the reason for that is OOMKilled, which stands for Out Of Memory Killed. Troubleshooting this isn't very difficult.
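The sequence of commands above can be sketched as the following shell session; the output shown is abbreviated and illustrative, and the exact RESTARTS and AGE values will differ on your cluster:

```shell
# Create the Pod from the manifest, then check its status.
kubectl create -f resource-pod.yaml
kubectl get pods
# NAME       READY   STATUS             RESTARTS   AGE
# frontend   1/2     CrashLoopBackOff   2          1m

# Describe the Pod; the database container's state explains the failure.
kubectl describe pod frontend
# ...
#     State:      Terminated
#       Reason:   OOMKilled
# ...
```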
It clearly suggests that the resource limits we have provided are just not sufficient for this container to run. Whereas on the other hand, the WordPress container is running properly. Even when we look at the events, all of the events regarding the WordPress container seem to have gone well. But in the case of the MySQL database container, the image was pulled successfully, but the container could not keep running because the resources were just not enough. And if you notice, both of these containers are scheduled on the same node, because they are in the same Pod. So when we run more than one container in a Pod, they will all be scheduled on the same node. But let's not get distracted from our main objective: we need to figure out a way to make sure that both of these containers run smoothly in this Pod. For now, let's delete our frontend Pod using the kubectl delete pods command followed by the name of the Pod. There can be one or more Pods that we want to delete, but in this case, we just want to delete frontend, and it seems to be deleted. Let's get back to the YAML file of our frontend Pod and increase the resource limits for our containers.
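Deleting the Pod by name looks like this; as noted, several names could be passed to delete more than one Pod at once:

```shell
# Delete a single Pod by name.
kubectl delete pods frontend
# pod "frontend" deleted

# Multiple Pods can be deleted in one command, e.g.:
#   kubectl delete pods frontend backend
```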
Instead of 128 megabytes, we are changing it to 1 gigabyte. And while we're at it, let's do the same with the WordPress container as well. Let's save the file, exit nano, and try to create the Pod again. And when we list the Pods, voila! It didn't even take 11 seconds, and our Pod, along with both of its containers, is in the Running state. When we describe it using kubectl describe, we can clearly see that the resource limits have changed. All of the events regarding both of the containers of our Pod went smoothly.
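The edit to each container's resources block would look roughly like this; keeping the request at 64 megabytes is an assumption, since only the limit change is mentioned:

```yaml
# Memory limit raised for each container: 128Mi -> 1Gi.
resources:
  requests:
    memory: "64Mi"   # unchanged (assumed)
  limits:
    memory: "1Gi"    # was 128Mi
```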