Hello and welcome to this lecture on Kubernetes pods.

Before we head into understanding pods, we would like to assume that the following have been set up already. At this point, we assume that the application is already developed and built into Docker images, and that it is available on a Docker repository like Docker Hub, so Kubernetes can pull it down. We also assume that the Kubernetes cluster has already been set up and is working. This could be a single-node setup or a multi-node setup; it doesn't matter, as long as all the services are in a running state.

As we discussed before, with Kubernetes our ultimate aim is to deploy our application, in the form of containers, on a set of machines that are configured as worker nodes in a cluster. However, Kubernetes does not deploy containers directly on the worker nodes. The containers are encapsulated into a Kubernetes object known as a pod. A pod is a single instance of an application. A pod is the smallest object that you can create in Kubernetes.

Here we see the simplest of cases: a single-node Kubernetes cluster with a single instance of your application running in a single Docker container, encapsulated in a pod.
What if the number of users accessing your application increases and you need to scale your application? You need to add additional instances of your web application to share the load. Now, where would you spin up those additional instances? Do we bring up a new container instance within the same pod? No. We create a new pod altogether, with a new instance of the same application. As you can see, we now have two instances of our web application running in two separate pods on the same Kubernetes system, or node.

What if the user base increases further and your current node does not have sufficient capacity? Well, then you can always deploy additional pods on a new node in the cluster. You will have a new node added to the cluster to expand the cluster's physical capacity.

So what I'm trying to illustrate in this slide is that pods usually have a one-to-one relationship with the containers running your application. To scale up, you create new pods, and to scale down, you delete existing pods. You do not add additional containers to an existing pod to scale your application. Also, if you're wondering how we implement all of this and how we achieve load balancing between the containers, and so on, we will get into all of that in a later lecture. For now, we are only trying to understand the basic concepts.
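As a rough sketch of what scaling by adding pods could look like in practice (pod definition files are covered properly later on; the pod names and image here are placeholders, not from this lecture), scaling up amounts to creating a second pod of the same image under a new name, and scaling down amounts to deleting one of the pods:

```yaml
# Hypothetical sketch: two pods running the same application image.
# Scaling up = create another pod like these; scaling down = delete one.
apiVersion: v1
kind: Pod
metadata:
  name: my-web-app-1      # placeholder name for illustration
spec:
  containers:
    - name: my-web-app
      image: my-web-app   # placeholder image name
---
apiVersion: v1
kind: Pod
metadata:
  name: my-web-app-2      # a second pod: same image, different name
spec:
  containers:
    - name: my-web-app
      image: my-web-app
```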
We just said that pods usually have a one-to-one relationship with the containers, but are we restricted to having a single container in a single pod? No, a single pod can have multiple containers; it's just that they are usually not multiple containers of the same kind. As we discussed on the previous slide, if our intention was to scale our application, we would create additional pods instead.

But sometimes you might have a scenario where a helper container performs some kind of supporting task for your web application, such as processing user-entered data or processing a file uploaded by the user, and you want these helper containers to live alongside your application container. In that case, you can have both of these containers be part of the same pod, so that when a new application container is created, the helper is also created, and when it dies, the helper also dies. Since they are part of the same pod, the two containers can also communicate with each other directly by referring to each other as localhost, because they share the same network space. They can easily share the same storage space as well.
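A multi-container pod of this kind could be sketched in a pod definition file like the one below (the container and image names are hypothetical, chosen only to illustrate the app-plus-helper pattern described above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-web-app        # hypothetical name for illustration
spec:
  containers:
    - name: web-app
      image: my-web-app   # the main application container
    - name: helper
      image: my-helper    # the supporting helper container
# Both containers share the pod's network namespace, so the web-app
# can reach the helper at localhost, and both share the pod's fate:
# they are created together and destroyed together.
```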
If you still have doubts about this topic, I would understand, because I did the first time I learned these concepts. So let's take another shot at understanding pods, from a different angle.

Let's, for a moment, keep Kubernetes out of our discussion and talk about simple Docker containers. Let's assume we were developing a process or a script to deploy our application on a Docker host. We would first simply deploy our application using a simple `docker run python-app` command, and the application runs fine and our users are able to access it. When the load increases, we deploy more instances of our application by running the `docker run` command many more times. This works fine, and we are all happy.

Now, sometime in the future, our application is further developed, undergoes architectural changes, and grows more complex. We now have a new helper container that helps our web application by processing or fetching data from elsewhere. These helper containers maintain a one-to-one relationship with our application containers, and thus need to communicate with the application containers directly and access data from those containers. For this, we would need to maintain a map of which app and helper containers are connected to each other.
We would need to establish network connectivity between these containers ourselves, using links and custom networks. We would need to create shareable volumes, share them among the containers, and maintain a map of that as well. And most importantly, we would need to monitor the state of the application container and, when it dies, manually kill the helper container too, as it is no longer required. When a new application container is deployed, we would need to deploy the new helper container as well.

With pods, Kubernetes does all of this for us automatically. We just need to define which containers a pod consists of, and the containers in a pod will by default have access to the same storage, the same network namespace, and the same fate, meaning they will be created together and destroyed together.

Even if our application didn't happen to be so complex and we could live with a single container, Kubernetes still requires you to create a pod. But this is good in the long run, as your application is now equipped for architectural changes and scale in the future. However, also note that multi-container pods are a rare use case, and we are going to stick to a single container per pod in this course. Let us now look at how to deploy pods.
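To make the "same storage" part concrete, one way the shared storage could be expressed in a pod definition is with a volume mounted into both containers. This is a minimal sketch, assuming the same hypothetical image names as before and an `emptyDir` scratch volume, not something shown in this lecture:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-web-app
spec:
  volumes:
    - name: shared-data
      emptyDir: {}           # scratch volume that lives as long as the pod
  containers:
    - name: web-app
      image: my-web-app      # hypothetical application image
      volumeMounts:
        - name: shared-data
          mountPath: /data   # the app writes files here
    - name: helper
      image: my-helper       # hypothetical helper image
      volumeMounts:
        - name: shared-data
          mountPath: /data   # the helper reads the same files
```

Compare this with the manual Docker approach described above: there is no bookkeeping of links, networks, or which helper belongs to which app; the pod boundary expresses all of it.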
Earlier, we learned about the `kubectl run` command. What this command really does is deploy a Docker container by creating a pod. It first creates a pod automatically, and then deploys an instance of the nginx Docker image in it. But where does it get the application image from? For that, you need to specify the image name using the `--image` parameter. The application image, in this case the nginx image, is downloaded from the Docker Hub repository. Docker Hub, as we discussed, is a public repository where the latest Docker images of various applications are stored. You could configure Kubernetes to pull the image from the public Docker Hub or from a private repository within the organization.

Now that we have a pod created, how do we see the list of pods available? The `kubectl get pods` command helps us see the list of pods in our cluster. In this case, we see the pod is in a ContainerCreating state, and it soon changes to a Running state once it is actually running.

Also, remember that we haven't yet talked about how a user can access the nginx web server, so in the current state we haven't made the web server accessible to external users. You can, however, access it internally from the node.
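For reference, the imperative command just described, `kubectl run nginx --image=nginx`, corresponds roughly to creating a pod from a definition file like the following. Definition files are covered later, so treat this as a preview sketch rather than part of this lecture:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx   # pulled from Docker Hub by default
```

Either way, running `kubectl get pods` afterwards lists the pod, first in the ContainerCreating state and then in Running.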
For now, we will just see how to deploy a pod, and in a later lecture, once we learn about networking and services, we will see how to make this service accessible to end users.

That's it for this lecture. We will now head over to a demo, and I will see you in the next lecture.