In the last lecture, we created this cluster on GKE, or Google Kubernetes Engine. This time, let's connect to it and navigate around. The most intuitive option seems to be pressing that Connect button. Let's do it.

When we click on Connect, Google shows us a command to run in Cloud Shell. Cloud Shell is a CLI provided by Google for running all sorts of commands. You can think of Cloud Shell as SSH access to a VM that already has the gcloud command line set up for us. Without further delay, let's click on Run in Cloud Shell.

And our Cloud Shell has opened. Let's resize it a bit while it's connecting to make it look prettier. And there we go, Google welcomes us to our Cloud Shell, and they are friendly enough to print that command on the terminal as well. All we have to do is press Enter. But before we do that, let's try to comprehend this command. It says that we are getting the credentials of a cluster named k8s-cluster, from project rapid-being-218812, in the europe-north1-a zone. In a nutshell, it grants the VM hosting the Cloud Shell access to our k8s-cluster. There we go. Now we should be able to run the kubectl command line. Let's run kubectl get nodes.
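For reference, the command the Connect button generates looks roughly like the sketch below; the cluster name, project ID, and zone are the ones shown in this lecture, so substitute your own if you are following along on a different project:

```shell
# Fetch credentials for the GKE cluster and merge them into ~/.kube/config,
# so kubectl on this machine (here, the Cloud Shell VM) can reach the cluster.
# The cluster, zone, and project values are the ones from the lecture.
gcloud container clusters get-credentials k8s-cluster \
    --zone europe-north1-a \
    --project rapid-being-218812

# Sanity check: list the worker nodes of the cluster.
kubectl get nodes
```

This only writes local kubeconfig entries; it does not change anything on the cluster itself.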
Depending on your network connectivity, the zone or region you have chosen, or the load on Google Cloud itself, the speed of the operation may vary a bit, but you will definitely get the results. And here we go: here is the list of all three nodes, which we had seen in the previous lecture as well. It looks more or less like the output from the cluster we had bootstrapped ourselves, but there is a little difference.

Check out the ROLES column. None of the nodes has the master role. Why is that? Well, we have not bootstrapped this cluster; we have just provisioned it. Google has bootstrapped it, and it is allowing us to use it as a hosted, or managed, Kubernetes cluster. So the master is managed by Google. What is the IP address of the master, what is the VM name of the master, what is its size, what is its architecture? We know nothing about it. All we know is that it is running Kubernetes version 1.9.7, because we said so while creating the cluster. This doesn't just add another layer of reliability and security; it also saves us from handling the taints on the master which block pods from being scheduled on it.
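To make the ROLES difference concrete, here is a small sketch that scans a captured kubectl get nodes listing for a master or control-plane role. The node names below are made up for illustration; on GKE, every node shows <none> in that column, so the count comes out as zero:

```shell
# Hypothetical sample of "kubectl get nodes" output on a GKE cluster;
# on a self-bootstrapped (e.g. kubeadm) cluster, one node would say "master".
cat > nodes.txt <<'EOF'
NAME                        STATUS   ROLES    AGE   VERSION
gke-k8s-cluster-pool-aaaa   Ready    <none>   5m    v1.9.7-gke.6
gke-k8s-cluster-pool-bbbb   Ready    <none>   5m    v1.9.7-gke.6
gke-k8s-cluster-pool-cccc   Ready    <none>   5m    v1.9.7-gke.6
EOF

# Count nodes whose ROLES column mentions master or control-plane.
masters=$(awk 'NR > 1 && $3 ~ /master|control-plane/' nodes.txt | wc -l)
echo "master nodes visible: $masters"
```

The ROLES column is just a rendering of node labels of the form node-role.kubernetes.io/&lt;role&gt;; GKE worker nodes simply don't carry one.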
Let's run kubectl get pods. And as expected, no resources found. Moving further, let's list pods from all namespaces. And here we get a long list again, but this time the pods are not the same.

All of these pods are on the node instances, and none of the master's pods are available here. Can you find kube-apiserver, kube-controller-manager, or even kube-scheduler? None of them is here, because the master is completely out of reach. Instead, what we do have is kube-proxy for each of our nodes, a preconfigured kubernetes-dashboard, DNS, fluentd for logging, and heapster for monitoring our Kubernetes cluster. It feels like an entirely different cluster from the one we had bootstrapped ourselves, which it is, on the backend at least. But on the frontend, we'll be using the kubectl command line just the way we used it on our previous cluster. So no worries there. In the next lecture, we'll create an application on this GKE cluster.
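As a sketch of what that difference looks like, the snippet below greps a condensed, made-up sample of the kube-system pod list for control-plane components; on a GKE cluster the search comes up empty, because those pods run on Google's side:

```shell
# Condensed, hypothetical sample of "kubectl get pods --all-namespaces"
# on GKE; the real list is longer, but control-plane pods are never in it.
cat > pods.txt <<'EOF'
NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
kube-system   kube-proxy-gke-node-aaaa     1/1     Running   0          6m
kube-system   kube-dns-5d4f6bcf9b-xxxxx    4/4     Running   0          6m
kube-system   fluentd-gcp-v3.1.0-yyyyy     2/2     Running   0          6m
kube-system   heapster-v1.5.2-zzzzz        3/3     Running   0          6m
kube-system   kubernetes-dashboard-wwwww   1/1     Running   0          6m
EOF

# Search for the control-plane components; grep finds nothing, so the
# fallback message is printed instead.
grep -E 'kube-(apiserver|scheduler|controller-manager)' pods.txt \
    || echo "no control-plane pods visible - managed by Google"
```

On the self-bootstrapped cluster from earlier lectures, the same grep would have matched one pod per control-plane component.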