Now let's review an introduction to Amazon ElastiCache.

With Amazon ElastiCache, we gain access to a fully managed, in-memory cache. Now, with in-memory caching, a classic example of this type of use case would be: let's say that you had a relational database where certain queries were running slow. And by slow, I mean maybe upwards of one second. And even though you may have indexed the tables by the book and you're querying them by the book, there's nothing more that you can do, and I've seen this plenty of times, where just the nature of the data means that the query is running slow. And so the idea there is that you take the result set from the database, and you store that result set in memory. And then when you have a fleet, let's say a fleet of dozens or hundreds of web servers, you don't want each one of them storing those things in its own memory; you want the same result set available to all of them. And so you store that result set in memory someplace central, on caching servers specifically designed for in-memory caching.
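The pattern described above is commonly called cache-aside. Here is a minimal sketch of it, assuming a plain Python dict stands in for the central cache (in a real application this would be a Redis or Memcached client); the names `slow_query`, `cache`, and `get_report` are hypothetical.

```python
import time

# A plain dict stands in for the central ElastiCache cluster here; in a
# real fleet, every web server would talk to the same remote cache.
cache = {}

def slow_query(report_id):
    """Stand-in for a relational database query that runs slow."""
    time.sleep(0.01)  # shortened delay for the example
    return {"report_id": report_id, "rows": [1, 2, 3]}

def get_report(report_id):
    """Cache-aside: check the cache first, fall back to the database."""
    key = f"report:{report_id}"
    if key in cache:
        return cache[key]            # cache hit: no database round trip
    result = slow_query(report_id)   # cache miss: run the slow query once
    cache[key] = result              # store the result set for everyone
    return result
```

The key point is that only the first request pays the cost of the slow query; every subsequent request, from any server sharing the cache, is served from memory.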
And so, very much like RDS, ElastiCache gives us that fully managed platform for running that in-memory cache. And very much like RDS, we are offloading the operational burdens of the operating system and the cache engine to the ElastiCache service. AWS then becomes responsible for applying patches to the operating system as well as performing updates to the actual engine. And minor updates can be performed automatically; most of the time, those can be done in place during a maintenance window. ElastiCache also supports clustering, so that you can add numerous nodes to an existing cluster in order to gain access to larger sets of memory, larger than any one server could actually handle. In the case of ElastiCache, we can gain a terabyte's worth of in-memory storage. And with ElastiCache, we do get a couple of different software options: we have the ability to run Redis or Memcached. Redis is more of an in-memory database, because we gain access to multiple data types, like hashes, sets, and sorted sets, and we can also do atomic operations.
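To make "atomic operations" concrete: Redis performs commands like INCR and DECR as single indivisible steps inside the server, so concurrent clients never see a half-applied update. This sketch emulates that guarantee in-process with a lock; the `AtomicCounter` class is illustrative, not a real client API.

```python
import threading

class AtomicCounter:
    """Emulates the atomicity of Redis INCRBY/DECRBY with a local lock."""

    def __init__(self):
        self._values = {}
        self._lock = threading.Lock()

    def incr(self, key, amount=1):
        # Read-modify-write as one indivisible step, like Redis INCRBY;
        # Redis provides this guarantee server-side with no client lock.
        with self._lock:
            self._values[key] = self._values.get(key, 0) + amount
            return self._values[key]

    def decr(self, key, amount=1):
        return self.incr(key, -amount)

counter = AtomicCounter()
for _ in range(100):
    counter.incr("page:views")  # ends at 100, even under concurrency
```

With a shared cache, this is what lets dozens of web servers safely increment the same counter without losing updates.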
So we can do in-place updates, increments and decrements, and in-memory sorting, and a number of different operations that make Redis a really, really powerful in-memory cache. Well, an in-memory database, really; it's more than just a cache. Redis is also backed to disk. Every so often, and of course this is configurable, the contents of memory can be written to disk, so that if there were some type of an issue or an error with that underlying host, you could recover by either replacing the host or rebooting it, and allowing memory to be re-populated from disk. And because Redis is backed to disk, it also gives us the ability to automate backups of that disk, along with, like I mentioned before, automatic patching of the operating system. Now we also have Memcached. Memcached is very popular and very performant, but it is a simple in-memory cache. It doesn't have any kind of database functionality like Redis does; it's a simple key/value store. Key equals value, that's all you get.
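The snapshot-to-disk behavior described for Redis can be sketched in miniature: periodically write the in-memory contents to a file, then re-populate memory from that file after a reboot. This is a heavily simplified stand-in for Redis RDB snapshotting, and the function and file names are illustrative.

```python
import json
import os
import tempfile

def snapshot(data, path):
    """Write the in-memory contents to disk (like a Redis RDB save)."""
    with open(path, "w") as f:
        json.dump(data, f)

def recover(path):
    """Re-populate memory from the last snapshot after a host restart."""
    if not os.path.exists(path):
        return {}  # no snapshot yet: start with an empty cache
    with open(path) as f:
        return json.load(f)

cache = {"session:42": "alice", "hits": 7}
path = os.path.join(tempfile.gettempdir(), "cache_snapshot.json")
snapshot(cache, path)        # the periodic, configurable save
restored = recover(path)     # what a rebooted node would load
```

Anything written after the last snapshot would still be lost on failure, which is why the snapshot interval is configurable: it is a trade-off between durability and write overhead.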
You don't gain access to data types like hashes or sets or anything beyond simple key equals value. And of course, Memcached is not backed to disk. So if a node were to fail, anything in its memory would be completely gone; there's no way to recover that from Memcached. But we still get automated patching of the operating system and automated minor updates to the engine itself. So again, keep in mind that if you have scenarios in which you have certain types of data, and again, it could be a result set from a relational database query, or it could be rendered HTML, maybe there are pages within your web application that take longer to render, whatever that data is, you can take it and store it in memory in order to deliver it to your end users much faster than waiting on the database to run the query or waiting on your web server to render that HTML.
Now, in the case of something like Redis, where you have in-memory sorting and atomic operations, there is also the ability, instead of relying on a relational database or a NoSQL data store where everything is going to disk directly, to take advantage of the fact that everything is happening directly in memory, which means that you could potentially see microsecond latency. So with something like Redis, even though you might not be caching a result set, you could simply be storing a data set in memory so that the functionality is much, much faster than it would be with other data stores. So keep in mind Amazon ElastiCache for doing things in memory.
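A classic example of storing a working data set in memory rather than caching query results is a leaderboard built on Redis sorted sets, where members stay ordered by score entirely in memory. This sketch emulates the idea with a plain dict; the `zadd` and `top` names echo the Redis ZADD and ZRANGE commands but are illustrative, not a real client API.

```python
# member -> score; Redis keeps this sorted server-side in a sorted set,
# so rank queries never touch disk.
scores = {}

def zadd(member, score):
    """Upsert a member's score, like the Redis ZADD command."""
    scores[member] = score

def top(n):
    """Return the n highest-scoring members, best first (like ZRANGE by rank)."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [member for member, _ in ranked[:n]]

zadd("alice", 120)
zadd("bob", 95)
zadd("carol", 140)
```

Because both the writes and the rank query operate purely on in-memory structures, this is the kind of workload where an in-memory data store can deliver latencies far below what a disk-backed database offers.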