DevOps Topeaks

#20 - S3: Simple Storage Service

April 19, 2023 Omer & Meir Season 1 Episode 20

In this episode we discussed S3, which is not all that "simple"!
Policies, web hosting, tiering, smart tiering, Glacier, Cloudfront, indexing and MORE!

Links and things mentioned: 

  • S3 search and access throughput with prefixes: https://docs.aws.amazon.com/AmazonS3/latest/userguide/optimizing-performance.html
  • URL shortener with S3 redirect function
  • https://github.com/ducaale/xh
  • https://thebrowser.company/
  • https://brave.com/
  • https://minbrowser.org/

Meir's blog: https://meirg.co.il
Omer's blog: https://omerxx.com
Telegram channel: https://t.me/espressops



Okay, we're ready! And action! Hello everyone and welcome to the 20th episode! Yes, the 20th, if you can believe it. And today we are going to talk about S3! Ooh! Okay, which is Amazon's Simple Storage Service! Yes! So, as always, Omer is here with me. Thanks for coming! Of course. I just had to say something about S3. I listened to a podcast, it was two years ago, on April Fools. They released an episode on April 1st with all these obviously fictional topics, and one of them was that AWS is going to release S4 next year. It created hype and people went crazy: what's going on with S4? What's the upgrade? What's the feature? And then they said, no guys, it's April 1st. It was kind of obvious, but nope! Nice one. Okay, getting back to our topic. Let's do it. So, Omer, today we are going to talk about S3, and the first question is, as always: what's the first thing that comes to your mind when I say S3? AWS S3. Okay. You probably saw lots of stuff running through my mind as you were asking. So S3, as the name says, is the simplest service on AWS. I'm probably making other people angry by saying that, but it is. It's a very simple service. But as simple as it is, it's as powerful as it is. That's what I wanted to say. It was actually the first one. When AWS just started — I mean, you know, Amazon, the retail company, started by selling books, and then they needed some kind of storage for their online operation. So they built a storage service, and then they said, okay, we have a ton of this and we're pretty good at it, let's sell it. And they started selling it and got customers and so on.
And then it became a service, S3, and that's how AWS started. If I'm not mistaken, that was the only service back in 2006. So as simple as it is, it's also the most senior one, the oldest one. It has tons of features, tons of — again, the special word we like — intricacies and specifications that we can talk about. But that's it: at the end of the day, it's an object store. It's your way to just put files on the cloud and store them. That's it. I'd say my S3 association is Google Drive. Even though it has way more features and whatever, I'm like, listen, it's like Google Drive. If someone asks, so what's S3, can you explain it to me like I'm five years old? I'm like: Google Drive. Same, same. Totally. Way more features, but same at the basics, right? Okay. So it's an object store, and Omer, which features does S3 give us? I'm not comparing it to Google Drive, but why is it not such a simple storage service? Why is it more than that? I don't even know where to start. We're ops guys, so let's start with that. First of all, it's very, very highly available. How much? When you learn for the AWS certifications — I don't know if you remember — they make you learn the number of nines after the decimal point. So it's something like eleven nines, 99.999999999% or so, when you're on the first tier; we'll touch on that subject in a minute. But basically what it means is that AWS ensures the availability of your objects down to, I think, maybe a fraction of a second of downtime a year. That's basically their promise, which obviously is not something you can measure, but whatever.
I think what they're doing under the hood is that when you store an object in the first tier, I'm pretty sure it's replicated to another availability zone, and pretty sure it's backed up in some other region as well. They have their way of ensuring your availability, basically for eternity. Availability — and durability, of course. So that's part of it. But what's the difference between availability and durability? Maybe you'll have to help me here, but I'm pretty sure availability means the fact that I can actually access the object at any given time. And durability doesn't necessarily mean I can access it right now, but that I will be able to access it — meaning it's not lost. It is kept somewhere, even if there is some kind of downtime because an availability zone or a region is burning up. The object is still there and they can restore it from a backup, and that means it is durable. Maybe not accessible at the moment, but it is there. I think that's the difference. Yes, that's exactly it. I look at availability as whether the server is up or down, and at durability as the chances of the file being lost because the data center burned down or flooded or whatever, you know? So they tell you the odds. And the odds of a file getting lost are like one in a shooting star. Exactly — the moon after the sun came up and went down and burned everyone. Those are the odds. So that's a great segue to the next feature, which is the tiering, right? Do you want to talk a little bit about the tiering? The tiering of what? What do you mean? You look so surprised. A boomerang question! What's tiering in AWS S3? Okay, I think you'll have to help me here too, because I'm not sure everything I'm going to touch on is tiering.
But basically, like we said, you have this 99.999-whatever percentage of availability. It was Holocaust Memorial Day, and with all those nines you kind of did the siren just now. I was going for 99.9 percent, you know, but yeah, nice one. Anyway, so that's the first tier. But what you can do is go for lower storage tiers on AWS. They call it infrequent access, which means: maybe I'm storing an object, but I don't need it as accessible as I would with the first tier. What that means — I'm not sure, help me out here — is that the number of replications they make of the same data is lower, right? They're not replicating it as much as they would with the first tier. I think it's called One Zone-IA, right? One zone, infrequent access. In any case, they're not replicating it as much, and then you're at a slightly higher — again, air quotes — risk of losing your data. If you're not accessing it much, you can store it in an infrequent-access tier, and that also means you don't pay as much; you get a discount for that. And it goes even lower from there. I don't remember the number of tiers, but it goes all the way down to a service that I'm not sure is even part of S3 anymore: Glacier. And Glacier — the name tells you — is frozen data. You can take objects that you rarely ever access but do need as backup. Maybe financial bills and stuff that, under regulations, you must keep the history of. Logs for compliance — like exactly a week ago, they told me: we need to see the test results for this version, so you need to recover a lot. You're touching an amazing point here. That's the thing with Glacier: you can store data basically forever, and it's crazy cheap. I mean, it's cents per terabyte per year.
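The tier-down pattern described here is usually automated with a lifecycle configuration on the bucket. Below is a minimal sketch in Python that builds the JSON payload S3's lifecycle API expects — the `logs/` prefix and the 30/90-day thresholds are made-up examples, not recommendations:

```python
import json

def build_lifecycle_config(ia_days=30, glacier_days=90):
    """Build an S3 lifecycle configuration that moves objects to
    cheaper storage classes as they age (day counts are illustrative)."""
    return {
        "Rules": [
            {
                "ID": "tier-down-old-objects",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},  # hypothetical prefix
                "Transitions": [
                    {"Days": ia_days, "StorageClass": "STANDARD_IA"},
                    {"Days": glacier_days, "StorageClass": "GLACIER"},
                ],
            }
        ]
    }

config = build_lifecycle_config()
print(json.dumps(config, indent=2))
```

A payload like this is what you would hand to boto3's `put_bucket_lifecycle_configuration`, so old logs stop costing first-tier prices without anyone touching them.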
But then again, it's stored on tapes — I think practically on magnetic tapes — so it's not even attached to a live resource. When you try to access something, first of all, you can't really access it immediately. The SLA is, I think, hours, or minutes, whatever. It takes time. It depends. Okay, so we talked about tiers in S3. Moving on: Omer, what is Glacier? Okay, let's break it down. Glacier is for when your data is completely frozen and you almost never need to access it. Maybe in a few years someone will ask — like you said, in a compliance review — I need to access this tiny shred of information, and then I can get to it. It's not even direct access, it's more like filing a request. And Amazon has these retail robots that manage warehouses; I'm pretty sure it's the same kind that manages the magnetic disks. One will go fetch the tape and attach it to a resource, and then they tell you: okay, your data is here. But first of all, this costs money. And if you access your data in the first three months, you actually pay a penalty. Amazon is betting on the fact that you'll rarely, if ever, access the data, so they can store magnetic tapes on shelves in their warehouses. And if they need to do something to fetch the data, you have to pay for it — in the first three months, I mean; after that it changes, and they have this weird pricing model. That's the important bit. But I think it is important to know: companies many, many times store everything they have on the first tier of S3, even logs from four, five, six years ago, which you don't have to do. You can go to the next tier, and you can go to Glacier — that's an option. And the last thing I wanted to say — and you can expand on it — is that Amazon recently released a feature called Intelligent-Tiering, smart tiering.
So they can basically look at your objects — obviously they know them. They can see how frequently you access different objects, buckets, whatever, and they can segment them into different tiers on S3 so that you pay less than you would if everything were stored on the first tier. Does that make sense? Yes, but it still costs money. It costs money, but not as much as the first one. Right. And Glacier, I think, has to be your own decision; they won't do it for you. So I'll add a bit about Glacier, just a bit. You talked about how much time and money retrieval from Glacier costs. So there are retrieval tiers. You can get it expedited, and then you pay a few extra dollars to get the objects, and you get them within, I think, one to five minutes, something like that. They make the robot go fast — I don't know, if you press that button, maybe that's why you pay a lot of money. Then you have the standard one, which takes between three and five hours, something like that. And then you have the bulk one, which is five to twelve hours. I always do bulk because, you know, with compliance they don't usually jump on you and say we need it right now. Wait, wait. You said always — how often do you use Glacier? It depends. But when I do, I use bulk. I've never used Expedited or standard; I always had a normal timeframe to get data out of Glacier when I needed it, you know what I mean? I know there are different use cases for different companies, but I never paid for Expedited retrievals. Okay. But if you do want to know how to calculate how much it costs to get, you know, objects out of Glacier, do you know how I check it? Any idea? No. One guess — we had a session about it.
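The three retrieval tiers mentioned here (Expedited, Standard, Bulk) are chosen when you file the restore request for an archived object. A small sketch of that request — the dictionary shape matches what boto3's `restore_object` takes as its `RestoreRequest` parameter, but the time windows quoted are the episode's rough numbers, not an official SLA:

```python
# Rough retrieval windows per tier, as discussed in the episode
# (check current AWS docs for exact figures).
RETRIEVAL_TIERS = {
    "Expedited": "1-5 minutes",
    "Standard": "3-5 hours",
    "Bulk": "5-12 hours",
}

def build_restore_request(days: int, tier: str) -> dict:
    """Build the RestoreRequest payload for temporarily restoring a
    Glacier object: keep the restored copy for `days`, using `tier`."""
    if tier not in RETRIEVAL_TIERS:
        raise ValueError(f"unknown retrieval tier: {tier}")
    return {"Days": days, "GlacierJobParameters": {"Tier": tier}}

req = build_restore_request(days=7, tier="Bulk")
print(req)
```

With boto3 you would pass this as `s3.restore_object(Bucket=..., Key=..., RestoreRequest=req)` and then poll the object's restore status until the copy is ready.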
Not the AWS calculator — ChatGPT. I just ask it: I need to retrieve, you know, 900 objects from AWS Glacier in bulk mode, how much will it cost? And it tells me: approximately $12. My question is, do you believe it? Yeah, because it gives you references to the pricing and it looks okay-ish. It's good as a reference point; even if it's wrong by 20%, it's still a ballpark, which is good. Totally. Yeah, so that's Glacier. Anything else you want to cover about Glacier? No, I think we can start talking about buckets. Okay, buckets. What features does a bucket have that you can speak about? The features of AWS S3 buckets. You can start with the annoying ones. Okay, so buckets are, as you know, basically global, but you do define them in a specific region, because that's the main region where you would normally access your objects. But bucket names are unique globally, right? So if you try to name a bucket just "meir" — a name that's probably taken, because it's four characters — AWS will tell you: no, no, you can't have it. I think the reason is that you get a unique DNS name for your bucket, and if that name is already taken, it's gone. I mean, I'm still amazed that Amazon didn't solve that. So let's define global: global means across any AWS account, across any region in AWS, okay? Exactly, exactly. And that's kind of annoying, because how many names are left in the world for buckets? So we start doing crazy string manipulations with dashes and dots and different names. But essentially, yes, you need to pick a globally unique bucket name, and that's kind of annoying. If you want to add anything about that, go ahead.
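Since bucket names are globally unique, a common workaround — the one Meir describes doing from Terraform next — is to append a random suffix to a readable prefix. A hypothetical helper, keeping names within S3's lowercase/63-character naming rules:

```python
import re
import uuid

def unique_bucket_name(prefix: str) -> str:
    """Generate a practically-unique bucket name: the prefix is
    lowercased and sanitized to S3-legal characters, then a random
    hex suffix is appended, and the result is trimmed to 63 chars."""
    clean = re.sub(r"[^a-z0-9-]", "-", prefix.lower()).strip("-")
    suffix = uuid.uuid4().hex[:12]  # 12 hex chars of randomness
    return f"{clean}-{suffix}"[:63]

name = unique_bucket_name("Meir's Website")
print(name)  # e.g. meir-s-website-<random hex>
```

The trade-off is exactly the one raised in the conversation: the names are unique, but they're ugly in the console, so you end up relying on tags to find things.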
When you're creating a bucket by hand, sure. But usually — in Terraform, for example — I like to generate a random bucket name, so I don't have to deal with the DNS uniqueness at all. Though that makes my life harder when I go to the web console, because then you're trying to understand which bucket is which, and you end up searching by tags and stuff like that, you know. Yeah. It did matter back in the day, because Amazon has a feature where S3 can act as a web server. This means you can generate the static files for a website, store them in an S3 bucket, and say: okay, that bucket is now a web server, as if it were nginx. It's a feature you turn on, and then you get a special DNS name that's based on the bucket name, and you can access it. And it used to be the case that you had to give the bucket the same name as the domain you wanted to serve from, in order for your domain to point at the bucket. That's not the case anymore, but it used to be kind of annoying, right? Because you could create a bucket named cnn.com just because you can — maybe CNN didn't take it — and the day they wanted it, they couldn't have it. Amazon had some issues with that back in the day, so I don't think that's the case anymore. But having a web server is an option. What else can you do with it? So you're saying: I created an S3 bucket, I uploaded an index.html file to that bucket, I enabled static website hosting for it, and suddenly when I copy the website link for my bucket, I can see the index.html rendered, right? Cool. What else can I do? I mean, should I serve it directly from S3? You're saying: I've got my application in S3, should I serve it publicly to the world straight from S3? Is that okay? I see where you're going.
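The static-hosting setup just described boils down to a small website configuration on the bucket. A sketch of the payload — the document names are assumptions, and the shape matches what boto3's `put_bucket_website` accepts:

```python
def website_configuration(index_doc="index.html", error_doc="error.html"):
    """WebsiteConfiguration payload for enabling static website
    hosting on an S3 bucket: which object serves as the index page
    and which serves as the error page."""
    return {
        "IndexDocument": {"Suffix": index_doc},
        "ErrorDocument": {"Key": error_doc},
    }

print(website_configuration())
```

Once applied (e.g. `s3.put_bucket_website(Bucket=..., WebsiteConfiguration=...)`), the bucket gets a website endpoint whose hostname is derived from the bucket name and region — which is why the bucket name used to have to match the domain.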
Yes, you're talking about a CDN, aren't you? Okay. First of all, yes, you can obviously serve straight from S3, but there are a few issues with that. First, you're paying for access to the objects. So if your website — hopefully — gets a ton of users and traffic, you're going to pay for all that access. Second, your bucket's main location is the region you specified. So if your bucket is in Ireland but most of your users are coming from South Africa, they'll get longer latency. And that is solvable with a CDN; AWS has CloudFront. What is a CDN? Okay, a CDN is another episode. It's a content delivery network. The largest in the world, I think, is Akamai; you have Fastly, you have Cloudflare, CloudFront, et cetera, tons of them. CloudFront is the AWS one, exactly. And what's special about CDN companies is that they put up what's called a POP — I don't remember what POP stands for, maybe you do. A point of presence. Yes, it's an edge location; they call them POPs. These are basically storage locations all around the world, much more distributed than the cloud platforms' regions. They put those little POPs around the world, aiming for thousands and tens of thousands of them, and, first of all, it's a cache mechanism, right, a cache system. So it caches images, HTML files, all the static files that can be fetched by your end customer. And customers get everything from cache, with the lowest latency possible, because they reach the closest POP, right? So instead of going all the way to S3 — where Meir has to pay for access to the objects — the objects are cached in those edge locations. Obviously, you can define the TTL of the objects: for how long they're stored, when the cache is invalidated, et cetera. That's what a CDN basically does, and you usually want one on top of S3 when you're serving web content.
Okay, but I had an issue the other day. All right. I uploaded my files to S3, and as you said, I created a CloudFront distribution — which is, you know, a CDN that sits in front of my S3 bucket. So a request from the client, from the browser, goes through the CloudFront distribution and then on to the S3 bucket. But then I uploaded a new index.html file, refreshed my page in the browser, and it's not updating, and I'm like: why is it not updating? So how do I fix that? It's annoying. I have CloudFront, or any other CDN, in front of an S3 bucket; I deploy my files, my application, to S3, and the application is not updating in the browser. What should I do? So what you want to do is delete the bucket and close the AWS account — I'm joking. That's a cache problem. So I'll mention the feature that's called invalidation. Basically, what you described is a typical user trying to access some data, maybe an index.html file, and that's cached on the nearest POP. If I'm Omer, the developer of the website, and I uploaded a new version, the new version is stored on S3, but that doesn't necessarily mean it has already — as they call it — propagated to the POPs around the world. The way to propagate things (as with other systems, by the way, not only CDNs), with CloudFront specifically, is called an invalidation. Invalidation is a caching-system term; it means making the cache invalid. A step backward: when you upload something into a cache system, you give it a TTL. Like I mentioned before, TTL is time to live. For example, I upload index.html and give it a seven-day TTL, which means after seven days it's not valid anymore, and the object needs to be re-fetched from the backend — in my case S3 — and updated on the caching system.
So if I just uploaded a new version and the seven days have not yet passed, I would have to wait until that point. But I don't want to wait: I've uploaded a new version, I want the customer to see it. What I do is create an invalidation. With CloudFront, for example — that's what I'm using today — CloudFront will go to all of the POPs around the world and tell them: okay, Omer uploaded a new index.html file, delete all the cached copies, and the next time someone tries to fetch it, go to S3, bring the new version, and keep it for the next seven days, or whatever was configured from the get-go. Sounds good, sounds like a plan. So what do you usually recommend? Like, okay, I'm not even sure I want to cache my index.html, because maybe I want it updated all the time. Maybe its time to live should be zero at all times, or maybe ten minutes or something like that, so the edge locations are always fairly fresh. So which objects or which types of files would you recommend caching, as a rule of thumb? You said something really important there. Personally, I would cache most of the static files and just invalidate them when needed, to save myself the headache of separating them. Code is easier, right? It's dense, it's usually smaller files. But things like images, sounds, the objects that make up your static website — they don't necessarily update all that much; your images probably don't change all the time. If it's a new image, the cache system will realize it doesn't have it and will obviously go to the backend and get it. But you're not uploading new versions of existing images, so keep them over there. And images tend to be larger files — in the megabytes, not kilobytes or lower, as with dense code.
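Creating the invalidation Omer describes is a single API call against the distribution. A sketch of the batch payload CloudFront expects — the paths are examples, and `CallerReference` just has to be unique per request so retries aren't double-counted:

```python
import time

def invalidation_batch(paths):
    """Build the InvalidationBatch payload for a CloudFront
    invalidation (shape matches boto3's create_invalidation):
    the listed paths are purged from every edge location."""
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": str(time.time()),  # must be unique per request
    }

batch = invalidation_batch(["/index.html", "/assets/*"])
print(batch["Paths"])
```

With boto3 this would be passed as `cloudfront.create_invalidation(DistributionId=..., InvalidationBatch=batch)`; note that `/*` wildcards let one invalidation cover a whole deploy.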
So mainly that kind of media is something I would make sure is stored in cache with the highest TTL possible on the system. Okay, so media files, static files. But then again, we went too deep into CDNs, so I think we should pull back to S3. Back to S3, and I've got a question about it: I used CloudFront and everything worked fine and all that, but then I wanted to use Cloudflare, because my manager told me: listen, you've got to use Cloudflare, that's it, we moved our CDN to Cloudflare. Now, Cloudflare is not in the same network as AWS. And I don't want to make my bucket publicly available, right? I want people to access my bucket only through Cloudflare. So how can I make that connection between Cloudflare and S3? How can I limit it? What can I do? So the simplest solution is that you manage things, especially with websites, through DNS. What you give the user is a DNS entry, and the DNS entry points at somewhere — in our case, the CDN — and the user obviously doesn't care; they don't have your bucket address. Maybe you're referring to something else, but at the end of the day, if I'm going to cnn.com or facebook.com, they have their own CDNs, and I don't know where I'm going unless I open the developer tools and try to understand where the images or media are fetched from. They can reroute it whenever they want: if you control the DNS of your domain — and you do — you can just replace it, point it at another entry point. But let's say I want to allow only Cloudflare to be able to make requests to my AWS bucket. Any ideas? How can I do that? Are you going for bucket access control lists? Yes. Okay, so go ahead. Bucket policy — tell us about it.
Okay, so if you want — and let's make it not just Cloudflare, any CDN or any IP address — if you want only it to be able to access a bucket, there is a thing called an AWS S3 bucket policy. You can add a bucket policy which says: I allow the s3:GetObject action, for example, only for these IP addresses. And how do you get the IP addresses? In Cloudflare's case, if you Google for "Cloudflare IP ranges", you'll get the list of Cloudflare's IP ranges. You copy it and put it in your AWS bucket policy, and then you've made that connection, that relationship between AWS and Cloudflare, where you say: listen, I only allow Cloudflare. And then when you try to access the bucket directly, you won't be able to; you can only go through Cloudflare, thanks to the bucket policy. Which is an amazing use case. And by the way, a bucket policy attached to a bucket — a bucket ACL, an access control list — you can do a ton with that. Sometimes people are confused between bucket ACLs and IAM access resources, right? That's a great use case, what you just mentioned with Cloudflare. But maybe I want to restrict, I don't know, a certain group at my company or a certain resource at my company from accessing a bucket. So I can tell the bucket, within the policy attached to it, who can access it. You can do that even within Amazon. And I think they complement each other: you can use IAM policies to dictate who can access S3 and who can access a certain bucket within S3, and on the other end you can use the bucket policy to say who can get objects from me, right?
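The CDN-only setup described here comes down to a bucket policy with an `aws:SourceIp` condition. A sketch that builds such a policy — the bucket name and IP range below are placeholders; the real list should come from the CDN provider's published ranges:

```python
import json

def cdn_only_policy(bucket: str, ip_ranges: list) -> str:
    """Bucket policy allowing s3:GetObject only from the given
    IP ranges (e.g. a CDN provider's published edge ranges)."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCdnGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Condition": {"IpAddress": {"aws:SourceIp": ip_ranges}},
            }
        ],
    }
    return json.dumps(policy, indent=2)

# Hypothetical range -- fetch the real list from your CDN provider.
print(cdn_only_policy("my-site-bucket", ["203.0.113.0/24"]))
```

Because the allow is scoped to those source IPs, direct requests to the bucket's own URL fail, exactly as described: traffic has to come in through the CDN.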
So they're just two complementary systems that you can use in tandem, the way you see fit. Honestly, I rarely use them, only when it makes a lot of sense with customers and things like that — that's an external use case just like yours. Internally I don't have much use for them, but it's an option. Yeah, same here. Usually I just block with the IAM policy — block the whole access to the bucket, not something specific; usually if you can read, you can access the bucket. Okay, so now a last question. Not sure it's the last, but okay, it's a last question. We talked about accessing through CloudFront and whatever, but on my website I also have, maybe, private areas, and I only want subscribers to view and get the content, all right? And I have no idea how to do that with S3. Any suggestions — maybe for selling media? I love your questions, man. I forgot the term — that magic one-time link on S3, what is it? Presign. Pre-signed URLs, perfect. By the way, about pre-signed URLs: do you know Louis C.K.? Yeah? Okay, greatest comedian in the world, I think. You can buy any of his shows for five bucks on his website, and he's using pre-signed URLs — at least the last time I bought one. I rest my case: if Louis C.K. is using it... and if it doesn't work, I'll blame him. But basically what it means is that you can generate a unique download URL for a certain user. It's probably configurable in many ways — again, TTL, number of downloads — but usually it's a single download per link. You send it to someone, and they can use your link to download some kind of media — for example, a Louis C.K. show — and enjoy it, because they paid the five bucks. Anything else to say?
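In practice you would call something like boto3's `generate_presigned_url` for this. As a toy illustration of the underlying idea — not AWS's actual SigV4 algorithm, which is far more involved — the server signs the object path plus an expiry time with a secret it never shares, so the link can later be verified without any database lookup:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"server-side-secret"  # stays on the server; placeholder value

def toy_presign(bucket: str, key: str, expires_in: int = 300) -> str:
    """Toy pre-signed URL: sign method + path + expiry with HMAC-SHA256.
    Anyone can use the URL until it expires; nobody can forge a new one
    without the secret. Illustrative only -- real S3 uses SigV4."""
    expires = int(time.time()) + expires_in
    payload = f"GET\n{bucket}/{key}\n{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    qs = urlencode({"Expires": expires, "Signature": sig})
    return f"https://{bucket}.s3.amazonaws.com/{key}?{qs}"

url = toy_presign("my-media", "show.mp4")
print(url)
```

The verifying side recomputes the same HMAC from the path and the `Expires` value and compares; tampering with either breaks the signature.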
Can you talk generally about the flow? Like, we say "pre-sign" and it sounds like magic, but how does the flow actually work? I actually don't remember all of it, so I think you'll help me here. Okay: the browser — the client side — makes a request to the server: listen, I want a pre-signed URL. Then the server goes to AWS, generates the pre-signed URL, and sends that URL back to the browser. So the transfer of the file doesn't go through your server; the load goes directly to S3. And that's because you gave the client a signed URL that grants access to S3. And that's the mechanism that keeps you from downloading again and again, or past the limits specified for the object. You can protect the file, as you said. And I think the main thing is that the download doesn't go through your server as a proxy — because you don't want: request to the server, server downloads from S3, server replies back to the client. Then everything goes through your server, and God knows what that does to it. Okay, cool. So that was pre-signed URLs. Anything else you want to add before we move to the corner of the week? I have two things, but I'll say them really quickly. Okay, shoot. One — you reminded me of last week — you can use S3 to reroute requests, right? You can basically use S3 as a router. Do you want to explain a little bit about that? Well, usually it's a redirect. The most common use case is www: you want your website to also answer on www, so you create an S3 bucket, go to its configuration, to the properties, set it to redirect, and just put in the address you want to redirect to.
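The www-redirect trick described here is also just a website configuration on the bucket — this time with `RedirectAllRequestsTo` instead of an index document. A sketch (shape matching boto3's `put_bucket_website`; the hostname is a placeholder):

```python
def redirect_bucket_config(host: str, protocol: str = "https") -> dict:
    """WebsiteConfiguration that redirects every request hitting the
    bucket's website endpoint to another host -- the classic
    www -> apex-domain setup."""
    return {
        "RedirectAllRequestsTo": {"HostName": host, "Protocol": protocol}
    }

print(redirect_bucket_config("example.com"))
```

For per-object redirects (the building block behind the URL-shortener idea that comes up next), each object can instead carry an `x-amz-website-redirect-location` value pointing at its target URL.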
Right, and there's an amazing use case for that. It used to be an interview question — I don't like it as an interview question, but people would ask: using only S3 features, how would you replicate TinyURL, you know, the URL-shortener service? And that's the way: you can reroute URLs, so you can basically generate a shorter URL based on a domain you own, using S3, which is really cool. If you want a home assignment to play with AWS, it'll teach you a lot: try to replicate TinyURL using only S3 features. That's the first thing. The second is kind of a tip, and I only remember it from the AWS certification exam — I remember it to this day, though I've maybe used it once. Basically, AWS has a very particular way of indexing the objects within your bucket. If you have thousands of objects within a single bucket, and they all start with the same very long string, it'll take longer to fetch or search files — because of how they index them — than if you had some kind of string that changes, maybe a timestamp, maybe something else, at the front. I think, if I remember correctly from four or five years ago, they actually suggest that you turn around the order of how you structure the object names. For example, if you start with something constant — your company name, then the account ID, then the month, and only then the actual object name — and you reverse that completely so you start with the object name, which tends to change a lot, your files will be indexed and retrieved much quicker, and you make both your customers' and AWS's life easier. So that's the tip. That's it, I think.
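The key-naming tip is easiest to see side by side. A toy comparison — note this reflects the older guidance the episode recalls; current AWS docs say each distinct key prefix scales independently, so putting the high-entropy part early in the key is what spreads the request load:

```python
def spread_key(object_name: str, account_id: str, month: str,
               company: str = "acme") -> tuple:
    """Two layouts for the same object: a constant-prefix key (every
    object funnels into the same prefix) versus a reversed key that
    leads with the part that changes most."""
    hot = f"{company}/{account_id}/{month}/{object_name}"      # constant prefix
    spread = f"{object_name}/{month}/{account_id}/{company}"   # high-entropy first
    return hot, spread

hot, spread = spread_key("report-8731.csv", "111122223333", "2023-04")
print(hot)
print(spread)
```

The `acme` company name, account ID and month components are hypothetical; the point is only the ordering of constant versus changing parts.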
Wow, that sounds like a crazy idea. I'm listening to it and thinking: how is that going to look in the bucket structure if you reverse the order? It looks strange at first, yes, but you're gaining faster retrieval time and faster search time, even just for yourself as a user within the console. I don't want to, you know, annoy people with explanations I don't fully remember, but if you Google for AWS's S3 indexing and key-naming recommendations, you'll find a ton of documentation — we'll add something in the show notes so you can see it. It's mainly valid for people working with thousands or hundreds of thousands of files; if that's the case and you're suffering from long latencies and long search times, maybe this can help. Cool, cool. Ready for the corner? I think I'm ready. Wait, I need to prepare. Okay, I'm ready. Cool. So in this corner, we talk about experiences, technologies or cool things that happened to us this week, last week, last year, or any time, anywhere. Yes. Okay. So Omer, what do you want to share? So, you always ask me about an experience and I always share tools, so I'll share a tool today too. One minute about browsers: I like to try out new browsers. My current browser is Brave, which is based on Chromium, actually. Safer, faster — I really like it. There is, however, another browser called Min — m, i, n — and it's a very minimal browser with almost nothing in it, even no tabs if you don't allow them. It's just a window with your content, good for content creation and stuff like that, but also just for keeping you focused on what you're doing or reading. And the third one: there's a company called The Browser Company.
And you can guess what they're doing: they're developing a new browser, and they've changed the entire experience. The browser is called Arc. Arc has a different layout: all your tabs sit on the left side, and everything is cleaned up every 24 hours. So if you're used to keeping long-living tabs at the top of your bar, at least I am, they get recycled every night. If you want them, you can keep them, you kind of drag them up; if you don't, they just disappear. So it keeps your life very, very clean. You can split windows vertically or horizontally. It has spaces, so you can separate your work life from your home life. Very cool. I'm still testing it out; it has a few bugs, but it's pretty cool. Oh, and it's based on Chromium, so I just moved over from Brave and it imported everything: my cookies, my keys, my bookmarks, my tabs, everything. History, of course. That's it. I suppose you thought that you do what I do. You know, I just use curl, C-U-R-L. I curl a website, I get a response, and then I render it in my head. I figure out what's supposed to happen, and then I do it. This is how I do it. Yeah, for a minute you had me there, like, wait, what? Yeah, I'm rendering. You don't render JavaScript and HTML files in your head? I'm surprised. By the way, instead of curl, look on GitHub for a tool called xh, which is incredible. It's so much easier than curl. You'll have to put it in the show notes. I will. You mentioned the tool, you've got to put it in the show notes. xh and Arc and Brave will all go in the description. Okay, I want to share that I just had a great experience with React, and I used a great library called Material React Table. I really recommend it. So if you're creating a table in the front end, okay, so it's React.
If you're creating a table and you want a proper table, you don't build it up yourself from a plain HTML table; you get all the built-in features, like row selection, export, things like that. So Material React Table is great, it works nicely. Okay, so that's my tool and experience, sort of. Amazing. You're slowly turning into a front-end developer. Slowly, yeah. I mean, it's gradually building up; I can see you adding another technology every time, and another technology. Yes, yes. All right. Cool. Anything else? No, I think we can wrap it up. I have one more thing. One more thing. We were thinking together, I mean, we have some listeners, more than three. Yeah. We were thinking about how to interact with our listeners. You and me and Elvis. Exactly. In any case, if you made it to this point and you're still listening: we have a Facebook page, and you can respond there. We'll put up our links. We actually started it long ago, it's just not updated, so I'll make sure to post this episode there with all the links to Spotify, YouTube, whatever. If you want to start a discussion or respond to something, that's the place. And we actually respond. Yes, very much so. Most of the time. Whenever we log in. Yes. That's it. Okay. So thank you, everyone. Thank you. Yeah, it's very intimate. Yeah. Thank you, Elvis, for attending. Elvis is always happy to attend; he's always here. Yeah. So see you next week. See you next week. Bye.

Intro
What's S3
High Availability
Tiering
Glacier
Web Serving
CloudFront on Top
Access Policies
Presigned URLs
Object Redirect
Object Naming & Prefixing
Tools / Experience of the week!