DevOps Topeaks

#21 - EC2: Elastic Compute Cloud

April 28, 2023 Omer & Meir Season 1 Episode 21

This week we had the pleasure of discussing EC2!
The basic building block of AWS has so much going on around it, so we made an effort to keep it short (we did not do all that well, if you compare to previous episodes 😉)


  • EC2
  • The SSM plugin for AWS CLI (using SSM connect from a terminal)
  • Golang Telegram bot API -
  • GORM: The Golang ORM -

Meir's blog:
Omer's blog:
Telegram channel:

Oh, I'm ready. I'm ready. And action. Action. No idea, I just found it. So hello everyone, and welcome to the, oh no, I don't remember, the 21st episode of DevOps Topeaks. Yeah, and today we are going to talk about AWS EC2. EC3? We said we'll talk about EC2. Next time it might be EC3. Yes, next time EC3. Okay, when we talk about S4, we'll talk about EC3. Exactly, it's a deal. Okay, so in case people don't know: there is no such thing as EC3, we are just talking nonsense. So welcome to DevOps Topeaks, 21st episode. Today we're going to talk about AWS EC2. And obviously the first question that we are going to ask in this session is: Omer, what's the first thing that comes up to your mind when I say EC2? The most basic building block of AWS. I feel like I'm giving the same answer every time you ask me about an AWS resource. It's the most basic. No? No, no, S3, like you said, is one of the first ones. Also VPC, and about IAM I said it's the most dangerous. Okay, maybe I'm wrong. Anyway, EC2 is almost the most basic building block that you can build in the cloud, because it's essentially a server, right? That's what it is. More often than not, we come to the cloud in order to, you know, rent resources, and those resources tend to be servers. So that's what it is: EC2 is just a server. What do you mean by server? I'm not sure I understand. You say server, but I'm new to AWS, I have no idea what EC2 is. I go into the AWS console, see EC2, and I'm like, what's going to happen next? Will I have the server that you're talking about? So a server is just a compute resource, right? It's a machine, a computer. Someone joked that the cloud is just someone else's computer, and that's what the joke is about. You're launching an EC2 and AWS is basically allocating a compute resource for you.
It's a machine that will have CPU, memory, some kind of storage, probably a network card so it can communicate, whether inside the VPC, within the internal network, or outside. So you said inside VPC, outside VPC, hang on. Let's say again, I'm a total newbie, okay? And I come to the AWS EC2 console and I want to launch an EC2 instance. Yes. Do I need to do something prior to doing that? Are there any prerequisites? Maybe you said something like VPC, network, something? Exactly, exactly. Was our last episode about VPC? Yes, or before that, I think. Anyway, one of the recent episodes was about VPC, and everything you do within AWS, you have to do within a VPC. If you need to set up an EC2, you need to deploy it inside a VPC, which is your network. It wasn't the case to begin with a few years back; there was a notion of EC2-Classic. EC2-Classic instances are still around the world somewhere, people just didn't turn them off for some reason, but around six, seven, eight years ago, you could launch an EC2 in the cloud without actually telling it what network it should reside in. That doesn't exist anymore. When you deploy an EC2 today, you have to put it in a VPC. And another thing that's important to say: EC2 doesn't necessarily mean you know the exact physical rack inside AWS that they're going to put your instance in. It's a virtual machine that sits somewhere; AWS will just deploy it somewhere. If you need proximity to other instances or other storage resources, you can ask Amazon. I don't remember the exact term, but you can ask for it to be dedicated. Dedicated tenancy, or something like that. So you can ask for it to be literally within the same rack so that the latency is as low as possible. What do you mean by rack?
Can you describe it physically? A rack? Yeah, sure. A rack is a physical rack with servers. If you've ever been in IT, or just Google a server farm or a server warehouse, you'll see what racks are. They're just physical structures where you can mount certain resources: servers, hard disks, network cards and everything that comes with them. Okay, I remember a few episodes ago we talked about availability zones, we talked about AWS's growing infrastructure. And you said proximity. So I'm trying to understand: let's say I want to launch an EC2, can I launch it in the same availability zone as my other resources, or is it limited per region? Do you know how it works? Okay, perfect. So we just said that we need to deploy it within a VPC. A VPC can span many subnets. Subnets, subnetworks, you told me about that. So parts of your network can be segregated into different CIDR blocks, different ranges of IPs. And a subnet spans one availability zone, right? So if I deploy one subnet of my network, I can tell it to be in a certain availability zone. For example, we're running in Virginia: it can be Virginia A, Virginia B, Virginia C, all the way up to, I don't know how many letters they have now. By the way, an important note: the fact that you're running in us-east-1a doesn't necessarily mean that the same letter A in another account is the same availability zone. If it's really important to you, you can speak to AWS and they'll help you figure it out. But it doesn't necessarily mean it's the same one, for obvious reasons: people would know where to run, or by default would run on A, and then everything would just be concentrated in one availability zone and create an impossible load. So my us-east-1a is not necessarily your us-east-1a.
It might be your us-east-1c, right? Exactly. Okay, cool. A word about availability zones: within the same region, we have different physical locations. They're not the same warehouse, they're not even in the same area. I think the proximity needs to be up to 100 miles, not 100% sure about that, but something like that. So up to 100 miles from one another is still considered within the same region. If latency is very important to you, you would like to deploy within the same availability zone. But on the other end you have considerations about high availability, because when something is concentrated within the same physical location, it's riskier, right? Same with investments, same with everything, when everything is concentrated in one place. Why is it risky? What are physical examples of why it's risky? A server farm needs to be cooled; if the cooling system doesn't work, servers will just shut down. That's the easiest example. Or a fire. Or someone breaks into the availability zone. Exactly. And by the way, this is not made up, these things have happened. If I'm not mistaken, us-east-1 back in 2019 or so was down, and people were saying the internet is down, Netflix is down, everything is down. You remember that? Yeah. And it's not only that, by the way; within a certain availability zone, you can reach capacity limits. Now, Virginia is one of the biggest regions in the world, but if you're looking for an m5.large and you want a thousand of them, at some point you'll run out of capacity. Even within Virginia, and let alone smaller regions like London or Paris. The capacity is obviously capped within an availability zone.
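To tie the launch prerequisites together (an instance lands in a subnet, which pins both its VPC and its availability zone), here is a minimal sketch of the launch parameters. The AMI and subnet IDs are placeholders; with boto3 the actual call would be `ec2.run_instances(**params)`, but here we only assemble the parameters:

```python
# Sketch: launching an EC2 into a specific VPC subnet (hypothetical IDs).
# The SubnetId is what ties the instance to a VPC and availability zone.

def build_run_instances_params(ami_id, subnet_id, instance_type="t3.micro"):
    """Return keyword arguments for an EC2 launch inside a VPC subnet."""
    return {
        "ImageId": ami_id,             # AMI to boot from
        "InstanceType": instance_type,
        "SubnetId": subnet_id,         # the subnet (hence VPC + AZ) to land in
        "MinCount": 1,
        "MaxCount": 1,
    }

params = build_run_instances_params("ami-0123456789abcdef0", "subnet-0aabbccdd")
```

Without a `SubnetId`, the SDK falls back to your account's default VPC, which is exactly the "you have to be in a VPC" point made above.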
So if you span multiple availability zones, not only are you more highly available because you're risk averse, you're also more likely to come by the capacity that you're looking for. There are ways to solve that. If you're running a fleet of EC2 instances, it's better to be able to run a range of instance types. So if I'm looking for m5, maybe I can tell it to run either m5.large, m5.xlarge or m5.2xlarge, just to be able to endure low capacity within an availability zone. But yeah, everything we mentioned now is very crucial when deciding where to deploy. Can you give me maybe a particular implementation of this high availability you're talking about? Let's say we have two machines in two availability zones, and it's the same server. How do you do that high availability trick that you're talking about? Two EC2s, one in each availability zone, in each subnet, whatever. What do you do now? So are you talking about distributing the load between them? Okay, cool. So you're leading me to the answer: that's a load balancer. And a load balancer will actually span across availability zones, much like your VPC. So we deploy the load balancer. A load balancer can either be internal or external. I think most likely the use will be external, for the load to come from external sources, internet users or whatever else, and to distribute the load to the internal resources. So I would deploy it in a public network for that reason. And then it will ask me, okay, what availability zones do I need to span? And under the hood, when you tell it to span A, B and C, Amazon will literally deploy EC2s under the hood that are preconfigured to serve that load balancer. Right?
So if you are running a VPC with subnets A, B and C, and you have different EC2s that need to serve your application, because that's where you put them, you need the load balancer to span across those availability zones. By the way, when you deploy a load balancer, you can see which instances it's serving and whether they're available, healthy, whatever else you've configured. If they're not healthy, one of the reasons may be that the load balancer is just not configured to work in that availability zone, and you need to help it get there. Give me another reason. Why would I see a target that's not healthy in the load balancer? Just one last question about that, because it sounds like there's one more thing we need to understand. Traffic comes from the load balancer to the EC2, but it's not reaching. So that's more on the load balancer side rather than the EC2, but when you connect an EC2 to a load balancer, you don't have to, but you probably want to tell it how to health check the instance. Health check means the load balancer will ping the instance, ping as a concept, not the actual protocol. It will ping or probe the instance to see whether it's live or not. And probing to see whether something is live can be done with, you know, the standard /health HTTP endpoint you can answer on. If you're not running an HTTP server, it can be a TCP check, actually port knocking. And if I get connection refused, then what did I do wrong? There can be lots of reasons. One of them: the load balancer can't reach the EC2 because it's not in the same availability zone. Another may be that your application doesn't really listen on that port.
Another reason can be that the security group you've configured between them, either on the load balancer or on the instance, doesn't allow traffic out to that port or in on that port. Another reason may be that your Linux is hardened on the instance and is configured to refuse connections on a certain port. There can be many reasons. I think you were aiming at the security group. Ah, you saw my smile, you gave it up. Yeah, I just want to touch it a bit. You know I like to touch every topic that we can. Okay. So if the load balancer can't reach the EC2, I'll check the things that you said, and I think the easiest way would be to check the security groups. When I troubleshoot stuff, as a rule of thumb, I like to check the things that fail fast. It will take me probably three seconds to see if the load balancer can reach the EC2 by inspecting the security group. Checking the health check, to see if the probe is okay, will take five seconds, not three seconds. So first I do the things that take less time, and then I move on. This is how I troubleshoot. And AWS released this Reachability Analyzer thing. Did you use it? Yeah. That can be of great help in understanding what's wrong, but it's okay, it's nice and all, but I think they also say in the disclaimer that it's not really checking for a connection, it just checks the rules. Right. So it tells you the NACLs are okay, the security groups are okay, the routing is okay. But it's not a real test; everything is in theory, everything should work, you know? I don't want to shit on Amazon, but what I tend to say is that when they release something, it's a great feature, obviously everyone needs it, and like many other features, it will take you 80% of the way. You'll have to do the remaining 20. And sometimes it just doesn't make sense.
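The "probe" a load balancer performs for a TCP health check can be sketched in a few lines. This is a simplified illustration, not how an ALB is implemented internally: open a TCP connection, and treat refused or timed-out connections as unhealthy.

```python
# Sketch: the kind of TCP probe a load balancer health check performs.
# A refused or timed-out connection marks the target unhealthy.
import socket

def tcp_health_check(host, port, timeout=2.0):
    """Return True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timeout, host unreachable, ...
        return False
```

This is also a handy three-second check when troubleshooting: if the probe succeeds from the load balancer's subnet but the target is still marked unhealthy, the problem is likely the HTTP health path, not the network.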
That's where our job comes from, it's our job security. Exactly. That 20% helps us keep our jobs until the AI robots come. It's our job security, their 20%. But what you said about reachability: I like what you said about the probing, because if you think about it, if in theory everything works, security groups, NACLs, whatever, then all you've got to do is probe the instance. And you mentioned that maybe your application is not listening on some port, so maybe that would be the issue. So as you said, it narrows it down to 10-20% of the problem. So it's nice. You know I like complaining. Okay, so moving on, moving on. It's like I have a list of questions written. No, well, moving on. EC2. Do you want to go into security a little bit, because we touched on security groups? We touch security all the time, we'll touch it in a few minutes. Hang on, I have a better question. Okay. I'm curious: I worked on my EC2 and I installed many applications, because I like to apt-get install or apk add everything. And all of a sudden, I see that I'm running out of space. And I'm running, you know, a single standalone EC2, maybe I'm running a local Docker Compose or minikube or K3s or any other application that can run container orchestration or whatever, and it serves on the cloud. And I don't care about high availability, because it's a startup, or maybe it's even a mockup server that I'm trying to run. But I'm running out of space. I allocated, let's say, 20 gigabytes of storage initially, but I need more. What are the steps to do that? How am I supposed to get more space? Why are you always pushing me to speak about my day job? I understand, it's boring. No, it's just commercial. Okay. How do you get more space? You have more than one way to do that.
The easiest way: today, it wasn't always like that, but you can just... okay, let's circle a little bit further back. When you deploy an instance, it depends on the instance type, but you usually want to attach some kind of EBS volume when you launch it. You just said you launched with 20 gigabytes. You could have done this without an EBS and just launched with an ephemeral disk that's attached to the instance, and then when the instance goes down, the disk, or rather the data, just evaporates. When you attach an EBS, which provides some kind of persistency, you can configure it, by the way, to not die with your instance. When you terminate it, there's deletion protection, so you can keep the EBS alive, and that's another layer of protection for your persistency. But if you're out of space, you can just go to that EBS while it's working, right click it or use an API, and tell it to extend itself. So you can extend it from your 20 gigabytes to 100 gigabytes, to a certain number that you want. Problems with that: first, you can only do it once every six hours, if I'm not mistaken. Yeah. So it's not really real time; it's not a solution that you can keep applying while you grow, right? Unless you're growing very slowly and once every six hours is okay for you. And the other problem with it is that if you exaggerated, and instead of 20 gigabytes you extended it to 100 gigabytes and you don't need it tomorrow, there's no way to shrink it back. Okay. So side note, that's exactly what I do at Zesty, and we have a solution: once it's installed, it can grow and shrink instance storage. Not really going into the technology, but at the end of the day, you can do that. By the way, it's a classic cloud use case, because it's easy to scale out and it's very, very hard to scale back in. Right. It's a classic cloud issue. Exactly.
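The two constraints mentioned here (grow-only, and roughly one modification per six hours) can be modeled in a few lines. This is a sketch of the rules as described in the conversation, not an AWS API call:

```python
# Sketch of the EBS resize constraints discussed: a volume can grow but
# never shrink, and can only be modified about once every six hours.
from datetime import datetime, timedelta

COOLDOWN = timedelta(hours=6)

def can_resize(current_gb, target_gb, last_modified, now):
    """Return True if a resize from current_gb to target_gb would be allowed."""
    if target_gb <= current_gb:
        return False  # shrinking is not supported; only growth
    if last_modified is not None and now - last_modified < COOLDOWN:
        return False  # still inside the ~6h modification cooldown
    return True
```

The actual grow operation is `ModifyVolume` (console right click or API); after it completes, you still have to extend the partition and file system on the instance itself.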
You want to think in both directions. So what about scaling in? Okay, let's say I have 20 gigabytes, I expanded it to 200 gigabytes, and you're saying that your application can shrink it back. Does it shrink it without affecting my hard drive, my application? So Zesty will shrink it without affecting your application. On AWS you do have a way, but it's very complicated. There's no way to go to the disk and tell it to shrink. So the other way, instead of growing that disk, is to just attach another EBS volume. But what happens when you attach another EBS volume? The machine isn't ready to work with it. So you need to create a file system, or extend your existing file system; you need to do something with it. Right. You need to tell the instance that it's there, to use the other volume. And then you have two block devices, your 20 gigabytes and another 50 gigabytes, for example, and you've extended the file system over them. And then when you want to shrink back, "shrink" in air quotes, because it's not that simple, when you want to shrink back to your 20, theoretically you can delete the 50 gigabytes. But then your 50 gigabytes already have data stored on them, and you need to move that data. Complicated. That's why I said it's another solution: you don't have to extend the one volume, you can add another one. But it's not that easy to scale down, so think about it before moving forward. That's it. Okay. And if my application, for example, needs access, okay, so my application needs to download stuff from S3, or maybe get a parameter from Systems Manager.
How can I... should I maybe create an AWS secret and access key and then put them as environment variables in my application on my EC2? Or is there another way for the EC2 to access the AWS API without hard-coding the credentials? Okay. So before we touch IAM, which is where you're going, let's just explain what SSM is in the context of EC2, because it's important. SSM is Systems Manager. I'm just thinking of what the other S means. Let's do the abbreviations of what we talked about today. Okay, so EC2, Omer, do you know what EC2 is? Elastic Compute Cloud. Okay. EBS? Elastic Block Store. Okay. And now we talked about SSM. SSM, something Systems Manager. Okay. So Systems Manager is, as it sounds, a systems manager within AWS. It has many sub-applications; one of them is the typical Run Command, so you can just run commands on many instances through it. The way it operates is that when you launch a new EC2, if you're running the default AMI or any AMI by Amazon, which I think most people do, if you're running Amazon Linux or any other Linux that's managed by Amazon, they preconfigured it with a preinstalled agent that's called the SSM agent. And that agent is in sleep mode when you start the instance. What I mean by that is that when you start the instance, the agent comes to life and tries to ping AWS, if you've given the instance permissions; you don't have to. If you gave it permissions to speak to SSM, it will connect. If you didn't, it will just go back to sleep. And that happens on most instances on AWS: you have a sleepy agent on the instance. If it is connected, then SSM can control the instance in many ways.
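One of those ways is Run Command. As a rough sketch, with boto3 the request would be `ssm.send_command(**params)` against the built-in `AWS-RunShellScript` document; here we only assemble the parameters, and the tag values are hypothetical:

```python
# Sketch: SSM Run Command parameters targeting every instance with a given
# tag (e.g. environment=staging) and running a shell command on all of them.

def build_run_command_params(tag_key, tag_value, shell_command):
    """Assemble kwargs for ssm.send_command(): target by tag, run one command."""
    return {
        "Targets": [{"Key": f"tag:{tag_key}", "Values": [tag_value]}],
        "DocumentName": "AWS-RunShellScript",   # built-in SSM document
        "Parameters": {"commands": [shell_command]},
    }

params = build_run_command_params("environment", "staging", "apt update")
```

This only reaches instances whose (awake) SSM agent has registered, which is exactly the permissions point made above.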
One of them is running a command remotely: you can go to SSM and say, pick all the instances with the agent on them that belong to the tag environment=staging, and run the command apt update, for example. Yes, you can do that. Another way... so you talked about permissions, and I want to talk about permissions and SSH. So let's start with permissions. You told me my application on the EC2 needs to reach AWS, because that's part of my application; for example, I need to download objects from S3. I would do that with an API; if I'm running Python, I'm probably using boto3, and I need access. So you rightfully asked: do I create an IAM user with an access key and secret like the docs say, and copy them to the instance? No. If you have listened to our IAM episode, you probably know that keys and secrets belong to users, and in order to run something within Amazon that speaks to an Amazon resource, you want to use an IAM role. And a role is a set of permissions that can be attached to an instance. Today they call it an instance profile, right? Just another name for a role that's attached to an EC2. And that role gives you the permissions, and by default it's available to your application. So your application has this hierarchy of credential sources that it will search: it will search the environment on the instance to see if it has keys, it will check whether the application configured credentials itself, and if it doesn't find anything, eventually it will check if the instance has a role. And if the instance has a role, it can reach out to other resources. What you're referring to is the credentials resolution when using an AWS SDK of any kind. So if I'm using the AWS CLI, the AWS SDK for Golang, C++ or any other SDK, there is a flow of credentials: first take environment variables.
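The credential hierarchy being described can be modeled roughly like this. This is a simplified sketch of the SDK's lookup order, not the real boto3 resolver: environment variables first, then credentials the application configured itself, then the instance profile (which the SDK fetches from the metadata service at 169.254.169.254):

```python
# Sketch of the SDK credential lookup order: environment variables first,
# then app-configured keys, then the EC2 instance profile (IAM role).
import os

def resolve_credentials(app_config=None, instance_profile=None, env=None):
    """Return which credential source would win, or None if none is found."""
    env = os.environ if env is None else env
    if "AWS_ACCESS_KEY_ID" in env and "AWS_SECRET_ACCESS_KEY" in env:
        return "environment"
    if app_config:                 # keys the application configured itself
        return "app-config"
    if instance_profile:           # role attached to the EC2 instance
        return "instance-profile"
    return None                    # no credentials found anywhere
```

The practical upshot is the one discussed below: attach a role to the instance and the SDK finds it on its own, with nothing hard-coded.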
If those don't exist, take the assumed-role web identity token or something like that; if that doesn't exist, check the EC2 metadata, so it pings the IP 169.254.169.254 and then checks the instance profile credentials, right? So we don't need to do anything; everything is already implemented by the AWS SDK, which the CLI also implements. Okay, great. So you're saying I just need to create an IAM role, provide permissions to the relevant bucket, attach the role to the instance, and then I'll check with the AWS CLI, aws s3 ls, right, the basic command to check your permissions. If I see it works, everything is fine, right? That's the flow. Okay, you want to attach the other part, because it connects to SSM? Shoot. So that's SSH. What does SSH mean? Secure shell. What is IAM? Identity and Access Management. Oh, we know our abbreviations, some of them. Okay, so SSH, what do you mean, SSH and SSM are connected? Yeah, they are. So SSH is your typical way to get a shell on the instance, to connect to the instance in order to run commands on it. When you deploy an EC2, I'm pretty sure AWS will, if you don't change anything, offer and by default deploy a security group attached to it that opens port 22, which is the SSH port, to the entire world, 0.0.0.0/0. So anyone can basically connect to your instance over the SSH protocol, which is a bad idea, even if they don't have the key. When you launch an instance, you pick one of the keys that you already have configured within your account, or create a new key. The problem with that is, of course, if keys are shared or leaked, anyone can access your instance, given that the security group rule is open. So two things you want to do: one, don't share keys; two, don't open the security group for the SSH port to the entire world. I would even go further and say, don't even open port 22.
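If you do have to leave SSH open, restricting it to a single address rather than 0.0.0.0/0 looks roughly like this. The group ID and IP are placeholders; with boto3 the call would be `ec2.authorize_security_group_ingress(**params)`:

```python
# Sketch: security group ingress allowing SSH only from one /32 address
# instead of the whole internet (0.0.0.0/0). IDs/IPs are hypothetical.

def ssh_ingress_params(group_id, my_ip):
    """Assemble kwargs opening TCP/22 to a single source address only."""
    return {
        "GroupId": group_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": f"{my_ip}/32"}],  # just this one address
        }],
    }

params = ssh_ingress_params("sg-0123abcd", "203.0.113.7")
```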
If you really have to configure something, do it for your IP only. But more often than not, just configure it using user data and automation tools, Terraform, CloudFormation, Ansible, whatever; it's good practice to keep SSH completely turned off. Well, not turned off as a service, but just not accessible. And now we're getting to SSM. But my CI/CD runners! Okay, so I have EC2s, CI/CD runners, and something is off with my runners, I'm not sure what. And I need to SSH to one of those runners to see how the containers are doing. And I've got to see that with my own eyes, you know, to troubleshoot it. What should I do? Perfect question. Why? Because SSM has the ability to connect to your instance. If the instance has permissions for the agent, like we talked about a minute earlier, to speak to SSM, then those services are connected. And then you can do it in many ways, but the easiest is just to right click on the instance and choose Connect, you know that feature. And then you have a few tabs; one of them will be Session Manager, and if it's available, you'll see a Connect button. You click it, and you get a shell within your browser. If you don't like that, and I don't like having my shell in the browser, you can use an AWS plugin, I will share it in the description, that lets you do the same from within the AWS CLI. So you run aws ssm start-session, something like that, with the EC2 ID; it immediately connects, and you get a shell in the terminal of your preference. By the way, I do like the shell in my browser. Really? Yes, I do. That's weird. Yeah, I know, but I like it. It feels like I'm somewhere else, not on my machine, so I like it, because I like to feel that it's remote and stuff, though the Ctrl-C Ctrl-V is not so nice. Exactly. But you don't really need to use it. Why am I okay with that?
Because it's not like I'm using it on a day-to-day basis. For ad hoc stuff that I need to check, it's okay. If I had to use it for hours, I would definitely find a solution like the plugin that you mentioned. One remark on what I said, exactly like you asked: what happens if something goes wrong? If something goes wrong, there are very high chances that the SSM agent will also be inactive, for many reasons. Okay. And then your only way to reach the instance is SSH. So maybe with critical systems, and I don't know if your runners are considered critical systems, I don't think they are, and you'd probably launch another instance or change the configuration in any case, but if you're worried about not having the SSM agent available, you probably do want to configure your instance with an SSH key, just for emergency. I've got to give you a use case. Yeah. I already have one, by the way; that's why we keep a key on the EC2 runner. I mean, a use case for why sometimes SSH access to an EC2, specifically a CI/CD runner, might be a must have. Might be a must have, okay, that's the sentence: might be a must have. So for example, if your CI/CD pipeline runs static tests and regression tests, and you have a web application, and as part of the sanity or regression tests you want, let's say, Docker Compose to, you know, imitate the real environment. So think about it: your EC2 runner is already running a Docker container, right, which is running Docker Compose. So you're already Docker in Docker in Docker, whatever. And sometimes there are network issues. By network issues, I mean maybe the application cannot reach the database, maybe the application cannot reach the host, or whatever reason; there can be so many reasons for that.
So specifically for EC2 CI/CD runners, it's not like I do SSH on a daily basis, but it's definitely good to make sure your CI/CD runners have a fast way to get into them to check a build, especially if you are running complex CI/CD workflows and pipelines. Opinions? My opinion is against running Docker Compose within a CI pipeline, exactly for that reason, because I'm looking at the issues. So how would you do that? I would reach to the cloud. And actually, okay, I'm assuming that you're not running Docker Compose in production. No, of course not. Exactly. And then my guess is that the test has some kind of differentiation, well, a lot of differentiations to begin with, because it's running Docker Compose and not the real thing, be that ECS or Kubernetes or whatever else. And then there's already a difference. If there's already a difference, your test is already not fully compatible with the production environment. And then do one of two things, go to one of the extremes: either go to the cloud and deploy the same thing and test it there, or, if it's not that critical and it's not part of an end-to-end test, just mock it. Hang on, yes, you're right, if I wanted to test ECS, EKS or whatever. But I'm testing the application alone, that's it. I only want to test: if I create this user, reset the password, do I get an email, blah blah, does this flow work? I don't care where it's deployed, if it's on ECS or on EKS, I don't care. So this is why I'm running my tests, and this is why Docker Compose answers my need. Why would you need Compose for that and not just run a single container? What do you mean, a single container? I'm trying to understand; it's not something I would do, but why not docker run instead of Docker Compose?
So assuming my application, for example, is running with, I don't know, MongoDB and maybe OpenSearch, you know, Elasticsearch, and maybe another database, or any other service. You know I'm smiling when I have an answer. Okay, so the developers and QA are running those tests on their computers, on Docker Compose, right? They're running MongoDB, OpenSearch, the application itself, everything is running on their machine, blah blah. I would expect the CI/CD to run the same thing, the same tests. This is how I like things to go: if it doesn't run locally, you don't need CI/CD. I get what you're saying. For that reason, exactly, modern CI tools, GitLab, GitHub, Drone, Harness, I can think of more, Travis for sure, have service machines, right? Services, exactly. I hate it. Why? I absolutely hate it. First, logs. In Drone it's okay, but go run a service in GitHub, okay? Sometimes the logs are, I don't know, it skips, it's not full, right? And you're very specific: okay, so it works, great, on GitHub alone, but the developers cannot run the same thing on their machine. So what you're telling me now... I want to run make blah blah; if I'm running Makefiles, you know I like Makefiles. Yeah, yeah. So we'd run different things on a developer's machine and in the CI/CD. So what I would claim is that it's not different, it's actually more alike, because when you're running in CI, like you said, it's Docker in Docker in Docker, and then it's a convoluted system of networks, an inception of containers one inside the other.
And services exist to create a flat structure of containers. I think we're drilling too deep into concepts and architecture now, but services are exactly for that: to not complicate how the networks work and how Docker-in-Docker operates, which sometimes can be very weird. They exist to flatten the structure, and then you run your application within. But it conflicts with local development. I realize what you say, I mean, GitHub and GitLab and Drone and everyone else didn't invent it for nothing, they invented it for a reason, but it conflicts with my, you know, I lean very much towards local development. Like, I really like local development to be super fast, and I think that's the most important thing you need to work on. If local development is fast, you can do everything else fast, right? That's how I think. That's a great thing. What? Let's agree to disagree. I would actually not use Docker Compose, I would use the flat structure. I'm sure there are use cases; you don't have services for everything. Sometimes you have a very specific or very special type of network configuration that's not available to the services. But if it's simple, like just speaking to a Redis instance to check whether your Redis commands are working, I would personally use the service. I understand what you're saying. Okay, so we had two options: running Docker Compose, or running a service that is provided by the CI/CD, and we agreed to disagree. Okay, we fight. We fought. No fight. Actually, okay, to tell you the truth, the whole discussion over here was just for you to say the word "convoluted". This is why we had this whole discussion, because I wanted to hear you say it correctly. I just wanted to hear you say the word "convoluted". It's probably not even in place. I liked it. Okay, I think I had one last question, hang on. One last question that I had about EC2.
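The "flat structure" alternative being argued for looks roughly like this in GitHub Actions terms, using the Redis example from the discussion. A hedged sketch, the job name and make target are placeholders:

```yaml
# Hypothetical GitHub Actions job using "services" instead of Compose:
# each service runs as a flat sibling container next to the job,
# so there is no Docker-in-Docker nesting for the dependency.
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      redis:
        image: redis:7
        ports: ["6379:6379"]   # exposed to the job on localhost
    steps:
      - uses: actions/checkout@v4
      - run: make test          # the app under test talks to localhost:6379
```

The trade-off debated above is visible here: the network is flat and simple, but this YAML only runs inside CI, so a developer's laptop needs a different invocation.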
Do you want me to sing while we wait? Yes, elevator music please, the elevator song. Okay. No, I forgot. I had a good question about EC2 and it fell out of my head. Ah, okay, okay, I have it. This is what happens when you don't write questions upfront. Okay. So I'm running my application on my EC2, in Docker Compose or ECS, whatever we talked about. And I don't know, I have a feeling I'm paying a lot of money, because I'm running the server all day long, 24/7, the whole month. Is there a way to make it cheaper or something? Because it costs a lot of money to run. Does this relate to my day job? No, no, why? Yeah, something like that. So there are many ways to reduce costs, let's begin with that. Okay, you can obviously stop your instance if it's not serving anything. But I need it 24/7, so it runs all the time. Sure, so you can scale down the number of instances if you're running more than one, which you should, at least two if you're running in production, for high availability. You can scale down the instance size, you can scale down the resources, all of that is a given. What you can also do, and that's what's commonly done, is buy an RI. You can buy a reserved instance, which is your way to commit to a capacity for a certain period of time, can be one year or three. You can tell Amazon: if I'm running an M5 now, what I'm paying now is the on-demand price. I need it now, I might not need it in an hour, that's the deal, and I pay a premium for that. That's true generally in the cloud, and within Amazon, on-demand is the premium price. If I can tell Amazon, okay, I'm willing to commit to one year, I can pay, I don't know, some 60% of the price. But I'm committed: regardless of whether that instance is up or down, even if I turn it down tomorrow, I'm still paying. I can pay upfront and save even a little more, but let's not go into that.
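The on-demand versus reserved trade-off described here is just arithmetic. A toy sketch; the hourly rate and the "some 60% of the price" discount are illustrative numbers, not real AWS pricing:

```python
# Toy comparison of on-demand vs. a 1-year reserved instance.
# Rates are made up for illustration; real pricing varies by
# instance type, region, and payment option.
HOURS_PER_YEAR = 24 * 365

def yearly_cost_on_demand(hourly_rate, hours_running=HOURS_PER_YEAR):
    """On-demand: you pay only for the hours the instance actually runs."""
    return hourly_rate * hours_running

def yearly_cost_reserved(hourly_rate, discount=0.40):
    """Reserved: a discounted rate, but billed for the whole year
    whether the instance is up or down."""
    return hourly_rate * (1 - discount) * HOURS_PER_YEAR

on_demand = yearly_cost_on_demand(0.10)   # runs 24/7 all year
reserved = yearly_cost_reserved(0.10)     # same instance, 1-year commitment
print(f"on-demand: ${on_demand:.0f}/yr, reserved: ${reserved:.0f}/yr")
```

The point the episode makes falls out of the second function: the reserved cost has no `hours_running` parameter, because you pay it even if the instance is stopped.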
And what happens if I decide one day to change my infrastructure from EC2 to a serverless infrastructure, so now I'm running Lambdas? Is there a way, again, to commit upfront, but without paying specifically for EC2? You mean Savings Plans? Okay, yes, so the other part of that is using Savings Plans. By the way, there are lots of ways to save on Amazon, maybe that's our next episode. There are tons of ways, there are programs, there are credits, lots of ways. Yes, so you can buy a Savings Plan. The best way is to not use Amazon, that's the cheapest way. To avoid paying money to Amazon? Depends on the project. Yeah, so the cheapest way: just don't use it. Yeah. So what are you saying about... Do you want to talk about Savings Plans? Yeah, just a word. You know, so we talked about reserved instances, and you've also got Savings Plans. I don't remember the specifics, but it's a general commitment plan over a long period that can cover many of your resources. It has a lot of, you know, fine-print details about what it can be incurred on, well, sorry, what it can be applied to. It's a little different from reserved instances in what it can be applied to, and the savings are a little different. But I have another... Okay, so my last question, also when it comes to savings. Let's say my application is highly available. I've got three EC2s, one in each availability zone, and I'm paying an on-demand price. My application is so smart that when you send a SIGINT to kill it, it knows how to shut down gracefully, it knows how to scale up and down, and everything is persistent and everything is fine. Is there a way for me to save money with this type of infrastructure, where I don't care if you just kill an instance, because everything is, you know, persistent? I'm not sure where you're going. Really? Okay, so maybe...
Okay, so the question was definitely not good if you don't see where I'm going. Okay, let's put it this way, or maybe ask again, but a bit differently. Three EC2s, one in each availability zone, all right? It's an auto-scaling group, whatever, right? I don't mind if you kill one of my instances at any time, because my application is highly available and the infrastructure, you know, everything works okay. Yeah, yeah, yeah, okay, I see. And I pay an on-demand price, but I don't want to commit with an RI. Is there another way for me to save money? Yeah. How? How can I do that? The way is to use spot instances. Wow, what are spot instances? Spot instances are your way of running an EC2 instance that doesn't really belong to you. Again, when you're paying on-demand, or you've committed to reserved capacity, you will always have that capacity, because you're paying premium or you pre-committed. Running a spot instance is basically Amazon telling you: take that, 50% off or something, use it, but we can take it back anytime. And when I say anytime, it's like a few minutes' notice, then we take it away. There are lots of ways to use spots. You can build your entire infra on spots, as long as, like you said, you're ready to lose them. You can build hybrid groups of instances, some can be on-demand, some can be spots. Lots of ways to use spot instances. It requires an extra mile from the developers. Remember, I told you my application knows how to deal with a SIGINT. Then I realized, if I have a containerized application, the health checks, everything should be aligned with how it works. Yes, definitely. Going to a spot architecture is great in terms of pricing, but it has lots of effects that you should take into consideration when architecting the application. I like what you said: going to a spot architecture.
It's not just purchasing spot instances. You need to think in a spot architecture, because it's not like you just buy the instances and that's it; you may need to change the architecture of your application itself. Definitely. And by the way, spots are integrated into many other areas. When you launch a Fargate ECS cluster, you can say: I want Fargate, but I want Fargate Spot. So it will run instances that already do not belong to you, because it's Fargate, it's kind of serverless, but it's also running on spots. So at any point, the Fargate instance can die on you. It still works great, in my opinion, but that's another option to save. Okay, cool. If we continue on that, it's going to be an episode the length of Lord of the Rings. The entire trilogy. Yeah, the entire trilogy squeezed into 43 minutes. Yeah, hopefully it ends soon. So, anything else you want to say about EC2, or should we move to the corner? I have so much to say. I think we need to move, because we've said enough. So I'll move to the corner. Okay, we're moving to the corner. Oh, no. Today? This week. Oh, there's the effect. Yeah, yeah, okay, but we need to save the effect. Okay, so ready? Okay. Yeah, cool. All right. So, corner of the week. Hello, everyone, welcome to the corner of the week. In this corner, Omer and I will share any experience that we had this week, about anything. So Omer, you start. I don't think I have anything. I feel like I did too much frontend and then backend stuff, and I don't want to share about it, because each week I'm doing that. Once I have a great milestone I'll share, so it's your turn. Okay, I have a few, but they're combined into one. I have a very weird group with my cousins, and it's a workout group. Every time someone works out, they upload a picture of themselves, like a selfie from the workout. And then my brother basically counts the workouts.
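The "few minutes' notice" mentioned above is exposed on the instance itself via the EC2 instance metadata service, at the spot `instance-action` path. A hedged sketch of polling for it; note that instances enforcing IMDSv2 additionally require a session token, omitted here for brevity, and the default URL only resolves on EC2, so off-instance this simply reports no interruption:

```python
import urllib.request
import urllib.error

# Documented EC2 metadata endpoint for spot interruption notices.
SPOT_ACTION_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def spot_interruption_imminent(url=SPOT_ACTION_URL):
    """True if AWS has scheduled this spot instance for reclaim.
    The endpoint returns 404 (or is unreachable outside EC2)
    when no interruption is pending."""
    try:
        with urllib.request.urlopen(url, timeout=1):
            return True   # 2xx response: a notice is present
    except (urllib.error.URLError, OSError):
        return False
```

An app built for "spot architecture" would poll this every few seconds and, on `True`, trigger the same drain path as its SIGTERM handler.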
And if you didn't work out for a week, you're kicked from the group. And if you want to come back, you need to upload two workouts. Okay, it's a very weird group, but it has been running for a few years. And I was shocked to hear that it's not automated; it's just my brother actually using a spreadsheet to count the workouts. So this week I sat down and built a bot, a Telegram bot. I learned about the Telegram bot system, and I used the Go bindings for that. That's a project with 4,000 stars on GitHub, I'll put a link to it. And I also needed persistency, you know, data persistency, to keep track of the workouts and my users and everyone that's connected. So I needed a database, but not a full-fledged, I don't know, RDS. I just needed something small, but I needed to manage it from within my code. And again, I'm writing Golang, and I needed a simple ORM to manage my SQLite. So there's a project called GORM, which is the Go ORM, that has like, I don't know, 17,000 stars on GitHub, and it's incredible. Even if you're running something very simple, like my Telegram bot, GORM is an incredible ORM solution. So I'll add that as well. And that was my experience. Wait, wait, wait, a question about your experience. So GORM is part of the application, right? So it's part of the web server itself. GORM is the ORM layer, everything that I'm doing against the data: creation of tables, migrations. And the database itself, you just use SQLite, which is a simple file. So SQLite is a simple file. What I've done for persistency is deployed it on an EBS. By the way, you know what, another tool, I don't know if I've mentioned it before. You remember Heroku, right? Heroku was acquired by Salesforce, and they kind of killed the free tier altogether. And now I've discovered, I think I've listed it here, if not, then that's another one: Fly.io. Fly.io is like what Heroku used to be.
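The Go/GORM code itself isn't shown in the episode; as a rough Python analogue of the same pattern, an application owning a small embedded database that lives in a single file, here is a stdlib `sqlite3` sketch. The table and fields are hypothetical, not the bot's real schema:

```python
import sqlite3

def open_db(path="workouts.db"):
    """One file on disk is the whole database. 'Migration' here is just
    creating the table if it doesn't exist, roughly the role
    GORM's AutoMigrate plays in the Go version."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS workouts ("
        "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
        "  user TEXT NOT NULL,"
        "  logged_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    return conn

def log_workout(conn, user):
    """Record one workout and return the user's running total."""
    conn.execute("INSERT INTO workouts (user) VALUES (?)", (user,))
    conn.commit()
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM workouts WHERE user = ?", (user,)
    ).fetchone()
    return count
```

Because the database is a single file, the persistence story discussed next, mounting a volume so the file outlives the container, is all the deployment needs.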
You can deploy your servers for free, unless you need extra stuff. And you can get logs, you can run blue-green deployments or rolling deployments, you can SSH into your container, it can build your Dockerfile, it's incredible. You just run fly deploy, they even have a GitHub Action that I'm using, and I've deployed it with them. And for persistency, what I did is add a volume, basically like an EBS. So I didn't deploy it on AWS. I was about to ask you about that: you said Go bindings, and I was like, what, did you do an API Gateway and a Lambda? So I wanted to ask you about it, and now you're saying there's a running container, a container running on Fly.io, which is probably sitting on AWS anyway. And they have a notion of persistent volumes, and that's what I'm using: when the container is replaced, the data persists on the volume. The volume stays, the container is replaced. That's it. Lots of experiences in one week, yes. Yeah, yeah, sounds cool. So I'll share, maybe, you know, we talked so much about EC2. This week I wanted to expand the volume on an old EC2, you know, a legacy instance. And I was quite shocked by what you said at the beginning of this conversation: I had maybe a 30-gigabyte volume, and via the AWS console I just changed it to 100 gigabytes, and it was automatically expanded in the operating system, you know, without doing anything, so I didn't have to extend the filesystem. I was shocked. I was like, wait, don't I need to SSH in and then expand it and extend it or whatever? And no, nothing, you just do it in the console, which was amazing. Yeah, yeah. So that's my experience this week, which is something that I think we've both done, I don't know how many times, but whatever. Okay. Okay, so thank you, everyone. Where is the Elvis? Well, okay, so everything is in the baggage.
Next time. Oh, now we need to start over. Yeah, okay, next time. Okay, next time. So, see you next week.

Launching an EC2
Regions and AZs
Connecting to a Load Balancer
(Cont.) Connecting to a Load Balancer
Using SSM to SSH
EC2 & CI Architecture
EC2 Commitments
Spot Instances
Experience of the Week