Hi! This week it's just Omer again, waiting and hoping for better, quieter times.
Discussing, yet again, serverless vs. containers: why serverless is special, where it fits, and what an actual valid use case looks like.
In terms of links:
Okay, we'll come back to topics. This is episode 35, I think. Anyway, I'm still here, trying to keep up with the tech news and discussions like we did until now. Hopefully soon enough we'll go back to normality, Meir will rejoin me, and we can pick up where we left off.

So I'm going to dive right in. In terms of tech discussion, I'd like to once again address the ongoing debate between serverless and containerization, and there are actually two reasons for that. One, it's something I'm still going through in my day-to-day work; I'm still debating it with peers and fellow engineers. I'm not sure why it's still going on, and my belief, just to put it on the table, is not that serverless is bad or containers are good or anything like that. I'm not in love with any technology or tool. What I believe is that there's a time and place for everything, and in terms of tools, there are certain tools that match certain workloads. There's that saying that if you have a hammer, everything looks like a nail, and I'm trying to avoid that. The fact that you're really good with serverless, for example, or you think that's the skill set you have in the organization, or, on the other hand, that the only skill you have in the organization is, I don't know, Kubernetes or Docker or just plain instances, doesn't mean you need to ignore everything else. It actually means you might want to think outside the box and explore other options based on the application. And what does "based on the application" mean? Well, we'll dive right into that.

Now, I said two reasons, and the other reason was that last night I was helping someone in a Facebook chat group. He was asking something along the lines of: I'm running an application that sends dedicated geolocations to users. So basically what happens is the app starts upon request, because it's a function, a Lambda function on AWS.
So it's a serverless app. There's an incoming request, and the app generates a large JSON with, I think, somewhere in the neighborhood of 1,000 points, computed specifically for that user. He doesn't want to work with a database, which is a whole other layer of how you run serverless with data-intensive systems like databases. He wants something quicker, like a cache system.

So one option, well, the fastest option, would be to just keep that in the memory of the application, of the live micro VM that's running the current instance of the function. The other option would be to work with some kind of external cache service, like Redis or Memcached or something like that. Now, this opens a whole new world of problems, but let's first address the application itself. If you're doing that, you might want to think of keeping something that's central, like a container. The fact that you run a serverless application because you think it's cheap for whatever reason, or you think it's faster for whatever reason, sometimes complicates things, exactly like this one.

So yes, an external Redis is one good option, but let's think about what that means. If you're working with an external system, that means you need to connect to it, right? There are a few ways to do that. First of all, sure, you can create the connection when the instance starts, and then it'll be terminated on its own, or you can shut it down on purpose when the app dies, or rather when the instance ends its work, because it's a function. That's one option. The other one is to share it globally, and that's actually what you want to do; it's the best practice with serverless. Opening it globally means you open the connection outside the framework: Lambda has a handler wrapper where you get the event, the context, etc.
So outside that wrapper is where you want to open the connection. What happens then ties into the concept of cold starts and hot starts. If the micro VM doesn't die, because a subsequent request enters the same micro VM, the same environment, then that's a hot start, so you don't need to recreate the environment. And if the connection is already open and hasn't been closed, because it's the same function in the same environment, you can reuse it, and you don't have to recreate the connection.

Now, what that buys you is two things. First, in terms of latency: the connection is already open, so it doesn't "cost" anything, in terms of effort and time, to open it. Second, you put less load on the system, and by "the system" I mean the cache system, be that Redis, a database, or whatever you're connecting to. The reason I'm saying that is that if you open too many connections, the data service you're working with starts getting a load it's not expecting, in terms of CPU. Think about Redis, for example: it's a memory-intensive application designed to handle high volumes of in-memory data, keeping things in RAM. But when you're using a cache and start overloading it with connections, and by overloading I mean thousands and thousands of connections that keep opening and, hopefully, closing after a while, it starts to get intensive in terms of CPU. And then you end up scaling the system not based on the memory you're keeping inside it, but on the CPU needed to handle the number of connections. That becomes much more expensive than just scaling for memory. And believe me, I know.

So there are a few things you can do. First of all, yes, you can hack your way around it with connection pooling or a proxy that manages the connections.
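To make the pattern concrete, here's a minimal Python sketch of opening the client outside the handler so hot starts reuse it. A dict-backed stub stands in for the real Redis client (names like `StubCache`, the `user_id` field, and the sample points are mine for illustration, not from the episode), so the sketch runs without a live cache:

```python
import json

# A stub, dict-backed client stands in for redis.Redis so this sketch runs
# anywhere; a real function would create the Redis client the same way,
# at module import time.
class StubCache:
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

# Opened ONCE, at import time -- i.e. outside the handler wrapper.
# Hot starts reuse this object; only a cold start pays the setup cost.
CLIENT = StubCache()

def handler(event, context):
    """Return the user's precomputed geo points, cached per user."""
    user_id = event["user_id"]
    cached = CLIENT.get(user_id)
    if cached is not None:
        return {"points": json.loads(cached), "cache": "hit"}
    # Stand-in for the ~1,000 computed geo points from the example.
    points = [{"lat": 32.0 + i, "lon": 34.0 + i} for i in range(3)]
    CLIENT.set(user_id, json.dumps(points))
    return {"points": points, "cache": "miss"}
```

In a real function the only change would be swapping the stub for something like `redis.Redis(host=..., port=6379)` at the same module-level spot; the reuse behavior across warm invocations is the same.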
And that's really a good option when you're working with a Postgres database, or any kind of database: you can keep a database proxy that manages the connections. That's one thing. With a cache, it's a little more complicated; there are solutions for that too. But my question is this: if that's your case, why not run the same application, or that scale of application, on top of a container? You don't have to, I think I've said it like a thousand times, be a master of Kubernetes or anything like that. There are things like ECS, specifically ECS on Fargate, which is conceptually close to a Lambda function. If you think Lambda is simple and you don't have, whatever, the DevOps capabilities, ECS on Fargate is basically the same: it's just a few clicks away from deploying an application of your own. And the same goes for other services, right? There are Fly.io and Heroku and things like that you can use to run your application, if it's a really small one and that's why you're actually running on serverless.

So that's my question, basically. And I hate myself for asking it, and people hate me for asking it. But if something is not working, or it's too complicated to handle, maybe you need to rethink the platform you're using. So why run a serverless function? If there's a great reason, be my guest. But if there isn't, and it starts to complicate your life, you might want to think otherwise.

Now, I don't want to get all negative about serverless. It has great applications, so let's talk about that. When does it make sense? When do you want to use a serverless application and not a container? Well, think of this. First of all, you have the obvious case: when it's something with barely any load, like zero load. I'll give you a live example. A cousin of mine built an application for his friends. What was it? Football betting, right?
So once a week, on match day, around 40 or 50 of his friends would log in, not all at the same time, place their bets, and maybe look at the numbers or view the tables. And that's it; they log out. This is basically it. So there's zero load, and 99% of the time it's not required to be available. If real time were a thing, which it's not in this application, then you might want to consider something else, but it's not, so you don't care about hot starts and cold starts. That's the last nail you need in the coffin here. There's no reason to run it on anything else. Lambda is perfect. It's easy to deploy, it's easy to handle, and you don't need anything beyond that. It's just a request coming in once a week, or 50 requests, or a hundred requests, not at the same time, once a week. It's nothing. Even if you hold those connections we talked about, which he doesn't, even if you do hold connections against a database or a cache instance, it's not that big of a deal. So that's a great fit: a small application without too much load.

Now, let's say this application, for example, scales up, and he gets a thousand or 20,000 users next month or next year. What you want to do is one of two things. Either reconsider the platform; that's the obvious one. But if you do want to keep yourself on serverless, first of all you need to be mindful of the fact that you're running on a micro VM instance that starts and shuts down. It depends on the platform: I think on AWS it's something like 30 seconds, and on Azure I think it's around three minutes. At the end of the day, it's a disposable environment. So what you want to think about and be mindful of is exactly this.
If you're opening connections, if you're connecting to systems like variable management or parameter management or, of course, secrets management, then every time a new micro VM starts up you need to create those connections. And that means both holding the connection open and paying the cost of creating it. If it's secrets management, you probably need to encrypt and decrypt stuff, and that takes time. In Lambda, at least on AWS, you pay for seconds of use, not only for resources. I think you get something like a million requests a month for free; it used to be like that at least. But if those seconds start to mount up, it becomes really, really expensive.

So Lambda has this extreme curve. It starts very, very cheap, basically nothing; it costs nothing. But when you start to scale, like real scale, it becomes not only more complicated but more expensive than a normal application deployed on a containerized system. Of course, if you run on EC2 rather than Fargate, you save another 30% or so, which I think is actually worth the extra effort. But that's it.

So that's what I had to say, just another ongoing debate that nobody settles. I think the main takeaway here is: don't fall in love with a certain technology, and don't fall in love with the notion that you have certain skills and that's all you can do. It happens with serverless, it happens with Kubernetes, and it even happens with ECS; every technology. People just think: okay, that's what I have, that's what I know, that's what I'll do. Which is okay to begin with, but if you don't catch it early on and you do scale to some point, it either becomes really, really expensive in terms of how much you pay at the end of the month, the cloud bill, or expensive in the sense that you need to hire people, hire the skills or, you know, gain the skills in some way. It costs. So plan the platform, is all I'm saying.
You don't have to over-engineer everything from the get-go, but please do catch it early on. And, as you can probably hear, I'm speaking from experience; I think we lived that.

That's the debate for the week, and like every week, it's pretty short. But like every week with Meir, we have a few links to mention, to cool technology and new stuff. This week I have two of them.

One: if you've ever played around with hacking or bug bounties, especially around Linux, which I did a lot, there's a technique called privilege escalation. You probably know it if you come from the sysadmin world. Privilege escalation is the notion of SSHing or gaining access to a terminal or a system, and then escalating your privileges, maybe to the point where you're the superuser on the instance, or just an elevated user with further permissions: I don't know, reading passwords, changing other users' passwords, gaining access to sensitive data you shouldn't be able to reach, stuff like that. It's more of a traditional Linux system kind of thing, because if you log in today to an EC2 instance, you don't really care about these kinds of things, but traditional systems really, really care about privilege escalation and how they manage different permissions and roles within the system.

So anyway, there's a nice repo on GitHub, a really famous one, called GTFOBins. You can guess what GTFO means: "get the F out" bins. It's a list of Unix binaries that are useful for privilege escalation, on any kind of Unix system. It's really cool to play with. And by the way, there's also something called wargames, which will slowly teach you different security concepts within Linux. It's really cool; it's designed as a game.
So you basically SSH into the first level, play with it, learn about one thing, like privilege escalation for example, and it walks you through the process. When you're done, you get a key and can SSH into the next level. It's kind of a capture-the-flag thing. And if you really like that, there are many platforms where you can play CTF, capture the flag, and much more. One of them is Hack The Box, which I really liked and played for years. I'll leave a link to all of that below.

The next link I have is called Topgrade. Topgrade is another cool Rust-written tool on GitHub, and it basically keeps you on top of every upgrade you have on your machine. For example, you need to update your operating system, and you want to upgrade Chrome or Brave or Arc or whatever else you're using. Topgrade looks at all of your applications and covers you end to end in terms of upgrades and updates, to keep yourself up to date. Some people, and I think I'm one of them, don't really want to keep everything up to date and run the latest of everything, because I think it's not always best practice. But in other cases, if you have a policy at your workplace that dictates that at least a large portion of what you run on your machine has to get the latest updates, not for the features but for the security updates, because that's the security policy in the organization, and it's quite common, I think, then that's something that can really help, and it's a really cool tool to check out. I'll leave a link to that in the show notes as well.

I think this is it. It's again quite short, but it's only me, and hopefully soon enough Meir will be back here and we'll be back to business as usual. That's it. Have a nice week. A couple of things I want to mention. One, I have a newsletter; I mentioned it last week, and I'll leave a link to that as well.
So things like the discussion we just had, and cool links and cool tools, are things I'll share there, hopefully on a weekly basis. That's one. And the other: if you want to contact us, at least me, I'll leave a link to my personal Telegram, so you can reach out and let me know what you think, or if you have requests or comments. That's it. I'll catch you next week. Have a nice and quiet weekend. Bye.