DevOps Topeaks

#38 - Squeezing the Lemon

November 17, 2023 Omer & Meir Season 1 Episode 38

This week we talked about when it's a good time to make changes in your systems, and how!


  • Post:
  • Topeaks Twitter:
  • Git CTF:
  • Home Assistant:
  • a4 terminal:

Meir's blog:
Omer's blog:
Telegram channel:

Take off. Are you ready? Yes, I am. And that was fast and dry. Hello everyone and welcome to DevOps Topeaks, episode number 38. Omer wanted to call today's topic like this, and we're going to call it like this because Omer wanted it, and we love what Omer wants. You know what we do every week? We go through the episode, we talk about whatever we talk about, and then at the end we say: oh, you know what, we have this Facebook group and this channel, and you can reach us here and reach us there. So shall we do it now? Okay. Hello everyone and welcome to DevOps Topeaks. I'm going to say it to begin with: you can reach out to us. You have both our Twitter profiles, and there's a profile specifically for DevOps Topeaks, and I'm going to take the initiative of updating it. So that's a more direct way to approach us. I'm going to put the episode there and you can comment on that, as well as on Telegram directly to us, or on the Facebook group, if that will be kept updated. Did we miss anything? I don't think so. I hope not. Okay, cool. So, end of the news, let's start. Today's topic, then: it's squeezing the lemon. Squeezing the lemon. So, Meir, please tell me, what's the first thing that comes to your mind when I say squeezing the lemon? I think a mojito. Isn't that the first thing? Okay. The idea, and why the name came to mind, is because of an interesting blog post that I'll obviously link, which was called "squeezing the hell out of your systems", and it was basically discussing a scenario which I can put out there, and I want to hear what you think. Okay, so let's go right at it. The scenario is this: at our company, we work on the same team and we have a huge monolith. That's a shout-out to a recent episode you can catch on how to deploy a monolith on Kubernetes. Anyway, we have a large monolith.
It's one big application, and it's working with one huge Postgres database deployed on AWS. Over time the load built up, which is a good thing, because, you know, we have traffic coming in, and it kept building and building to the point that it went... what's going on with the fireworks? Someone needs to see the video if you want to see the fireworks behind me. Anyway, the load on the database just keeps going up and up: 60, 70, 80%, and now we need to do something, right? We need to scale. How do we scale? There are a lot of ways; the easiest one, if it's an EC2 instance or RDS, is to just scale up, right? Put in a bigger instance. We did that over and over, to the point that we now had the largest instance possible, or at least the largest that makes sense, because otherwise it's a quantum leap in pricing. So we're there. There's no way left to scale up, and again, we're at 80%, and there are short bursts in which we actually reach 100%, and we're in downtime. What do we do? How would you take it from here, before I lay out what the guy from the blog post did? Wow. Well, first, I'm not a database expert, so I will definitely need to read more about how to overcome this issue. I mean, I've never faced an issue with such a massive load, so I'd love to hear what you have to say about it. Okay. So there are roughly two... I mean, the blog post described what they did, and I'll get to it, but I think you have two main options. One of them is the exciting way: let's rebuild our system. It's a monolith, right? We can go to something like microservices, which is really cool, or some other way of deployment, or functions, or anything that feels exciting to your technological mind. It's exciting, it's long work, it's expensive work, and we'll talk in a minute about what expensive actually means. But it's not necessarily the right thing to do.
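The "just scale up until you can't" loop described above can be sketched as a walk up an instance-size ladder until there is nowhere left to go. This is a minimal sketch; the instance class names and relative prices below are illustrative, not real AWS offerings or pricing:

```python
# Sketch of the "just scale up" loop from the scenario above.
# The ladder and the relative costs are made up for illustration.
INSTANCE_LADDER = [
    ("db.r5.2xlarge", 1.0),   # (instance class, relative hourly cost)
    ("db.r5.4xlarge", 2.0),
    ("db.r5.8xlarge", 4.0),
    ("db.r5.16xlarge", 8.0),  # the largest size we allow ourselves
]


def next_step(current):
    """Return the next-larger instance class, or None once we're maxed out."""
    classes = [c for c, _ in INSTANCE_LADDER]
    i = classes.index(current)
    return classes[i + 1] if i + 1 < len(classes) else None


print(next_step("db.r5.2xlarge"))   # db.r5.4xlarge
print(next_step("db.r5.16xlarge"))  # None: no way left to scale up
```

When `next_step` returns `None` at 80% load, you're exactly in the blog post's situation: the easy lever is gone and you have to choose between re-architecting and squeezing.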
And what I mean by that is that the alternative would be to slowly and gradually improve the system. Okay, we're at the peak of the load, right? All we ever did was increase the instance size. How about we take a look for a moment at what's going on within the database? What are the queries like? Do we have telemetry metrics to look at? Can we see where the load is coming from? We can probably improve the application, or the ORM within it that generates the queries against the database. What's going on? Just want to make sure I'm with you. Yeah. I mean, I'm not sure I'm following the line. You talked about RDS and databases and heavy loads, and on the other hand you talked about their monolith, right? So now I'm trying to figure out whether you're talking about bringing up maybe all the services with a Docker Compose on an EC2 instance and then scaling it, or what? I'm not sure I understand you. For example, you were asking why there's even an idea of breaking down the monolith when the problem is in RDS. So: I have a monolith and one database, right? Because that's the database I started working with, and as the application grew, there weren't additional services that could work with external databases. This was the database, that was the application, this is where you write, there's no other option. So naturally, everything else that we write goes to the same place. However, if you break down the application into a few services, and it doesn't have to be all the way down to microservices, at least two, three, four services, you can start using the "right tool for the job", air quotes. Right? Because I can deploy something else. Maybe this fits Elasticsearch, or a Redis cluster, or something else that will not only be better for the specific service but will take load off Postgres. Right, and I can do many other things. I can take Postgres itself and shard it, build a cluster around it, because it can work in a cluster.
I can do lots of things to improve the situation. Right, so that's on the infrastructure side. We have DevOps; we always try to break it down. That was the ops part. If we take the dev part, we can probably improve how the application works with the database. I'm saying probably; it's an assumption, based on the fact that we, in this scenario, have never done anything to improve that. We just kept doing the same thing. But that's major. What you're talking about is weeks or months of work, depending on the database and the team size. So that's a lot of work, if you want to break down the application. For example, as you said, we have a single database which hosts whatever, and then you realize that only certain parts of that database need to be read very, very fast. So yeah, we'll go with Elasticsearch, OpenSearch, whatever. But it will require a lot of work. And, you know, developers need to want to adopt this new database, this new, I don't know, Elasticsearch or Redis or whatever you want to use as a sidekick. It sounds good, because you distribute the load between different systems, but you also complicate things, because now you need to know two different databases, or storage systems. So, you know, pros and cons. I think you touched exactly the point I've seen in the article. What this guy did was essentially say: okay, I'm going with squeezing the lemon, right? We're squeezing the system. What do I mean by squeezing the system? I'm going to look at the database and take the alternative way he suggested, which is not improving the infrastructure, not at the moment. We're not going to touch the design, the architecture, nothing like that. We're literally going to take the road of looking at what's going on: OpenTelemetry, go through metrics, understand the queries, go through the code. It's a more specific way, let's call it. It's a longer road, and it's not as exciting a way to go.
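"Understand the queries" usually means ranking them by the total time they consume, the kind of data Postgres exposes through extensions such as pg_stat_statements. A hedged sketch of that ranking, over hypothetical rows (the queries, call counts, and timings are invented for illustration):

```python
# Hypothetical rows shaped like slow-query statistics:
# (query text, number of calls, mean execution time in ms)
rows = [
    ("SELECT * FROM orders WHERE user_id = $1", 500_000, 12.0),
    ("UPDATE carts SET total = $2 WHERE id = $1", 80_000, 3.0),
    ("SELECT count(*) FROM events", 200, 950.0),
]


def top_offenders(rows):
    """Rank queries by total time consumed; calls * mean time drives DB load,
    so a 'fast' query run half a million times can dominate everything."""
    total = lambda r: r[1] * r[2]
    grand = sum(total(r) for r in rows)
    ranked = sorted(rows, key=total, reverse=True)
    return [(q, round(100 * calls * mean / grand, 1)) for q, calls, mean in ranked]


for query, share in top_offenders(rows):
    print(f"{share:5.1f}%  {query}")
```

Here the cheap-looking `orders` lookup turns out to eat over 90% of the database's time, which is exactly the kind of finding that makes squeezing pay off before any re-architecture.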
That's what they did, and that's what squeezing the system means: we have a system, it's loaded, let's improve how we work with it, and then you don't need to scale as much. Putting that aside, you touched on something. You said that's going to be a lot of work, it's going to take time, and it's an expensive thing to do. And what I want to say is, when you squeeze your systems, your systems are not only machines. You have human engineers working on the team, right? And squeezing them is another thing to mention, in many, many aspects. Not only are you going to pay for engineering hours, there's attention, which is something he mentions in the blog post: the entire team, or at least those working on the shift, are going to take their attention off the features. What about downtime risks? Exactly, another good point. What's going on? How loaded is the system? Exactly. So there are a lot of things to look at, and I think the human part is something you don't want to overlook. If you're taking a bunch of engineers and telling them: okay, you don't work on features anymore, you don't work on the exciting technology shift we're planning, you're going to look at telemetry metrics and improve the way we work with databases; if that goes on for a week, two, maybe three sprints, maybe that's fine. This dude found himself working on it for the better part of, I think, three or four months, right? A quarter of a year on that thing. That can break engineers, and I'm asking the question: is it worth it? Is it worth investing those three months? Because you said something really important: it's going to take time if we make the architectural shift from a monolith to microservices. What if we wanted to break the monolith into three services? How long would it take? Two quarters, three quarters? Maybe it's time we were going to invest anyway. This quarter is gone either way, right?
We're going to invest it in something either way, so maybe it's worthwhile to put it into the longer effort and do that. And I'm not saying there's a right way; I'm just saying it's something to consider based on the context: breaking your application apart to better use the resources you have or want to provision. For example, yes, if I'm breaking down my application, I can start using additional databases, right? I can rethink the way I'm working: maybe a different ORM, maybe an entirely new language with, I don't know, sharding capabilities, something I'm not even thinking about at the moment. When you make technological shifts, you can make better decisions; you can find more suitable solutions for what you're doing now. Because, I guess, when they started... well, look at any project that starts: it starts with something. How do you measure it? Hang on, I want to focus. Okay, how do you measure it? Before, we're stuck, we need to scale, okay, we've got too much traffic, we need to scale, all right? How do you approach it? What's your way of analyzing where we are now, then doing something and measuring: okay, did it improve or not? I'll try to answer with something he said in the blog post. He said: let's not forget our original ask. This whole thing started not from a wish to improve things or from a huge road map. It was basically a burning database that was failing under load. This was the entire thing. If the database was at 40% load, we wouldn't even be here; we would not even be having this conversation. So that's the point, and I think the measurement of success is something we need to keep in mind. This is the KPI, right? How loaded is the database? If this leads you to breaking up your application, that may be the right thing to do, because the load is just the symptom of the problem and you want to solve the root cause.
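The KPI discussion above can be made concrete: with database load as the single measurement of success, the decision rule is just a classification over recent utilization samples. The thresholds below are illustrative, not recommendations:

```python
# The KPI from the discussion: database load. At 40% nobody is having
# this conversation; sustained 80% means squeeze or re-architect; short
# bursts to 100% mean downtime. Thresholds here are illustrative.
def verdict(samples, target=70, panic=95):
    """Classify recent CPU-utilization samples (percent) into an action."""
    avg = sum(samples) / len(samples)
    if max(samples) >= panic:
        return "on fire"       # bursts already hitting the ceiling: act now
    if avg >= target:
        return "investigate"   # sustained load: squeeze the lemon, or rebuild
    return "ok"                # below target: we aren't even having the talk


print(verdict([35, 40, 42]))   # ok
print(verdict([75, 82, 80]))   # investigate
print(verdict([80, 99, 100]))  # on fire
```

The same function run before and after an optimization sprint is the "did it improve or not" measurement Meir asks for.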
But again, the KPI would be to bring the load down and keep working on the features you want to release. So I think as a DevOps consultant, like you and me, the last thing I would suggest to a customer is to break down their application or change the infrastructure. You know why? The first thing I would say is scale out. And if they'd say we cannot, the application is too monolithic, if that's such a thing, I would say, okay, scale up. But I wouldn't change anything in the code unless it's absolutely necessary. And I would also explain the costs. So if I'm scaling up and up, I would talk about availability, durability, you know, how much will it cost? Does it scale up and down? Is it steady? That database doesn't scale when you have more reads or writes or whatever. But I would never talk about breaking down the application. I mean, I'd maybe put it in the backlog, and after maybe they do an IPO, if it's a startup, I would then tell them: okay, now you've got the budget and the time to actually do microservices. Because for startups, or, I don't know, companies that already have this monolith, doing this change is crazy. So what do you have to say about that? I totally agree. I just need to remember that lots of times it's not small startups. I agree with you on that notion: if it's a small startup, don't jump into those exciting paths of breaking things down. And I asked why you wouldn't suggest it as a consultant, which is something I thought about a lot, because I'm probably going to be the one either spearheading that or handling the entire ops aspect. No, no, I wasn't being egoistic.
I wasn't trying to avoid work, you know, but still, it would require tons of effort. Usually most companies don't have the capacity to change the whole architecture and application just for that. If the application is failing because it's doing something bad, sure, go refactor. But if you have an infrastructure issue and you want to save time, pay for it; don't do the work yourself. So here is my thing. I totally agree with you, and I'm going to take it a step forward. These guys, let's say, I'm not sure what I've read, but let's imagine it was on EC2. Actually, you know what, I have a good example. I started at a company, I'm not going to say the name, and the first database was CockroachDB. You know Cockroach? No. The idea is: it's a cockroach, it never dies. Never mind, nothing to say about Cockroach specifically. It was deployed on an EC2 instance, a bunch of them actually, and someone had tried to sew things together by scripting the hell out of Cockroach and the instances around it, and kind of built his own cluster on top of EC2 instances. And it was so hard to scale and rebuild and back up. It was terrible, as opposed to running something on a service like RDS. My point is, I would start off, and this is me, I would start off by using a service, especially for something like that, because you said something correct: as a startup, you don't have time to fight these things, especially if you're at least basically funded. You have funding to pay for these things. Use your engineers to develop new features and systems, and don't use them for, I'm trying not to curse here, shitwork like trying to scale a database installed on an EC2 instance. With RDS you pay a premium on top, but there's a reason: it's automated backups and snapshots and updates to the system. So it's pretty much managed for you, and the scale-up is kind of transparent.
You can do basically whatever you want. If you want to turn that single instance into a cluster, it's relatively easy. If you want to bring up a snapshot from a backup, if you want to automate stuff, it's relatively easy, because everything works within the system, automated, with systems that were built around it, because it's a service. So I would throw that tip of squeezing the lemon into the garbage, I must say. I would pay for the lemon. I wouldn't squeeze the lemon; pay for it, it's better. So here's another lens to look through. What if you and I are founders of a company? Okay, and it's just the two of us, there's no one else. One day, maybe at the end of this episode, in the afternoon, we'll build something. You don't have money at all; not only are you not paid a salary, you're trying to build something and there's no funding at the moment. So you build it as light and as lean as you can, and you're running on as much free stuff as you can gather. And then you hit kind of the same issue with the database, and it's probably running on an EC2 instance, probably one that you stop and start every morning or every night. No, no, you'd never... whatever. What if the difference is paying $4 a day as opposed to paying $400? No, no, no. Unless we're talking about thousands. Okay, if you're starting something and you can't afford something like $3,000 to, you know, go fully through with it and make it something you can present to someone who will engage with it, then don't even bother. Like, if you're going to save $4 a month or $100 a month, what's it worth? So if I take what you're saying: if you're starting out, you're going to spend on infrastructure; make sure you spend it on services, so you can keep going and the services do things for you. Right? Yeah. Like, the worst that can happen is you just delete it.
And yeah, you lost... okay, the first time ever, you'll probably lose something like $500 because the project goes to the trash. All right, the second time you'll probably waste what, $200, because you'll know when to stop earlier, and so on. So you'll learn how to optimize your costs as you start projects. This is what I did, you know, over the years. You're right, it's 100%. There's just something I think we need to say. When you're starting out, there are tons of services that will give you a free tier you can start building on, including database as a service. What was the Google one, the one Google acquired? It's always funny, the animations. The one they acquired, it was Firebase. So you have Firebase, and you have Fly.io, and you had Heroku, and it's all the same thing, right? It's a platform where you can use caches and databases and servers and functions basically for free. Cloudflare, I think, offers the same. The one catch here is that if you keep building on top of them and you scale up, it becomes pretty expensive. So there's a tipping point where you're already used to working with a system and everything is integrated and hooked together, at which point, once you do have a team and funding, moving your things to AWS would probably be cheaper. But that's also an engineering decision to be made, right? Am I making this shift now, off of Heroku to AWS? Which is funny, because Heroku itself largely runs on AWS. Yeah, regardless, you understand the point. So you do have options, but what we're saying here, if I'm trying to distill the message, is: try to use as many services as you can from the get-go so you can focus on other stuff, instead of trying to wire up the infrastructure. Let's sharpen it. Yeah: try to use as many managed services as you can. Yeah, that's what I mean. Yes, managing databases yourself on EC2 is possible, but using a managed service is probably better.
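The "tipping point" mentioned above, where a PaaS stops being the cheap option, can be sketched as the crossover of two cost curves: the platform is free or near-free at small scale but grows steeply per unit, while self-managed AWS has a higher fixed floor and a flatter slope. All of the numbers below are invented for illustration, not real pricing from any provider:

```python
# Illustrative cost curves (USD/month) versus traffic, in millions of
# requests per month. Both pricing models are made up for the sketch.
def paas_cost(requests_m):
    # Free tier for the first million, then a steep per-million rate.
    return max(0, (requests_m - 1) * 40)


def aws_cost(requests_m):
    # Higher baseline (instances, ops time) but a flatter per-million rate.
    return 150 + requests_m * 10


def tipping_point():
    """First monthly volume (millions of requests) where AWS becomes cheaper."""
    for m in range(1, 1000):
        if aws_cost(m) < paas_cost(m):
            return m
    return None


print(tipping_point())  # 7 -> around 7M requests/month, the PaaS stops winning
```

The engineering decision the hosts describe is exactly whether the migration cost is worth paying once your traffic is sitting on the far side of that crossover.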
And you're going to pay for it with your own time so much down the road that it's usually just not worth it, unless it's something that really does its own thing; otherwise, go for a managed service. It's a big part of your life. Though I can also recommend, you know, we only talked about RDS, but if you need a document database, there's the integration with MongoDB Atlas. For example, if you need MongoDB, a document database, in your startup or whatever, you can register with MongoDB and create a cluster on a shared instance, and then you can do VPC peering with your AWS account, which is great, because you're using a managed service, MongoDB Atlas, together with AWS, which is what you want to use. And in time you can either stay on Atlas's paid tiers or maybe migrate to DocumentDB on AWS or something like that, depending on your needs. I would probably go with the paid version of MongoDB Atlas, because they really do provide a great service. And by the way, there's a broad range of options even here with a document database. You can start with your own EC2 instance, that's one idea. You can use a deployment that some of these companies give you; I think Atlas too: they'll give you either a Compose setup or a container setup that you run on your own, and they'll probably provide the support. Not as good an idea, because you still need to manage the infrastructure. Then they'll go further with you and tell you: okay, we'll deploy it on our side; or you can buy from the AWS Marketplace, but still, it's instances that you manage. So, cheaper, yet you do have to manage infrastructure; and going all the way is probably the best option: paying for a service on their end where they manage the cluster. And there's an even more expensive way, I think, of doing the same thing on Amazon, right? You do have the service on Amazon. But, you know, I wouldn't use that.
So it's important for me to say something about using third-party services. Okay, let's focus on that for a second. I wouldn't use MongoDB Atlas if it didn't support VPC peering with AWS. That's very important. If you're using a third-party service, for whatever reason, you need to make sure it integrates with your current infrastructure, the thing that you know. So let's say you're using Google Cloud and you want MongoDB, you want to use MongoDB Atlas: make sure they also have some sort of VPC peering, so it will be as if that database is in your cluster, in your network, sorry. Yeah. I have a question about that, though. I usually tend to... I don't really like the VPC peering option. I'd rather have the back tunnel, as you call it, right? The VPC endpoint. Okay. You can be offered, and offer, services on top of AWS using VPC endpoints, external ones, and that goes through the AWS backbone. So you basically get a tunnel on the backbone of AWS, so you can reach it even from systems that are deployed on private subnets. You don't have to have an internet connection to reach out. Have you done that with MongoDB Atlas? Because with Atlas I only know VPC peering. I haven't done it with Atlas, but I did do that with Redis, and it works perfectly. Not only is it quicker, it's safer, and you can access it from services that are not connected to the internet. Now, yes, if you're deployed with peering, that will work too, but what if you're on, you know, what we call a black subnet? Do you follow? No, I'm not sure. Okay, just so we're clear, and maybe for the audience too: can you explain again the architecture you're talking about with Redis? Sure. If I have an application deployed on a public subnet, that's rather easy.
It's connected to the world through an internet gateway. You can reach out to the internet and the internet can reach back to you, given that you have a public IP address attached to the instance. That's the simple way. The correct way of deploying stuff would be having your application run within a private subnet, and then having some kind of load balancer or other proxy that sits in a public subnet and filters the requests coming into the private subnet. What do we call a private subnet? It's a subnet that's not connected directly to the internet; there's something filtering in between. Usually that would be a NAT gateway, and a NAT gateway segregates you from the internet. It's kind of a filter, right? You can go out, but no entity can come in directly to you, because you're going through that NAT, which is basically like your home router: you can't access the devices behind it directly unless it's mapped in the system. You have a third option of running a private subnet that's not connected to anything: no internet gateway and no NAT, so nothing comes in and nothing goes out, which means even if you try to update a package with yum upgrade or yum install, it won't work. A solution to that would be going through VPC endpoints. Why should you do that at all? Security. You never know what happens with these services; even if it's only outgoing requests, they can be exposed. You don't want that, so you keep these things as private as possible. If you're using some kind of internal service, like S3 or API Gateway or DynamoDB, for the same example, you can use a VPC endpoint. You deploy this resource attached to your VPC, and you can reach out through AWS, without needing the internet for that, and then you can access S3, API Gateway, and DynamoDB privately. It works faster, naturally, because the latency is smaller. You pay a small price for having that thing up.
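The three subnet flavors described above (public, private behind a NAT, and the fully isolated "black" subnet) are distinguished entirely by what the subnet's route table points at. A minimal sketch of that classification; the route targets are simplified strings, not real AWS route objects:

```python
# Classify a subnet by its route table, per the discussion above.
# Keys are destination CIDRs, values are simplified route targets.
def classify(routes):
    targets = set(routes.values())
    if any(t.startswith("igw-") for t in targets):
        return "public"    # direct path out via an internet gateway
    if any(t.startswith("nat-") for t in targets):
        return "private"   # outbound-only, filtered through a NAT gateway
    return "isolated"      # the "black" subnet: nothing in, nothing out


print(classify({"0.0.0.0/0": "igw-0abc"}))  # public
print(classify({"0.0.0.0/0": "nat-0def"}))  # private
print(classify({"10.0.0.0/16": "local"}))   # isolated
```

It's the "isolated" case where VPC endpoints earn their keep: they give such a subnet a path to S3, DynamoDB, or a vendor's service over the AWS backbone without ever adding an internet route.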
Do you remember how much? Not a lot, okay? These are the official ones offered by AWS. There are also custom ones that you, as an AWS user, can offer your customers, or that you can consume from other vendors. For example, my example with Redis: you can consume... this is the part. Okay, so I was with you up to this part. As you were saying, Redis provided you with a custom VPC endpoint that you were able to consume in your VPC. I've never done such a thing, consuming custom VPC endpoints. It's good to know there's a real use case for it, you know, with Redis; it's not an edge case. Okay, so it totally makes sense to use that. Usually, when I say VPC endpoints, I just think about, hmm, use S3, because if you're running, say, ECS and pulling images from ECR, you go through the S3 endpoint to pull the images, you know, even though it's ECR, which is a bit confusing. But that's it. So custom endpoints sound good. And unfortunately, it's only modern companies. You would expect, I actually would expect Atlas to have something like that, and I know Redis offers it. But if you go to companies that are geared more towards enterprises, like Tableau, for example, Tableau won't offer either option. If you want to deploy something like that, they'll give you an EC2 image, an AMI, that you deploy. It's not far from that: they give you an image that you deploy on your end on a public subnet, and that serves as the filter, right? It's as if they give you a load balancer of their own to deploy. And it's installed on Windows, just the way you like it. Yeah, so that's the case. Not everyone offers this, and if you're looking for a good solution, it's not a nice thing to say, but usually modern companies will offer one of the two options. They'll be superior in offering something that connects to AWS privately.
Okay, I think we covered this point, the endpoint point. I think now we should go back to buying the lemon, because the episode changed: we realized we don't want to squeeze anything; we prefer to buy managed services, because that's how you use your time efficiently. So, is there anything you want to add about buying the lemon? Yes, I want to counter your point for a second. Even if we are squeezing the lemon, and we are using managed services, which I probably would use. And by the way, let's put a little fine-print star next to what we're saying: I would do that 99% of the time. The 1% I'm leaving aside is if I were actually a founder who had to sit at home with no salary and think about every dollar he's spending, and make the decision right now to use a database that costs $300 a month instead of $3, right? So that's the 1%. I'm putting it out there; I don't know what I'd actually do. It's hard to say. That's about that. The other thing I wanted to say is, even if you are using a managed service, we talked about the scenario where they kept seeing an uptick in the load on the database and kept upscaling it, which is okay on the ops side of things. I still think you need to take a look at what's going on with the application, because you may be doing something so inefficient that it will get out of hand. And I'm talking here from experience; these days I'm actually working with an Elasticsearch cluster that is getting out of hand, and we're running it. So I would expect, with what you're saying now, I would expect to talk to ops only after the developers went through the code, checked the queries, and checked that everything is behaving as it should. You live in a fairy tale, I think. You know what I mean? So no, that's the first thing I would do.
I would first ask: listen guys, is this normal? When did it start? You want to check the logs; I can help you with that, stuff like that, you know. But I wouldn't start scaling unless, okay, if it's a hotfix, like, listen, production is down, sure, quick, quick, scale, right? But if it's like, listen, we're heading to a bad place because the CPU usage grows, the average usage or memory grows every day or whatever, I would tell the developers: guys, girls, check your stuff. And after that, we can see if we can scale, if we should scale. Yeah, unfortunately, the "check your stuff" is, by the way, bidirectional: they tell the ops team check your stuff, and the ops team goes, you check your stuff. No, we should both check. Maybe we realize, you know, we allocated, like, 256 megabytes of memory. So for me, the solution was always to build a small team around it and try to fix it together, maybe one person from each team. And how do you build a team? With Docker, right: you deploy a Docker Compose with an ops engineer and a developer and let them run side by side. Nice, nice. And they restart if they fail. Exactly, and you put a watchdog on top, which is their team leader. Okay, so anything else you want to add about squeezing the lemon? No, I think that's enough in that regard. Okay, so you want to move over to the corner? Yeah, why don't we? Why don't we? Okay, I had a few experiences this week, so it might be a longer corner. Okay, because we're only at 30 minutes and usually we do it for, like, two hours. Okay, we keep hitting 50 minutes. Okay, so I'm ready for the corner. Yep. Okay, that was the fact of the week. I'll try to do the effect... how did the fireworks work earlier? It doesn't work anymore. Yeah, doesn't work.
Okay, so welcome to the corner of the week, where Omer and I share whatever experience, or anything else we have on our hearts, that happened to us last week, or maybe will happen to us next month, whatever. So Omer, would you like to start? We can make it a conversation if you have a few. Okay, let me spit it all out, you know. No, no, let me start with the first one, because I think it's an interesting topic. Question: are you using some kind of a multiplexer? By multiplexer, I mean taking your terminal and separating it into different panes and windows and sessions, with something that manages all that. 100% not. No? So what do you do if you need, I don't know, two of them simultaneously? You just open another tab or another window? I just wish the listeners could see my hand when I do the swipe with three fingers on my Mac, as I jump between screens. Yeah, but then what do you have on both screens, the same terminal or a different one? Usually, I work with Visual Studio Code. So maybe that's your multiplexer: in Visual Studio Code, you just open a new terminal tab. Okay, you know, I'm sorry I asked. That is sad. And it works, you know; that's why I smile and say it works. And yeah, okay, good. So, for those of you... But the thing is, you know, the thing with you is, because I do work fast and I'm efficient and whatever, when you hear those kinds of things, you're like: but how is he efficient if he's doing that? You're using this thing a lot. No, I guess. No, no, with keystrokes, I like to do it, you know, I like it. Okay, cool. Swiping with my fingers, using a lot of Command-Tab, to be honest, that's kind of multiplexing, because you have multiple tabs and shortcuts or keystrokes to move between them. So it's kind of that. What if you want something side by side?
Is that an option? Like two terminals? I don't do side by side. Right. Okay. Um, I only code. So with Visual Studio Code, you can split the editor. Okay, cool. So that's a good example, because what I do is pretty much the same: I'm running Vim within the terminal, and then, like you have your VS Code, which is basically the IDE on top and the terminal at the bottom, I kind of run the same thing always. But at some point I want to see a few things. For example, I am watching logs in Kubernetes for a controller I'm working on. So I need the controller logs, and then I need the DaemonSet logs, and then I need another pane to run against Kubernetes: kubectl get pods, get deployments, get statefulsets. So I'm running something to multiplex that. What I am using is tmux, which is, I think, the most famous one; you can get it on every Linux slash Unix distribution. There's another very new one that we talked about in the past, it's called Zellij. It's pretty new, written in Rust, a very cool project. By the way, Zellij is really cool. You should try it out because it's self-explanatory. You deploy it, everything is right in front of you, and all the keystrokes and combinations are laid out in front of you. It kind of helps you move around; it kind of helps you use the tool, as opposed to tmux, where you don't see anything, it's just a green line when you start off, and you don't know it. Anyway, there's a third one I just discovered. You're trying to find the common ground, and you see that you're looking at a person that doesn't have any clue what you're talking about. Yeah, I just feel bad. I feel like too much of a nerd. Anyway, there's a third one. I found it this week. It's called a4, which I still don't fully understand. I'm not actually sure if you can call it a multiplexer or a terminal. I think it's more of a terminal. If you want to go check it out, it's cool. I never knew we had additional ones. By the way, there are more terminals that offer you solutions for multiplexing.
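The layout described here, controller logs, DaemonSet logs, and a spare pane for kubectl, can be scripted with tmux so the whole session comes up ready. This is only a sketch; the session name and the Kubernetes resource names are made-up examples:

```shell
# Create a detached session and split it into three panes
tmux new-session -d -s k8s-debug
tmux split-window -h -t k8s-debug
tmux split-window -v -t k8s-debug

# Send a command to each pane (pane indices assume default numbering)
tmux send-keys -t k8s-debug:0.0 'kubectl logs -f deploy/my-controller' C-m
tmux send-keys -t k8s-debug:0.1 'kubectl logs -f ds/my-daemonset' C-m
tmux send-keys -t k8s-debug:0.2 'watch kubectl get pods' C-m

# Attach once everything is wired up
tmux attach -t k8s-debug
```

Dropped into a small script, the whole debugging cockpit is one command away, and it works the same no matter which terminal emulator is underneath.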
What about kitty? You're using iTerm, I guess. Visual Studio Code. You're using Visual Studio Code. Yeah, but I use bash, that's my default shell. Okay. I think most users who start tweaking with something get iTerm, but you have a lot of other terminals, and each terminal can offer its own multiplexing. The thing I like about tmux is the ability to deploy it on all of them. I don't care what terminal I'm using, and I'm switching them every once in a while; tmux is always there with the configuration. So a4 term, I'll leave a link to that. I took a lot of your time. I have another one; maybe I'll put that at the end. I think the listeners will enjoy it. I'm very happy for you. Even though you are very, very efficient and you love Vim and you love keystrokes, you still find a way to keep improving your workflow to be even more efficient. Now, you know, I'm just thinking, I'm scared, Omer. I'm scared that there is this one time that we will record this podcast, and I'll see a computer in front of me instead of you. So just be careful how you dive into those keys. Maybe. Okay. Maybe you are seeing a computer and it's not really me. You're not deepfaking me, right? I don't know. You're the one paying premium for OpenAI. Okay. Okay, one more before we move to your stuff. One thing. We talked about squeezing the lemon. Maybe I'm working so hard on squeezing my productivity lemon that I forget to actually work. That's what I'm doing all day. That's why I was so bright: I just work. Maybe, I won't say it's not efficient; it's efficient up to a certain amount of efficiency, if you can say that, but I worked. I will say a serious sentence. One serious sentence. One. You only have one sentence every episode, so watch out with that. This is it. I'm going to spend it now. Okay. These things compound.
Now, maybe I'm a bit of an extremist, but using things like Vim or dedicated terminal multiplexers like tmux, these things compound over time as you get used to knowing them and working with them. It's not only fun, it actually makes you better at your work, I think. Part of it is because you're having fun, but you're also faster in the things you do. That's it. This was really important. You need to sell it right. Just before we continue to my stuff: you need to sell it right. So you say "these things compound". So you say, like, Vim, terminal, these things compound, you know, and that's it, and that's how people will want to use Vim and the terminal. The problem is I can't sell anything, because it's open source and free. That's my life. Yeah, sell the pitch. I understand. Sell the pitch. Okay. So now, moving to my stuff. Yeah. I actually did a lot of cool things. I think the first thing was: okay, for example, say you have maybe 70 repositories, you know, a lot of repos or whatever. And you realize you want to migrate away from using AWS secrets, because maybe in your pipelines you have AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID, whatever. It appears, you know, in each workflow; in each repo you have different workflows, right? You have different files. Maybe you're not so organized that your organization has a template where you create everything from that. And even if you do, you still need to update your repositories to start using GitHub-to-AWS OIDC. So if you're having this, well, I won't say issue, but challenge, and you're saying: I don't want to use secrets anymore, I want to use the proper way to secure my access to the AWS API, let's migrate. But the thing is, if you have 70 repositories, you go through each one, clone it, create a branch, go to the relevant pipeline, delete the environment variables, then inject the relevant step for the environment.
So for example, if it sees AWS access keys for development, it needs to use the OIDC for development; if it sees production, it should use OIDC for production, because they assume different roles, you know. So it's a lot of work, Omer. And I didn't have any plan on doing it, you know, by myself. So I wrote, and I'll probably post a blog post about it because it's cool, a Python script that, you know, first lists all GitHub repositories and filters out those that are archived. And then it uses the GitHub API to check if the workflow files in each repo contain the relevant string that you're searching for. So if I'm searching for AWS_SECRET_ACCESS_KEY, this repo is relevant. So first I'm filtering out irrelevant repositories, because I'm about to clone something like 40 repositories locally, because I'm doing it locally. I want to do it with my user, because I also need to commit and push. And after you have the list of relevant repositories with their status, like cloned, not cloned, has an issue, whatever, you know, a JSON file with the status, I just run a script, which uses PyYAML and whatever, that goes to the file and removes the environment variables. It also removes the entire env key in the pipeline if it's empty, you know, so it's also pretty specific. And it adds the relevant step in the relevant place, before running an AWS command. So it works. It's amazing. And I don't know, I wish you could tell me if, in my situation, there was a different way of doing that without going over each repo. You know, if this is the situation, each repository has its own stuff, how can I do it otherwise? Even if it's possible, you know, any ideas? I have an answer you won't like, because I'm using GitLab, which I'm not a fan of, but, you know, we have the templating engine. So that's the solution. You just inherit from a central repo of steps that you want to integrate as part of that.
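The per-file rewrite described here could look roughly like the sketch below. This is not the actual script: the role ARN, the region, and the "insert before the first aws CLI call" heuristic are all assumptions for illustration.

```python
# Sketch: strip static AWS key env vars from a GitHub Actions workflow
# and inject an OIDC credentials step instead. ARN and region are placeholders.
import yaml

AWS_KEY_VARS = {"AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"}

OIDC_STEP = {
    "name": "Configure AWS credentials (OIDC)",
    "uses": "aws-actions/configure-aws-credentials@v4",
    "with": {
        "role-to-assume": "arn:aws:iam::123456789012:role/github-oidc-dev",
        "aws-region": "us-east-1",
    },
}

def migrate_workflow(text: str) -> str:
    """Remove static AWS keys, require an OIDC token, insert the OIDC step."""
    # Note: PyYAML parses a bare `on:` key as boolean True, so real workflow
    # files need extra care (or a round-trip-preserving loader like ruamel).
    doc = yaml.safe_load(text)
    doc.setdefault("permissions", {})["id-token"] = "write"  # required for OIDC
    for job in doc.get("jobs", {}).values():
        env = job.get("env") or {}
        for var in AWS_KEY_VARS:
            env.pop(var, None)
        if "env" in job and not env:
            del job["env"]  # drop the env key entirely once it's empty
        steps = job.get("steps", [])
        # Heuristic: put the credentials step right before the first aws CLI call
        for i, step in enumerate(steps):
            if "aws " in step.get("run", ""):
                steps.insert(i, dict(OIDC_STEP))
                break
    return yaml.safe_dump(doc, sort_keys=False)
```

Run over each cloned repo's workflow files, the output is committed on a fresh branch and pushed, which is the part that still needs a local clone and your own user.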
Okay, so now I have another use case for that. So when I did that, I was like: I don't think it's relevant only for this. And then I was like: wait, what if I want to update all my repositories that contain, you know, git checkout version one, and change it to git checkout version four, you know, with the GitHub Action? So suddenly I realize I have mass control, I won't even say mass changing, but creating a branch with changes, and doing it programmatically for, let's say, security fixes, production fixes, things that you've got to fix across many repositories because they've got to be fixed now, and we're not going to do it manually. Right. So it was nice. I also used a bit of ChatGPT to do it. It did a lot of bad work; I had to rewrite a lot. Um, yeah. So that's about that. I also have another thing, but let's move to you again before we get to it. I can just connect what you just said to our episode last week. We talked about how you can take a monolith and deploy it on Kubernetes while having it spread across different services. So you are running a central repository, and in that regard this would be very easy, because you only have one CI pipeline. Across the board it's not even looking the same, it is the same one. It's literally one file in one repo. So you get to have a lot of services run from one monolith, with all the pros and cons of that. So, just another angle. Yeah, you know, there are many solutions if you already have a good infrastructure for your GitHub repositories, but again, sometimes in startups or companies, whatever, not everything is well and super organized. So in those cases, before you get to the organized part, you know, it's a workaround. I can't say it's a perfect solution, but it's a workaround to fix something. Yeah, right. Okay, I'll take it. Every five minutes you speak, I have another thing to say. So it will be like a perpetual motion machine. Go on forever.
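For the checkout example, the per-file change itself can be a one-line text transform; the batch machinery (clone, branch, commit, push) wraps around it. A sketch, with the target version as an arbitrary example:

```python
# Bump every `actions/checkout@vN` reference in a workflow file's text.
import re

def bump_checkout(text: str, new_version: str = "v4") -> str:
    """Rewrite `uses: actions/checkout@vN` pins to the given version."""
    return re.sub(r"(uses:\s*actions/checkout@)v\d+", rf"\g<1>{new_version}", text)
```

Applied file by file before committing, the same batch skeleton used for the OIDC migration handles the rest: create a branch, write the changed files, commit, push, open a PR.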
Okay, it's an experience that is kind of off topic, but I got a Raspberry Pi because, okay, let's even go a step back. I have a lot of lights here. When I was a child... I was born once. I was born twice, by the way. Oh, yeah. No, I was born only once. Was it before this episode or after this episode? Yeah, my life. My life changed. Continue, we are wasting time. Okay, I wanted to control all the smart plugs that I have in the office. I have like four or five, and it's really annoying to turn everything on and off every day. So I wanted one central system to control that. And I tried to buy the cheapest Chinese ones, because you don't need much more than that, but not all of them are easily connectable to Google Home or Alexa or Apple Home. So I needed one central place to manage them. So there's a project called Home Assistant. I guess, like, everyone who hears this is actually aware of the project; it feels like everyone knows about it except me. It's open source, you can deploy it anywhere, even as a container; they provide every kind of installation method to help you. I got a small Raspberry Pi 4 and deployed it on that. It's just connected here behind me to the electricity, and that's it, it controls everything. I also connected it to Cloudflare, so I can access it externally from, you know, just a regular domain. So I can reach it from outside my house and turn things on and off here if I forget the lights or anything like that. That was pretty cool. Just saying: it's really cool. So if you have anything like that, like smart plugs, or you want to get them, it's a good idea to check it out. The other thing is, there's actually an Israeli guy who has his own Golang podcast. It's a really nice podcast; I'll leave a link to it. But this dude has created a Git CTF. CTF stands for capture the flag. It's a game.
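Going back to the Home Assistant deployment for a second: for anyone who wants to try it as a container before dedicating a Raspberry Pi to it, a minimal Compose sketch might look like this. The paths are examples; the image name follows the project's published container image.

```yaml
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - ./config:/config               # persists the Home Assistant configuration
      - /etc/localtime:/etc/localtime:ro
    network_mode: host                 # host networking helps device discovery on the LAN
    restart: unless-stopped
```

Exposing it externally through a Cloudflare Tunnel, as described above, then sits in front of this without opening ports on the home router.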
If you want to improve your Git skills, you can just go into the game, and it takes you through challenges. You SSH to an instance, it tells you something you need to do, like check out a branch, delete a branch, or commit something, and then it goes up and up in difficulty. So I'll leave a link to that as well. So this is a little bit, you know... It was kind of boring to begin with? A little bit. I had to learn about Vim first, so I didn't have time to. Yeah, exactly, I don't have time for this. No, it was really quite a beautiful thing for an hour. Okay. So I'll wrap it up. You know, this was a very long corner; the episode will be shorter, so the corner got longer. So we did a lot of work this week. We did stuff this week. Yeah. Yeah. And I just want to have a dessert with what I did for us, or what I'm doing for us. A friend of mine told me: listen, you can use Llama to run these models and everything locally, if you want to train something, you know, like large language models or whatever. Who was that? His name was OML, I don't know if you know him. And I decided I'm not going to write our YouTube descriptions anymore, because it's exhausting, and it's consuming a lot of time, because you need to, like, get things right. And, like, it's crazy if you want to do it the way I want to do it. My standard is so high that I can't even live up to it. Okay, which is, like, a sentence I think everybody can say about themselves. Like, there's no way your standard is good enough, so at some point it has to break. Okay. The unexpected issues of AI. Yeah. So I decided to download... so I'm, like, explaining the research, or the journey that I've gone through, which is, like, a minute of journey. But I downloaded all the subtitles of all the existing episodes on YouTube. I downloaded all the descriptions, which I like. So now I have a dataset of the descriptions and subtitles that I like, and I also have the subtitles of the episodes whose descriptions I'm missing.
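The dataset described here could be stitched into fine-tuning records like this. A sketch with a hypothetical layout: subtitles/&lt;episode&gt;.txt and descriptions/&lt;episode&gt;.txt sharing file names, and a simple prompt/completion JSONL as the output format.

```python
# Pair each episode's subtitles with its description into fine-tuning records.
import json
from pathlib import Path

def build_dataset(subs_dir: str, desc_dir: str, out_file: str) -> int:
    """Write one JSONL record per episode that has both files; return count."""
    written = 0
    with open(out_file, "w") as out:
        for sub in sorted(Path(subs_dir).glob("*.txt")):
            desc = Path(desc_dir) / sub.name
            if not desc.exists():
                continue  # episodes still missing a description become inference inputs
            record = {
                "prompt": "Write a YouTube description with timestamps "
                          f"for this transcript:\n{sub.read_text()}",
                "completion": desc.read_text(),
            }
            out.write(json.dumps(record) + "\n")
            written += 1
    return written
```

The episodes that were skipped (subtitles but no description yet) are exactly the ones the fine-tuned model gets asked about afterwards.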
So my expectation is to train the model with my subtitles and the descriptions of the existing episodes, using, you know, the default 7B, whatever, Llama model: train that, you know, do a fine-tuning based on my data, and predict the description of the next episode. Because I love to add timestamps, and subtitles do provide timestamps. So I wonder if it works, you know, I wonder how the eventual description will feel. We'll probably find out, maybe in the next episode. You know better than me that OpenAI has just launched their platform where you can build your own GPTs and get, like, a rev share if they're successful. Maybe that's an idea for one; you can make money. I thought about it, but I'm not sure you can, you know, upload everything and train it like that. Maybe you can. Maybe I can create a DevOps Topeaks description GPT, but the fine-tuning is important, because I don't need it just to summarize stuff; I need it to summarize according to our style, the way I do it. Context. Yeah. Okay. Yeah. So it needs to adopt the context, and it needs to learn that when I say "Omer, what's the first thing that comes up to your mind", I even want to put a timestamp on it. You know, maybe I can instruct it to do so, or maybe it will figure it out from the text. I don't know, I need to see. So I guess I'll keep working on it once we finish our conversation. Okay. Cool. And then maybe we'll have a YouTube video soon. Yeah, maybe this will push me to go see all the great animations that, for some reason, my camera keeps doing. Okay. Anything else, Omer, or should we... Should we? We'll wrap it up, I think. Okay. So bye for now. Compound. See you next week. Bye bye.

Squeezing the system
(Cont.) Squeezing the system
How to measure success
Managed services
Links of the week
(Cont.) Links of the week