DevOps Topeaks

#15 - Scale

February 24, 2023 · Omer & Meir · Season 1, Episode 15


In this episode we discussed "scale": what it means in the context of operations and development, but also with regard to management and internal systems.
We talked about empowering engineers to make decisions, preparing systems for new joiners, and, even more importantly, preparing systems to scale to handle additional load and company growth!

Tools mentioned:

  • Omer mentioned https://score.dev
  • Meir mentioned https://github.com/GAM-team/GAM

Meir's blog: https://meirg.co.il
Omer's blog: https://omerxx.com
Telegram channel: https://t.me/espressops


I'm ready. I was born ready. I'm always ready. All right, action. Hello everyone, and welcome to the — oh man, help me — fifteenth, fifteenth episode of DevOps Topeaks. Hello, hello. Today we are going to talk about scale. Yep. So, what's the first thing that comes to your mind when I say "scale"? Nothing. I got nothing. I'm joking, of course. So we're done. Thank you for being clear. So what do you think about scale? What can you tell me about scale?

Okay, it's kind of a problematic one, because "scale" means a lot of things. The two major things I'm thinking of are: one, scaling systems, and the other, scaling teams. And I think they kind of go together, because when you scale the team, the systems naturally need to grow with you — but it's a different kind of system. Today we might want to concentrate on scaling systems. By that I mean, for example: what happens when we're running some kind of product company and we're used to steady traffic, maybe with a little bit of fluctuation, nothing major, and tomorrow we're signing a contract with a new customer that's going to triple the traffic? Something huge. What happens then? How do you prepare for that? That's the first thing I think of when someone says scale. What are you thinking of?

For me it's about getting a workload and thinking about how I can endure that workload. So okay, we've got to scale — what do I mean by that? Say I'm getting too many requests: too much demand for CPU, for memory, too many HTTP requests, any kind of demand. On demand, I need to endure that demand, to bear it, and then scale accordingly. And the hardest part, as we all know, is not scaling out — it's scaling in. What do you mean by that? Scaling out is like duplicating yourself, and scaling in, which with teams I guess means letting people go, with machines is getting rid of nodes. It's always harder to scale in, because scaling out is easy — you can get more machines very, very fast. To kill them, you need to think: wait, maybe this worker or this node is busy, so I don't want to shut it down immediately. I want a graceful shutdown, you know, like handling a SIGINT. You know what I mean?

Let me take this one step back, because you said something very important: "I want to endure." Of course we want to endure, because we want to see the other side of the huge peak we're anticipating. But what I'm thinking of is: let's plan ahead. Let's make sure we're not merely enduring, but experiencing the same experience we have today — a normal flow, where requests come in and our systems can handle almost anything. In terms of planning, let's say we're in the scenario I described before: we're a web application product company and we've just signed a new customer that's going to triple our traffic.
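(A quick aside for readers following along: the graceful-shutdown idea above is easy to see in code. Here's a minimal sketch in Python — the episode mentions SIGINT; handling SIGTERM as well is our assumption, since that's what Kubernetes sends a pod on scale-in.)

```python
import signal
import sys
import time

shutting_down = False

def handle_signal(signum, frame):
    """Mark the worker as draining instead of exiting immediately."""
    global shutting_down
    shutting_down = True
    print(f"Received signal {signum}, finishing in-flight work...")

# Kubernetes sends SIGTERM on scale-in; Ctrl+C sends SIGINT.
signal.signal(signal.SIGTERM, handle_signal)
signal.signal(signal.SIGINT, handle_signal)

def main():
    while not shutting_down:
        # Placeholder for the real work loop (process_next_job(), etc.).
        time.sleep(1)
    # Drain: stop accepting new work, flush buffers, close connections.
    print("Drained, exiting cleanly.")
    sys.exit(0)

if __name__ == "__main__":
    main()
```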
Okay, so the natural reaction is: great — obviously good for our business, but not so good if it's going to break our entire system, right? And that has happened before, multiple times; if it breaks, that customer is not going to stay with us, and I don't know about the other customers either. By the way, slightly out of context: VCs used to say that if you break once, they never come back. You break the relationship once, they don't trust you anymore; you break the trust, and in a lot of cases there's no way back. So especially with a large customer, someone you value and want to keep as a name on your landing page, you want to make sure it goes well.

So let's talk about planning. When I'm preparing for something like a huge scale event — maybe tripling my traffic — I want to lay out all the services I'm using. My context is AWS and infrastructure in the cloud, but it doesn't matter where you are or what you're using; the idea is the same. I'll lay out everything. In my case that's API Gateway; maybe that's containers in Kubernetes, so I need to make sure the applications that will be impacted can scale both on the container level and the node level, and that I have things in place to make that happen. If I'm running serverless functions, I may hit all kinds of limits: reserved concurrency, or the cap on parallel executions I can run in my account. Maybe I need to plan ahead and ask: okay, we're tripling — what's the peak going to be? Say the peak would be 6,000 parallel Lambda runs; by the way, the default in Amazon is, I think, something like 1,000. So if the new peak is going to land anywhere near or above that cap, I want to plan ahead, increase it, talk to Amazon. The same goes for access to databases — maybe you want to scale your DynamoDB tables, your cache systems, everything involved in the chain of a request coming in and a response going out to the customer.

Now, that's all nice and dandy, but what do you do next? How do you actually know that it works? It's not always easy to send all those requests in — maybe it costs too much to test — but you might want to consider some kind of load test on the system. The idea is: first, plan and lay out all the services that are going to be part of this chain; then load test it to understand whether you missed something, because you're always going to miss something. Even in the scenario I just described, maybe there's some queue system, maybe there's one service that just doesn't scale — and because that one service didn't scale, the requests stop there and everything breaks. What do you think?

I like that idea. So that's the first topic, let's say: go through all of the components in your system — not only the obviously scaling ones — and make sure none of them breaks when you need to scale. Now I want to take you even further and relate to what you said about the huge customer you brought in, and I want you to reply to that, okay?
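To make the "lay out your limits" step concrete, here's a minimal sketch assuming Python with boto3 and AWS credentials configured. The 6,000-run peak is the episode's example; everything else is illustrative:

```python
import boto3

# Assumes AWS credentials are configured in the environment.
lambda_client = boto3.client("lambda")

settings = lambda_client.get_account_settings()
limit = settings["AccountLimit"]["ConcurrentExecutions"]  # default is ~1,000
unreserved = settings["AccountLimit"]["UnreservedConcurrentExecutions"]

print(f"Account concurrency limit: {limit}")
print(f"Unreserved concurrency available: {unreserved}")

# Illustrative planning rule: if the projected peak exceeds the cap,
# request a limit increase from AWS well ahead of the launch.
expected_peak = 6000  # measured peak of parallel Lambda runs, times growth
if expected_peak > limit:
    print("Expected peak exceeds the account limit -- request an increase.")
```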
You brought in a huge customer. So sometimes, as you said — let's say I have a lot of normal, small customers, and everyone is hosted, maybe I'm a SaaS company, on the same Kubernetes cluster. Suddenly a huge customer — someone like Amazon or Nvidia, someone with tons of users — signs a contract with me, and now they're going to sit on the same shared cluster as all of my other customers. So maybe it's better to create a new Kubernetes cluster dedicated to that new customer? I want to think out loud about this with you. Because then you'll have duplicated work, right? You still have to deal with the scale — that's a whole new level of scale you're going to handle, and you don't want it to break your current setup — but now you have to manage two Kubernetes clusters, for example, that you want to keep aligned.

Normally I think I'd advise against it. Specifically, to your example: we have a Kubernetes cluster, and now you're creating a new one — that's new territory. You may have forgotten to connect one of the pieces: maybe the ingress controller isn't configured properly, maybe you forgot one of the IPs, the DNS, something around it. On the other hand, you have a cluster that's already there, already serving production customers, already functioning correctly. So you'd have to test a lot, and I think nine times out of ten it's going to be a production-only cluster — you're probably not going to duplicate staging just for scale, right? And maybe you forgot to configure something. So unless there's a very good reason, I wouldn't go there. Maybe a new namespace, maybe a different node group, maybe some other kind of segregation inside the cluster — a brand-new cluster might be tricky. Yeah, by all means.

So how can I test it? Let's go into the bad situation: there's no way any of my customers can experience downtime, because it would cost me thousands and tens of thousands of dollars, if not millions. And there's no way the new customer can be left complaining about scale. So what should I do? Maybe, as you said, test it on staging — say I have a production cluster and a staging cluster. So you're saying: test everything on staging, and if all goes well, deploy to production? Exactly. Duplicate everything — even if that means creating another cluster for the new customer. Maybe you set a threshold: above it you launch a new cluster, below it you only create a new namespace, or nothing at all — one cluster for small customers and another for the big ones. Do it in staging, duplicate the clusters, make sure everything goes according to plan, and test production before you launch the new customer. Because if the moment of launch fails, it's going to be really hard to rebuild the trust.

But do you have the guts, the courage for that? Let's say you tested everything on the staging cluster. And let's say the customer is Nvidia — because neither of us has any relationship with Nvidia, so it's easier to talk about them. Yeah.
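For the segregation-inside-the-cluster option Omer prefers, a namespace plus a ResourceQuota per customer is one concrete shape it can take. A minimal sketch with the official Python kubernetes client — the customer name and quota numbers are hypothetical:

```python
from kubernetes import client, config

# Assumes a working kubeconfig (e.g. ~/.kube/config).
config.load_kube_config()
v1 = client.CoreV1Api()

customer = "big-customer"  # illustrative tenant name

# Dedicated namespace for the new customer inside the shared cluster.
ns = client.V1Namespace(
    metadata=client.V1ObjectMeta(name=customer, labels={"tier": "large"})
)
v1.create_namespace(ns)

# A ResourceQuota caps how much of the shared cluster one tenant can consume,
# so the big customer can't starve the small ones.
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name=f"{customer}-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={"requests.cpu": "64", "requests.memory": "256Gi", "pods": "500"}
    ),
)
v1.create_namespaced_resource_quota(namespace=customer, body=quota)
```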
So let's say Nvidia says: okay guys, let's go live. And you say: listen, we tested everything on our staging environment with a load test, everything is fine. And then you roll out those changes to your production cluster for the first time ever. How are you so sure it's going to work? You haven't tested it on the production cluster yet.

Personally, I will never be sure. Maybe that's a pessimistic view, but I always expect the worst. For that reason, if it's possible, I ask the customer to send only a portion of the traffic at first. Maybe 10% — like a normal canary deployment or a rolling update. If they have the mechanism in place, test a portion. By the way, that's good for me too: I get to see the system's ability to scale gradually, rather than having to set everything up before they even send their traffic. Going back to my first example: if it really is tripling your traffic, all at once, it may break things that on another occasion could have gradually scaled themselves up. So that's another reason to do it. And most customers, in our case at least, can do that — and they will, because it's good for them as well; they don't want to break stuff either. So they'd also rather send a portion of the traffic and let us scale gradually.

Okay, so the tip here sounds like: let's not just go live with tens of millions of users. Let's scale gradually. Let's not tell our customer — Nvidia, say — "you can go live now with 10 million users." Let's start with thousands of users, or maybe only their power users, and only then bring everyone in. Yeah — CEOs tend to tell customers: no problem at all, our systems can handle the entire workload of the internet, so come to us, whole world! Bring yourself, bring your friends, no worries, we've built everything for it. We have servers on Mars. Exactly — don't worry about us, it's your problem, we'll do fine. So yes: as good as your CEO thinks the systems are, maybe calm him down and do things gradually. That's the tip. So I've got two tips for now: go over the components — which can scale and which can't — and scale gradually, even when you sign a new customer. And don't create a new cluster unless you have to, because you don't like the idea of creating a new cluster. Yes, exactly.

Now, do you want to take this deep into the DevOps world? Sure, go ahead. Okay, my thinking is this: we, as ops or DevOps engineers — whatever you want to call us — go to companies (we used to go to companies; now we work in companies) and we tell them: we have to build a CI system, we have to run tests, we have to set up a staging environment, maybe add a dev layer to see that everything is okay. We want to do things automatically and gradually. And I think you want to have these systems from your first day as a company. Some people would say: no, no, that's too much overhead, it takes too much time, it derails people from focusing on the product. And I'm thinking: you never know when that customer is going to sign. You just never know. And if you have these systems in place, you're always ready.
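The 10% gradual ramp from this exchange can be implemented at the traffic level with weighted DNS. Here's a hedged sketch using boto3 and Route 53 weighted records — the hosted zone ID, record names, and endpoints are all hypothetical placeholders:

```python
import boto3

route53 = boto3.client("route53")

ZONE_ID = "Z0000000000000000000"  # hypothetical hosted zone -- replace

def set_weights(old_weight: int, new_weight: int) -> None:
    """Split traffic between the old and new stacks by DNS weight."""
    route53.change_resource_record_sets(
        HostedZoneId=ZONE_ID,
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "api.example.com",
                        "Type": "CNAME",
                        "SetIdentifier": identifier,
                        "Weight": weight,
                        "TTL": 60,
                        "ResourceRecords": [{"Value": target}],
                    },
                }
                for identifier, weight, target in [
                    ("old-stack", old_weight, "old.example.com"),
                    ("new-stack", new_weight, "new.example.com"),
                ]
            ]
        },
    )

# Start with ~10% on the new stack, then ramp up as confidence grows.
set_weights(old_weight=90, new_weight=10)
```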
Even if you're scaling — and it doesn't have to be the huge scale we were talking about — if you have things in place, like the cluster autoscaler, your node groups know how to handle themselves when there's too much load. The same goes for Lambda functions. If scaling is already in place, you're always ready, and you don't have to work through nights of testing because someone is signing tomorrow morning. So put the systems in place; don't wait for something to happen. That's an approach. We can debate it, but that's my approach.

And that debate would be, as always, about the sweet spot: how much time and effort do you want to put into dealing with scale if it's not that urgent? If I'm still small, I'd focus on showing the investors and so on that everything works. Once we start getting leads, and maybe a new customer, I'll start investing in scaling. Initially it's not going to be my number-zero priority; I'd take security as my number-zero priority, you know what I mean?

I totally agree. But let's be specific with this example and put it in the Kubernetes world, right? You have lots of ways to scale things in Kubernetes. What I'm saying is: you've just built a Kubernetes cluster, and regardless of what's running there — I'm assuming you have a good reason, there's more than one application, you have some real use for it and it's not just a buzzword — install the cluster autoscaler. Don't ignore it. You don't have to go to KEDA or some exotic autoscaler driven by custom metrics; it doesn't have to be that. Even a very small load can suddenly need one more node, and then another, because you're in a demo with your first client. We're at the beginning of our journey, the first client comes in, you're demoing the product, and it doesn't work because the new node didn't launch. Hopefully at that stage of the company you're not yet running on Kubernetes — but I've seen it more than once and more than twice, so you probably are, and that's why I think you want to take care of at least the basics. Again, we're talking about scale, right? I don't think scale is a large-company problem. Scale is a first-day-of-the-company issue, something you need to tackle from day one. It doesn't have to be a full-blown system that can handle anything in the world, but it needs to be considered. That's what I'm saying. Okay, makes sense. I agree.

All right, what else can we say about scaling? Any other ideas? We can touch the other subject, which I thought could be a topic of its own: scaling teams. One that works — as opposed to... sorry, Microsoft, that wasn't serious. Okay. So first of all, a recommendation: last week I listened to a podcast called The Changelog — quite a big one — and they were hosting a former VP of Engineering at GitHub, and she was speaking about scaling teams. She had to deal with a small team that was growing and growing and growing, especially after they were — well, not merged — acquired by Microsoft.
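The episode's example here is the cluster autoscaler, which is installed as cluster infrastructure rather than from application code. As a neighboring "do the basics from day one" example, here's a minimal HorizontalPodAutoscaler created with the Python kubernetes client — the deployment name and thresholds are illustrative:

```python
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

# Minimal HPA: keep average CPU around 70%, scaling between 2 and 20 replicas.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa", namespace="default"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"  # illustrative
        ),
        min_replicas=2,
        max_replicas=20,
        target_cpu_utilization_percentage=70,
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```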
And then, all of a sudden, they had a whole new population of people in the team, it grew to hundreds of engineers, and she had to scale things accordingly. That's a different type of scale. Obviously I'm not going into HR now, but you need to think about this kind of scale when you're talking about daily DevOps tasks and your systems — your CI runners, for example. The simplest question: is your CI ready to handle 10x the load it takes today? Imagine you run, I don't know, five jobs a day, and now you need to run fifty. Is there going to be a queue? Is there a queue at all? Are things going to magically disappear because there's too much load on a system that was never ready for it? These are things to consider when you scale a team. And scaling a team isn't only "oh, we were acquired by Microsoft, there's going to be a 10x" — no. If you're a company with 50 engineers and you have open positions for 50 or 100 more — and that's pretty common, at least in our industry — you need to plan ahead. You need that plan right now; actually, you already need to have it built. And if it's not there, I think you should be worried.

So scaling teams has to come with the systems that support them, and that comes with a myriad of other stuff — documentation, for one. A new engineer comes in today: do they know what to do, or do they need help from one, two, three, four other engineers? That's going to slow them all down. Imagine a new hire slowing down three or four engineers because there's no documentation, there are no testing systems, there's no playground for new engineers. They're going to have to bother four engineers — and because you're growing so fast, that might be twice a week. Imagine four engineers stopping their work for a full day twice a week just to support new engineers; it will probably be even more than that. So: documentation. She spoke about documentation, about the CI system, about empowering people — I'm not going to lay out the entire talk. But empowering people means, for example, having champions in teams. You may not have an entire ops division ready to scale from 50 to 100 — you probably don't. So you want people who are empowered to make decisions: someone in the team who's in charge of the ops aspect, who can decide they want a new runner, who can scale the systems, maybe set things up, maybe be the one who configures the infrastructure around the product their team owns. Those are things you don't think about when you're managing a small team, but when you grow, I think you need to start thinking about them long before those people join. Did you ever experience that kind of growth?

Me? No — but okay, I like it, and I'm not sure I'll get to listen to that talk, so I just want to understand: how can I measure this in my company? Let's say I want to prepare for the load of scaling the team. When I talk to my VP of engineering and the CEO, I'll say: listen, we need to prepare for scaling out the team, because we're going to recruit a lot of people and I'm scared of it. How can we measure that we are prepared? You can tell me "in time you'll know", because if you succeed or fail, you'll know — but we want to plan ahead so that we don't fail. So how can we measure our preparation for scaling out the team?
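The CI question lends itself to a quick back-of-the-envelope check. A tiny sketch — every number here is an illustrative assumption, except the 5-to-50 jobs jump from the episode:

```python
import math

# Back-of-the-envelope CI capacity check (numbers are illustrative).
jobs_per_day = 50        # e.g. 5 jobs/day today, times 10x growth
avg_job_minutes = 12     # average pipeline duration
working_hours = 10       # window in which most jobs actually run

# Total compute-minutes of CI demand per day.
demand_minutes = jobs_per_day * avg_job_minutes

# Minutes of capacity one runner provides inside the busy window.
runner_capacity = working_hours * 60

# Runners needed to avoid a queue, with ~50% headroom for bursts.
runners_needed = math.ceil(demand_minutes / runner_capacity * 1.5)
print(f"Estimated concurrent runners needed: {runners_needed}")
```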
So a previous boss of mine, a CEO, told me something I remember: when you speak to people in companies and you want to explain something, come with numbers. Explain how much something costs, and they listen right away. If you tell them "we're not ready to scale, this takes too much time, and someone has to drop what they're doing, switch context, and help someone else" — they don't care. But if you tell them: we have four engineers, twice a week, giving up two hours of their time — that's something like 16 hours a week; those engineers are paid, whatever, $200 an hour; you do the math — this is what it costs you to bring in one new person, and that's only what we measured, it could be twice that. Imagine that going on for a month. Show them that every month they lose, I don't know, $5,000, because they don't have documentation in place that someone could probably prepare in an hour. What do you think they'll say?

Okay, so that's how you get their attention. So let's divide it: you've given me a good pitch, so the pitch we've got covered — that's great. But how do you measure? Say they tell you, "okay, you have my OK, go prepare for scale," and then they come back after a week and ask: how can we measure your success in preparing for the scale? How do you measure that?

I have an idea of how. I don't know if I'd actually sit down and do it, because someone has to do it. But take the example we spoke about: a new engineer comes in — how long does it take them to onboard, to understand what's going on? I think large companies allocate, I don't know, something like three months for someone to actually start their work. Anyway, let's take their first week. They've just onboarded. In my company we have an onboarding task, and it takes a few days to actually understand the ecosystem, the environment, to configure your machine, everything around that. Take that first week and measure. You won't be accurate to the minute, but you can more or less understand how much time it took the engineer to understand what's going on, based on the documentation and the systems already in place. How much time did they have to wait to get certain permissions from the ops team? Maybe their team leader couldn't attend to them or help when it was needed. Maybe they couldn't even find the documentation; maybe something in the task wasn't clear enough and they had to ask their fellow engineers. That's something you can measure, based on a 40-minute interview with the new engineer or by following them throughout the week — that's up to you. But it's something I think you can measure.

But that's measured, as we said in previous episodes, after the fact, right? Only after the new engineer has arrived. So I'm not sure there is a way — I'm just trying to challenge you, because I'm not even sure it's possible. When people talk and say "we need to do this, we need to do that," I'm like: okay, how do you measure your success before you fail? Because what you're describing is testing it after people arrive — and it might fail, and it might not. So how can we do it before we test it? I'd like to think there is a way.
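To put numbers on the pitch above, here's the arithmetic as a tiny sketch, using the episode's rough inputs. Note that those inputs actually imply more than the $5,000 a month quoted off the cuff, which only strengthens the argument:

```python
# Rough onboarding-overhead cost, using the episode's loose figures.
engineers_interrupted = 4
interruptions_per_week = 2
hours_per_interruption = 2
hourly_rate_usd = 200

hours_per_week = (
    engineers_interrupted * interruptions_per_week * hours_per_interruption
)
weekly_cost = hours_per_week * hourly_rate_usd
monthly_cost = weekly_cost * 4

print(f"{hours_per_week} engineer-hours/week lost to context switching")
print(f"~${weekly_cost:,}/week, ~${monthly_cost:,}/month")  # 16 h -> $3,200 -> $12,800
```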
Honestly, I don't think there is a way. But what I've described takes two measurements: one with a new engineer, to understand your current status, and then one with the next engineer who comes in, to see the status after that. If you take, for example, that VP of Engineering from GitHub who spoke about empowering teams, it comes down to simple things, like a newly onboarded engineer having to wait six hours because they don't have access to GitHub. They need the source code to work, right? And there's no access, because someone has to grant it. If we take empowerment as the example: maybe people don't have enough permissions — the team leader can't grant access to GitHub, so the new hire has to wait for an ops engineer, and we don't have enough ops engineers. The fix is the simplest thing in the world: empower people, give them the permissions they need to help new engineers. But then again, you've lost six hours of a new engineer who just tried to onboard and couldn't do anything — and add to that the frustration, the context switching of the other people who had to help, et cetera, et cetera.

Yeah. It feels like we keep boiling this down to very simple things and very simple concepts. But I think everything is, at the end of the day. If you take huge things — empowerment, champions in teams, systems, et cetera — and boil them down to very simple things, then all you have to do is listen and act. It's not that complicated. Okay, cool. Still, I guess there's no way to measure beforehand, which is sad, and I don't like it. Because here's the difference between humans and machines: with machines, we can plan ahead, we can test in staging and so on; with people, it just doesn't work. Correct? Correct. They have to arrive if you want to know whether you succeeded. Yeah — people are way harder to measure than machines, no doubt about that.

Okay, so before we move on to the corner of the week, let's sum it up. With systems: go over all the components, all the layers in the chain, to make sure you can scale. Other than that, we had a second point — remember that one? About systems or about people? Systems. Okay: plan what you can, and understand what's coming in. If you know what's coming in — for example, tripled traffic — try to load test yourself, if that's possible and makes sense. So those were the two main points. And with people: write documentation and empower them, and make sure you can scale from 50 engineers to 100 — hopefully you can even recruit 50 engineers, I don't know how you do that, but for you maybe it'll be double, Microsoft. Okay, so that sums up what we just had. Do you want to add anything? No. Maybe a tip? No. Okay, that's basically it.

So, on to the corner of the week, where we share our experience. Let's start with you. As most weeks, I have a tool, but I read about it long ago, so I'll try to make the best of what I remember. The tool is named Score; I think the URL is score.dev. Let me try to think of how to describe it.
Basically, what they're trying to do is let you write configuration once and translate it between different languages. When I say languages, take for example a Helm chart that's translated into YAML you can install in Kubernetes, or into a Docker Compose file. If I'm not mistaken, they have all kinds of small plugins that plug into the system, and then you run it and it transpiles from one configuration system to another. That's something cool to consider — the tool is Score, at score.dev. That's it. If I remember correctly, my personal feeling was that it's a really cool idea, an amazing concept, but not yet mature enough for me to use. Still, it was really interesting to see. There's a name for that concept, of having one language translated into another: the Rosetta Stone. Yes — so it sounds like they built a Rosetta Stone for configuration languages. I live in a country that actually stole the Rosetta Stone, so I'm very much aware of the concept.

Okay, so my experience is not exactly related to ops or dev — well, maybe ops. I'm one of the Google Workspace administrators in my company. Do you use Google Workspace? Yes, I do. Are you also an admin in your company? Yep — because of configuring the applications and such, you end up being an admin. All right. So a few weeks ago, maybe months, one of the users was deleted: an employee who left the company was deleted, a very long time ago. And apparently that employee had scheduled a recurring event, way ahead. So now it's starting to sound really nice, right? The event is scheduled for the whole company. Bear with me: we've got an employee who left the company a few years ago, and he scheduled events that recur every year or every month, for the whole company. Can I guess? The user is now gone, so now you can't ever delete the events. Exactly — apparently you can't.

But there is a tool called GAM. Wait, you're using GAM? Seriously? Yeah. To me it was like: what is this? It stands for Google Apps Manager, or something like that. And apparently with GAM you can fix it. The first step is to go into the calendar and find the event ID — I'll write simple instructions in the description of this episode — but you can open Google Calendar in troubleshooting mode. Did you know that? Nope. If you add certain query parameters to the Google Calendar URL, you get into troubleshooting mode, and then you can see the event ID. So you get the event ID — I'm not going to explain all of GAM here — and then you use GAM, this cool CLI tool, and suddenly the event is gone for the whole company. I think it's super amazing. Amazing, yeah. I can tell you about my use case. Oh, go ahead. So — we started as a really small company, like any company, but I was there early.
And we wanted a way to automate the process of adding new engineers to the team — not just engineers, anyone who joins the company. At first it was the CTO who had to create the new users, and then it was passed on to me. And when you need to provision 10 or 15 a week, it becomes a hassle. So what you just said sounds like the DevOps story of my life. Actually, it connects very nicely to the scale theme of this episode. Anyway, I was looking for a way to automate this, and automating it isn't only creating the user: it's configuring them, setting a password, multi-factor authentication, sending a welcome email telling them to configure one, two, three, et cetera, et cetera. It was a huge hassle. So I built a container around this, using something — and that something was GAM. Then I added some additional processes to sync people into AWS and other systems if they needed them. But GAM was the beginning of that project. Wow, cool. I'm using it ad hoc, just for those tasks you can't really handle from the graphical user interface. Which is weird. Okay, so that was my experience. I'm glad you're welcoming employees with GAM — they don't know what GAM is, they probably think it's you. Yeah.

Okay, so that's it for this week. Anything else you want to add? No, that's it. See you next week, when you'll be in Spain, buddy. Oh yeah — in case you can see us, this is not the regular setting you usually see: we have a cat and a box over here, and that's not my regular microphone. But no worries, next week is going to be the same. We don't stop. We don't stop. All right. Thank you. Bye bye.

Intro
What's Scale?
How and Why to plan ahead
Scaling Teams
Tools of the week