DevOps Topeaks

#36 - Evolving with AI

November 06, 2023 · Omer & Meir · Season 1, Episode 36

This week we talked about AI — how we use it and how we're evolving with the different tools out there, from the latest GPT options to running local models.

Links:

  • https://github.com/fathyb/carbonyl
  • https://ollama.ai/
  • https://pinokio.computer/

Meir's blog: https://meirg.co.il
Omer's blog: https://omerxx.com
Telegram channel: https://t.me/espressops

Hi, are you ready? And action. Hello everyone, welcome. Hello. To DevOps Topeaks. It's been a while. It's been a very tough month and it still is. I'm smiling because I'm trying to get back into something, because that's the only way. That's the only way. Yeah. So in case you don't know, I'm from Israel, and Omer is also from Israel, and he already talked about it in the previous episode, which he did alone. So we're not going to focus on that, but just keep in mind we're trying to keep our spirits up here. So, getting back to sanity. Welcome to DevOps Topeaks, and today we're going to talk about evolving into the AI era. Okay, which is a very... Sounds interesting. Yeah. So Omer, it's going to be a very harsh title to respond to with what comes to mind, but let's do it. So Omer, what comes up to your mind when I say evolving into the AI era? ChatGPT. I'm actually joking, that's not the first thing. That's the first thing. That's everyone. I mean, if you say AI, that's, I think, the number one thing that comes to mind today. You know what you just did? It's like "Google it", or "Xerox it", which stands for copying. You took the brand of the most famous thing in AI and branded the whole field with OpenAI. Right. But I mean, that's just an LLM, right? A large language model. Specifically ChatGPT — you have so many others. So what comes to mind? Okay, I'll start with what really comes to mind. What comes to mind is the last thing that I've used, and I mentioned it once before, like, I don't know, three weeks ago: Mistral, right? The model, Mistral. So let's go back a little bit. Facebook released Llama 2. It's their own model that they've been training for, I think, around a year. It's trained with something like, I don't know, seven billion parameters, whatever. And you're able to take that and run it. It's open source.
So you can run the model even on your machine, and I'll speak to that in a second. You can run it locally. So that's the first thing that comes to mind. I've been running it for the past week, and not only Mistral or Llama 2 — I've been kind of playing with different models: one for math, one for code, one for just speaking to the AI. And it's really cool. I run it in my terminal and just ask it random questions throughout the day. It's been really nice. Okay, yours? So that's the first thing that comes up to your mind, Mistral and Llama. It's right in front of me. Okay. To me — I'm a simpler guy than you — I'm thinking about Genisys and Terminator and how it's all going to end soon, like we always talk about, kind of, in everything. So when you say evolving into the AI era, I imagine robots coming out of the ground and taking over the world. That's what comes up to my mind. Come on, in the spirit of ignoring reality — because reality is not too good to us right now — let's ignore what's the next, you know, phase of that. Okay. Okay, let's stay in. So when it comes to DevOps, and I say evolving into the AI era, I just think to myself, wow, I can do things way faster than I did before, and, you know, produce artifacts and produce work and minimize my conflicts or struggles when it comes to solving problems, because I have this nice friend of the AI era, you know, this ChatGPT — that's the model I use — that solves tons of things for me and makes my life easier. So that's what I see in evolving into this. This year was amazing when it comes to anything involving AI, you know, because the AI boom went off. And now you've got Bard. You also got — by the way, I think it was only released yesterday — Elon Musk also released something, you saw, a bot named... yeah. Yeah.
You saw? So there's another one. And you know what he claims makes it different from the other ChatGPT-style chatbots? Grok is different. You know why? I think it has something to do with taking data from social media, right? Something like that. I don't know about that. What I do know is that Elon Musk said this bot might answer questions that other bots won't. So it got me intrigued, because maybe, unlike ChatGPT, if I want to make a Molotov cocktail or a nuclear bomb and I ask, it'll help me. That's amazing. So I'm not sure what the limits are, you know, because usually they protect the users and protect data and protect everything. But still, there's another player out there. So we've got Grok from X — you know, X, Twitter, whatever — we've got Bard from Google, ChatGPT from OpenAI, which Microsoft invested in, and anyone else? Like, do you know any other famous one? Yeah, Llama, Llama from Meta. Okay, Llama from Meta. So I've got, like, tons of questions about Llama, so let's get started with it, right? You said running Llama. Okay. When you talk about an LLM, this is a very large and abstract concept to me, right? Because when you say Llama, what I think about is, like, gigabytes of information in a dictionary, and somehow you need to query it. This is what I think when I hear LLM, okay — large language models. And I don't really grasp, you know, it's hard for me to grasp how it really works. You say you can download it and run it on your computer. So can you, like, explain it to me — like I ask ChatGPT sometimes, explain it to me like I'm five years old — you know, make it very basic: what do you mean by running it on my machine? Why would I do that? How do I do that? Okay, so go. Let me know. Okay. Okay, let's start with the very basic answer, because obviously I'm not a data scientist.
So I can speak to what my understanding of an LLM is. But let's start with the why — and the first why, the really easy why, is that it's free. If I go to ChatGPT today, I can use 3.5 to a certain extent. If I want 4, I need to pay something like 20-something bucks a month, maybe more, maybe less, and only then do I get the results. And I think — do I have to use their own UI, or are there other interfaces? I don't know. I'm sure there's an API, but you need to pay; there's a separate payment for the API. The $20 subscription is not related to the API. There you go, just another reason. So if I run my own model locally, and it's free to use, I can do a few things. First of all, it's probably quicker. I don't think anyone will notice, but it is quicker. I don't have to be connected to the internet, because I have a trained model. Now, I'm opening brackets here: what does it mean that I have a trained model? It means someone took a neural network and trained it on a set of data. In the case of Llama — or, you know what, let's go a little bit further back. I'm using a tool called Ollama, I'll link it below, and it's really cool. You have the CLI; it kind of works like Docker. By the way, you can write — not Dockerfiles — they're called Modelfiles, stuff like that. And you can run different prompts, like pre-made prompts. For example, there's one that simulates a DevOps engineer. So the Modelfile says: you're a DevOps engineer, and you're happy to help anyone who asks you a question about Terraform, AWS, the cloud, etc. Then you start the model, and it runs according to that prompt. And each question it's asked, it responds in the context that this Modelfile defined. So it's kind of like a Dockerfile: you run it in a certain context, and then it's up and ready. You get this nice console. So it's faster, it's local, I can speak to it.
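The Modelfile analogy Omer draws can be sketched concretely. This is a rough illustration, not taken verbatim from the episode: the base model, the persona wording, and the `devops-helper` name are assumptions.

```shell
# Sketch: a Modelfile (Ollama's Dockerfile analogue) defining a DevOps persona.
# Base model and system prompt are illustrative.
cat > Modelfile <<'EOF'
FROM llama2
PARAMETER temperature 0.7
SYSTEM """You are a DevOps engineer. You are happy to help anyone who asks
questions about Terraform, AWS, Kubernetes, and the cloud in general."""
EOF

# With Ollama installed and serving, you would then build and run it:
#   ollama create devops-helper -f Modelfile
#   ollama run devops-helper "How do I taint a resource in Terraform?"
```

`FROM`, `PARAMETER`, and `SYSTEM` are real Modelfile directives; the commented `create`/`run` pair is the same build-then-run rhythm as Docker.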
And I can provide the context — something I'm not sure I can do with ChatGPT. The next part is that it has its own API, and the API is free too. The API runs locally, so I can do whatever I want with it, without paying, with no tokens, nothing like that. And this allows me to do a lot of things. First of all, I can build tooling around it. One example: I'm using Neovim, as you probably know. Someone already wrote a plugin, and I'm intending to extend it. You can use that plugin to reach out to your local model, speak to it, generate code based on it, and fix your stuff in Neovim. It doesn't even have to be code; it can be just text, and it can help you proofread stuff, write it better, change the tone — anything you can do with ChatGPT you can do with those models. And you can change the model: if you go to Ollama and go to models, there's a list of, like, 20, 30 different trained models you can use. You can see the number of parameters, and then you have a section below of extensions made by the community. So not only Neovim — do you use Raycast? You know Raycast? No. You knew Alfred from macOS? Anyway, you have Spotlight in macOS, then the next generation was Alfred, and today you have Raycast. Raycast is, like, a one-stop shop for everything. It's free, first of all — so here's a tip. It can manage windows, it can search Google, it can search files, it can search YouTube, it can do lots of stuff; you can automate lots of things on your machine. Anyway, there's a community extension for Raycast, so I can literally open my launcher on my Mac and have it search my LLM that's running locally and provide results for whatever I'm asking, right? So it's right at the tips of my fingers: I start the launcher, ask a question, and it goes to, you know, my personal "ChatGPT", air quotes.
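The local API Omer mentions listens on port 11434 by default. Here is a hedged sketch of a request against it — the prompt is made up, and the call itself is left commented out because it needs a running server:

```shell
# Build a request for Ollama's local /api/generate endpoint (default port 11434).
payload='{"model": "mistral", "prompt": "Explain a Kubernetes liveness probe in one line", "stream": false}'

# With `ollama serve` running, this returns a completion locally --
# no API key, no per-token billing:
#   curl -s http://localhost:11434/api/generate -d "$payload"
printf '%s\n' "$payload"
```

This is the hook that plugins like the Neovim and Raycast extensions build on: anything that can make an HTTP request can talk to the local model.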
So I can do lots of stuff with it, and since it's open source, I can also offer it as a commercial product. Now, this is going a little bit far, but I could literally run it as a backend and, you know, build a UI on top of it and have people pay to use it. That's another cool thing. And I can do lots of other extensions; I can train my own model, I can take the existing models and tune them, et cetera, et cetera. I think that's enough. So let's at least go through the pros and cons of using ChatGPT, which is an online service, you know, in the cloud, and the pros and cons of using Llama and running it on your PC. And then we'll also move to another section where I'll talk about running Llama, maybe, in the cloud. Okay? But first, let's talk about the pros and cons of running it locally versus ChatGPT. I'll start with ChatGPT — okay, I'm the ChatGPT representative. With ChatGPT you can use the graphical user interface for $20 a month; you have access to the latest and greatest, you know, GPT-4 or 3.5 Turbo, with almost unlimited access. You can ask as many questions as you want; even if the server is busy, you can always ask a question and get an answer fast. Besides that, if you want to use ChatGPT programmatically, you can deposit some budget, maybe, like, $10, $20, $50, and then when you make the API calls, they've got this mechanism — I bet any cloud service has this — like tokens per minute or something like that. Well, you pay the bill. I'm actually using it right now. Yeah. So you've got those rate limits and everything. Either way, it's pay-per-usage, you know: you deposit the money, you set a budget, and you pay per usage. So it's very easy to use, it's in the cloud, it's quite fast, and it has the option for streaming. So if you're developing, maybe, a chat or something, you can proxy — and I did it — you can actually proxy to OpenAI.
When you make calls to the OpenAI API, you can stream back the text, so it comes back like the ChatGPT graphical user interface. What I'm saying is: usually, when you make an API call, you get the result when it finishes. When you stream it, you get it word by word, so you get the effect that someone is actually typing on the other end, which is cool, because that's what we're used to from OpenAI's ChatGPT. So I really enjoy using it, it's very easy, and I don't think the pricing is too high. I used it extensively this week — I made massive amounts of calls, and it cost me something like four dollars. I'm talking about the GPT-4 API. Can you give a few examples of why you use it? Just your general use case. My general use case was, first, to help me develop a Python application. So instead of starting to learn about Matplotlib or pandas or whatever data-science package there is in Python, I just told ChatGPT I need this and that, and it told me what to do. You know, back and forth, copying code snippets between me and it, telling it, do this, do that, and eventually you get an application. Besides that, I also used it as a backend server, where I gave it an instruction: please reply in this JSON format. And in that format I inserted variables — I marked them with a dollar sign; I figured it would know what I meant. And then I said, let's say one of the values is status, and I told it, status refers to the result of blah blah. And I told it: please reply only in JSON format, with nothing else, and if something fails, please add a fail message to the JSON object, whatever. So then, when I made the call to the API with the content I wanted to inspect or check, I got the JSON response — and now I'm using OpenAI's abilities, you know, ChatGPT's abilities, inside my own API, which is incredible, because it can also generate scripts.
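Meir's JSON-only trick can be sketched like this. The prompt wording and the sample reply are invented for illustration; the endpoint and headers are OpenAI's standard chat-completions API, and the real call is commented out because it needs an API key:

```shell
# A system prompt that forces machine-readable replies (illustrative wording).
SYSTEM_PROMPT='Reply ONLY with JSON, in exactly this shape, nothing else:
{"status": "...", "message": "..."}
status is "ok" or "fail"; on failure, put the reason in message.'

# With OPENAI_API_KEY set, the request would look roughly like:
#   curl -s https://api.openai.com/v1/chat/completions \
#     -H "Authorization: Bearer $OPENAI_API_KEY" \
#     -H "Content-Type: application/json" \
#     -d '{"model": "gpt-4", "messages": [...]}'

# Because the reply is plain JSON, backend code can consume it directly.
# (sample_reply stands in for what the model would return)
sample_reply='{"status": "ok", "message": "snippet looks valid"}'
printf '%s' "$sample_reply" | python3 -c 'import json,sys; print(json.load(sys.stdin)["status"])'
```

The point of the trick: the model's free-form text becomes a parseable API response, so a script can branch on `status` like any other service call.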
I generated data, but I can also generate code from my API, which is crazy. So this is, like, crazy stuff — you know, I think OpenAI would love to hear this call. Yeah, but now I want to hear why I should use Llama. Okay, so before going to Llama, I just want to build on top of what you said. The way I see it — and I think others should too — and I'm speaking about ChatGPT now, not the art stuff, we can touch on that a little later — we work in ops, right? Dev and ops. AI, in that sense, is a shortcut. It's just a shortcut. It's doing things you probably already know how to do, maybe sometimes a little bit better. That's usually not the case when I try going to more complicated stuff: writing complex applications, or things where it's not sure about the context — there I got mixed results. Sometimes the result looked perfect, but when you got into understanding why it's buggy and not actually working, you'd spend more time than it would take to write the thing on your own. But when you're trying to do really simple stuff — like, create a, I don't know, a bash loop that iterates over the lines of a file and extracts the lines that say "python", something like that — that's a pretty simple task, and you can use literally any model to do it, even one that's not dedicated to or trained on code. So that's simple. So you need to view it as something like that, and use it as a shortcut to achieve other things. You can probably tell me things I don't know about GPT-4, because I wasn't using that specifically; I was using the simpler models. So that's how I view it. And to build on top of that: someone reached out to me, someone who listens to this podcast — so maybe he's listening now — and he told me, I want to get into the world of DevOps, and I'm not sure how. And maybe because he's not working — he's just a student — it's hard for him to afford GPT-4.
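The "simple task" Omer uses as his example — a bash loop over a file that pulls out lines mentioning python — is about this much code (the sample file and its contents are invented):

```shell
# Create a small sample file to loop over.
printf 'import os\necho hello\npython3 -m venv env\n' > sample.txt

# Iterate line by line and print only lines containing "python".
while IFS= read -r line; do
  case "$line" in
    *python*) echo "$line" ;;
  esac
done < sample.txt
# → python3 -m venv env
```

Which is exactly the kind of snippet even a small local model, not trained specifically on code, can usually produce.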
So that's just another reason to use an open-source tool that you can literally install today on your machine and give a go. And if you want to start, I don't know, getting into DevOps, maybe ask it: how do I get into DevOps? What are the best tools? What's the most widely used cloud platform? How do I get into it? What's the fastest way to learn? Help me write a Dockerfile; help me install something on top of Kubernetes. It can always help you with stuff like that. Just, for example — well, this episode, which we're releasing now, it probably won't be trained on. Well, maybe we need to release our own model that's trained only on this podcast. Anyway, it can help you shorten the path, right? You decide on something you want to do, and you have a plan. Okay, two things. First, regarding ChatGPT — you mentioned GPT-4 and whatever — I also want to mention that, and this is crazy, because last month, or I think two months ago, GPT-4 also went online. Previously, Bing had the ability to be, like, ChatGPT, whatever — I didn't like that. I'm used to using, you know, ChatGPT, not Bing; I don't want to use that search engine. Then GPT-4 — it now has the option for browsing. So what I can do is actually give it a URL and tell it: here are the docs, this is the object that's supposed to be returned, please create a function that aligns with the docs. So first, I just want to say: it can go online, which is, like, a super major improvement. I've been using it for a year, and the past two months have been crazy; I'm using it more than ever, and it's way easier to use. And the second thing, for complex stuff — I'll relate it to the browsing thing as well — usually it can't solve complex stuff because it doesn't have enough data. You know, imagine it just had all the data and all the information in the world; it would know everything.
But when I do the chat — okay, so maybe, I know, we should have a specific episode and maybe do a demonstration of how we actually use ChatGPT for something like a big project — because I always send a copy of my file. And of course, I make sure I'm not exposing secrets or whatever, but I copy the content of my files, or snippets. And sometimes, if I'm dealing with data, I take snippets from that data and tell it: okay, here's a snippet of my data, please write a function according to that. So when it comes to complex problems, you just need to think in a more complex way. You know, you need to know how to feed it so it can solve complex problems. It's currently difficult because people aren't really sure how to do it, but when you do it a lot — I'm telling you, I'm using it for everything. Seriously, everything. Even to write a Kubernetes YAML file; I just don't want to write it, you know. That's one of the first examples. Yeah. Generating it with labels and whatnot. Or, the other day, I forgot how to use — I hadn't used them in a while — tolerations and anti-affinity in Kubernetes. No, I don't really care what they are; I'm just saying, I asked it: please, what is this? What is that? Please generate this, generate that. And I got everything explained easily and nicely. You know, I didn't want to cut you off before, but you mentioned how you don't want to use Bing and you did want to use ChatGPT. I was so surprised you talked about Bing, because I saw something today — it was a comedian speaking about how he went to China; he taught English in China. And you can't access Google, because in that region Google is blocked — I don't know, the government is blocking it — and Bing was available. So he said: Bing is so crap that China allows it across the board, and you can access it from everywhere. And he's laughing about it; he says, you can ask Bing, what is democracy?
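For readers who, like Meir, haven't touched these in a while: a minimal sketch of the kind of YAML he describes asking ChatGPT to generate — a pod with a toleration plus pod anti-affinity. All names, labels, and values here are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    app: demo
spec:
  # Tolerate a taint such as: kubectl taint nodes <node> dedicated=gpu:NoSchedule
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "gpu"
      effect: "NoSchedule"
  # Keep two "app: demo" pods off the same node.
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: demo
          topologyKey: kubernetes.io/hostname
  containers:
    - name: app
      image: nginx
```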
And you're in China, asking Bing: what is democracy? And it says: democracy... that's kind of a long word. Maybe ask me something else. Maybe let's try something else. Maybe Google it. So — I'm a bad comedian, but you get the point. Anyway, going back to the point: you asked me again why I use Llama. I think I mentioned that I'm doing pretty much all the same things day to day. I'm using it for everything. I'm using it to proofread my not-so-good English. I'm using it to literally fix text: I'm writing blog posts and writing scripts for videos I'm making, and I'm using it not only to proofread them but also to generate them. For example, I'm trying to write a passage that explains something, that makes a story out of something. I let one of the models help me with that. I tell it: here's what I want to convey, here's the message, please write a story around that. And if I'm not sure, please provide ten examples or ten starting points. And if I want it to be a tweet, I can say: use strong words and shorten it to whatever, 220 characters, whatever is allowed on Twitter. And by the way, going back to Llama — sorry, to Ollama — if you go to the examples, and we talked about the Modelfiles that are kind of like Dockerfiles, you can find lots of examples for everything I just mentioned: if you want to write powerful tweets, if you want your own DevOps engineer deployed locally, or if you want something else. So that's basically it. Now, I want to go to something you started with, because we're talking about ops here. Running these models — and you mentioned it as well — is resource-consuming, right? It's pretty intensive. Models take, like, four, five, six gigabytes, and you don't have all the resources in the world. If you're not running a powerful computer, it might be wise to run them remotely. So what I did — I did a few things.
First of all, obviously, you can go to an easy place like AWS and deploy it there. You can use a service on top of that, like Fly.io or Heroku or whatever. And another cool thing I've been trying to do — I didn't finish it yet; hopefully I'll finish it later — is deploying it on a Raspberry Pi. So if you have something sitting on your desk right now, like a Raspberry Pi not doing anything, that's a great place to run your model, with a few small models ready for you to ask questions. But isn't it very weak, a Raspberry Pi? You know, I have a Raspberry Pi — like, the 5, with eight gigabytes. Oh, which Raspberry Pi? Okay, I've got a 3 and a 4. Yes. So the 5 should be around three times faster, but a Raspberry Pi 4 with four gigabytes should be more than enough to support your models. You can try it out — and I'm certainly going to try it out and let you know how it went. Okay, cool. So, getting back — okay, getting back; every time we "get back", we get in, we get back... Okay. So describe to me, in very simple terms — say I want to start with Llama. How should I start my journey with Llama, doing my ChatGPT things, chatbot things, offline, with Llama? What should I do? So here's my take on it, which is pretty much what I said before. I'll leave a link to Ollama. It's open source, it's on GitHub, plus they have a really nice website — even for Mac, you don't go for the usual brew install; you download a DMG, you click on it, and it installs both the server — running kind of like Docker Desktop — and it shows up in your status bar on top. So you install Ollama and it shows up there, kind of like a hypervisor, showing you a status and which models are currently running. They really mimicked the experience of working with Docker locally.
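The getting-started flow Omer describes, condensed into the terminal session it ends in — assuming Ollama is already installed from ollama.ai:

```shell
#   ollama serve          # start the local server, like a Docker daemon
#   ollama run mistral    # pull Mistral on first use, then open a prompt
# At the prompt, ask anything in natural language; /bye exits.
```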
So you install that, and it also installs the CLI, and it explains how to do stuff with a small box on your screen. It tells you: first, start with serving — `ollama serve` runs the server — and then `ollama run mistral`, for example, and it just starts the model. You get a prompt and you start going; you ask it whatever you want in natural language. It's as simple as that. Really? And I don't need, like, the chat systems — are the chats persistent across runs, you know, for whatever we do there? So, what I'm going to say now I'm not 100% sure about. Yes, they obviously maintain the context as you run the console. I think if you kill the server, then no — but it's an interesting question whether it keeps those chats; I'm going to check. You know, like ChatGPT saves your chats on the server, which is easier, because you can access them on your mobile. So I'm not sure — I'm not sure about Ollama, but I'll check. Yeah. Okay, by the way, another use of ChatGPT I have — I think maybe we talked about it, I'm not sure — also Google has it, Google Lens, I'm not sure what they call it, where you take a picture of something and then ask it, like, describe to me what's in the picture, or you can ask, do you see a man, or stuff like that. So it also has this image-recognition feature, you know, which is amazing. If you go to a restaurant in a foreign country, you can take a picture of the menu and ask it: please translate it to English. And then you'll see the menu. And it works fast enough. Super fast. It's amazing. I did it for a full week, and it was amazing, you know. You know what, by the way, here's something I wanted to say. It's quite obvious, but when you run something locally, you don't have to have an internet connection.
So if your connection breaks — and I know we live in this modern world where you have connectivity even on flights — but if you don't have it, you have this model running locally. So even if your connection breaks, or you don't want to load your current connection, or you're out on the beach and you want to ask GPT or whatever other model some questions, you don't need an internet connection, which is obviously a plus. Just to put it out there. You know, I still don't have the urge to try running it locally — maybe I'm not using it intensively enough to run it locally, you know; I'm quite satisfied with GPT and what it offers for the price. Yeah, the real reason is me being cheap, that's all. The rest is just trying to, you know, rationalize the decision. Also, for writing, you know, blog posts and everything, emails, I just use Grammarly, which is also a paid service — it costs, I think, almost the same, maybe 10 bucks a month, or 20, I don't remember. But if you write a lot, I prefer paying for something online, you know, maintained online by people who know what they're doing, instead of trying to do it myself. I'm pretty sure they use the same kind of LLM, maybe trained a little further for their needs, but it does the same thing — which is why I just use my local one for everything. And it's not writing my blog posts, but, yeah, I'm asking it to build a structure, because I like my blog posts to be based on, you know, a plot — to make sense in the context of a story: you have a start and an end and something in the middle. So it gives me a construction, like a skeleton, for something I'm building. And I do that with code, and I do that with blog posts and videos and everything. And it's pretty good. And if it's not good, I just make it reiterate. Okay, not bad.
So, all right, do you have maybe some other questions, or any other topics you want to talk about in the evolving era of AI — evolving into the era, as we call it? I mean, it's worth mentioning that all we talked about was the coding aspect, because that's our world and that's our field, but AI is evolving into so many other places and can do so much more. If you're a little bit artistic, or, you know, just playing with art — it can be images or videos or even music, it can do so much. There's a website I'm checking right now — I think it's called Pinokio, I'll share a link. And Pinokio is like this huge dashboard for everything you want to do with AI. It's just a dashboard with options: you can go to video creation, AI narration; you can take photos and have an AI system animate them — it literally takes a still photo of you and has the AI animate it. To be honest, it's not as good as you'd think: some of them can make your face move, some of them will just take the camera and kind of, you know, pan across the room, so it looks like someone is taking a video of you doing something. It's really cool; there are so many options. The AI narration, by the way — people have been building YouTube channels: they take written posts, or have the chat write the post for them — let's say, write a motivational speech, right?
So you take the motivational speech, and then — okay, I need someone to read it — let's take Tony Robbins. So it imitates the voice of Tony Robbins; you can, by the way, upload an MP3 of Tony Robbins speaking for 30 seconds for it to, you know, grab the voice. And then you paste the voice on top of that, and then — okay, let's create some kind of video in the background according to the words, make a video based on the words being spoken — and it pastes everything together. We have three layers. And now you tell it: okay, extract everything that was narrated to text and build titles on top of that. So you have, like, five different layers, you upload all of that to YouTube, and you can start making money tomorrow. So you can have AI do everything for you, yes. Wow, that was amazing. And it's all fake, right? The voice, the video — everything. Legal tips, in this episode. Actually, I think it's completely legal, because it's no one's IP, right? It's free; it was made by an AI model. It wouldn't be as good, you know — the voice, everything — it would be subpar. But give it time; let's talk in a year or two, and it'll be as good as the real thing. Okay, well, at the end of the day, it's based on models from the internet, so... Okay, so let's summarize. If I listened to this episode, what did I learn? I think I learned that ChatGPT is awesome and you should use it. And I also learned that if you want to do ChatGPT-style chatbot things offline, you should run Llama offline, you know, and then do what Omer does: maybe save a few bucks, but also enjoy the experience of running a large language model on your computer. Or you can host it in the cloud, but then it will cost money, so it's a bit weird to put it in the cloud. Or you can put it on a Raspberry Pi, like Omer suggested.
Other than that, we learned about Pinokio, which shows all the technologies out there for AI and whatever. Omer, do you want to add something to the summary? Ah, just an anecdote — it's not for the summary, but we talked about how to make money from AI, so here's another anecdote. I always love to hear how to make money, sure. So, someone should try this out: you try with ChatGPT, I'll try with, I don't know, Mistral — let's compare results next week. And next week both of us will come with baggage, you know, with a suitcase, and we'll see a lot of money. We won't even show up — we won't even show up, because we'll have so much money, we'll be somewhere else. This guy on Twitter wrote about what he was doing — I think it was with GPT-4. Anyway, his experiment was: let's ask ChatGPT — it was a really long prompt — let's build a company. I want to make money, I want to make it fast, and I want it to be easy enough for me to handle on my own; I don't want to hire people. Something around that. And: let's start.
And it gave him an idea, I don't even remember what it was, maybe something around flowers or shoes, anyway, something around retail. So it helped him build a LinkedIn page and some content, like blog posts, and then he told it, okay, the next step is reaching out to a few customers, here are the emails you want to send them. So he sent a few emails, and basically the idea was reaching out to investors to invest in his company, because the AI wrote an email that says, we're going to be so big, we have this idea and that idea and it's going to be great, please send us money. After a week he raised $15,000, okay? After two or three weeks he had something around $100k invested in his company, and then he started running stuff. Now, just a caveat here: part of that was him documenting it on Twitter, and it got a lot of traction, a lot of likes and retweets, and the traction from Twitter obviously benefited the business itself. That said, he did have something like $200,000 after one month of running the experiment, which is pretty cool. So all I'm suggesting is, try to do the same with one of the models we suggested and see how it turns out. That's it, end of story. I think you got stuck for a second. I mean, I got stuck, you got stuck, yeah. Are you there? Yeah. It's not that I hear everything, but from time to time it's like, are you there, right? So thank you for that anecdote. That reminds me of the word you said earlier, the popularity, say, proprietary. Yeah, also an English lesson. So let's move to the corner of the week, are you ready? Kind of. Okay, corner of the week. Okay, so now that we are in the corner, in this corner we'll talk about what we did last week, last month, next year, or maybe today. We can talk about anything, any challenge, any knowledge that you acquired at any point in your life, even if you learned how to tie your shoes at the age of four, you can say
it now. Okay, so I'll start because I think I have a very short thing. You know, during these episodes I've talked about how I'm using Conan, the package manager for C++. Like any other framework in any other language, in Node you've got package-lock.json, in Go you've got go.mod and go.sum, in Python requirements.txt, so every language has its own lockfile, and Conan also has a lockfile. But I had an issue with locking packages completely. So for example, if I created package A, and then I want package B to consume package A, I had issues with consuming that lockfile. It wasn't really immutable: some packages were suddenly downloaded and got new hashes, and it wasn't stable. So I did a hack that solved it, and I'll just describe the hack. In package A, I just included the lockfile inside the package itself, and then in package B, I first downloaded package A's recipe, extracted the lockfile, installed with it, and only then consumed package A. It sounds like a very rough workaround, but it works flawlessly. I mean, it's amazing that it actually works, you know. So the only difference was: take package A, add the lockfile, and in package B, before installing package A, just download the recipe, extract the lockfile, install with the lockfile contents, and that's it, and it works. Sounds stupid, but if you have issues like that, maybe it's an idea. Okay, so that's what I did. Moving to you, Omer. If you have anything, because I think you shared comments about it, I don't know. Yeah, I shared so many links during the episode, so let me just mention what we did mention in the episode, and I'll put all the links below. We mentioned Ollama, and all the models that come with Ollama are already listed on the website, so I'm not gonna bother listing them. I mentioned Pinokio, which I'll list as well, and I highly recommend you check
that out. And the third thing, unrelated to AI, and it's something you probably shouldn't use, but it's there: if you want to use your browser from the terminal, for whatever reason you can't reach your browser while going to a certain website, there is a project in Rust called Carbonyl, a Chromium-based browser that runs in your terminal. It can show different graphics, and if your terminal supports graphics, it's an even better experience. I'll share that below as well. It has like 15,000 stars, for whatever reason. Try it out. That's it. Okay, so I gotta respond, you know, because these are crazy times and crazy days. I gotta tell you what I heard when you said "if you want to use a browser from your terminal." My brain got me to a point where I heard, "so if you want to use a terminal from your toilet, all you gotta do..." and I was like, why would you use a terminal from your toilet, Omer? What's going on with you? Okay, so don't use the terminal in the toilet. Speaking about terminals in the toilet, I'm not going to check anything toilet-specific, but did you ever try to run a terminal from your phone? I think I did, QT-something. On my Android, I don't think it's possible on iPhone, but on Android I did Q-something, there's an app for that. I did too, I can't even remember why, but you just reminded me of that. I need to go back to that, it really helped me do a lot of things. I need to go back to running a terminal from my phone, which I can then use from my toilet, which would be great. Okay, so this is another way to practice your Linux skills: the terminal from the toilet. That should be the title of the next episode.
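Meir's Conan lockfile workaround from the corner of the week can be sketched roughly as below. This is a hypothetical reconstruction using Conan 1.x-style commands and cache paths; the package reference `pkga/1.0@user/channel`, the remote name `myremote`, and the exact flags are illustrative assumptions, so check them against your Conan version before relying on this.

```shell
# Package A: generate a lockfile and ship it inside the package itself
# (e.g. via the exports attribute in A's conanfile.py), so consumers can
# recover the exact dependency graph A was built with.
conan lock create conanfile.py --lockfile-out=conan.lock
conan create . pkga/1.0@user/channel

# Package B: before installing A, download only A's recipe, extract the
# lockfile it carries, and install against that exact lockfile so no
# dependency gets silently re-resolved with a new hash.
conan download pkga/1.0@user/channel -r myremote --recipe
cp ~/.conan/data/pkga/1.0/user/channel/export/conan.lock ./conan.lock
conan install . --lockfile=conan.lock
```

The point of the trick is that the lockfile travels with package A's recipe, so package B pins the same versions and revisions instead of re-resolving them at install time.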
Okay, so getting back to our current times before we go. Even though I think we had a bit, maybe me, of low energy because of the times we're experiencing here in Israel, I just want to wish everyone to be safe, and Am Yisrael Chai, and that's it, I won't add anything else. No, I think that's enough. We'll see you next week. Yeah. Bye for now. Bye.

Intro
The case for local LLMs
The case for a paid service
How to observe AI
Making money?
Links of the week