Counting Sand

AI Hot Sauce Taste Test Challenge

Episode Summary

Angelo Kastroulis and Petter Graff, two technology enthusiasts, explore the exciting field of artificial intelligence and its influence on everyday products. In this episode, they perform a taste test of hot sauces, attempting to discern the difference between an AI-optimized hot sauce and a traditionally made one.

Episode Notes

Key Topics

AI-optimized vs Commercially Available Hot Sauce: Angelo and Petter perform a blind taste test with three different hot sauces, one of which is AI-optimized, to see if they can determine which one is created by AI.

Background of the AI Hot Sauce Creators: A brief insight into the story of Shekeib and Shohaib, the two brothers who combined their passion for data science and business to create an AI-optimized hot sauce.

Understanding Bayesian Optimization: A comprehensive discussion on Bayesian Optimization, a technique that uses previous knowledge to influence future decisions, perfect for creating unique hot sauce recipes.

Discussion on Other Optimization Techniques: Petter invites Angelo to delve into the different types of optimization algorithms and their pros and cons.

Understanding Gradient Descent: Angelo gives a brief introduction to the concept of Gradient Descent, a popular optimization algorithm, explaining it as akin to finding a valley when on a mountain.

Episode Quotes

"Hot sauces are part of my favorite start of the day, so it'd be interesting to see what AI could come up with here." - Petter Graff

"Bayesian is an optimization technique that centers around using your previous knowledge to influence the future and that works really well." - Angelo Kastroulis

"Bayesian can kind of skip a bunch of steps because you've got a better second try." - Angelo Kastroulis

"The algorithm of gradient descent basically goes like this. If you're trying to find from where you are to where you should go, imagine that you're on a mountain trying to find the valley." - Angelo Kastroulis

Episode Transcription

Angelo: Welcome to Counting Sand. I'm your host Angelo Kastroulis and we have a special episode today. I've got my partner and fellow technologist, Petter Graff, and we're going to be doing something a little bit fun and different. We're going to taste test some hot sauces to see if we can tell the difference between an AI-optimized hot sauce and one that's commercially available, that is, one made using traditional methods. And, if you haven't had a chance to check out the previous episode where we interview the creators and talk about how they did it, you should check that out. So Petter, hey, thank you for being here today. Petter: Thank you, Angelo, I'm looking forward to this. You know, hot sauces are part of my favorite start of the day, so it'd be interesting to see what AI could come up with here. Angelo: Yeah, so we're going to be testing three hot sauces. One of them is AI and we're going to be using our breakfast eggs to test the hot sauces. And we're going to try to see if we can determine which one is the AI one and which one's not. Maybe have a little bit of fun along the way. A little bit of background first about the brothers who came up with this business on their own. Shekeib and Shohaib are the two brothers behind it, and their story is amazing. One of them became a data scientist, multiple degrees, and the other one became a business person. And it's just an amazing story. And hot sauce is something they love, so they kind of put their two passions together. So definitely check out that episode if you haven't had a chance. So let's start by talking about this first sauce on the left. We don't know what they are. And we don't know which one of these is the AI one. We'll reveal that at the end, but the approach they used to generate their hot sauce is Bayesian, so I think it might be good to talk about what that is. What do you think Petter? Petter: Yeah, I think that's good.
I mean, there are different optimization techniques and, you know, they differ in different ways. Maybe some can be faster than others, some might get stuck in local minima and so on. But I think it would be good, Angelo, if you could kind of explain the various optimization algorithms that we could use and what's the difference between them. Angelo: Yeah, sure. Let’s start with Bayesian. Bayesian is an optimization technique that centers around using your previous knowledge to influence the future and that works really well because some intuition will get you a little bit further. Like for example, let's just say that we have hot sauce and it has three ingredients, like vinegar, I mean, I'm not an expert in hot sauces, but there's probably vinegar, salt, and pepper, at least in these hot sauces, right? So if you start an AI algorithm, you're trying to figure out, how much of each of these should I use? And one way we can start is by just saying, let's all start at zero. Well, then you just have water, right? So you start with water. You didn't learn anything in that first batch and it takes you a little while to get out of zero, zero, and zero. Bayesian doesn't do that. Bayesian kind of learns a little bit from the previous, so you'll have one mixture and then it'll learn, how good was that? What can I do to predict what the future should be? Which is really different than just saying, try something, then try something else. Was it better? Score them. It's actually trying to be a little bit predictive to say, I bet I can do better. I can save some missed chances by going with a little bit more intuition. Petter: Yeah. And then it is also an algorithm that does not typically get stuck in local minima, right? And perhaps one of its disadvantages may be computational expense? I don't know, Angelo, what do you think? Angelo: I think actually it performs really well computationally.
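To make the idea concrete, here is a minimal sketch of the surrogate-plus-acquisition loop that Bayesian optimization uses: predict how good each untried batch might be, then "brew" the most promising one. Everything in it is invented for illustration (the taste function, the 35% vinegar optimum, the candidate grid), and the nearest-neighbor surrogate is a crude stand-in for the Gaussian process a real implementation would use. It is not the brothers' actual pipeline.

```python
def taste_score(vinegar):
    # Hypothetical "taste" the optimizer can only sample by making a batch:
    # the best (invented) recipe is about 35% vinegar.
    return -(vinegar - 0.35) ** 2

def surrogate(x, tried):
    # Crude stand-in for a Gaussian-process posterior: predict the score of
    # the nearest batch already tasted, and use the distance to that batch
    # as the uncertainty of the prediction.
    nearest_x, nearest_y = min(tried, key=lambda t: abs(t[0] - x))
    return nearest_y, abs(nearest_x - x)

def ucb(x, tried, kappa=0.5):
    # Upper-confidence-bound acquisition: favor spots that are either
    # predicted to taste good or that we know little about.
    mu, sigma = surrogate(x, tried)
    return mu + kappa * sigma

def optimize(n_batches=15):
    candidates = [i / 200 for i in range(201)]   # vinegar fractions 0..1
    tried = [(0.0, taste_score(0.0)), (1.0, taste_score(1.0))]  # first guesses
    for _ in range(n_batches):
        x = max(candidates, key=lambda c: ucb(c, tried))
        tried.append((x, taste_score(x)))        # "brew and taste" the batch
    return max(tried, key=lambda t: t[1])[0]     # best-tasting batch found
```

The key property Angelo describes is visible here: each new batch is chosen using everything tasted so far, so the search skips whole regions a blind sweep would have to grind through.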
I think the thing that it does in practical terms is it ends up cutting down the number of cycles you actually have to compute. Like neural networks, for example, need many, many generations and cycles to test. Bayesian can kind of skip a bunch of them. So I think in practical terms, you're right, you gotta do a little bit more computing than just say, you know, something plus one and try every value. It is a little more computing, but in practical terms, you end up jumping a bunch of steps because you've got a better second try. Petter: Yeah, I think you're right. I think when you get to very large spaces, though, it could be computationally expensive. Angelo: Yeah. Petter: Another algorithm is gradient descent. I don't think many people know what that means. Could you give us like a little intro to that too? Angelo: Yeah, sure. I think gradient descent is probably the most popular one and probably where we started in all this. It's super easy to implement. The algorithm basically goes like this. If you're trying to find from where you are to where you should go, imagine that you're on a mountain trying to find the valley. So let's say you're on the mountain and you're trying to figure out, do I go left, right, down, back up? Where do I go to get to the valley? And you don't know, assume you can't see the whole landscape. So what you'll do is you'll try every direction and then you'll score it and say, well, left took me a little higher, up went up, you know? Right went a little lower and straight ahead went even lower. Okay. I'm going to go straight ahead. And then you do the same thing. It's super expensive computationally because you have to try every permutation around you as you continue toward the valley. The problem (with gradient descent) is it's super susceptible to local minima, so if you, for example, are going down a mountain and you hit a low point, you think I'm done, but you didn't actually.
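Angelo's walk-down-the-mountain procedure can be sketched as a tiny loop. The landscape function below is invented for illustration, and the code literally tries a step in every direction and takes the lowest, as he describes, rather than computing an analytic gradient:

```python
def height(x, y):
    # Invented landscape: a smooth bowl whose valley floor is at (1, -2).
    return (x - 1) ** 2 + (y + 2) ** 2

def walk_downhill(x, y, step=0.1, max_moves=1000):
    # Try a step in every direction (including diagonals), move to whichever
    # lands lowest, and stop when no direction goes lower than where we stand.
    for _ in range(max_moves):
        neighbors = [(x + dx, y + dy)
                     for dx in (-step, 0.0, step)
                     for dy in (-step, 0.0, step)
                     if (dx, dy) != (0.0, 0.0)]
        best = min(neighbors, key=lambda p: height(*p))
        if height(*best) >= height(x, y):
            return x, y   # a minimum -- though possibly only a local one
        x, y = best
    return x, y
```

On this single-valley landscape the walk ends at the true bottom; on a "range of mountains" with several dips, the same stopping rule is exactly what gets it stuck in the first low point it finds.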
You gotta go back up to then go all the way down to the valley, right? Because there's a range of mountains. That's the problem with gradient descent. Petter: Yeah. And it's very sensitive to the choice of initial values, right? I guess, another one we could talk about maybe is something like evolutionary algorithms, perhaps, could be used for optimization, I guess? Angelo: Yeah, they can. Actually, I can tell you, that's where my thesis started. I was doing genetic algorithms and just quickly, how those things work is, imagine that every ingredient in the hot sauce is a gene. I have a vinegar gene and whatever else. And what you do is you take one set of values and then you take another set of values and you say, this one is the one parent and this is the other parent. Cut 'em in half. Say you have four genes, you have vinegar, salt, pepper, and water. I don't know. And you take maybe water and vinegar from one parent and you take salt and peppers from the other parent and then you mix 'em and you create a generation of children. And actually you do that over and over. With four, you might make eight children with all different combinations of half and half. Then you insert a random mutation where you just take one of those like salt and randomize it. Because that's what really would happen, right? You might have a child that has different hair color than yours and it came from a grandparent. So you do that and you try that. And the idea there is you're inserting a little randomization and you're trying to take the traits. The best traits will survive and the other ones will die off. The problem with genetic algorithms, and actually the reason I abandoned it in the early days of my thesis was, you really need to have a really big set of genes for it to work. Right? Like you could brute force your way with gradient descent faster than you can with a genetic algorithm if you only have a handful of these and that's the big Achilles heel. 
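The crossover-and-mutation loop Angelo describes might look like this toy sketch. The ingredient "genes", the fitness function, and every parameter are made up for illustration; nothing here comes from the actual hot-sauce project:

```python
import random

INGREDIENTS = ["vinegar", "salt", "pepper", "water"]

def taste_score(recipe):
    # Hypothetical fitness: closeness to an ideal mix the algorithm can't see.
    ideal = {"vinegar": 0.3, "salt": 0.1, "pepper": 0.2, "water": 0.4}
    return -sum((recipe[k] - ideal[k]) ** 2 for k in INGREDIENTS)

def crossover(p1, p2):
    # Take the first half of the genes from one parent, the rest from the other.
    half = len(INGREDIENTS) // 2
    return {k: (p1[k] if i < half else p2[k]) for i, k in enumerate(INGREDIENTS)}

def mutate(recipe, rate=0.2):
    # Occasionally randomize a single gene -- the surprise trait from a grandparent.
    out = dict(recipe)
    if random.random() < rate:
        out[random.choice(INGREDIENTS)] = random.random()
    return out

def evolve(generations=200, pop_size=20):
    random.seed(0)  # fixed seed so the toy run is repeatable
    pop = [{k: random.random() for k in INGREDIENTS} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=taste_score, reverse=True)
        survivors = pop[: pop_size // 2]          # the best traits survive
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children                # the rest die off
    return max(pop, key=taste_score)
```

With only four genes the search space is small, which illustrates Angelo's Achilles-heel point: a brute-force or gradient approach would find this mix at least as fast.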
But practically speaking, Bayesian still beats it. Petter: Yeah. My experience with these algorithms is that they were computationally expensive, but that's often not a big deal, because if the value of finding the optimal solution is sufficient, you know, the computational cost isn't really that important. But I thought it was very difficult to tune the parameters. Angelo: It is. Petter: And then, it also seemed to converge on the optimal solution very slowly. You know, you have to be patient, right? What would be other algorithms one should evaluate? Because I think it's one of the things that those that go into AI, maybe more with a computer science background and not a math background, just picking the right algorithm can be really difficult. But let's take the hot sauce one, right? They're using Bayesian, this is probably a good choice. What would be other algorithms that you would think of? Angelo: I will say probably the most popular one is saying, okay, let's randomize them and then use gradient descent to combine the two. That's probably where everyone starts. I think Bayesian is super interesting. There's a new one that I haven't done anything with and it's actually pretty cool. And I'm not an expert in this, but it's called particle swarm. And the way that it works is like the way that birds work. If you observe a flock of birds, when one turns a direction they all seem to do it. And the idea with particle swarm is what if we could not just look at the one value that I'm at now, but look at all the other ones at the same time and see how they're doing and use that to influence our next decision. So you kind of try a bunch of these at once, like a small flock, and then you say, okay, these are my best. What are your best? And then you kind of use that to influence the entire swarm and you move the entire swarm together rather than randomizing.
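The flock idea can be sketched as a basic particle swarm, again on an invented one-dimensional taste function. Each particle is pulled toward its own best position and the whole swarm's best, which is the "watch the rest of the flock" behavior Angelo describes:

```python
import random

def taste_score(x):
    # Hypothetical 1-D "taste" function, best at x = 0.35.
    return -(x - 0.35) ** 2

def particle_swarm(n_particles=10, steps=50):
    random.seed(2)  # fixed seed so the toy run is repeatable
    xs = [random.random() for _ in range(n_particles)]   # positions
    vs = [0.0] * n_particles                             # velocities
    pbest = list(xs)                                     # each bird's best spot
    gbest = max(xs, key=taste_score)                     # the flock's best spot
    for _ in range(steps):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # Pulled toward both its own best and the flock's best.
            vs[i] = (0.7 * vs[i]
                     + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            xs[i] = min(1.0, max(0.0, xs[i] + vs[i]))
            if taste_score(xs[i]) > taste_score(pbest[i]):
                pbest[i] = xs[i]
            if taste_score(xs[i]) > taste_score(gbest):
                gbest = xs[i]
    return gbest
```

The inertia and pull coefficients (0.7 and 1.5) are conventional-looking choices for this toy, not tuned values from any real system.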
Randomizing is really good against local minima, because imagine you're walking down the mountain: it'll just pluck you off and randomly drop you somewhere else, and then you're now doing that again. So if you had a chance of getting stuck, you might have been moved off of that, and you do that enough, you won't get stuck. So I don't know, that one I might play with, but that's probably just because I don't fully understand the characteristics and I'd want to, so I would probably play with it for fun. But I will tell you, Bayesian, like some of the things that we've done, we can't beat Bayesian. We've tried it over and over again. It seems to practically beat everything and it's not that hard to implement. Petter: Yeah. Simulated annealing would at least handle multimodal models. I would probably have tried that out as well, but one really nice thing that you said that I think is very important and the computer scientists often miss, is that it's often not a single model that gives you the optimal solution. It's often a combination of different things, right? You may filter based on one strategy and you may pick the initial hyperparameters for your model with a different strategy and so on. So it's this pipeline really of different techniques at different places and each one of them in itself is kind of easy to understand and easy to implement with the libraries that are out there. But it's having the experience to put together the right models at different places, I think, is one of the things that is a little difficult for most computer scientists. So I backed into machine learning through my computer science background, but it's interesting that often when I started and I tried to apply machine learning to different models, I made the mistake of picking one, you know, oh, this is probably this strategy, you know, k-means clustering, that sounds great. I love it. It's easy to understand. I can explain it.
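Petter's mention of simulated annealing deserves a quick sketch too: it escapes local peaks by sometimes accepting worse moves while the "temperature" is still high. The two-peak taste function and all parameters below are invented for illustration:

```python
import math
import random

def taste_score(x):
    # Invented "taste" landscape: a decoy local peak near x = 0.2 and the
    # true best recipe near x = 0.7.
    return (math.exp(-((x - 0.2) ** 2) / 0.005)
            + 2.0 * math.exp(-((x - 0.7) ** 2) / 0.005))

def anneal(steps=5000, temp0=1.0):
    random.seed(1)  # fixed seed so the toy run is repeatable
    x = 0.2                                    # start right on the decoy peak
    best_x, best_s = x, taste_score(x)
    for step in range(steps):
        t = temp0 * (1 - step / steps) + 1e-3  # cool down over time
        x_new = min(1.0, max(0.0, x + random.gauss(0.0, 0.1)))
        delta = taste_score(x_new) - taste_score(x)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature drops.
        if delta > 0 or random.random() < math.exp(delta / t):
            x = x_new
        if taste_score(x) > best_s:
            best_x, best_s = x, taste_score(x)
    return best_x
```

Started on the smaller peak, the early high-temperature phase lets the walk wander off it, so the best point it remembers ends up on the taller peak.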
And you don't match up the problem, or a part of your problem, with your model well. Angelo: Yeah. You know, I'm going to point out a few interesting things that you just said there. I think that there's always this tension, this tugging, between math and computer science. Like I'll take neural networks for an example. I contend with many data scientists. I say, we don't even understand the mathematics behind neural networks. And they said, of course we do. And I said, okay, then tell me, how do you settle on how many hidden layers to have and how many neurons? And we all do the same thing. We throw the kitchen sink at it. We go, well, let's try one model that has this many layers. Let's try another one that has less, another one that has more. Let's try this many neurons in each layer, because we don't really know. So I think the computer science aspect of it, you gotta try more models. You're right. I think the other thing that you said that's interesting is the models in and of themselves might perform differently. It's kind of like thinking back to the Netflix prize. The winner was like an ensemble of 700 models, and then Netflix said, oh, here's your million dollars, but yeah, we can't implement that. That's too complicated. Right? And it performs 5% better than our current one, which is super simple. So, you know, take something like k-means or k-nearest neighbor. Those are really easy to implement, but the trade off is you gotta keep all that data and you gotta keep it in memory every time you want to run. Whereas you might train a model that's super expensive to train, but then the run time is smaller, so you have to know computer science so that you can know the impact of the thing that you just picked. Petter: Yeah. And it gets more complicated with big data because you also have to think about distribution and partitioning of the data and ensuring that you get the right data to the right nodes so you avoid shuffling.
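Angelo's point about k-nearest neighbor, trivial to "train" but you keep all the data around and scan it at query time, shows up clearly in a tiny sketch (the labeled points are invented for illustration):

```python
import math

# k-NN has no training step: the "model" IS the data, kept in memory,
# and every prediction pays the full scan cost over it.
train = [
    ((0.9, 0.1), "mild"),   # invented (vinegar, pepper) -> label examples
    ((0.8, 0.2), "mild"),
    ((0.2, 0.9), "hot"),
    ((0.1, 0.8), "hot"),
]

def classify(point, k=3):
    # Scan all stored examples, keep the k closest, take a majority vote.
    nearest = sorted(train, key=lambda ex: math.dist(ex[0], point))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)
```

Compare that with an expensively trained model: there the cost is paid once up front and each prediction afterward is cheap, which is exactly the trade-off being discussed.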
So yeah, the combination of understanding computer science and data science is a little rare as I see it. But you can always build up the right team. Bring together a good computer scientist and a good data scientist and cross-train them and you usually get something good out of it, or at least that's our experience, right? Angelo: All right. Let's try some of these. Petter: Yeah. Let's try things. Angelo: So let's try sauce one. Petter: Should we try it directly with the egg or should we taste it a little bit first and then put them on the egg? Angelo: Yeah. So we'll taste each one of these, I think. Try it with the egg and then we'll just kind of determine, first, let's talk about what we think of each one as we do it. And then I got some water here because I don’t know. They’re going to be hot. Petter: Are you big on the hot sauces? Angelo: Actually, I am. I have a super high tolerance, but I also tend to put a lot on. Petter: I hope they didn't give us one of those that had like a half a million, what is it, BTUs or whatever they're called? Angelo: Yes. Scovilles. Petter: Mm hmm. Scovilles. Angelo: The first one is, the first one's hot and I just took a gigantic spoonful. It's pretty hot. Petter: I didn't think it would be. It's also sweet, right? Angelo: Yeah, vinegary too. Petter: Texture is a little chunky. Angelo: Mm-hmm. Petter: Right? Angelo: Yep. Petter: Maybe a little tangy too, wouldn't you say? Angelo: Yep. Do you feel the burn? Petter: No, you know my Norwegian tongue may be already numbed, right? Angelo: I chewed a bunch of the chunks that might have been part of it. Petter: Maybe I'll try it with a little egg. I'll put a little more. I was very careful with the amount that I put on it. Angelo: Really good with egg. That vinegar is perfect. Petter: Yeah, I agree. I actually really like this sauce. I don't know what it is, but… Angelo: I do too. I really like that one. Petter: It has a good aftertaste too, you know?
Angelo: It's really, really good. Petter: Yeah. One was really good. I'd buy that any day. So, I have a place in Belize, as you know, you've visited me there. So, in Belize, they have a sauce made by Marie Sharp. It's on everything. I mean, they put it on everything. The only place I haven't seen it is in the ice cream parlor. Angelo: Hot sauce ice cream. Petter: Other than that, it's everywhere. So I wonder if one of the things that one should do is to optimize the hot sauce for certain foods, right? Would this have been as good, for example, on, I don't know, veggies or something, right? Angelo: I think it'll be good on something that the vinegar can cut through. Petter: With egg it was excellent. Angelo: It was, it was perfect. Well, I can tell you, I have the opposite experience of what you said. In Greece, we don't really eat hot sauce, so it's not something we do. Petter: Yeah. Well, in Norway I can tell you, if the sun shines and bounces off a jalapeno and hits your meat, you are already in trouble. They don't have a lot of hot sauce. Although I would have to say that when I come back to Norway, there have been changes. They have discovered that spices, sometimes, enhance the food. Angelo: Yeah, oh, I agree with that. Okay, so let's try the second one. Sauce two, the middle one. Petter: I'm going to try it without anything. Angelo: I'm going to have a smaller spoonful this time. Petter: Oh, very different taste. That to me was more spicy, actually. Not, not chunky. Angelo: Yeah. Petter: Not as sweet. So for me, I can feel the vinegar more here, I think. Angelo: Me too. A little bit too much for my taste, honestly. Petter: Yeah. I'm going to try it with the egg. Yeah. I mean, number one was better for me as well. Angelo: Well, number one was, I think, better vinegar. Definitely. Petter: Yeah. Angelo: Number two has a lot of vinegar. Petter: Yeah. Number two has too much vinegar for me. Angelo: And I don't taste the pepper as much.
I mean, I taste, like you said, that there's some heat in it. But I can't taste the pepper as much. Maybe it's just the vinegar that overpowered it. Petter: The aftertaste is pretty pleasant and it stays longer, I think, than the previous one. Angelo: Yep. Salt is nice on it. It's not too salty. Alright, we ready for number three? Petter: We're ready for number three. The aftertaste on number two is hard to get rid of though. Angelo: Yeah, maybe we should clear it. I will say that looking at the color, number three is definitely greener. So that tells me there’s probably more jalapeno in that one. It's in between the two in terms of texture and puree; it looks like it's a little less fine than number two. Petter: Yeah, I agree. I'm going to taste it without the egg first. Whoa, that is a little hotter. Angelo: Yeah, that one's hot. Yeah. I accidentally got it on my lip. That one's really hot. Petter: That is really hot. But, ahh, I like the taste. Angelo: I do too. Petter: I'm going to try it with egg now. Angelo: Oh my goodness that's hot. It's burning still. I'm glad I didn't do that one first, because I might not have been able to taste any others. Petter: I think we did them in the right order. Angelo: I think our producer probably did this right. Petter: Yeah. Well, they didn't tell us to do it in 1, 2, 3. Angelo: Oh, that's true. We could have done it completely… Petter: Oh my god that is hot. Angelo: It’s really hot. Petter: But it has a really pleasant aftertaste though. You know, when the heat kind of dies down a little bit it has a really pleasant, like, sweetness in the aftertaste. Angelo: If you say so. Petter: I don't taste vinegar. Angelo: I don't taste anything. Alright. So if you had to say between these three…first of all, which one do you like best? And then, which one do you think is the AI one and why? Because maybe we have a little bias, right?
Do we like the traditional method more than this upstart AI and we don't really want to believe that it can do better, or are the data scientists in us going to say, I have to believe it's going to be better? So first, which one do you think is the best? Petter: So I would say, one is something I could eat every day. Two, not a fan of number two. Number three was really hot and I could imagine that in small doses, like for example, spicing up a béarnaise sauce or so, it would be really excellent. Angelo: Mm-hmm. Petter: It's really hot. And I think I'm going to…take me an hour or two before I can taste anything. That's how hot it is. I may have overdone it. But, yeah, I like it. It had a good taste, but I think I would rank them as 1, 3, 2. Angelo: Okay. So for me, definitely number one, I agree. I would. I like number one the best. I mean, they're all quality. I kinda want to say that. Number one, I like it. I think that's the one I would eat most of the time. I think, like you said, I have a feeling the heat would build the more of it you would eat. Number three, sometimes, you know, you want to feel a little pain and that's the one you want. Petter: Yeah, yeah. Exactly. Yeah, that's what I was thinking too. Yeah. Angelo: I would get that purely for the capsaicin if I wanted that. I don't know the flavor. Two, I don't know that I would buy that one. In some ways I'm thinking, please don't let that be the AI. Petter: Yeah, me too. You know, I'm starting to second-guess, you know, our producers here, did they deliberately put it in the middle? Because it tastes different. Right? It's almost like, I want to guess that that's the AI because it is so different in taste than what I've tasted before. Also it's important to say that we haven't really tuned them towards our taste, obviously. Right? So it would be very interesting to do that. Angelo: So that's interesting. I think you just, you know, got our entrepreneur wheels turning here.
You could do a matching algorithm similar to the way that you have these restaurant matching algorithms in Yelp, right? Where you say, oh, well Angelo likes this. If you're like him, you'll like this sauce. Because we also have different…my lips are still burning. You and I have different backgrounds. I'm more Mediterranean in my foods and you're a Norwegian, you're Scandinavian in your foods. You know, when I get Indian food, the hotness, I go down in the scale. I mean, I think I have a high tolerance, but I don't get it at number five. And different people may like different things, so I can see that. Petter: If you go to a Thai restaurant and everyone around you is Thai. Yeah. You order two. Angelo: Yeah. Well, they tell you, do you want it Thai hot? And I go, okay, that's an indication that I should not probably do that. Petter: Exactly. Angelo: So there has to be a way to match you based on what you think might be the same. And that's exactly what Yelp and those things do, right? There's a two-stage approach. How does it know if I'm going to like a restaurant I've never been to? The only way it knows is it tries to find somebody like me who has been there and tries to figure out if someone like me liked it, then I might like it. That's harder than it looks because usually people rate things they don't like. Not many people rate the things they do like and so that's where you have a data problem. Petter: Yeah. Angelo: You can't get enough good samples. Petter: In a class that I teach, which is on Apache Spark, we have a section on machine learning where we actually build out an algorithm for matching. And it is interesting that we do two. One is on movies, right, and the other one is on wine. There are some good datasets. We don't have the same dataset as Netflix, but we get really, really good at it. And the students are basically saying, yeah, by ranking their movies, right, the other movies that we suggest are very close.
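The "find somebody like me" approach Angelo describes is user-based collaborative filtering. A toy sketch, with invented users and ratings: score how similar two users' past ratings are, then predict an unseen rating as a similarity-weighted vote.

```python
import math

# Hypothetical 1-5 ratings from a handful of tasters for three sauces.
ratings = {
    "angelo": {"sauce1": 5, "sauce2": 2, "sauce3": 4},
    "petter": {"sauce1": 5, "sauce2": 1},
    "maria":  {"sauce1": 2, "sauce2": 5, "sauce3": 1},
}

def similarity(a, b):
    # Cosine similarity over the sauces both users have rated.
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    dot = sum(ratings[a][s] * ratings[b][s] for s in common)
    na = math.sqrt(sum(ratings[a][s] ** 2 for s in common))
    nb = math.sqrt(sum(ratings[b][s] ** 2 for s in common))
    return dot / (na * nb)

def predict(user, item):
    # Weight other users' ratings of the item by how similar they are to us.
    votes = [(similarity(user, u), r[item])
             for u, r in ratings.items()
             if u != user and item in r]
    total = sum(w for w, _ in votes)
    return sum(w * r for w, r in votes) / total if total else None
```

It also shows the data problem Angelo raises: "petter" has never rated sauce3, so his prediction leans entirely on whoever else happened to rate it, and sparse or skewed ratings skew the answer.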
Almost as good as, you know, Netflix and so on. So it is interesting though that with today's tools and methods we're able to put together these things that seemed NP-complete not that long ago. But yeah, clearly number one is my favorite. And should we figure out which one is which? Because I almost feel like I will need to go and buy number one. Angelo: Yeah. I'm going to definitely stock number one. Which one do you think is the AI one? So you mentioned maybe number two because it's different? Petter: Yeah, because it was so vinegary it seemed like maybe something went wrong. But I don't know. I don't know how well they've done it, right? So I would love for (number) one to be the AI one, right? Angelo: Yeah. Okay. So that's your choice. Petter: Yeah, I think so. Yeah. I'm going to do that. Angelo: Yeah, so I'm going to say…there's a lot of factors here and I'm not even sure, did they just put taste as a predictor? Is texture in there? I don't know. I think taste-wise, I'm just probably a little bit biased. I have to believe it's going to do a better job at optimizing taste than the others. And I'm not saying that I have the superior palate, but I think number one has to be the AI one because the balance of vinegar and salt is perfect. Now, I gotta think though, that a commercial sauce, with recipes handed down through generations, is going to be that good too, but that one is really good and I feel like the balance is too perfect to be just stumbled upon by somebody. Petter: That is so interesting, Angelo, that you're actually saying, because it's better, you know, but I get it. Yeah. Angelo: I'll put the bias right out there. Petter: Yeah. This is a bias. But you know, there is something to it. When you look at something like, for example, Google's chess machine that was able to beat every chess algorithm or chess machine out there after having only played chess against itself for four hours. That's crazy to me. Angelo: It is crazy.
Petter: You know, we have all these PhDs that got into building chess algorithms and the computer power was the same. So it would be super interesting if that's the case. I mean, we have different tastes. We both picked the same sauce as being, like, a perfect mix of what I would want a hot sauce to do. I actually like three also, as I said, it's just a little bit too spicy maybe for my breakfast tacos here. Angelo: Yeah, yeah. Petter: But number two, I would not buy. Angelo: No, me neither. Too much vinegar. Okay, so shall we find out what it is? Petter: Yeah, let's find out. Angelo: Alright, I got an envelope here with what is what, so let's see if we can open this thing up and find out. Okay. Oh wow. Petter: Okay. That is so cool. That is so cool Angelo: So number one was Bayesian AI. It is the AI hot sauce. Number two is Tabasco, which shocks me because I have a whole refrigerator full of that and I eat it on a lot of stuff. And sample three is El Yucateco, which I guess is really famous in Mexican restaurants, so, wow. Petter: Well, that is, that is, that is actually shocking. Because to me it was so far superior… Angelo: Yeah. Petter: …in taste. It wasn't a little bit better. It was so far superior. Angelo: Well, I'll tell you, I'm relieved because I felt a little bit bad for how I was talking about number two and I like the brothers so much and I thought, oh, I'm going to have a lot of apologizing to do if it's that one. Petter: Yeah. We're going to have to edit. We have to rerecord this. Angelo: Yes, yes. So, okay. Petter: Yeah. Well, you tell those brothers to send me a case because this is, this is amazing. Angelo: Yeah, well, it's funny, I've had one of their other ones, as you can tell, they have a habanero pineapple that's really good. And then they made this one, they did it for us Counting Sauce. Petter: Counting Sauce, yeah. Angelo: I haven't opened it yet. It's a pineapple mango version of the base, I guess. And then they put pineapple mango in it. 
So, I like it. And that one actually looks more pureed, now that I see it. Petter: Yeah. Well, we'd be good commercials for them I guess after this, but this is honestly shocking. Right? Angelo: Well, yeah, so I think it's worth getting. We'll put the link down in the notes so that folks can look at it. I think that, you know, one thing I just want to say about why I love using hot sauce, this whole AI with hot sauce, is, it is a perfect metaphor for machine learning. Machine learning is usually very abstract and people think about it as data and features and things like that, but it's really easy to understand when you think about it in terms of ingredients for a food. When you say, oh, what's a feature? A feature is vinegar, a feature is salt, whatever else is in it. Pepper. And you go, oh, I can wrap my head around that. I can understand what that means. And when you say you want to optimize it, what you're trying to say is you want to optimize the proportion of vinegar to all the other ingredients and all the other ingredients to each other. I can understand that. And I think that that's what I love most about this, it got me excited, is that it's the perfect metaphor. And I've been trying to find also, by the way, an AI optimized bourbon to give that a shot and see how that goes. Petter: But it's interesting, so how would they write the function for this, like the optimization? You have to run some kind of function, right, to say, this is the optimal mix. Angelo: Yup. Petter: I mean, yes, you can have a function which is a person that tastes it like we did now, but that requires a lot of tasting and a lot of people across a very large...um. Angelo: Taste is subjective. Yeah, exactly. But that's what they did. I mean, watch the episode, it's really good. But that's what they talk about is that they had humans blind taste and say, better or worse, you know, kind of the same thing, I guess with different versions of it. Better or worse. Better or worse. 
Until they arrived at one that they thought was...I mean, you could, I guess, continue going forever. And they still are. I don't know what version they're up to, but if you buy the bottles, they're versioned. Like it'll say this is, you know, algorithm version 30 or whatever. And so that's kind of cool. You can compare them and I've compared other brands over time and how they kind of change and you go, hey, they change the recipe on you. I imagine it's the same kind of thing. You could probably detect it. Petter: Yeah, yeah. I need to taste it now with Marie Sharp since, you know, I have an attachment to Belize. See if I should bring my own hot sauce, which the Belizeans will probably find, like bringing sand to the desert but, you know, that's amazing. Angelo: Well, Petter, thanks for joining us. I think this was a really fun episode. We got to talk about a lot of really cool stuff. Petter: I enjoyed it. That was such a pleasure. And now I have a new favorite hot sauce as well, so that's great. Angelo: And you know what? This is our first time, you and me together, and I think that what's funny about this is now the audience gets to see the kind of stuff that you and I talk about all the time. This is just what we do. We just chew the fat on different things. So, I'm your host, Angelo Kastroulis. This has been Counting Sand. Thank you so much for joining us. Please follow us on your favorite podcast platform. You can follow me on Twitter, Angelo Kastr. You can follow our company, Petter is my partner, Ballista Group. And then also our LinkedIn is below. So thank you for joining.