Counting Sand

The Boundaries of Personal Data

Episode Summary

Privacy is bound to come up in any discussion of data in computer science. Where do the boundaries lie when you combine right-to-be-forgotten policies with Artificial Intelligence's dependence on data? Our host Angelo Kastroulis is joined by Manos Athanassoulis, Ph.D., of Boston University, who recently wrote several interesting papers on the complications that privacy brings to the system-wide use of data.

Episode Notes

Angelo and Manos' connection began in CS 265, the Big Data Systems course at Harvard University; this course inspired Angelo's thesis. The two discuss Manos' papers and how the future of Big Data sits at the boundaries of Moore's Law. If you think about LSM trees (Log-Structured Merge trees) and compacting data, what counts as acceptable deletion when users ask for their data to be removed? Is removing the data's association with the identifying user good enough? In the analysis of big data systems, the consideration is usually performance: an extensive delete sequence can cause a significant disruption in the system. Most systems let current execution cycles complete, perhaps during non-peak hours, and simply flag the no-longer-valid data. So your data starts to become dirty; then what? How do you solve issues like privacy and requests for the "right to be forgotten" or the "right to erasure"?

Manos speaks about the papers he has written, which you can read via the links below. He addresses the delete question and its boundaries with privacy in mind. Performance is a crucial factor, and looking at the issue holistically is just as important as encryption when protecting privacy.

Manos' Research Papers

Further Reading

Episode Transcription

Angelo: I think it's easy to see that privacy is a legal problem and that privacy is a computer science problem. What is not so obvious is that it is an algorithmic problem, a data systems problem, because we would just think: well, let's just delete the data. But it's not actually that easy. So today we're going to talk about the hard problem of deleting data, with a view to privacy, inside of a data set.

I'm your host, Angelo Kastroulis and this is Counting Sand.

I'm so excited to spend a few minutes catching up with an old friend of mine, Manos Athanassoulis. We met when I was a student at Harvard and he was a TA. He did his PhD in Switzerland, then spent four years at Harvard, and since then he has been an assistant professor of computer science at Boston University. Now, Manos, other than basketball, philosophy, and history, which are passions of yours,

I know that you have a passion for data systems and databases, and I'm really excited to talk about your latest paper; we'll get to that in a second. But it's been a little while since we chatted. How are things at Boston University?

Manos: Yeah, I mean, since the last time we spoke there have been a lot of changes in my life. I was like, whoa, that's already almost two years ago. So yeah, life goes on. The lab has grown. There are now four PhD students who work with me, and also two postdocs, and every now and then I have master's students and undergrads joining for a semester, you know, for some projects and so on. And I also have high school students every summer.

This is pretty fun actually. They're working on various topics, and I try to find small projects that make sense for them to work on over a few weeks. BU actually has a very nice program that attracts top talent from high schools. It's called RISE, and they are all really excited to do research.

Angelo: So how does that work? How do they get into RISE? Is it an application process?

Manos: Yes, there's a central application website at BU. It is one of the most prestigious programs of its kind; it accepts juniors from high school, and it gets a lot of interest from top schools all over the country.

Angelo: What kind of stuff are they working on?

Manos: Different things. For example, one of the students worked on a very basic implementation of a new idea: can I partition my data without knowing the data, but knowing the queries and their access patterns? There, the idea was to use some data mining techniques to come up with a data organization

only by seeing traces of which objects each query accesses. The basic implementation was in Python; it was not a very heavy, system-level implementation. We've also played with real database systems and their partitioning strategies, you know, to see performance differences and so on.

And one student was involved in a project about different buffer pool eviction policies when the underlying storage is one of the newer devices that have different properties than classical disks. That student helped a lot with the visualization of the algorithms and with building a big, nice website that actually shows how these algorithms behave.

So it's different projects all over the place, and we look forward to new students working with us in the summers ahead.

Angelo: That's awesome. That's fantastic. So, we go back a little ways. I don't remember when; maybe 2015, I think, is when I took the data systems course at Harvard. And you were the TA, I think the head TA, at the time.

Manos: I was a TA for the class. Yeah, I think so.

Angelo: Yeah. And we were working in the lab; I worked in the lab too. So we had a lot of fun stuff that we did. And then I think in CS 265, when I took the grad-level course, you were in that one too, of course.

Manos: Oh, yes. I thought we only overlapped in the undergrad one.

Angelo: Yeah. And I couldn't help it. Honestly, from the first class I took, I knew that's what I wanted to do my thesis on: something in data systems. CS 265 was the most interesting class I think I've ever taken. There's no book; it's just reading papers and then building something kind of interesting. That was a lot of fun.

I've been looking over the things you've been doing. You've been publishing a whole lot. Today, we're going to talk about a paper on privacy, or that has implications for privacy. I was just in Greece; I kind of go every year and spend a month there, in November. And of course, privacy is central in Europe.

That's a really important topic, but it's an important topic kind of everywhere right now. Tell me a little bit about your paper.

Manos: So this work is motivated by privacy considerations, but it's not what we call privacy research in classical computer science. It's more of a systems approach to privacy.

So essentially, when we speak about privacy, typically we think about, you know, how to encrypt data or how to avoid leaking information from our data. Here we are trying to solve a very specific problem: when you entrust your data to a data provider in the cloud, your favorite cloud-based data provider, you have rights based on recent legislation.

You have the right to ask them to delete your data, and if you ask them to delete your data, then they have to comply within a specific amount of time. Right. They cannot just keep your data forever, because it's not theirs; it's your data. When you generated something, it's your data, right?

So essentially, now what we're trying to do is to bridge the legislation that has been introduced in the last few years with the tools needed at the system level to be able to comply. Right? The problem is that many of these commercial data providers have essentially been storing data with a very different mindset. They say: you know what, you trust us with your data; what you really want from us is to be able to access it fast, whenever you want. And we also try to actually learn interesting things from your data, both specifically for you and also for trends within the community.

And you know, all these goals, I'm saying, come with the best possible intentions; even though someone might have other ideas, most of the players have the best possible intentions to provide, you know, good services, specialized services, and so on. However, when they started designing their systems, they didn't think: okay, maybe I will have to comply with requests

to delete data quickly, or fast enough, whatever that means. Right. And that's exactly the focus of our work; it was motivated by discussions with multiple people in industry who said: we now have to comply with this new legislation, like GDPR in Europe, and that's actually hard.

So we figured out that many of the data providers are using variations of a storage engine that is based on log-structured merge-tree technology, or LSM for short. And this technology inherently has difficulty with in-place updates and in-place removal of your actual data, because of the way it works: any new data items, or even any deletions that come into your system, simply get appended.

They're actually being recorded without changing, or even touching at all, the older versions of the data until much later. And how much later depends on how much data you have and how frequently you reorganize it, which depends on the specific implementation of your system and the intake of data that your data provider has.

So they had a hard time providing guarantees on how soon they could actually delete the data. And that's exactly where we came in, and we said: you know what, we're going to change the way these LSM trees work under the hood, and we're going to guarantee that if you have any requests for what we call a persistent delete (we recently rebranded it with a new keyword, "deep delete," which maybe sounds a little bit better now, but we used "persistent delete" in our first paper).

So when you have such a requirement, you are able to fulfill it through a new algorithm that reorganizes data as you go.
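To make the out-of-place delete concrete, here is a toy sketch of an LSM-style store. This is illustrative code only; the class and method names are invented for this example and do not come from Lethe or any real engine. The point is that a delete merely appends a tombstone marker, so the old value physically survives in an older run until a later compaction rewrites it:

```python
# Toy append-only key-value store: deletes only append a tombstone,
# so the old value physically survives in older runs until compaction.
TOMBSTONE = object()  # sentinel marking "this key was deleted"

class ToyLSM:
    def __init__(self, memtable_limit=2):
        self.memtable = {}             # newest data, in memory
        self.runs = []                 # older immutable runs, newest first
        self.memtable_limit = memtable_limit

    def _maybe_flush(self):
        # When the memtable fills up, freeze it as an immutable run.
        if len(self.memtable) >= self.memtable_limit:
            self.runs.insert(0, dict(self.memtable))
            self.memtable = {}

    def put(self, key, value):
        self.memtable[key] = value
        self._maybe_flush()

    def delete(self, key):
        # Out-of-place delete: just record a tombstone, touch nothing else.
        self.memtable[key] = TOMBSTONE
        self._maybe_flush()

    def get(self, key):
        # Search newest to oldest; a tombstone hides older versions.
        for source in [self.memtable] + self.runs:
            if key in source:
                v = source[key]
                return None if v is TOMBSTONE else v
        return None

    def physically_present(self, key):
        # After a delete, the old value can still sit in an older run.
        return any(source.get(key) not in (None, TOMBSTONE)
                   for source in [self.memtable] + self.runs)

db = ToyLSM()
db.put("user:42", "alice@example.com")
db.put("user:7", "bob@example.com")     # second put triggers a flush
db.delete("user:42")                     # appends a tombstone only
print(db.get("user:42"))                 # None: logically deleted
print(db.physically_present("user:42"))  # True: bytes still in an old run
```

Until a compaction merges the tombstone with the older run, the "deleted" value is still sitting on disk, which is exactly the gap between a logical delete and a persistent one.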

Angelo: That makes sense. So knowing how to delete is a little more complicated than it sounds. I mean, normally you'd think we'll just delete it, but we don't know exactly where all the data even is.

You know, it could be in logs and other things, but also inside the internal structures of these databases. As you mentioned, because of their own performance characteristics, the deletion is required to happen later; until then, the data is simply not accessed. And that's not good enough. That's not what the legislation says: it has to be as if the data never existed. Right.

Manos: There is a question there, right? Is it enough to obscure the data from the user? From one point of view, that might be enough. I'm not a legal expert, so I may not be the best person to interpret what the law really means or how it should be interpreted, let's say. But you can think of a different scenario.

You request your data provider to delete your data, right, and they say, "I did it." And then six months later they suffer a security attack, they have a data leak, and your data is leaked, because they didn't delete it; they just hid it from you. So you couldn't see your data, and maybe no other honest user could see your data, and maybe that was fine.

But what if there was a leak six months later? Did they comply? I don't think I know the answer to these questions, but I think they are strong motivations to actually get rid of the unnecessary data as soon as possible.

Angelo: Yeah. And there's the other reason, I think. It doesn't mean that there's any nefarious intent, but that data, even if it's masked, could still be used to train machine learning algorithms that it shouldn't necessarily leak into. And one could say that some of those models can even be reverse engineered. So you want the data to truly be deleted whenever they say it is.

Manos: So, this is a great point, which I'm not an expert on, but it has been discussed a lot in computer science circles in general. Right? If a provider has my data and they are training machine learning algorithms with this data, what does it mean to remove my data? It may not even make sense to try to remove the influence of the input that I gave to the machine learning algorithm.

This is not even possible, as far as I understand. But still, what you're saying is: maybe they shouldn't keep using my data if I asked them to delete it. That makes perfect sense. And by the way, they have another motivation.

If the regulation is indeed asking companies to not use, to not make available, and to not monetize anything at all from your data, then they have no interest in keeping it around, because they have to pay for storage to keep it. So actually, if the rationale of the legislation is indeed to enforce no use of this data, then the storage provider would have the incentive to get rid of it as fast as possible, because they are paying for storage. And in fact, in our analysis, even though our focus is not exactly the financial aspect for the data provider, we see that there are strong benefits from deleting unnecessary data early on.

We have strong benefits in terms of space savings, and eventually performance savings, because we have less clutter in our data. The penalty of rewriting a few pages, because we actually physically removed the data, amortizes over time, because you have less data to write once you've gotten rid of the unnecessary, invalid data.

Angelo: Yeah. Okay, so that's interesting, because in the LSM tree a delete requires a little bit of work. That's why they wait, why they stack deletions up and delay them: because it's a bit of work and you want to save it. So the deep delete is a method of not having to pay a big penalty to delete, while you still keep performance. It's the best of both worlds.

Manos: Yes. And let me say a few words about how that works, for anybody who's familiar with LSM trees, which is not necessarily, you know, thousands of people in the world, but there are a few. Essentially, for every delete we insert a tombstone, which will eventually meet the corresponding file that contains the invalidated data.

And then we discard it. This merging, which is called compaction, happens over time, depending on how much data you have and how much is actually coming into your system. So what we do is that instead of propagating these compactions in an arbitrary order, or in an order that depends on, say, which files are updated more frequently,

or on which file has the smallest overlap with the corresponding next level, we keep track of the oldest tombstone we have in each file that invalidates something underneath it. And we keep track of a maximum outstanding time that we allow it to live in the system. If this time is about to expire, we actually propagate that specific file: we give it priority to be merged, or compacted, with the next level.

Essentially, we take the threshold that needs to be guaranteed to the application and divide it into smaller thresholds per level. Then, every time we trigger a compaction, we first check whether there is a level that has an expired tombstone (this is what we call them: expired).

And if so, we give priority to that file over everything else. This taps into the concept of compaction, which is a very important internal operation of LSM-based data stores that is not visible to the user. The user just puts and gets data; that's all the user sees, right?

But under the hood, there's compaction happening, and we have started unlocking, I guess, the power of compaction. We have a follow-up paper that is not about deletes specifically, but about how many different types of compaction you could do. And this is a whole design space of compactions that allows different performance behaviors, let's say, for your LSM tree.

So now we go back to the delete: we give priority to the expired tombstones, and that way, unless the threshold is very tight, we can always guarantee the delete in time.
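The prioritization described above can be sketched roughly as follows. This is a hypothetical simplification: the function names `split_threshold` and `pick_compaction` are mine, the equal per-level split is the simplest possible choice, and Lethe's actual threshold assignment and compaction logic are more sophisticated:

```python
# Sketch of delete-aware compaction: split the application-wide delete
# deadline into per-level thresholds, and compact any file whose oldest
# tombstone has outlived its level's threshold before anything else.
from dataclasses import dataclass, field

@dataclass
class SSTFile:
    name: str
    oldest_tombstone_age: float = float("-inf")  # -inf means no tombstones

@dataclass
class Level:
    number: int
    threshold: float            # max age a tombstone may sit at this level
    files: list = field(default_factory=list)

def split_threshold(total_deadline: float, num_levels: int) -> list[float]:
    # Simplest split: an equal share per level.
    return [total_deadline / num_levels] * num_levels

def pick_compaction(levels: list[Level]):
    # First serve any file with an expired tombstone...
    for lvl in levels:
        for f in lvl.files:
            if f.oldest_tombstone_age > lvl.threshold:
                return lvl.number, f
    # ...otherwise fall back to an ordinary policy (fullest level here).
    lvl = max(levels, key=lambda l: len(l.files))
    return lvl.number, lvl.files[0] if lvl.files else None

per_level = split_threshold(total_deadline=60.0, num_levels=3)
levels = [Level(i, t) for i, t in enumerate(per_level)]
levels[1].files = [SSTFile("a"), SSTFile("b", oldest_tombstone_age=25.0)]
levels[2].files = [SSTFile("c"), SSTFile("d"), SSTFile("e")]

lvl_no, chosen = pick_compaction(levels)
print(lvl_no, chosen.name)  # file "b" wins: its tombstone expired at level 1
```

The key idea survives even in this toy form: an ordinary compaction picker runs only when no tombstone is about to violate its level's share of the application-wide delete deadline.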

Angelo: And this idea of levels and compaction and tombstones is a concept that you see not just in LSM trees; you see it in Apache Cassandra and RocksDB and LevelDB and BigTable.

You find the same idea over and over again. I mean, the BigTable paper was 2005, I think, so this idea has been around for a long time, and now we're getting to where we're innovating and finding new ways of doing these things. I am curious: you just mentioned that there's this other paper you have worked on with LSM trees that talks about different classifications of compaction, so that you can handle them differently.

Manos: Right. So, as you mentioned, the concept of LSM is used by many systems: Cassandra, HBase, LevelDB, RocksDB, which I think I mentioned already, and Amazon had DynamoDB back in the day. So there's a lot, and there are many others that I don't have in front of me right now.

There are many systems that use LSM. In our work we experiment most of the time with RocksDB and LevelDB, but all of these ideas are not limited to those systems; HBase and other Apache projects, for instance, have been very successful in the last few years.

And the idea, you know, stems from a paper from '96, from Patrick O'Neil, from the Boston area actually, and eventually started really taking off with BigTable and LevelDB about 12 to 14 years ago, even though the original paper is already 25 or so years old. From the original paper to all these systems, they have similar LSM principles under the hood, but then they differ.

There is WiredTiger, which is used by MongoDB, for example. All the systems are not identical, right? Some of them have different tuning knobs, but they are similar in their actual shape. For example, some of them might have a size ratio of ten between consecutive levels, and some of them have a size ratio of two, and then you can also tune the buffer size across systems.

You can also decide whether you are merging eagerly or lazily, the so-called leveling or tiering approaches. Some systems have hybrids; RocksDB now has several hybrid approaches, including universal compaction, which is a version of tiering for several levels and leveling for the bottom level.
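As a back-of-the-envelope illustration of the size-ratio knob (the numbers here are made up, not taken from any particular system), a larger size ratio means each level holds many times more data than the one above it, so fewer levels are needed for the same dataset:

```python
# Rough capacity model: level i holds about buffer_entries * size_ratio**i
# entries, so the level count grows logarithmically with data size.
import math

def levels_needed(total_entries: int, buffer_entries: int, size_ratio: int) -> int:
    return math.ceil(math.log(total_entries / buffer_entries, size_ratio))

data = 1_000_000_000   # one billion entries total
buf = 1_000_000        # one million entries fit in the write buffer
print(levels_needed(data, buf, size_ratio=10))  # 3 levels with ratio 10
print(levels_needed(data, buf, size_ratio=2))   # 10 levels with ratio 2
```

Fewer levels generally means fewer places a lookup has to check, at the cost of larger and more expensive merges; this is one of the trade-offs those tuning knobs navigate.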

And essentially there's a lot of richness in exactly how you compact, and the different designs come from your workload. Right? If your workload has a lot of point queries and no range queries, then maybe you don't even need to have range-based merging; maybe a hash-based

LSM tree could work just fine, similar to what Microsoft does with a system called FASTER, a very good piece of engineering and a very nice system, which essentially has hash-based organization of your data over an append-only log that ingests new insertions. So when you know your workload, you can figure out the best way to compact your data.

And in that paper, we talk about a few dimensions along which you control your compactions. You can decide when to compact, which files to compact, whether to compact lazily or eagerly or anywhere in between, and also the granularity of compaction: do you compact a whole level, or a bunch of files only, and so on.

And in fact, I believe this paper has only scratched the surface. Essentially, we can make compaction different per level: we can allow every level to make its own decisions about compaction, and then you have an even richer design space of compactions.

Compactions essentially dictate how the system navigates the trade-offs between read queries and write queries, and among specific kinds of read queries: point queries on existing and non-existing keys, and range queries.
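The design space just described can be pictured as a cross product of independent choices. The dimension names and values below are illustrative stand-ins, not the paper's exact taxonomy, but they show how quickly the space of possible compaction policies grows:

```python
# Each compaction policy is one combination of independent design
# choices, so the policy space is the cross product of the dimensions.
from itertools import product

dimensions = {
    "trigger":     ["level_full", "tombstone_expired", "space_amp_bound"],
    "data_layout": ["leveling", "tiering", "hybrid"],
    "granularity": ["whole_level", "single_file", "few_files"],
    "file_picker": ["round_robin", "least_overlap", "coldest_file"],
}

policies = list(product(*dimensions.values()))
print(len(policies))  # 3 * 3 * 3 * 3 = 81 distinct policies
```

Allowing each level to make its own choice, as suggested above, multiplies this count again by the number of levels, which is why the per-level view opens such a rich design space.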

Angelo: So I love that you mentioned that if you know the workload, the specific thing you're trying to accomplish, you can arrange, tune, or tweak things.

And this is not necessarily optimization in the sense of taking a generic system and turning knobs. You can build your own version that exploits your workload. The example you used, I think, is a really good one: range queries. If you don't use them, you don't need to have a system that is general purpose. And we found the same kind of thing.

We found that when we used RocksDB, we tuned it in a certain way and kind of built our own components, because we had a specific query pattern in healthcare. We knew that with this healthcare data we never do range queries; actually, we are always going to do hash comparisons and key comparisons. So in that case, we're always going to be looking for the patient's key, trying to find the needle in the haystack.

And so because of that, you can build a system where you get an advantage. When people say, "Why is this so much faster than anything else?": because if you pull an off-the-shelf database down, it has to be built in a general-purpose way, since it doesn't know the workload you're going to put on it.

And so it has to be flexible enough. A lot of people, I think, are afraid when they see that; they'll say, "I have to build my own? That's terrifying." But that's why the Facebooks and the Googles and the Amazons build their own data systems: because they know their workload, and a general-purpose system doesn't serve it as well.

Manos: That's actually a point I wouldn't mind getting into more. Essentially, just to add on this: as you said, you can tune your system according to your workload, but if you tune an existing system, you are somehow bound to whatever the system exposes as tuning knobs. If you build your own system (and of course it's a huge effort), you have more control. Without naming names, I know that even companies that are using existing systems sometimes prefer to build their own, to have more control over how the system is used and developed, and also control over what direction the code base takes.

Of course, oftentimes, for several applications, it makes sense to say, "I don't want that." If you're a hospital, you don't want to embark on building your own data system. Now, if you have a group of people who are into data systems and work on your data, they might say that that's the best strategy.

But in the beginning, maybe you can just say, "I want to use an existing system, and then I want to tune it automatically." That might be a very good starting point in many cases. And then, once you go deeper and once you have a team that is equipped with the skills, you can actually understand whether this is enough or whether you need to go build a new system that targets your specific workload.

So essentially you need to understand whether this workload is important enough; I guess that's the main question. If it is important, in terms of how frequently you are using it and how big the potential benefits of changing the underlying system are, then you might go and build your own system.

Angelo: No, it makes sense. And that's a perfect example: you wouldn't want to do this if you're a health system, because that is not your core business, and you have to own it. Now you have to maintain it; you have to continue to feed it and take care of it. Our use case was that we were building to compute quality data on millions of patients.

And that's what that product does: one thing, well. So it made sense, and it gives you a competitive advantage if you build your own thing from scratch. So you're right. But I would say: don't hesitate, don't be afraid to build something if you need it, if the use case presents itself.

Sometimes we get so afraid of doing something new that we just live with the consequences, even though it doesn't meet our service level agreement.

How were these two papers received?

Manos: Oh yeah, they were well received. The first one, by the way, the delete-related paper, we call Lethe, which comes from the Greek word "lethe," meaning forgetfulness. And the goal is to forget things, right? Once you delete them.

And essentially, we were very recently invited to contribute to an issue on regulation-related research in computer science, exactly because of this work focused on how to build data systems that can comply with specific regulation. The compaction analysis paper, I would say, is a more difficult paper for the broad audience, because it's really about the internals, about how LSM trees work on the inside, which has nothing to do with any particular application.

But the people who use LSM trees in their everyday operations did read the paper. Sometimes we gave talks, we visited them, and they said: this is so insightful, this is so interesting; please keep doing this so we understand all the details better. Some of the details they know really well, because that is what their own system is doing,

and some they didn't, and now they are actually figuring out what other systems are doing and what other compaction policies exist.

Angelo: Amazing. I mean, I'm not surprised; I've seen your work, and I've seen you being involved with ACM, VLDB, several conferences, right?

Manos: Yes. I mean, first of all, we publish this work mostly at conferences like ACM SIGMOD; ACM is, you know, the premier organization for computer science research, and SIGMOD is its premier conference for research in data management.

And VLDB is a conference run by the data management community itself, but it is essentially deemed the equivalent of SIGMOD. And I'm involved in those, as an author of course, and as a reviewer for many years. But also, let me take this opportunity to mention one more thing.

We have been working, along with many other people who preceded me, toward the goal of reproducibility and availability of research artifacts. Ideally, every paper published in these two conferences, and in general, would not only have the PDF of the paper that explains and presents the results, but also have the source code available, ideally through a common or public repository.

This could be, for example, the ACM Digital Library in the case of SIGMOD. And also, ideally again for all papers, though we're not yet at the point where this happens for all of them, there would be an independent team that fully reproduces the results. This means that the independent team takes the code, runs all the experiments, and tries to regenerate all the graphs.

And at the very least, it verifies the core principles, the core messages of the paper. This is a new endeavor, because reproducibility of science has been under scrutiny over the last few years, and that makes perfect sense. We really want to make sure that research results are actually trustworthy.

Angelo: It's so true. I just did an episode, episode five, on what happened to Rosenblatt's perceptron, and the reason it kind of got put away was that nobody tried to reproduce the research. The field just took one of the opposing books, one of the critics, at their word, and it turned out to be wrong. And then we had the AI winter because of it.

Manos: Actually, it's not only about reproducing; it's about availability. If you have the artifacts available, if you have the code that someone came up with readily available, then you can try to run something out of the box without having to reprogram it yourself.

Right? So you can start from that. Maybe you change something along the way, but you can start from it: you can rerun the experiments they ran, and then run your own. And this actually helps accelerate the whole process of research.

Angelo: So normally our engineers always go to scholar.google.com and we start by doing searches there. Is that the best place to start to find these papers? Because they're not all there, and they're sometimes hard to get. While we're on the topic of availability of the research, do you have any recommendations? You certainly won't find code there, but at least for the availability of some of this research.

Manos: Well, good question. I think that anything as specific as research on a particular area of computer systems needs a certain level of insight and expertise, which only grows with time, and with trying and failing to actually figure out which paper is the best one on the problem you are trying to work on. And what I'm going to say is that, yes,

Google Scholar is definitely a very good place to start. You actually have to learn with time which conferences in general tend to address the questions that interest you, and maybe also which research groups, so that you know the specific people who are working on that problem. Of course, you cannot do this

if what you are looking for is very broad; you cannot know everybody and you cannot know every problem. But if you have specific applications, it's definitely worth trying to understand which conferences typically produce the best results in that area, and maybe which groups are doing the best work.

And similarly, this can happen through direct connections, through coming to conferences, following conferences. One positive thing of this whole experience of the last few years with COVID is that many conferences are now either fully online or hybrid. And even before COVID, many computer science conferences were actually posting their talks online.

So it is easy to actually go find a talk on YouTube from most of the recent conferences and follow it to get the high-level idea of the work. And many conferences have tried to make the papers themselves, the PDFs, more available by opening them up, at least for the first year.

Some conferences are actually following a completely open approach; VLDB and EDBT, two data management conferences, make all their PDFs always available online. ACM is not doing this by default for everything, but ACM is pretty good about having a lot of flexibility, a lot of options, to make the PDFs available. And also keep in mind that many people are now putting intermediate versions of their work on arXiv as well, and you can find many, many papers over there. So I would say Google Scholar is definitely very good, because it does a lot of smart indexing, so it can definitely help in that direction. And if you find the title of a paper that sounds interesting but you cannot find the paper online,

just email the authors. I think 99.9% of authors are super happy to send their PDFs to anybody who's interested.

Angelo: Yeah, absolutely. So I'm going to post the links to your papers in the show notes. Also, I could post a link to the code if it's available online; let me know.

Manos: I'm always happy to send things over email.

If you go to the lab's website, we have a section on resources, and there are several repositories there from which you can download the code.

Angelo: You mentioned RocksDB and LevelDB; what are these papers, Lethe and the LSM compaction analysis paper, based on?

Manos: They're both based on RocksDB.

Angelo: That's fantastic. And I can't even tell you how many data systems use RocksDB. Kafka, Flink; almost everything in the big data space uses RocksDB. So that's an excellent tool. Well, Manos, I think our time is ending here, but I wanted to say thank you, first of all, for taking the time to do this. It's great to catch up.

We haven't seen each other in half a decade, but it's really fantastic. And I'm happy for you: your move over to Boston University, the work you're doing there, and your family. Congratulations.

Manos: It was a pleasure to visit and to discuss these topics. I'm very passionate and excited to talk about them, so I'm happy to do it again, you know, if we have new questions along the way. But also, let me quickly say: I've listened to some of your episodes, and I think what you're doing is a very nice job, a very nice way of exposing several interesting topics to a slightly broader audience, right?

So not everybody has to be an expert on everything to at least follow part of it, but there are also very interesting points for the experts to see something that maybe they are not working on right now. So I think it serves a dual purpose, and I have enjoyed the first episodes of your podcast.

Angelo: Thank you. Well, thank you very much. Can't wait to have you back next time, and you will be back, because I want to talk about something else: we all know Moore's Law is coming to an end, and some of the work you're doing there is super fascinating. So that's just a little bit of a teaser for next season. Thanks, Manos.

Manos: I’m happy to talk about this work in the next season. Thank you.

Angelo: Absolutely. Have a great day.

Manos: Thank you.

Angelo: I'm Angelo Kastroulis, and this has been Counting Sand. Please take a minute to follow, rate, and review the show on your favorite podcast platform so that others can find us. Thank you so much for listening.