Ep 169: Martins Vaivars

 

What you need to know about building on top of OpenAI, how to validate a business idea, why doing the wrong things can be worse than doing nothing, and the reality of choosing startups over a well-paid corporate career

Martins Vaivars is the CEO and Co-founder of RivalSense, an AI-based tool designed to help businesses monitor their competitors. He is a serial entrepreneur; RivalSense is his fourth company, so on top of practical knowledge of building AI software, Martins brings plenty of lessons from his previous startups.

On this episode we talk about:

  • Knowing what the competition is doing

  • Development and validation of products

  • Challenges and solutions in data management for AI

  • Quality control and error management

  • Business applications of AI insights

  • Learnings from previous businesses

We are on YouTube and LinkedIn as well

 Watch select full-length episodes on our YouTube channel > https://www.youtube.com/channel/UCP6ueaLnjS-CQfrMCm2EoTA 

Connect with us on Linkedin > https://www.linkedin.com/company/pursuit-of-scrappiness/


Read the full episode transcript below

 

Uldis (00:02.573)

Hello, hello, hello, hello, dear listeners. Welcome to another episode of the Pursuit of Scrappiness podcast. Whether you're building a business, running a team, or just starting out in your career, we are here to bring you scrappy and actionable insights to help you become more productive. My name is Uldis Teraudkalns and my co-host is Janis Zeps. Hi, man.

Janis (00:23.854)

Hey, everyone.

Uldis (00:26.189)

Before we start, a quick reminder: click follow on Spotify, Apple Podcasts or the platform of your choice. In return, you will get more than 160 recorded episodes of ageless wisdom, timeless tips and, I don't know what else to call it, just great content. So scroll down the feed, find the founders that you always wanted to hear from and get inspired.

Click the follow button and be the first one to find out when the episode comes out every Tuesday morning. About today's show. This show is all about the builders. Builders who don't give up. Builders who are relentless in their pursuit of scrappiness. And builders who also help us talk about AI. These two letters might have been featured slightly in excess on the show recently, and we're probably not the only ones.

But today we want to talk about AI not just as a buzzword or an abstraction, but about the very practical steps you need to take to start building a business in AI and the things you need to be aware of when building in AI. And to do that, we have invited Martins Vaivars from RivalSense. Hey, Martins.

Martins (01:44.797)

Hey Uldis, hey Janis. Good to see you. Thanks for inviting me. Happy to chat.

Uldis (01:50.829)

So for those of you who don't know, Martins is the CEO and co-founder of RivalSense, an AI-based tool to keep track of your competitors. That's my line, that's not from their communication, I hope I nailed it. Recently they raised a round from the likes of Change Ventures and Fiddler Capital, and prior to RivalSense, Martins was actually building several other companies, which we will also talk about on the show.

He has a degree from Oxford and is an experienced debater, so let's expect some proper argument crafting on the show and jump into it. So to start off, you're building RivalSense. Let's start with: did I get it right explaining what RivalSense does?

Martins (02:41.629)

Mostly, except for the funding round amount, which we have never disclosed. So I'm not sure where you got that number from, but overall, exactly. Basically, it's a tool where you can monitor competitors of your startup or your company, and every week you get a curated update in your inbox where you can check out different things happening: your competitors have hired new people, launched new products, changed their prices.

So we pick up all of those things with the help of AI and deliver them to your email inbox. So that's it.

Uldis (03:20.141)

Clearly our AI that is trained to follow companies' funding rounds needs some more training to do. Sorry about that. So in that research phase, developing the product, what practices did you encounter that companies employ to follow the competition? Is it something that companies really spend their time and resources on? I mean, it makes sense that they would, but what's the reality? What did you find out?

Martins (03:49.758)

We saw a massive range. Basically, every company larger than, say, 60-70 people does this a lot. Many of them actually have a dedicated person, sometimes called a competitive analyst, competitive intelligence, business intelligence: some guy whose job is to monitor competitors and maintain some sort of Google Sheet. In the rows on the left-hand side you have all the competitors; in the columns

you have various things: size, funding, products, markets, things like that. For smaller businesses, it really depends. Some founders don't do it at all, and some founders do it religiously. My impression is that most founders do some of it, but they realize that there's a real opportunity cost to spending too much time on manual work, just Googling stuff, checking LinkedIn pages, so they only do it a

little bit. And that's the kind of dynamic that we wanted to play with. We wanted to say: you can monitor your competitors but not spend more than three to five minutes per week. So we kind of tip the balance a little bit.

Janis (05:06.767)

Actually, I wanted to ask: what are you able to pull? I mean, how deep can these insights go? Are you looking at everything that's in press releases, or are you also digging into, I don't know, what's happening on LinkedIn? You know, they're suddenly hiring in location X, and that could be interesting because they might be opening an office there. With the current technology, how deep can you go in this surveillance?

Martins (05:35.025)

We can go quite deep; it depends on the company, but LinkedIn is definitely one of our top sources. People actually disclose a lot of stuff on LinkedIn. They say things like, I'm happy to launch this new product, or...

you can see that suddenly one guy has quietly ended his work term at some company. That's an interesting signal. But we also use a lot of other sources. We do Twitter, we check things happening online. People are writing blogs, on Substack, on Medium. We check out a lot of databases, so, for example, if something happens in the ownership structure of a company, we notice changes like that.

And we're trying to add more sources, because one thing I've seen so far is that for some companies you can get a lot out of LinkedIn, Twitter and the open web, but for other industries that's not enough and you actually need to add more proprietary sources. So it's a mixed picture.

Janis (06:42.478)

I have a product development idea for you. One of the jokes I saw on LinkedIn, it was actually a good one: one dude took a photo of a bunch of people in the park with laptops and said that he's going to the park in Austin, Texas, asking people where they work and then shorting the stock of those companies. If they are in a park with a laptop, then there's nothing good going on.

Martins (07:09.598)

Actually, that's one of the biggest challenges so far, that people are posting such trash on LinkedIn. One of the biggest challenges for us is monitoring companies that have these sales guys who just post garbage all day long. And it's actually quite challenging to deal with that amount of data. You know, something like, I went to my sister's graduation and this is where I learned about B2B sales, and these guys post five posts like that every week. And even...

Janis (07:16.206)

Heheheheh

Martins (07:39.505)

even for AI it's sometimes challenging to figure out if they're saying bullshit or if they're saying something substantial.

Uldis (07:48.909)

Going back to your, I don't know, pre-MVP or early building phase: what kind of method did you use to validate the idea? What kind of testing process did you use to understand whether there's demand for your product? Because obviously you can start building and build for a while, but then realize that maybe it's not what people actually want. So how did you go about this?

Martins (08:18.207)

So a big mistake that me and Atos, my co-founder, have made in our previous companies is that we built technology before we had actually proven that someone wants it. So we were really determined not to make that mistake this time, and we actually took two steps before we started building the MVP. The first step, at the end of last year, was that I set up around 40 phone calls

with potential clients. Basically, the idea started from my own experience. I thought this is something I would like to use, but that doesn't mean other people want it. So I set up 40 calls with people, and every call I tried to end with a question: would you pay 40 euros for this if we launch this next, I don't know, next week? And I really tried to create this urgency. I was trying to say, next month we have this product, will you pay? And

out of 40 people I talked with, 20 said yes. And to me that was the first real signal that something is going on here. But the problem is, even then, once we actually launched the product, out of these 20 only 10 people paid. So even in that situation people are lying. And that's one thing I've learned doing these startups in the past: you can't really trust what people

say, because the only thing you can trust in the world is someone taking out their credit card and actually paying for something, and then later sticking around, because obviously retention, churn, all those things, customer lifetime value. And then, after those interviews, Atos and I built a landing page accepting Stripe payments, you know, with the payment gateway, and we already had a small prototype going,

but with strong manual involvement, and we actually started accepting payments. And when people actually started paying with their credit card, I was like, okay, this is real, this is not something that I have imagined in my mind, because if more than 10 people start paying for it, there must be something there. That seems more real than the interviews. Yeah.

Uldis (10:45.229)

Strong manual involvement sounds like a good way to describe an AI tool.

Martins (10:53.536)

But you should do that. It's a concierge MVP. In one of my past companies, we literally built technology for five months because people told us, this is amazing, this is better than sliced bread, we're gonna pay for this. And people almost signed

a statement of intent that they were going to buy this, and we actually built the technology, and they said, not this time, not really. So you can't really believe what people say. It's revealed preference: you can't trust it when people say that what you're building is cool, because people have this tendency of always praising you, telling you what you're doing is great. But those are just lies; you can't trust any of that.

Janis (11:53.774)

It's similar. We spoke to Y as well, some people who were early there, one of the founders, and they also said: you wouldn't even believe how long we did a lot of the transactions manually, and look how it worked out. So, yeah.

Uldis (11:54.349)

So bass.

Martins (12:11.521)

The first few months were quite painful, because we had those users and I was staying up till midnight doing some of those manual components. Atos was building the underlying tech, also till midnight, but somehow, slowly, we got out of that and actually managed to automate most of the things. So now it's completely automated. But those first few months were shit. Still, I think it's the right approach, because

it's much worse to build something no one wants. It's just a bad situation to be in.

Uldis (12:49.837)

I might be jumping a bit ahead of my line of questioning, but it makes sense to follow up here on quality control. Because, as you said, when it's very manual, you can have a big filter on what is being spit out of your black box. But if you make it fully automatic and it basically picks up stuff on the internet, then it can pick up a lot of wrong

or even offensive things, maybe. I guess for competitor analysis the likelihood of offensive content is not very high, but you never know. So how do you train the model? How do you make sure that what you get out is true and okay for business use?

Martins (13:47.616)

Yes, I think one of the

key things founders who want to work with large language models or multimodal models need to understand is that it's a probabilistic model. There's actually no way of guaranteeing the same result when you put in the same input several times, so there's always this uncertainty about what sort of output you will get. And the problem is, you have to look at the business use case, because there are some

use cases that are mission-critical, where this uncertainty is completely unacceptable. You know, one in a thousand times you get something wrong, and that's unacceptable. Legal cases are like that sometimes, basically sensitive cases. But for something like competitor analysis, we defined the quality standard that we need, and we experimented a lot with the setup, with

how we ask questions, with the prompts, with the data pipeline. The data pipeline, cleaning up the data, is actually by far the most complicated part of what we do. And now we still have some errors, but I think having a one-in-a-thousand error is not the worst thing in the world. You know, founders are smart. They get the output, sometimes they notice that there's a problem, they flag it to us,

and we try to fix it for the next time. But in general, I would warn people who are trying to use large language models or multimodal models in these very mission-critical, sensitive applications, because there you might need a human in the loop. For example, if I had some big business contract, an M&A contract for example, there's no way I would willingly, without any human in the loop, rely on large language models to just draft that

Martins (15:48.769)

contract without me checking it.

Janis (15:51.694)

Well, you're in a good industry, I think, to experiment. In reality, I would imagine you feed those insights to people and save a lot of time in their day. But if there's a really actionable thing, I mean, it would be crazy if that person just didn't dig into it, right? You see a big competitor launching a product; it's kind of normal that that person goes and searches anyway, and then even if there's a mistake, you know, probably

it didn't alter the course of the company, hopefully.

Martins (16:23.328)

And humans make mistakes too. Imagine a typical company: you will hire some 24-year-old Ernst & Young guy to do competitive research for you. That guy will make mistakes. He will look at some website where your company is mentioned, he will not understand all the details, he'll not double-check all the data. So

people make mistakes as well. But the real value is not the fact that sometimes you make some mistake; the real value is, can you deliver actionable insights. And actually that's another part that me and Atos really liked, because literally in the first week when we launched it, our first client was tracking his competitor, and that competitor silently removed one logo from the case studies on their website.

And for me it was like, wow, this is actually quite useful, you know, what does this mean? And he immediately sent it to his sales team, and his sales team reached out to that client, and it turned out that that client had terminated the cooperation. So it's small things like that. Even if once in a hundred times there's a mistake, it's still useful, because you can do something with it. You can close deals, you can hire people when they leave your competitor's company, you can

notice when they launch new products so you can adjust, you can see when they enter a new geography, maybe it's an interesting geography for you. So it's all about delivering those actionable insights. And to be honest with you guys, we're still trying to figure out how to really add as many actionable insights as possible, because it's not as easy as it seems; there's a lot of thinking and fine-tuning involved.

Uldis (18:16.205)

Have you ever encountered the model misunderstanding humor, sarcasm, or some kind of alternate meaning behind the actual words that it picked up?

Martins (18:31.453)

Not necessarily with sarcasm, but one thing that we have sometimes made mistakes with is what salespeople do a lot online: they take something which is really basic and try to spin it as if something really new has happened. They try to say, we offer five-day delivery, we are announcing our new five-day delivery. And sometimes

it's hard to understand whether it's something really new and unique that we should be including in this weekly email, or whether it's just some marketing spin. And there's a lot of marketing spin online.

But a human is the same. If I knew that company really well, I would immediately understand, okay, this is bullshit. But if I were researching some company I don't know, I'm not sure I would understand that it's just marketing stuff.

Uldis (19:17.229)

No, for sure.

Uldis (19:37.869)

Alright, let's assume that I want to build an AI assistant tool for cats or something like that. Where do I start? What kind of skill set do I need on my team? How much can I rely on OpenAI? How much of my own engineering is required? And what is maybe the entry barrier to this game?

Martins (20:07.996)

AI is a very broad term, I actually don't like the term at all.

Because it encompasses so many different things. Just to give you an example: so you have an AI app for cats, and, okay, first of all, I'm not sure how you can use language to deliver something for cats, but there's a very big difference between whether you rely on some existing model that's out there, like OpenAI's GPT for example, and you just send an API call to it with some sort of

context and data and just get a response back and do something with that data. I don't know, show a cat... I don't know what you're trying to build with the cats, but... Exactly, it gives you some sort of details on how you should get your cats to squat or something, but...

Uldis (20:52.941)

Cat exercise gymnastics.

Janis (21:02.67)

Are you willing to pay for it? That's what we're trying to get a sense of.

Martins (21:06.876)

I'm willing to, guys, just like the guys that told me, I'm willing to pay 10 euros, next week, just call me. It's gonna be fine. But the much harder thing, guys, is actually training your own model. If you actually want to do something like supervised learning, where you would take a million pictures of cats and a million labels to understand what's happening with a cat in each situation, there's no way around it: you actually need a proper

Janis (21:12.942)

Yeah.

Martins (21:36.829)

background of knowing linear algebra, knowing some statistics, knowing some stuff about machine learning. So it really depends on whether you're just relying on an existing model or whether you want to train your own model. And besides supervised learning, you guys have probably heard about adversarial learning, where I can train a Super Mario player just by playing a million Super Mario games, and all kinds of stuff like that.

And I feel there's a kind of misconception. What those guys are doing, someone like Mistral or OpenAI, that's really hardcore engineering, because you actually need to gather lots of data and you need really, really smart people to do it. But if you're just building a wrapper on top of existing models, I don't think you need a really hardcore engineering background. You need people who

just know how to do API calls, can build a nice app front-end, back-end, and things like that. What we do is somewhere in the middle, because we still do a lot of stuff with the data pipeline, which actually is quite challenging, and my co-founder is also a pretty hardcore, you know, mathsy engineer guy. So it really depends on what you're trying to do. In general, I would advise everyone who

wants to do something with machine learning to learn a little bit of linear algebra and a little bit of statistics. You don't have to go really deep, just take some sort of introductory course online. There's a lot of stuff available for free, just to learn the intuitions, you know? So when OpenAI gives you this kind of random answer, you kind of understand, okay, this comes from a normal distribution, this is why this answer

was like this and not like that. Or one big thing, for example, in your cat app: if you would, I don't know, look at a large document of text about different cat instructions, one thing you often do is something called vector embedding, where you take this text and turn it into an embedding. And to know what that is, you need to know very basic linear algebra, at

Martins (24:06.19)

university first-year level. So to answer your question, it depends on what you want to do. For most AI apps, and I think 99% of the businesses that will be born in the next years, I don't think you need to know hardcore AI science stuff. But to work for Mistral, yes, you need to be really hardcore.
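To make the embedding step concrete, here is a minimal sketch using the OpenAI Python SDK. The model name and input text are illustrative assumptions, not anything specified in the episode.

```python
# A minimal sketch of the vector-embedding step mentioned above, using the
# OpenAI Python SDK (pip install openai). The model is one of OpenAI's
# embedding models, chosen here for illustration; the input text is made up.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

resp = client.embeddings.create(
    model="text-embedding-3-small",
    input="Chapter 4: Teaching your cat basic gymnastics",
)

vector = resp.data[0].embedding  # a plain Python list of floats
print(len(vector))  # 1536 dimensions for this model
```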

Janis (24:28.911)

I think that is the interesting application, probably, for a lot of people who listen and think of their companies: building on top of OpenAI, for example. In very simple terms, how would you describe the experience? You basically pay for API calls; how do they price their services?

Martins (24:53.211)

So basically the price, so they have different models. I think the most recent model is GPT-4o, and very simply, you send text in and you get text out. Yeah. And they have pricing; I think the current pricing is that you pay $5 for 1 million tokens that you send in. A paragraph is around 50 tokens,

just to help you calculate it. And for getting output out, you're paying $15, again per million tokens. The important thing to note here is that different models have different prices, so you have to think about which model you need for each use case. So yeah, that essentially is it. Of course, it's not as simple as that, because you actually need to get the data that you want to put in.

For example, if you have some sort of cat app where you just write some text, like customer support, imagine, just write some text, it's trivial, because you get text, you send it to OpenAI with some instructions, some context window, and OpenAI gives you some answers. So it's trivial. But there are much harder use cases, where you actually have to get data from somewhere, clean it, transform it, do stuff with it, and that makes it much harder immediately.
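To make the pricing mechanics concrete, here is a minimal sketch of the send-text-in, get-text-out pattern with back-of-envelope cost math, using the OpenAI Python SDK. The prices are the per-million-token figures quoted in the episode and will likely change; the prompt is made up.

```python
# Minimal sketch of the call-and-cost pattern described above, using the
# official OpenAI Python SDK. Prices are the per-million-token figures
# quoted in the episode and may be out of date.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

PRICE_IN_PER_M = 5.00    # USD per 1M input tokens (GPT-4o, as quoted)
PRICE_OUT_PER_M = 15.00  # USD per 1M output tokens (as quoted)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Summarize competitor updates in one sentence."},
        {"role": "user", "content": "Acme Corp just announced five-day delivery."},
    ],
)

usage = response.usage
cost = (usage.prompt_tokens * PRICE_IN_PER_M
        + usage.completion_tokens * PRICE_OUT_PER_M) / 1_000_000

print(response.choices[0].message.content)
print(f"~${cost:.6f} for {usage.total_tokens} tokens")
```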

Janis (26:23.888)

But what I was trying to find out is, of course you pass that cost on to the customers, I guess, but in that sense, is it financially viable for startups, for people to kick off? It's not something super expensive that, you know, only big companies can afford. That's what I'm sensing, right?

Martins (26:46.171)

I think it's totally affordable. I think people are a bit scared by those numbers they see, that OpenAI or Elon Musk's xAI raise these insane amounts of money. But they do that to train their own models, basically to build the weights of a model. Running your own API calls is not that expensive at all. Of course, you need some sort of unit economics,

because it's not free, so you need some sort of price that justifies those API calls, but it could work. But again, it really depends on the application. If I send some chat message, it's a very short context, a very limited amount of tokens. But if I would take a book, which is, I don't know, 300 pages, and I send it in and ask questions about what's going

on in the book, that immediately is a lot of input tokens, and that just makes it much more expensive. So you have to do the math there. One suggestion I have for founders, a few cost-cutting suggestions actually, one is that there are different models out there. As I said, GPT-4o is the state-of-the-art one, but for example GPT-3.5

Turbo, which is an older version, is ten times cheaper. It doesn't cost you five dollars; it costs you, I think, 50 cents for 1 million tokens. And not all use cases need those skills that 4o has, because for some simpler use cases you can easily do it with older models and still get the performance that you need.

What we saw at RivalSense was that for most cases these older models are good enough, but there are some interesting small situations where the older models don't perform well. For example, it might be a stupid situation, but basically, the longer your context becomes, at some point the old model just breaks. And we don't know why. I can't tell you the reason, but it is that way,

Martins (29:15.677)

and if you send a long context, at some point you just need the 4o, and there's no way around it. Yeah.
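A hypothetical routing helper illustrating the cost-cutting idea above: send short inputs to the cheaper, older model and fall back to the newer one only when the context grows long. The threshold and model names are illustrative assumptions, not RivalSense's actual setup.

```python
# Hypothetical model-routing helper: cheap older model for short, simple
# inputs; newer model only for long contexts. Threshold is an illustrative
# guess, not a measured breaking point.
def pick_model(text: str, long_context_chars: int = 20_000) -> str:
    if len(text) > long_context_chars:
        return "gpt-4o"        # newer model holds up on long contexts
    return "gpt-3.5-turbo"     # roughly 10x cheaper, fine for simpler prompts
```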

Uldis (29:27.245)

Any other important things that you have learned in this building story that people should know? One is managing costs; that obviously is an important one. And of course, choosing whether you need to just plug into OpenAI or similar, or build something of your own, depending on the use case. So we have two, I think, very valuable starting tips. Are there any others?

Martins (29:58.042)

Well, the third one I already mentioned before, which is this uncertainty. Basically, you will never get the same output, just because the model is inherently probabilistic. So

every time you'll get a slightly different output, and that might not be good for your use case. So you have to think very hard about whether your use case really can handle these kinds of uncertain outputs. Actually, there are other natural language processing approaches that do not rely on this kind of probabilistic output, so maybe those are good for you. Another thing I would recommend, if you're trying to get something out of

OpenAI and do something with it: it's actually much easier sometimes to not request text, but to request a JSON object, so you can use it in code right away. You can just request some sort of variable out of it, I don't know, like a string or a Boolean, and you can actually do something with it and plug it into your code.
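Here is a sketch of the ask-for-JSON-not-prose suggestion, using the SDK's JSON mode. The field names in the instructions are made up for illustration.

```python
# Sketch of requesting structured JSON instead of free text, using the
# OpenAI SDK's JSON mode. Field names are invented for this example.
import json

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # forces syntactically valid JSON
    messages=[
        {"role": "system",
         "content": "Reply in JSON with keys 'is_product_launch' (boolean) "
                    "and 'summary' (string)."},
        {"role": "user",
         "content": "We are thrilled to announce our new five-day delivery!"},
    ],
)

result = json.loads(response.choices[0].message.content)
if result["is_product_launch"]:  # a real Python boolean you can branch on
    print(result["summary"])
```

Because the response is guaranteed to be syntactically valid JSON, the parsed fields can be used directly as typed values in code, which is the point being made here.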

Another suggestion, maybe I'm talking too much, so stop me, but another suggestion is that often you can break up bigger tasks into a lot of smaller tasks. As a very stupid example, instead of sending the whole book and saying, do something with this book, like, is this book interesting, you can often break it up: think smart about what the book

contains, and maybe you can, I don't know, only send the table of contents as the first thing and ask a small question about it, like, does the table of contents include a chapter about, I don't know, cats? And then, depending on that answer, you can proceed or not proceed, to exclude those cases very quickly. And most use cases actually have something like that, but it really depends on your situation.

Martins (32:09.308)

For us, it's a big challenge. We're always thinking about these optimizations, how we can decrease costs. So it kind of depends on what you're building, but in general, you can always try to break it down into these smaller steps. Yeah.
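A sketch of the break-it-down idea from the book example: run a cheap yes/no check on the table of contents before paying to process the full text. The helper functions and the book workflow are illustrative, not RivalSense's actual pipeline.

```python
# Sketch of breaking a big LLM task into smaller, cheaper steps: a yes/no
# filter on a small slice (the table of contents) gates the expensive call
# on the full text. Everything here is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

def ask_llm(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def toc_mentions(toc: str, topic: str) -> bool:
    # Cheap first step: a few hundred input tokens instead of a whole book.
    answer = ask_llm(
        f"Does this table of contents include a chapter about {topic}? "
        f"Answer only yes or no.\n\n{toc}"
    )
    return answer.strip().lower().startswith("yes")

def summarize_topic(toc: str, full_text: str, topic: str) -> str | None:
    if not toc_mentions(toc, topic):
        return None  # skip the expensive full-text call entirely
    return ask_llm(f"Summarize what this book says about {topic}:\n\n{full_text}")
```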

Uldis (32:28.333)

Okay, I think we have a good starting point. And now that we have built, managed our costs, chosen the platform, we need to sell it. And I think we have gone very quickly from almost no AI tools, or at least no openly AI tools, to articles like, here are the top hundred B2B SaaS AI tools to boost your business. So...

How do you rate the openness of B2B clients to experiment with all kinds of AI tools, given that they usually are not the cheapest ones, at least the ones that we have encountered on this podcast? How easy is it to actually convince them that what you're providing is more than just a

casual prompt in ChatGPT?

Martins (33:30.17)

I feel people don't care about whether it's AI or not. They care whether you can solve their problem in some tangible way. Of course, sprinkling AI on it gives it a little bit of this magic, and it's exciting. But people have been using AI stuff for a very long time. It's just that now there's this AI bubble, hype bubble, but businesses have been using AI for a very long time. For many years now, when you tag your friends on

Facebook, Facebook immediately notices where the faces are, right? And that's just image recognition; that's obviously supervised learning, that's AI. And other tools like Salesforce, HubSpot, all these cold email generators, customer support tools like Zendesk, they've been using AI for a very long time, and businesses have been happily buying that stuff. It's just that now, with this other thing of large language models,

AI has become this thing that people are scared of. But the most important thing is: are you solving a real problem? Is it convincing? Can you demonstrate it? I think businesses are quite willing to pay for AI tools. You just have to demonstrate the value.

And for us, I think with AI we can do things that previously were not possible. Instead of just sending you some random article and saying, hey, this article has something about your competitor, we can actually read the article and say why it is interesting and why you should, you know,

pay attention to it. So in that sense, AI really helps us with actually saving you time. Because if we just do it like Google Alerts and send you this random article, we're not saving you time. We just say, hey, go read it, maybe it's useful for you.

Janis (35:30.895)

I just remembered this, and I thought: every alert I've set up, I've ended up unsubscribing from, because... I'm sure you could calibrate Google Alerts to a better level, but if you just put something in, you get a ton of totally unrelated info, especially if you want to monitor some more popular keywords or things. So yeah, makes sense that AI would...

Uldis (35:47.533)

Let's call it noise.

Janis (35:59.983)

would have made a good improvement there.

Martins (36:02.969)

Yeah, I think one of the coolest things that we do, sorry to be salesy, but basically we don't simply read articles. We also do things like checking your competitors' websites, seeing if there are any small changes, trying to interpret those business changes, and trying to tell you what you should do with this information. So, yeah.

Uldis (36:25.805)

gives you some sense. Let's, let's, yeah exactly. So let's...

Martins (36:28.185)

sense of your rivals.

Janis (36:31.215)

Do you also check reviews, for example? Can you track when a company's employees are leaving reviews on Glassdoor, or even product reviews on Amazon? Or is that proprietary to Trustpilot and Amazon, so they don't let you in?

Martins (36:47.323)

We check it. If there's something very substantial there, we add it. But in general, a single review typically does not pass the threshold of being important enough; we only include it if there's something very drastic in that comment. In general, I think there are many other tools that are quite good for social listening and things like that, especially for B2C kinds of companies. But that's not our

primary focus. Our primary focus is helping a CEO, or a head of strategy, head of corporate development, head of sales, to know the most important stuff, something that you would not get out of Trustpilot.

Uldis (37:34.605)

Okay, let's move a bit from artificial intelligence to human intelligence. So, more than 10 years ago now, you graduated from Oxford, you had a consulting job in London for a few years, yet here you are, building startups for the past 10 years or so.

It looked like you had a quite clear and well-paid career path laid out in front of you, but you decided to go the way of pain. So why?

Janis (38:16.175)

You would have been at Ernst & Young by now.

Uldis (38:19.469)

At least senior vice president or whatever. So why startups? What's the driving force for you?

Martins (38:30.65)

Well, first of all, I wanted to come back to Latvia. It might sound cheesy, but I actually feel like a bit of a patriot. I really wanted to live in Latvia; I did not want to stay in London working in some private equity firm, as most of my colleagues ended up doing.

And also, already in university, you guys know, I started doing this debating stuff, and there's something really exhilarating in doing stuff with your own hands, organizing something and then seeing something happen. You know, you organize a public debate, some people show up, there's something in the newspaper, and you feel this weird rush of energy. And for me, that's always been the case. You probably have the same

feeling when you do this podcast or build your own companies. You do something and it gives you this kind of weird energy, as if you have actually created something with your hands. And at that point...

Uldis (39:32.877)

very weird energy when it comes to podcasting.

Martins (39:35.964)

I can imagine, I can imagine. For me, it's just that my first opportunity to come back to Latvia was to join Infogram. That was a good opportunity, so I joined it. And at the same time, I saw my brother, who's building several companies, so he was a bit of an inspiration. I saw him do it, it looked cool, and it kind of matched what I was doing in NGOs. I just thought, okay, this is what I do in NGOs, but to make money and to do something

more adult. I actually still feel that a lot of people here in Latvia who are organizing NGOs would be amazing entrepreneurs. But, you know, that's not the direction they're taking. So yeah, that's why I wanted to start my own company. The first company was Toneboard, and I've been trying to do this ever since, kind of struggling ever since. Not sure if I would be happier in life.

Uldis (40:09.069)

Good twist.

Martins (40:35.837)

I would definitely have more money, but I'm not sure if I would be happier.

Uldis (40:43.405)

More money, at least in the short term. So, yeah, you mentioned Toneboard, there was KPI Berry, not sure if there was something else, so a couple of companies that you were building, which all led to RivalSense today. So what lessons can you pick up from those previous experiences? One you already mentioned: building without clear

market demand. Any other painful lessons, mistakes you can now avoid, or successes you can build on from those stories?

Martins (41:29.435)

There's a lot, to be honest with you. I'm thinking what are the top ones that come to mind.

Well, the first thing is that we really burnt ourselves in the first company, when we were building for an industry that we had never worked in, so we didn't really understand the mechanics. At that time we were doing a machine learning solution to identify fraud cases in call centers. Someone calls you, and we identify: okay, this is not Uldis, this is someone who's pretending to be Uldis. Sounds good on paper. And all those clients, which were telco companies and also

some financial services companies, all of them said to us in meetings: yes, this sounds amazing, we want to decrease fraud, let's do this. So again, people are saying something, but you don't really know what they mean. But after struggling with that for like a year and a half, you actually start to understand that industry, and you understand a very simple thing, which is that for all those companies, by far the most important thing is to decrease friction.

When someone tries to buy a mobile phone or applies for a loan, you realize that for them, decreasing friction and pushing through to the purchase is so much more important than decreasing fraud. And that's something that I just didn't know. If I had worked for a company like that before, for like two years, I would definitely have known that. So my learning now is that I really only want to build products that I myself

would buy, because then I understand the client a little bit. It's very easy to deceive yourself if you're trying to sell into a type of organization or a type of industry that you just don't understand in detail. Even if you read some McKinsey report or someone tells you about that industry, you get this illusion that you understand it, but that's not understanding it, because there's always something hidden that you don't know.

Martins (43:37.786)

That's one big thing. Another big thing is, yes, don't build the product before you have validation. Actually, one big frustration I have right now is that a lot of enterprises are really fucking around with startups by pulling them into these infinitely long pilot projects. You're a startup, you go to some enterprise, there's some sort of head of innovation or someone with a title like that, and they say, my God, this is so cool,

let's do a pilot project. And then they have an infinite pilot project that takes a year or two years, and in the end you have nothing to show for it. I just heard of a case here in Latvia of a company that's doing a pilot project with one Latvian state enterprise, and I just got so angry, because it gave me deja vu of something that I've done in the past. Companies will always talk with you, because it makes them feel good, it makes them look innovative,

but what you really want as a founder is a quick no. You want someone to say, I don't need this, thank you. That's a very good answer. But if they waste your time as a startup, and that happens a lot, that's a big trap for a company. That's why I always tell entrepreneurs: if you're trying to sell to big enterprises, have a very expensive pilot project. Put a really big price on the pilot project, because if

you put a small price on it, like five thousand euros, they're like, yeah, okay, let's pay five thousand euros so we can entertain ourselves, and in one year you will just have nothing. You will have wasted a year of your life on nothing. And this, guys, happens so often. I just see it so often now with all those bigger enterprises; they're just messing around with startups.

Janis (45:27.889)

I feel like a lot of companies, young startups, they...

It's tough if you have never worked for a big company and you don't understand how the internal structure, politics and incentives work. And just like you said, people have different incentives. They need to be able to say that they're talking to five innovative startups and exploring pilot projects, because that's what their boss will like. Whether they will go ahead with it? Most likely not, because they have other things to do. Maybe they don't have budgets, maybe it's too complex, maybe legal doesn't approve.

And if you don't have any sense for that, what you said is very realistic, I think. You can spend a year without any outcome.

Martins (46:11.066)

When I was doing this, I had only worked for two types of companies: I had worked in a management consulting firm in London, and I had worked for a startup. So I didn't understand this dynamic. I thought, you just go; someone says they're willing to buy, they will buy. But no, in a big enterprise there's no point in selling to the head of innovation, because he doesn't have any buying power, he doesn't have a budget. You should be selling to a C-level person, or at least to a vice president

of sales or something like that, because otherwise you're just wasting your time with the wrong buying persona. You can spend a lot of time that way. And the last, final thing is: don't work with people you don't like. Don't work with people whose values don't align with yours, because sooner or later that will pop up. Maybe you will think it's not a big deal,

you know, I'm a resilient guy, I can handle it, it's just business. But actually it's not just business, and you will do much better work if you work with people who align with your values, who have the same vision. Sooner or later something bad will probably happen. I won't go into

Uldis (47:13.517)

This is just business.

Martins (47:35.162)

too much detail on that, but I really recommend, guys, working with people who share your values and what you like. Those are the big learnings, I think.

Uldis (47:46.157)

It's a bit hard to test for that, right? If you don't know them too well.

Martins (47:50.906)

If you don't know them too well. But sometimes you do know. The challenge is, you know, it's really easy to fall into a situation that you don't like, because, especially if you're an entrepreneur, you feel like not doing anything is wrong. You always have to be moving ahead. You need to jump at opportunities when they come. But just as I said about these pilot projects that take a year, it's actually much cheaper not to start

a pilot project and to just try to get another opportunity. And it's actually much cheaper sometimes to stop and think: is this the person I should be working with, and things like that. And I think for people who are action-driven, and most entrepreneurs are, it's really hard, because they just want to jump at any opportunity they see.

Janis (48:44.592)

And also, your intuition is working. It tells you something, but very often you put your rationale around it. Like you said, there are 10 rational reasons why I should do this and this weird intuition why I shouldn't. You're gonna take the 10 reasons over the intuition, and yeah, that's how it happens.

Martins (49:03.674)

Maybe this is not exactly... Yeah, sorry, go ahead.

Uldis (49:04.109)

You also touched upon one thing that I have also encountered, which is this pressure of doing something, pressure of delivering, pressure of moving forward. Sometimes it can lead you down very, very bad paths, when you are doing something not because it makes great business sense,

or it might from some angle, but because of some kind of external pressure: pressure of, I don't know, peers, pressure of a client group, pressure of a supplier, pressure of some other stakeholder. And you think, okay, you know,

I have so many problems, I don't want to have these people on my back, so let's do this to keep them at bay, and it's not such a bad idea, and it might work, et cetera. And then you can end up going in a totally wrong direction, spending your brainpower and your resources on it, and it eats you up sometimes. It's actually better to not progress for that one month

or two weeks if you don't have the right feeling for it. And obviously, it's easy to say in hindsight; when you're in the thick of it, it might also feel good. But something to really look out for is taking especially big strategic decisions under external pressure, not from internal pull or market pull.

Martins (50:48.282)

I agree. It's very hard. When you're in a startup, a typical example would be: a big client shows up and says, I want to buy what you're doing, but build me some security features, or build me multiple seats. And, you know, it's very hard to say no in that situation. Or similarly, when you don't have anything at all. And I've been in this situation many times. You have some business idea, and some people are saying good things

about it, and you're thinking, it's all in the execution, I just need to really make this idea amazing, I need to sell it well. But my impression now, guys, is that most business ideas are dead or alive in the first five minutes. You just show something to the right people, and if it's a good idea, they have to say yes, they have to say, okay, this makes sense.

Because in my previous projects I've had situations where people were lukewarm, and I took it as a signal that it was good. So I feel you need to really have that first big push. Of course, it's easy to say, because often when you have these ideas, your unemployment benefit is running out, your girlfriend is angry, and things like that.

Uldis (52:00.525)

Selective, selective listening.

Uldis (52:18.221)

Yeah, I have also noticed and witnessed, and of course, you know, there's grit, there's perseverance, etc. But I have noticed that when something is good, and you are not completely green, without any network or anything, because then it's maybe a bit different. But if you have been around the block, you have a good network, you have people to validate your ideas, people who can make introductions:

if it's something good that you have started working on, people will get excited for you. They will make introductions for you, they will help you with advice, and you will feel this pull effect, not only from the market, from pure demand, but from people, the community, wanting to help you, because they recognize that it's something good that you're working on. I think that's also an important thing to feel. And it's

that amazing energy when you feel that these meetings are lining up by themselves and you're catching fire.

Martins (53:23.422)

I agree, I completely agree. That's so true.

Uldis (53:26.956)

Only to get there.

Alright, I think it's a good point to wrap this up on. Thank you very much, Martins. Best of luck with building RivalSense and giving people a sense of their rivals. And thanks for your insights. I think those of you listeners who listened to the end were very well rewarded with the final segment; I think it was amazing. And also the practical tips on how to build in AI.

So yeah, thank you very much.

Martins (54:02.464)

Thank you guys.

Janis (54:05.266)

Thank you, bye.

Uldis (54:07.277)

And to the listeners: we're not gonna be back with another episode next week; we are going to take a short summer break. So see you soon, with a bit more tan and hopefully some happiness.

Janis (54:21.522)

We are people as well.

Uldis (54:24.397)

Exactly. Alright guys, thank you. Bye.

Janis (54:27.666)

Thank you for listening, bye.

 

Please note that the transcript text is AI-generated. We apologize for any potential errors or inaccuracies. Thank you for your understanding.

 