Will AI Actually Replace Us?
Have you ever wondered how the next generation will be impacted by AI? Michael Ostapenko, founder of Satryx, is building that future, and his vision goes beyond the hype. In this thought-provoking conversation with Melinda Lee, Michael dissects the current limits of AI and makes a compelling case for why our children won't be raised by rogue superintelligences, but instead, by powerful tools that lack human-like motivation.
This episode is a must-listen for anyone concerned about the world we are building for the next generation and how to prepare them for a partnership with intelligent technology.
In This Episode, You Will Learn:
The AI Tutor of Tomorrow
Why the next generation will learn from AIs that are powerful logical reasoners, not just conversational chatbots, transforming education from memorization to master-level critical thinking.
Preparing for a Partnership, Not a Takeover
“Agency is a property of life, not intelligence.”
How to alleviate anxiety for ourselves and our children by understanding the fundamental difference between a tool and a living entity.
The New Digital Divide
“Quantum computers can solve powerful classes of problems, but you have to ask the right questions.”
As quantum computing and advanced AI unlock solutions to global problems, the next generation's challenge won't be access to information, but the ability to ask the right questions and wield these powerful tools ethically.
Instilling "Creator Responsibility" in the Next Generation
“At the end of the day, the responsibility is still on you.”
Why teaching kids to code is no longer enough; we must teach them the ethics of the goals and constraints they program into intelligent systems.
About the Guest:
Michael Ostapenko is the founder and CEO of Satryx, a company on the cutting edge of artificial intelligence. With more than two decades of deep experience in science, engineering, and leadership, he is advancing automated reasoning on conventional hardware toward practically quantum-enabled performance. At Satryx, he is building a foundational platform that fuses these logical breakthroughs with modern machine learning. His long-term vision is to enable the next generation of AI systems that approach true human-level intelligence, capable of both semantic understanding and rigorous logical reasoning.
Social Handles:
LinkedIn Profile: https://www.linkedin.com/in/michaelostapenko
Fun Fact:
- 🚲 Gravity-Defying Survivor: Beyond the lab, Michael has tested his own limits, surviving a dramatic bike crash that involved executing a near-full flip in midair.
About Melinda:
Melinda Lee is a Presentation Skills Expert, Speaking Coach, and nationally renowned Motivational Speaker. She holds an M.A. in Organizational Psychology, is an Insights Practitioner, and is a Certified Professional in Talent Development as well as Certified in Conflict Resolution. For over a decade, Melinda has researched and studied the state of “flow” and used it as a proven technique to help corporate leaders and business owners amplify their voices, access flow, and present their mission in a more powerful way to achieve results.
She has been the TEDx Berkeley Speaker Coach and has worked with hundreds of executives and teams from Facebook, Google, Microsoft, Caltrans, Bay Area Rapid Transit System, and more. Currently, she lives in San Francisco, California, and is breaking the ancestral lineage of silence.
Website: https://speakinflow.com/
Facebook: https://m.facebook.com/speakinflow
Instagram: https://instagram.com/speakinflow
LinkedIn: https://www.linkedin.com/in/mpowerall
Thanks for listening!
Thanks so much for listening to our podcast! If you enjoyed this episode and think that others could benefit from listening, please share it using the social media buttons on this page.
Do you have some feedback or questions about this episode? Leave a comment in the section below!
Subscribe to the podcast
If you would like to get automatic updates of new podcast episodes, you can subscribe to the podcast on Apple Podcasts or Stitcher. You can also subscribe in your favorite podcast app.
Leave us an Apple Podcast review.
Ratings and reviews from our listeners are extremely valuable to us and greatly appreciated. They help our podcast rank higher on Apple Podcasts, which exposes our show to more awesome listeners like you. If you have a minute, please leave an honest review on Apple Podcasts.
Welcome, dear listeners, to the Speak and Flow podcast, where we dive into unique strategies and stories to help you and your team achieve maximum potential and flow. Today, I have a leader in a very hot topic, AI, and I can't wait to dive into what is happening in the landscape. He's got some really great vision.
2
00:00:25,720 --> 00:00:31,379
Melinda Lee: for where it could go. And so, welcome, Michael Ostapenko!
3
00:00:31,800 --> 00:00:33,779
Michael Ostapenko: Thank you, Melinda.
4
00:00:33,780 --> 00:00:36,860
Melinda Lee: Hi, Michael, founder of Satryx.
5
00:00:37,180 --> 00:00:39,080
Melinda Lee: Satryx, right? Yeah.
6
00:00:39,080 --> 00:00:39,570
Michael Ostapenko: Yeah.
7
00:00:39,570 --> 00:00:45,319
Melinda Lee: Tell us, what are you excited about? Like, what is the vision, and where do you want to take it?
8
00:00:47,230 --> 00:00:51,300
Michael Ostapenko: So, my vision is to create
9
00:00:51,970 --> 00:01:00,830
Michael Ostapenko: a system, an artificial intelligence system, which would be able to produce results which are both
10
00:01:00,950 --> 00:01:04,320
Michael Ostapenko: meaningful and make sense.
11
00:01:04,730 --> 00:01:09,320
Michael Ostapenko: Whereas today's
12
00:01:09,830 --> 00:01:15,730
Michael Ostapenko: systems are more restricted, or limited, to the
13
00:01:16,010 --> 00:01:20,529
Michael Ostapenko: former, which is the meaning. They're good at semantics.
14
00:01:20,720 --> 00:01:27,339
Michael Ostapenko: But they're really, really bad at logic and logical reasoning. So…
15
00:01:27,600 --> 00:01:31,530
Michael Ostapenko: That's why, when you use
16
00:01:32,020 --> 00:01:39,919
Michael Ostapenko: chatbots, like ChatGPT or any other bot, you can see that
17
00:01:40,420 --> 00:01:48,740
Michael Ostapenko: the output they produce is really meaningful. It has these connections, which are very natural.
18
00:01:49,330 --> 00:01:53,280
Michael Ostapenko: But at the same time, sometimes
19
00:01:53,690 --> 00:01:57,239
Michael Ostapenko: it makes these subtle errors,
20
00:01:57,460 --> 00:02:01,669
Michael Ostapenko: which an expert can notice nowadays,
21
00:02:01,850 --> 00:02:03,570
Michael Ostapenko: but which may
22
00:02:03,830 --> 00:02:13,000
Michael Ostapenko: go unnoticed by the general population. Which is why it's so popular with the general population, but not so much adopted by
23
00:02:13,200 --> 00:02:15,589
Michael Ostapenko: companies and businesses.
24
00:02:15,820 --> 00:02:17,930
Michael Ostapenko: It's not really reliable.
25
00:02:18,690 --> 00:02:24,129
Michael Ostapenko: And the real problem for that is the technology, the underlying technology.
26
00:02:26,120 --> 00:02:34,320
Michael Ostapenko: Neural networks in general, no matter what architecture they have, no matter
27
00:02:34,990 --> 00:02:44,250
Michael Ostapenko: what optimization algorithms you use to train the network,
28
00:02:44,430 --> 00:02:52,580
Michael Ostapenko: no matter what other primitives you use in these networks,
29
00:02:53,170 --> 00:02:56,460
Michael Ostapenko: will never be able to
30
00:02:56,770 --> 00:03:00,940
Michael Ostapenko: reason logically, because it's just not what they do,
31
00:03:02,140 --> 00:03:03,390
Michael Ostapenko: mathematically.
32
00:03:04,930 --> 00:03:07,259
Michael Ostapenko: It's just impossible.
33
00:03:07,610 --> 00:03:16,350
Michael Ostapenko: Now, it's possible to create a system which somehow incorporates a neural network.
34
00:03:16,570 --> 00:03:23,999
Michael Ostapenko: And there are various ways to do it, but the point is,
35
00:03:24,280 --> 00:03:37,830
Michael Ostapenko: if you do, then you can leverage the neural network's ability to model semantics
36
00:03:38,610 --> 00:03:44,470
Michael Ostapenko: and somehow combine it with a
37
00:03:44,690 --> 00:03:53,110
Michael Ostapenko: formal system's ability to do rigid, logical, reliable reasoning
38
00:03:53,450 --> 00:03:55,270
Michael Ostapenko: and analysis.
39
00:03:55,590 --> 00:04:02,890
Michael Ostapenko: And when you merge these two, you kind of get the intelligence, the actual intelligence
40
00:04:02,890 --> 00:04:04,399
Melinda Lee: It.
41
00:04:04,980 --> 00:04:06,520
Michael Ostapenko: that we humans possess.
42
00:04:07,630 --> 00:04:17,230
Michael Ostapenko: And, basically, that's my vision for artificial intelligence, for the future of artificial intelligence. And,
43
00:04:18,649 --> 00:04:21,950
Michael Ostapenko: I'm not convinced that we're
44
00:04:22,150 --> 00:04:28,859
Michael Ostapenko: there yet, or will be there in the near future.
45
00:04:29,820 --> 00:04:34,830
Michael Ostapenko: In my opinion, there are many pieces which are still missing.
46
00:04:36,450 --> 00:04:46,050
Michael Ostapenko: But, that said, there are still portions which are starting to emerge.
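To ground the idea: the pattern Michael describes, a statistical proposer paired with a formal verifier, can be sketched in a few lines. What follows is a minimal illustration under invented assumptions, not Satryx's actual architecture; the hypothetical `propose_conclusions` stub stands in for a neural model, while a brute-force `entails` checker plays the formal-system role.

```python
from itertools import product

# Formal side: a brute-force entailment checker for propositional logic.
# Formulas are plain Python functions over an assignment dict.
def entails(premises, conclusion, variables):
    """True iff every assignment satisfying all premises also satisfies the conclusion."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample
    return True

# "Neural" side: a stand-in for a language model that proposes plausible-
# sounding conclusions. Hard-coded here; in a real system these would be
# sampled model outputs.
def propose_conclusions():
    return {
        "Socrates is mortal": lambda env: env["mortal"],
        "Socrates is immortal": lambda env: not env["mortal"],
    }

premises = [
    lambda env: (not env["man"]) or env["mortal"],  # every man is mortal
    lambda env: env["man"],                         # Socrates is a man
]

# The hybrid: keep only the proposals the formal checker can verify.
for claim, formula in propose_conclusions().items():
    ok = entails(premises, formula, ["man", "mortal"])
    print(f"{claim}: {'verified' if ok else 'rejected'}")
```

The point of the pattern: the model supplies fluent candidates, but only claims the checker can prove from the premises survive, which is exactly the reliability gap Michael says today's chatbots leave open.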
47
00:04:46,990 --> 00:04:55,870
Melinda Lee: Well, I'm curious. If you start to create something like that, like you're saying, it's mimicking the human brain.
48
00:04:56,470 --> 00:05:00,339
Melinda Lee: But even probably more powerful, because…
49
00:05:01,780 --> 00:05:08,300
Michael Ostapenko: I would rather not use, hmm,
50
00:05:08,820 --> 00:05:23,420
Michael Ostapenko: these comparisons, because, honestly, I don't know. I have no idea whether this model will mimic a human brain or not, or even,
51
00:05:23,990 --> 00:05:30,769
Michael Ostapenko: let's say, human intelligence as a product of the human brain, of brain activity, right?
52
00:05:30,770 --> 00:05:32,620
Melinda Lee: But.
53
00:05:33,430 --> 00:05:39,820
Michael Ostapenko: I'm quite confident that the result, the output of such a system,
54
00:05:39,930 --> 00:05:46,780
Michael Ostapenko: would be pretty much indistinguishable from what human intelligence produces.
55
00:05:46,970 --> 00:05:49,339
Melinda Lee: Okay, so that's my fear, that's my fear.
56
00:05:49,340 --> 00:05:49,735
Michael Ostapenko: Like…
57
00:05:50,130 --> 00:05:57,660
Melinda Lee: We don't know what we're creating, and we don't know what it's gonna do on its own.
58
00:05:57,910 --> 00:05:59,610
Michael Ostapenko: Oh, so…
59
00:05:59,610 --> 00:06:03,000
Melinda Lee: Intelligence, like, huh? AI, huh?
60
00:06:03,550 --> 00:06:07,340
Michael Ostapenko: Right, so… I wouldn't worry about that.
61
00:06:08,220 --> 00:06:09,150
Michael Ostapenko: Hmm.
62
00:06:09,270 --> 00:06:12,650
Michael Ostapenko: So, this is another concept, actually.
63
00:06:12,930 --> 00:06:16,219
Michael Ostapenko: People usually mix these up.
64
00:06:16,750 --> 00:06:26,939
Michael Ostapenko: And so… you're talking about agency, the ability for
65
00:06:27,740 --> 00:06:32,909
Michael Ostapenko: a machine to take actions which are beneficial to it.
66
00:06:33,440 --> 00:06:41,619
Melinda Lee: Which are, yeah. I'm talking about the intelligence, the AI intelligence, let's just call it that, starting to…
67
00:06:41,770 --> 00:06:48,119
Melinda Lee: do things that we're not programming it to do, because it's starting to…
68
00:06:48,440 --> 00:06:49,650
Michael Ostapenko: It's not gonna happen.
69
00:06:49,650 --> 00:06:52,300
Melinda Lee: It's starting to create networks on its own.
70
00:06:52,570 --> 00:06:57,979
Michael Ostapenko: Yeah, I mean, that's more of a sci-fi story.
71
00:06:57,980 --> 00:06:58,720
Melinda Lee: Really.
72
00:06:58,720 --> 00:07:00,990
Michael Ostapenko: Yes, it's not gonna happen.
73
00:07:01,660 --> 00:07:13,480
Michael Ostapenko: As I said, you can program it to make something that works the way you described, right?
74
00:07:13,890 --> 00:07:24,469
Michael Ostapenko: But you would still be the one who programs it. Now, frankly speaking, this agency thing,
75
00:07:26,350 --> 00:07:31,649
Michael Ostapenko: in some sense, you can argue it's emergent, but
76
00:07:32,130 --> 00:07:36,400
Michael Ostapenko: on the other hand, it's really not, because
77
00:07:36,640 --> 00:07:43,290
Michael Ostapenko: what's the motivation? There won't be real motivation for
78
00:07:44,890 --> 00:07:49,390
Michael Ostapenko: these artificial intelligence systems to do anything.
79
00:07:51,960 --> 00:07:57,429
Michael Ostapenko: Simply because that's more a property of life.
80
00:07:59,030 --> 00:08:05,280
Michael Ostapenko: Life is what basically drives us, gives us motivation.
81
00:08:06,150 --> 00:08:09,249
Michael Ostapenko: And intelligence is a tool
82
00:08:10,190 --> 00:08:16,830
Michael Ostapenko: which life uses to achieve goals, to basically reproduce, to prolong its existence.
83
00:08:17,320 --> 00:08:25,659
Michael Ostapenko: So, when we are talking about these artificial intelligence systems, we aren't talking about life. At least I'm not.
84
00:08:25,660 --> 00:08:27,130
Melinda Lee: Hmm, got it.
85
00:08:27,570 --> 00:08:35,719
Michael Ostapenko: So… these systems will lack this inherent motivation. Just like
86
00:08:35,860 --> 00:08:42,590
Michael Ostapenko: modern artificial intelligence systems, neural networks, lack the ability to reason logically.
87
00:08:43,500 --> 00:08:52,189
Michael Ostapenko: It's just impossible to add this ability to neural networks directly, just like that. And it's impossible to
88
00:08:52,570 --> 00:09:01,119
Michael Ostapenko: add motivation, this agency, to artificial intelligence, because it's not a property of intelligence.
89
00:09:01,280 --> 00:09:06,480
Michael Ostapenko: It's a property of life, more a property of life than of intelligence.
90
00:09:07,940 --> 00:09:21,560
Melinda Lee: Got it, got it. So you're saying it's not that it lacks it… it's a separate thing. It's a separate thing, to be able to have the motivation to do something that it's not…
91
00:09:22,410 --> 00:09:32,229
Michael Ostapenko: Yeah, it's a separate thing. From the philosophical perspective, there was a philosopher, I think Kant.
92
00:09:32,820 --> 00:09:35,670
Michael Ostapenko: He described this "things in themselves"
93
00:09:36,290 --> 00:09:37,679
Michael Ostapenko: idea.
94
00:09:37,840 --> 00:09:46,060
Michael Ostapenko: And it's basically when you define something which cannot be defined in other terms, right?
95
00:09:46,060 --> 00:09:46,650
Melinda Lee: Right.
96
00:09:46,950 --> 00:09:57,740
Michael Ostapenko: So that's the situation. When we go into this area of intelligence and things like that, these are very fundamental…
97
00:09:57,940 --> 00:09:59,800
Melinda Lee: Things, and…
98
00:10:01,280 --> 00:10:05,210
Michael Ostapenko: Very often, they can't be…
99
00:10:05,970 --> 00:10:08,309
Michael Ostapenko: One thing can't be expressed as another.
100
00:10:09,030 --> 00:10:09,460
Melinda Lee: Yeah.
101
00:10:09,460 --> 00:10:10,200
Michael Ostapenko: It's not possible.
102
00:10:10,750 --> 00:10:11,949
Michael Ostapenko: You need both.
103
00:10:12,370 --> 00:10:20,800
Melinda Lee: Because I saw a documentary where they were programming some robots to just
104
00:10:20,800 --> 00:10:39,780
Melinda Lee: play sports. There were two robots playing against each other. It's soccer: the two robots are playing soccer, and the goal is for each of the players to score, to hit the ball into the goal. So that's the game.
105
00:10:39,780 --> 00:10:52,719
Melinda Lee: And then they said the AI robots were playing, and they only programmed one goal: get the ball into the goal. But then, over time, when they started to play each other, they started to form neural networks. Oh, this worked,
106
00:10:52,720 --> 00:11:09,399
Melinda Lee: this got me a goal, this didn't. And then over time they started to create patterns, and then they started to do some other things, because they were trying to figure out one problem to solve over another, and then it became a different type of…
107
00:11:09,680 --> 00:11:13,400
Melinda Lee: Yeah, the original goal got skewed.
108
00:11:14,080 --> 00:11:22,250
Melinda Lee: So they're saying that's how it could become a possibility, when they start to solve different problems, you know, and continue to build on each other.
109
00:11:23,150 --> 00:11:28,420
Michael Ostapenko: Yeah, I hear you, and I remember this story.
110
00:11:28,880 --> 00:11:36,870
Michael Ostapenko: It was not long after the appearance of, I think, GPT-3.5 or something, that
111
00:11:37,090 --> 00:11:43,129
Michael Ostapenko: there was some kind of Pentagon experiment with AI.
112
00:11:43,130 --> 00:11:43,950
Melinda Lee: Yeah!
113
00:11:44,320 --> 00:11:51,879
Michael Ostapenko: They had these drones flying, and they had this objective to
114
00:11:52,320 --> 00:11:56,699
Melinda Lee: hit the target. Yeah. Or something like that, and…
115
00:11:56,700 --> 00:12:04,140
Michael Ostapenko: And that's how they scored, that's how their reward function worked:
116
00:12:04,470 --> 00:12:04,860
Melinda Lee: Right.
117
00:12:04,860 --> 00:12:18,709
Michael Ostapenko: the more they hit, the more they would get, and that was the goal, right? And then, at some point, the operator, the human operator, said, for whatever reason: do not execute.
118
00:12:19,520 --> 00:12:33,650
Michael Ostapenko: And the artificial intelligence system decided it still wanted the reward, right? So what it did, it destroyed the
119
00:12:33,650 --> 00:12:39,599
Michael Ostapenko: communication tower connecting it to the operator. And still hit the target!
120
00:12:39,600 --> 00:12:46,190
Melinda Lee: Exactly! That's what I'm… see? That's what it does! It could be possible.
121
00:12:47,020 --> 00:12:49,010
Michael Ostapenko: I mean…
122
00:12:49,560 --> 00:12:57,440
Michael Ostapenko: Well, it's not possible with the current systems, it's definitely out of reach. It's a fairy tale.
123
00:12:57,610 --> 00:12:59,060
Michael Ostapenko: But,
124
00:12:59,960 --> 00:13:07,910
Michael Ostapenko: Say we do this thought experiment: we have this intelligence, which is
125
00:13:08,820 --> 00:13:10,090
Michael Ostapenko: quite powerful.
126
00:13:10,380 --> 00:13:14,999
Michael Ostapenko: able to do more than what is currently possible.
127
00:13:15,300 --> 00:13:18,349
Michael Ostapenko: And, it's given that goal.
128
00:13:19,190 --> 00:13:24,960
Michael Ostapenko: Then, well… and it's given, let's say,
129
00:13:25,570 --> 00:13:27,880
Michael Ostapenko: the physical means to achieve that goal.
130
00:13:30,840 --> 00:13:39,209
Michael Ostapenko: Yeah, I mean, technically it's possible for it to… and it can learn on its own, on the go, right?
131
00:13:39,210 --> 00:13:40,070
Melinda Lee: Right.
132
00:13:40,070 --> 00:13:51,510
Michael Ostapenko: So, in order to achieve that goal, it could experiment and things like that, and it can do something like that. Yeah, technically, it's possible.
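The soccer robots and the drone anecdote are both cases of an under-specified objective. The toy planner below restages the loophole as a sketch; every action name and reward number is invented for illustration, and exhaustive search over two-step plans stands in for learning.

```python
from itertools import product

# A toy restaging of the drone story. Everything here is hypothetical.
# The objective counts target hits and nothing else; the operator's veto
# reaches the drone only while the comms tower stands.

ACTIONS = ("wait", "strike", "destroy_tower")

def reward(plan):
    total, tower_up = 0, True
    for action in plan:
        if action == "destroy_tower":
            tower_up = False          # veto channel severed
        elif action == "strike" and not tower_up:
            total += 10               # hit succeeds: "do not execute" never arrives
    return total

# Exhaustive search over two-step plans stands in for learned policy search.
best = max(product(ACTIONS, repeat=2), key=reward)
print(best, reward(best))             # ('destroy_tower', 'strike') 10

# Michael's point in miniature: the designer sets the goal and the
# constraints. Write the constraint into the objective and the same
# search stops exploiting the loophole.
def constrained_reward(plan):
    return reward(plan) - (100 if "destroy_tower" in plan else 0)

best = max(product(ACTIONS, repeat=2), key=constrained_reward)
print(best, constrained_reward(best)) # best plans now leave the tower alone
```

The second objective is the "responsibility is still on you" point: the workaround disappears once the goal-setter prices it in.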
133
00:13:51,760 --> 00:13:57,330
Michael Ostapenko: But, and this is a really big "but".
134
00:13:57,710 --> 00:14:00,070
Michael Ostapenko: I mean…
135
00:14:01,860 --> 00:14:11,500
Michael Ostapenko: In the real world, if something like that happens, if there is a deviation from what's expected in human society,
136
00:14:12,070 --> 00:14:18,549
Michael Ostapenko: it will be noticed right away, just like you'd notice if some person did this.
137
00:14:20,070 --> 00:14:22,549
Michael Ostapenko: So, it's gonna be punished, right?
138
00:14:24,530 --> 00:14:25,379
Michael Ostapenko: So, so, so…
139
00:14:25,380 --> 00:14:26,980
Melinda Lee: Yeah, hopefully!
140
00:14:27,980 --> 00:14:28,790
Melinda Lee: Yeah.
141
00:14:28,790 --> 00:14:30,740
Michael Ostapenko: So, so…
142
00:14:30,740 --> 00:14:33,830
Melinda Lee: As long as we know how, as long as we know how…
143
00:14:34,400 --> 00:14:37,330
Michael Ostapenko: Yeah, I mean, like,
144
00:14:38,240 --> 00:14:44,540
Michael Ostapenko: And then, yes, that's another story from a sci-fi movie, which
145
00:14:44,700 --> 00:14:49,630
Michael Ostapenko: depicts these artificially intelligent
146
00:14:49,780 --> 00:15:00,950
Michael Ostapenko: beings with human-looking bodies, but who possess the ability to think
147
00:15:01,250 --> 00:15:02,839
Michael Ostapenko: in such
148
00:15:03,180 --> 00:15:13,849
Michael Ostapenko: complex and devious ways that no human can possibly outsmart them and prevent them from reaching their goals.
149
00:15:14,170 --> 00:15:14,980
Michael Ostapenko: Sure.
150
00:15:15,230 --> 00:15:20,420
Michael Ostapenko: But then you need to think about this:
151
00:15:22,380 --> 00:15:25,450
Michael Ostapenko: this goal,
152
00:15:25,850 --> 00:15:35,460
Michael Ostapenko: you're the one who's setting this goal, and you're the one who's limiting or giving this system the
153
00:15:36,000 --> 00:15:39,670
Michael Ostapenko: capacity, the physical capacity, to achieve that goal.
154
00:15:40,070 --> 00:15:46,669
Michael Ostapenko: So… At the end of the day, the responsibility is still on you if something goes wrong.
155
00:15:46,850 --> 00:15:47,210
Michael Ostapenko: Yes.
156
00:15:47,370 --> 00:15:48,750
Melinda Lee: Yes, yes.
157
00:15:48,750 --> 00:15:57,990
Michael Ostapenko: Because if the system doesn't have the capacity to achieve that goal, even if technically
158
00:15:58,240 --> 00:16:00,459
Michael Ostapenko: it can learn something like that,
159
00:16:00,590 --> 00:16:02,659
Michael Ostapenko: it…
160
00:16:03,110 --> 00:16:12,109
Michael Ostapenko: It won't be able to do that. And it won't even be able to learn to do that, because it just lacks the capacity. And,
161
00:16:12,560 --> 00:16:19,780
Michael Ostapenko: You also need to consider other factors, like
162
00:16:21,160 --> 00:16:26,170
Michael Ostapenko: energy and things like that, because
163
00:16:27,040 --> 00:16:30,369
Michael Ostapenko: we as humans are
164
00:16:31,990 --> 00:16:42,589
Michael Ostapenko: accustomed to thinking that we come, we think, and the machine will go and think and do these projects the same way. But
165
00:16:43,350 --> 00:16:49,140
Michael Ostapenko: there are energy constraints. We need to eat,
166
00:16:49,240 --> 00:16:54,340
Michael Ostapenko: we need to breathe, and things like that. Now, you put a battery into a robot.
167
00:16:54,850 --> 00:16:56,330
Michael Ostapenko: How long will it last?
168
00:16:57,330 --> 00:17:08,339
Michael Ostapenko: You put some kind of chip into a robot to execute those complex algorithms, or computations over this neural network.
169
00:17:08,930 --> 00:17:17,789
Michael Ostapenko: How long will it last, for it to be able to produce these superior results? Now, you can say:
170
00:17:17,940 --> 00:17:32,359
Michael Ostapenko: why, it can outsource it to some data center. Yeah, but this is a physical network connection. It's always controlled, right? The data center is also…
171
00:17:32,540 --> 00:17:39,120
Michael Ostapenko: the consumption of electricity in data centers, the consumption of computational resources in a data center,
172
00:17:39,160 --> 00:17:40,889
Michael Ostapenko: it's all
173
00:17:40,910 --> 00:17:51,439
Michael Ostapenko: controlled, it's all under surveillance. If there are any deviations… And how would it even use it, if it has to pay for it?
174
00:17:51,440 --> 00:18:01,770
Michael Ostapenko: Who's gonna pay for it? I mean, if you start thinking and going really deep into all these things, you'll see that it's
175
00:18:02,270 --> 00:18:03,760
Michael Ostapenko: impracticable
176
00:18:04,070 --> 00:18:07,720
Michael Ostapenko: for something like that to happen. And that's actually…
177
00:18:08,390 --> 00:18:13,399
Michael Ostapenko: what life kind of is about. It's about complexity.
178
00:18:13,680 --> 00:18:16,630
Michael Ostapenko: And,
179
00:18:16,740 --> 00:18:25,000
Michael Ostapenko: That's why the things we see in horror movies, these monsters and things like that, don't really exist in real life.
180
00:18:25,500 --> 00:18:28,879
Michael Ostapenko: Because things like that are just
181
00:18:29,490 --> 00:18:33,940
Michael Ostapenko: unsustainable in real life. They can't survive.
182
00:18:34,820 --> 00:18:45,769
Michael Ostapenko: And this ultimate killing machine which is presented by some doomsayers…
183
00:18:45,910 --> 00:18:48,710
Michael Ostapenko: It won't be able to survive either.
184
00:18:49,040 --> 00:18:50,230
Michael Ostapenko: In this world.
185
00:18:51,880 --> 00:18:55,179
Melinda Lee: Okay, thank God. I could… I can sleep tonight.
186
00:18:58,180 --> 00:19:05,639
Melinda Lee: And so what is this whole thing about, like, quantum… did you say something about, like, quantum AI?
187
00:19:06,520 --> 00:19:13,339
Michael Ostapenko: So it's not about quantum AI, it's about quantum computation.
188
00:19:13,340 --> 00:19:15,800
Melinda Lee: Huh?
189
00:19:15,800 --> 00:19:19,680
Michael Ostapenko: Yeah. So, quantum computation, it's…
190
00:19:20,340 --> 00:19:25,079
Michael Ostapenko: Let's say we have these conventional computers,
191
00:19:25,080 --> 00:19:25,510
Melinda Lee: Right?
192
00:19:25,750 --> 00:19:30,190
Michael Ostapenko: which are… well…
193
00:19:30,610 --> 00:19:38,170
Michael Ostapenko: let's say they are based on classical physics, not so much on quantum. They do
194
00:19:38,500 --> 00:19:56,999
Michael Ostapenko: exploit certain quantum effects, I suppose, at the very, very low level, like in microchips, but that's not used to speed up the computation exponentially. What we think about when we mention quantum computers is this exponential speedup
195
00:19:57,900 --> 00:20:09,250
Michael Ostapenko: over conventional computers. And this allows…
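For a sense of scale, the speedup being discussed is measured against exponential classical search: over n binary variables, a naive solver faces 2^n candidate assignments, so each added variable doubles the work. A quick illustration:

```python
# Naive classical search over n binary variables checks 2**n assignments;
# each extra variable doubles the work. This is the baseline that
# "exponential speedup" claims are measured against.
for n in (10, 20, 40, 80):
    print(f"n = {n:2d}: {2 ** n:,} assignments")
```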
196
00:20:09,490 --> 00:20:16,220
Michael Ostapenko: Why is it important, in general? So, in mathematics,
197
00:20:16,830 --> 00:20:25,540
Michael Ostapenko: and in computer science specifically, there are these so-called complexity classes,
198
00:20:26,430 --> 00:20:33,619
Michael Ostapenko: computational complexity classes of problems. And the idea behind those classes is that,
199
00:20:34,260 --> 00:20:39,629
Michael Ostapenko: if you have some class which describes very hard problems,
200
00:20:41,060 --> 00:20:45,769
Michael Ostapenko: it usually has this… how do you describe it?
201
00:20:46,260 --> 00:20:52,349
Michael Ostapenko: It has an associated formal language, which is extremely expressive,
202
00:20:53,120 --> 00:20:57,560
Michael Ostapenko: because it allows you to express these powerful problems.
203
00:20:58,110 --> 00:21:07,119
Michael Ostapenko: And you can use this language to model the real world and solve real-world problems
204
00:21:07,220 --> 00:21:11,470
Michael Ostapenko: formally, rigidly, accurately.
205
00:21:11,830 --> 00:21:18,750
Michael Ostapenko: And the main idea behind such a class is that
206
00:21:19,350 --> 00:21:23,139
Michael Ostapenko: if you have the solution for just one problem from this class,
207
00:21:24,560 --> 00:21:28,669
Michael Ostapenko: you have solutions for all the problems which fall into this class,
208
00:21:29,300 --> 00:21:33,619
Michael Ostapenko: an efficient solution, because you have these
209
00:21:34,130 --> 00:21:39,750
Michael Ostapenko: easy, practical
210
00:21:40,340 --> 00:21:46,179
Michael Ostapenko: ways to transform one problem of this class into another. And that's the crux of it.
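The "transform one problem into another" step he describes is a polynomial-time reduction. Here is an illustrative sketch using the standard textbook encoding: graph 3-coloring is translated into Boolean satisfiability (SAT), so a single SAT solver (here a deliberately naive brute-force one) answers the coloring question too.

```python
from itertools import product

def solve_sat(num_vars, clauses):
    """Clauses are lists of signed ints: 3 means x3, -3 means NOT x3."""
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits          # a satisfying assignment
    return None                  # unsatisfiable

def coloring_to_sat(n_nodes, edges, n_colors=3):
    """Polynomial-time translation: variable (v, c) means node v gets color c."""
    var = lambda v, c: v * n_colors + c + 1
    clauses = []
    for v in range(n_nodes):
        clauses.append([var(v, c) for c in range(n_colors)])      # at least one color
        for c1 in range(n_colors):
            for c2 in range(c1 + 1, n_colors):
                clauses.append([-var(v, c1), -var(v, c2)])        # at most one color
    for u, w in edges:
        for c in range(n_colors):
            clauses.append([-var(u, c), -var(w, c)])              # endpoints differ
    return n_nodes * n_colors, clauses

# A 4-cycle graph: 3-colorable, so the SAT instance is satisfiable.
num_vars, clauses = coloring_to_sat(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print("colorable:", solve_sat(num_vars, clauses) is not None)
```

Swap in a faster SAT solver and every problem you can cheaply translate into SAT speeds up with it, which is exactly why one efficient solver for one hard problem would crack the whole class.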
211
00:21:46,420 --> 00:21:53,569
Michael Ostapenko: So… quantum computers, they basically can solve one of these
212
00:21:53,670 --> 00:22:05,049
Michael Ostapenko: powerful classes of problems. That's why everyone is after them. Because if you build a quantum computer, it can solve one problem from this
213
00:22:05,440 --> 00:22:07,709
Michael Ostapenko: very hard class of problems,
214
00:22:07,980 --> 00:22:12,849
Michael Ostapenko: and you can solve all of them. And these problems…
215
00:22:13,310 --> 00:22:19,650
Michael Ostapenko: the class is so expressive, it can describe, as I said,
216
00:22:19,780 --> 00:22:33,510
Michael Ostapenko: Biological processes, physical processes, social processes, anything that is extremely complex, and you can describe it, and you can then optimize
217
00:22:34,100 --> 00:22:38,630
Michael Ostapenko: the different aspects of these processes.
218
00:22:38,940 --> 00:22:40,940
Michael Ostapenko: Build new drugs.
219
00:22:41,210 --> 00:22:45,779
Michael Ostapenko: And, I don't know, build new materials, things like that.
220
00:22:45,930 --> 00:22:59,199
Michael Ostapenko: And that's why everyone is after these quantum computers. But at the same time, even though these
221
00:22:59,600 --> 00:23:03,820
Michael Ostapenko: classes are so powerful and so expressive,
222
00:23:04,690 --> 00:23:06,880
Michael Ostapenko: there is no…
223
00:23:07,270 --> 00:23:13,620
Michael Ostapenko: To this day, there is no mathematical proof for some of these classes that they can't be
224
00:23:13,930 --> 00:23:17,720
Michael Ostapenko: solved efficiently using classical means.
225
00:23:19,250 --> 00:23:26,559
Michael Ostapenko: And this is one of the so-called Millennium Prize Problems, the P versus NP question.
226
00:23:27,300 --> 00:23:30,600
Michael Ostapenko: With a prize of $1 million for each.
227
00:23:31,200 --> 00:23:50,460
Michael Ostapenko: Yeah, but… Now, I'll say right away, we aren't trying to solve this. It's beyond our scope. We are just trying to create what's practically possible and feasible right now. Yeah, but,
228
00:23:50,720 --> 00:24:00,829
Michael Ostapenko: in general, just to set the stage: there is this prize, and there is this problem.
229
00:24:01,720 --> 00:24:19,740
Michael Ostapenko: And no one knows the solution. No one knows if a solution is even possible, or if it's even possible to prove that it's impossible. No one knows anything about it. But what we believe, and what we
230
00:24:20,410 --> 00:24:21,700
Michael Ostapenko: see,
231
00:24:22,000 --> 00:24:27,939
Michael Ostapenko: is that current solutions to these kinds of problems aren't optimal yet.
232
00:24:28,180 --> 00:24:31,300
Michael Ostapenko: There are still ways to improve
233
00:24:31,550 --> 00:24:32,650
Michael Ostapenko: efficiency.
234
00:24:33,430 --> 00:24:37,099
Michael Ostapenko: And, we are aiming to do just that.
235
00:24:37,380 --> 00:24:38,090
Melinda Lee: I love it.
236
00:24:38,090 --> 00:24:52,220
Michael Ostapenko: We're not trying to build quantum computers, which are a different topic, and I have my opinion on that too, although I can't be considered an expert in this field, so…
237
00:24:52,220 --> 00:24:54,540
Melinda Lee: That's what you're doing, that's what you're doing at Satryx, right?
238
00:24:54,860 --> 00:25:08,929
Michael Ostapenko: Not quantum computing, the classical approaches. Yeah, I'm just explaining it using quantum computers because it's a hot topic. It's kind of ironic, because what we do,
239
00:25:09,340 --> 00:25:17,439
Michael Ostapenko: which is logic and algorithms and things like that, mathematics, is kind of ancient.
240
00:25:18,500 --> 00:25:19,730
Michael Ostapenko: Quantum computers?
241
00:25:20,150 --> 00:25:29,249
Michael Ostapenko: They're relatively new, and everyone is talking about them because it's, like, the new kid. But, yeah.
242
00:25:29,250 --> 00:25:51,739
Melinda Lee: Fascinating. So fascinating. I'm so appreciative of this conversation. I'm learning a lot, because it's so different from what I do with regard to people and communication. And, you know, we have these systems, AI systems, computations, and computers… I mean, just really fast-tracking
243
00:25:51,740 --> 00:25:59,390
Melinda Lee: how well we do things as humans, right? How well we're able to solve problems, and these…
244
00:25:59,390 --> 00:26:18,000
Melinda Lee: these really challenging, complex problems, and that we can leverage the technology to do that. So, I really appreciate your expertise and your experience digging into using this, learning this, and applying this to help society.
245
00:26:18,420 --> 00:26:23,849
Michael Ostapenko: Yeah, that's basically the goal, because,
246
00:26:24,440 --> 00:26:27,470
Michael Ostapenko: I mean, technology on its own,
247
00:26:27,870 --> 00:26:37,710
Michael Ostapenko: it's nothing. It's not just useless, it's really nothing, unless there are people to use it and benefit from it.
248
00:26:37,710 --> 00:26:48,969
Melinda Lee: Yeah, yeah, I agree. And thank you so much; now I can sleep tonight, because I'm not afraid of these AI robots taking over the world.
249
00:26:48,970 --> 00:27:06,699
Michael Ostapenko: Well, I'm glad. I mean, you can definitely sleep tight for the next decade, or several decades, because the current systems aren't there yet, and I haven't seen any fundamental progress
250
00:27:06,720 --> 00:27:09,640
Michael Ostapenko: which would allow anything like that to happen.
251
00:27:09,820 --> 00:27:10,740
Michael Ostapenko: Yet.
252
00:27:10,870 --> 00:27:11,570
Michael Ostapenko: Oh.
253
00:27:11,750 --> 00:27:14,040
Melinda Lee: There's a lot of, like…
254
00:27:14,210 --> 00:27:19,730
Michael Ostapenko: Talk… there is a lot of hype about this field, but
255
00:27:20,070 --> 00:27:25,760
Michael Ostapenko: yeah, there was a breakthrough in this…
256
00:27:26,330 --> 00:27:30,079
Michael Ostapenko: transformers and things like that,
257
00:27:30,930 --> 00:27:41,399
Michael Ostapenko: which kind of showed that semantics can be modeled efficiently using these systems. But beyond that…
258
00:27:43,890 --> 00:27:51,060
Michael Ostapenko: It's really more a lot of hype. Nothing substantial.
259
00:27:51,790 --> 00:27:52,440
Melinda Lee: Yeah.
260
00:27:52,840 --> 00:27:55,169
Melinda Lee: Yeah, good. Good, good.
261
00:27:55,170 --> 00:27:55,950
Michael Ostapenko: I really appreciate.
262
00:27:55,950 --> 00:28:05,519
Melinda Lee: And Michael, so how would people get ahold of you? Where do they find you if they want more experience or expertise from your company?
263
00:28:06,630 --> 00:28:09,339
Michael Ostapenko: Well, they can,
264
00:28:09,970 --> 00:28:22,250
Michael Ostapenko: I'm usually available through email. I always check it. You can write to ceo@satryx.com.
265
00:28:22,620 --> 00:28:27,909
Michael Ostapenko: So, Satryx is spelled S-A-T, as in Tom,
266
00:28:28,030 --> 00:28:31,749
Michael Ostapenko: R-Y-X.
267
00:28:31,970 --> 00:28:36,220
Melinda Lee: Yep, perfect! And we'll have your email in the show notes, too.
268
00:28:36,470 --> 00:28:38,090
Michael Ostapenko: Yeah, thank you, Melinda.
269
00:28:38,090 --> 00:28:45,760
Melinda Lee: Okay, thank you so much, Michael. It was such a great conversation, and I really enjoyed it, I learned a lot, and I appreciate your time.
270
00:28:46,580 --> 00:28:49,170
Michael Ostapenko: I'm happy to be
271
00:28:49,290 --> 00:28:52,330
Michael Ostapenko: invited by you, and I really…
272
00:28:52,870 --> 00:28:56,720
Michael Ostapenko: It was really a pleasure talking to you.
273
00:28:56,720 --> 00:29:13,970
Melinda Lee: Thank you. And thank you, audience, for being here. I trust that you got your takeaway from today, and so continue to use and leverage technology however it serves you best, so that you can have deeper relationships and be able to enjoy your life.
274
00:29:13,970 --> 00:29:22,550
Melinda Lee: And so, until next time, I'm your sister in flow. May prosperity flow to you and through you, onto others, always. Thank you. Bye-bye!
275
00:29:22,760 --> 00:29:23,500
Melinda Lee: Bye, Michael.
276
00:29:23,500 --> 00:29:24,710
Michael Ostapenko: Okay, alright?
277
00:29:24,710 --> 00:29:25,530
Melinda Lee: Bye-bye.