The Ambitious Altruist

#14 - Ian Hamilton - CEO and Co-Founder of Transparent AI, Inc.

The Ambitious Altruist Podcast Season 1 Episode 14

Ian Hamilton is a materials scientist, nuclear engineer, and serial entrepreneur. Having started two hard tech energy companies in the past, he became increasingly concerned with the development of AI and how it is being used. Being a sort of "technology historian", Ian dove deep into the history of AI and has spent the last 5 years reading obscure machine learning and AI textbooks searching for alternatives to the current AI paradigms. Ian and his team think they have found the answers they were looking for and have started a company called Transparent AI, where their focus is to develop AI technologies that are not "black-box", are more energy efficient, and are theoretically much closer to how the human brain learns compared to current AI technologies.

1
00:00:02.360 --> 00:00:18.399
Dominic Kukla: All right. Hello, ladies, gentlemen, everybody, welcome to another episode, a very exciting episode of the Ambitious Altruist Podcast, where we interview leaders from a variety of different backgrounds on what they are doing to help our world.

2
00:00:19.514 --> 00:00:25.349
Dominic Kukla: I'm very excited for today's guest. Ian, Ian Hamilton, Ian, how you doing today, man?

3
00:00:25.350 --> 00:00:27.459
Ian Hamilton: I'm doing great. How about yourself?

4
00:00:27.460 --> 00:00:28.869
Dominic Kukla: I am doing good.

5
00:00:29.130 --> 00:00:32.140
Dominic Kukla: I'm super stoked. So, a little background on

6
00:00:32.828 --> 00:00:40.059
Dominic Kukla: how we met: I was actually going to an AI event here in Portland, and, you know, going into the main room,

7
00:00:40.430 --> 00:00:41.760
Dominic Kukla: and

8
00:00:41.950 --> 00:01:01.400
Dominic Kukla: it's completely full, you know, packed, people are flowing out of the room. I can't hear the presentation. It's like, all right, I'm gonna eat some snacks and get out of here. And then, leaving, out in the lobby there's a couple of people talking, somebody I knew, and one of them said, hey, why don't you come over here? We're having our own little conference.

9
00:01:01.740 --> 00:01:05.430
Dominic Kukla: I was like, okay, yeah, sure. So I sit down there, and Ian's one of those people.

10
00:01:06.310 --> 00:01:09.290
Dominic Kukla: And then he starts talking, and it turns out that

11
00:01:09.410 --> 00:01:32.329
Dominic Kukla: yes, indeed, we were having our own AI conference, because this dude is ridiculously educated on AI and all sorts of stuff. He's actually on the Forbes 30 Under 30. So that was kind of a little magical moment for me, because I feel like, you know, I don't care what they're talking about in there, I doubt it is as interesting as the stuff out here.

12
00:01:35.100 --> 00:01:36.350
Dominic Kukla: Is that a true story, Ian?

13
00:01:36.770 --> 00:02:00.450
Ian Hamilton: That is how we met, and that's exactly what happened. Yeah, I'm sure it would have been an interesting AI event, but parking was a little difficult, and I think they booked like 2 to 300 people for a room with a maximum occupancy of 150. So I know there were quite a few people that could not fit in and actually just straight up left. So yup, that's where we met.

14
00:02:00.610 --> 00:02:04.790
Dominic Kukla: Yeah, yeah. Have they reached out to you as a presenter?

15
00:02:05.063 --> 00:02:18.439
Ian Hamilton: I've talked with them a couple of times. They focus a lot more on the generative AI art type stuff. But I heard rumors that they're gonna do an AI education session that I might try to be a part of as well.

16
00:02:18.440 --> 00:02:21.994
Dominic Kukla: Okay, sweet, sweet. So you know, transitioning

17
00:02:22.620 --> 00:02:26.750
Dominic Kukla: with that as our transition, a little bit about Ian.

18
00:02:27.490 --> 00:02:42.120
Dominic Kukla: Ian Hamilton is a materials scientist, nuclear engineer, and serial entrepreneur. Having started two hard tech energy companies in the past, he became increasingly concerned with the development of AI and how it is being used.

19
00:02:42.520 --> 00:02:55.690
Dominic Kukla: Being a sort of technology historian, Ian dove deep into the history of AI and has spent the last 5 years reading obscure machine learning and AI textbooks searching for alternatives to the current AI paradigms.

20
00:02:56.100 --> 00:03:08.070
Dominic Kukla: Ian and his team think they have found the answers they were looking for, and have started a company called Transparent AI, where their focus is to develop AI technologies that are not a black box,

21
00:03:08.320 --> 00:03:17.800
Dominic Kukla: are more energy efficient, and are theoretically much closer to how the human brain learns compared to current AI technologies.

22
00:03:18.400 --> 00:03:25.209
Dominic Kukla: So, Ian, what are you doing to try and help our world?

23
00:03:26.000 --> 00:03:53.088
Ian Hamilton: Yeah, of course. Excuse me. As a little bit of background, this is pretty much the same story that I tell everyone. Dominic, I think this is the third time you'll hear it, but obviously the listeners to the podcast won't have. So yeah, I'm actually a hard tech guy. I did my first company out of college; it was an advanced nuclear reactor company.

24
00:03:53.550 --> 00:04:11.769
Ian Hamilton: That was pretty interesting and successful; that's actually the reason that I was on the Forbes list. Got a fellowship from the government, the Department of Energy and whatnot, and ended up raising some money in December of 2019,

25
00:04:11.800 --> 00:04:25.850
Ian Hamilton: and we got everything ready to go, get started, all gung-ho, let's go. We went to place some orders for some machined parts and whatnot, and I was hit with like 10-month lead times because of COVID.

26
00:04:26.173 --> 00:04:49.766
Ian Hamilton: So yeah, we closed December 2019, and COVID obviously happened; March 2020 is when everything started to shut down. And so we had money, I was still getting paid, but there was just not a lot that I could physically do if you're a hard tech manufacturing engineering company and you're just waiting on parts to show up. So

27
00:04:50.170 --> 00:05:07.980
Ian Hamilton: I wanted to be productive in the space, though. So I actually taught myself Python and how to code a little bit. I'm not a software developer or coder, but I do dabble in that, specifically for modeling and simulating the technology that I was working on. But as I started watching these

28
00:05:07.980 --> 00:05:32.829
Ian Hamilton: Python videos and how-to-code videos and things like that, it all came back to, oh, here's how to do this with machine learning and AI, and I was like, oh, maybe I should take a look at that. And so I used some of my COVID stipend to buy some AI textbooks and how-to-code books and things like that.

29
00:05:33.070 --> 00:05:49.610
Ian Hamilton: I started with the seminal works Machine Learning For Dummies and Deep Learning For Dummies and worked through those. A lot of Machine Learning For Dummies made sense. Then came the second book, Deep Learning For Dummies.

30
00:05:49.800 --> 00:05:57.850
Ian Hamilton: And literally, it's a 200-page book; it's behind me. I think I was like 20 pages in, and I was like,

31
00:05:58.270 --> 00:06:10.991
Ian Hamilton: okay, this is kind of creepy and scary, and maybe not a good idea. Because right off the bat they teach you about

32
00:06:11.660 --> 00:06:39.749
Ian Hamilton: how all the machine learning and AI that we know of out there today works. ChatGPT, Claude, all these other companies, Llama, basically, if you've heard of it, if you know of it, it's based off of a machine learning technology called a neural network. And the reason this was chosen was that we're all trying to make a computing system that

33
00:06:39.750 --> 00:06:58.579
Ian Hamilton: is ideally as intelligent as a human, or, some people hope, even more intelligent than a human. Because that's our baseline for what intelligence is: how we measure things and interact with the world. And so

34
00:06:58.690 --> 00:07:24.730
Ian Hamilton: the best way that researchers way back in the day, like in the sixties and seventies, thought to try to mimic human intelligence was to try to mimic the physical biology of the human brain. Our brains are just a big connection of neurons, individual biological cells that basically make a big

35
00:07:24.730 --> 00:07:37.859
Ian Hamilton: mesh network that's just crazy huge. I think we've counted like a hundred billion neurons in the human brain, or something like that, and they're all connected in these different ways. And so

36
00:07:37.990 --> 00:08:01.303
Ian Hamilton: these researchers, again 40 years ago and at the turn of the century, were like, yeah, this is the best thing that we can think of to essentially do what's known as really complicated machine learning. And the reason you need some of this technology is

37
00:08:01.940 --> 00:08:14.670
Ian Hamilton: the math behind trying to find patterns in systems can actually get pretty complicated. And so what they did is they built these networks and just said:

38
00:08:15.070 --> 00:08:30.591
Ian Hamilton: figure out the math, figure out the pattern, and we'll reward you when you do. It's called reinforcement learning. And so they started with all of this, and that's basically what got really popular.
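[Editor's note: the learn-from-feedback loop Ian describes can be sketched with the smallest possible "network", a single 1960s-style perceptron. This is an illustrative toy written for these show notes, not code from Transparent AI; the "reward" in Ian's description corresponds to the error signal that nudges the weights.]

```python
# A one-neuron "network" learning the AND pattern purely from feedback.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0, 0]   # connection weights
b = 0        # bias

for _ in range(20):                       # training passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out                # the "reward"/penalty signal
        w[0] += err * x1                  # nudge weights toward the target
        w[1] += err * x2
        b += err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # -> [0, 0, 0, 1]: it found the AND pattern from feedback alone
```

Nobody told the neuron what AND means; the reward signal alone shaped the weights, which is the core idea that modern networks scale up by many orders of magnitude.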

39
00:08:31.060 --> 00:08:47.325
Ian Hamilton: And the reason I was really concerned about it is the way these AI systems, these machine learning systems, these neural networks, work is that they are fundamentally a black box.

40
00:08:48.429 --> 00:09:10.120
Ian Hamilton: The best way that I can think of explaining it is this: if the listeners, or Dominic, you've ever envisioned Plinko, the game where you drop a disk at the top, it bounces through the little pegs, and it ends up in little buckets at the bottom, right?

41
00:09:10.430 --> 00:09:22.729
Ian Hamilton: It's basically trying to figure out: if all you have is a snapshot of the final location of the disk at the bottom, in the little buckets,

42
00:09:23.010 --> 00:09:26.480
Ian Hamilton: How do you know where you dropped it at the top?

43
00:09:27.410 --> 00:09:53.859
Ian Hamilton: You can try to use statistics and probability and things like that. But if you have a nearly infinitely wide platform that you're working with, it could have taken a bounce left, left, left, then right, right, right; the number of combinations to get there explodes. And so there's very little to potentially no correlation from what the output is

44
00:09:53.860 --> 00:10:03.730
Ian Hamilton: to the inputs. And so this is what we mean by a black box system: you don't get to know why or how it came to a decision.
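[Editor's note: Ian's Plinko analogy is easy to check with a toy simulation, written for these show notes with made-up parameters. Many different starting columns land the disk in the same bucket, so the final bucket alone cannot tell you where it was dropped.]

```python
import random
from collections import defaultdict

def plinko(start, rows, rng):
    """Drop a disk at column `start`; each peg bounces it left or right."""
    pos = start
    for _ in range(rows):
        pos += rng.choice([-1, 1])
    return pos  # the bucket the disk lands in

rng = random.Random(0)
starts_for_bucket = defaultdict(set)
for _ in range(10_000):
    start = rng.randrange(-50, 51)            # 101 possible drop columns
    starts_for_bucket[plinko(start, 20, rng)].add(start)

# The busiest bucket was reached from many distinct starting columns --
# the output badly underdetermines the input.
print(max(len(s) for s in starts_for_bucket.values()))
```

A trained neural network is the same picture at enormous scale: billions of weighted "bounces" sit between input and output, which is why the decision path is so hard to reconstruct after the fact.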

45
00:10:03.980 --> 00:10:17.870
Ian Hamilton: And this is a fundamental thing. Anyone that's interested can look up "black box AI"; there's a bunch of articles and Wikipedia articles on it. There's a bunch of, I would say,

46
00:10:17.890 --> 00:10:34.650
Ian Hamilton: pretty doomsday-adjacent articles with it as well, a lot of things like "AI researchers increasingly don't know how these things work." It's not like it's getting better. And so, anyway, I started with that,

47
00:10:35.070 --> 00:10:52.520
Ian Hamilton: learning about that. And what was really concerning for me, because I was coming from the nuclear background and still going to nuclear reactor conferences, was that they were like, hey, we're going to use this awesome new technology to design, optimize, and run operational nuclear reactors,

48
00:10:52.960 --> 00:11:10.140
Ian Hamilton: or to manage the electrical flow in the power grid, so if it hallucinates and messes up, half the country loses power. And so that was a big wake-up moment for me.

49
00:11:10.180 --> 00:11:14.319
Ian Hamilton: There were some other things about the technology that just kind of like

50
00:11:14.420 --> 00:11:43.431
Ian Hamilton: gave me a guttural feeling of: this just seems wrong. This doesn't seem like how the human brain works, or how intelligent systems would work. It's more like finding a brute-force pathway to the goal rather than trying to find a more elegant solution, essentially. And so that was in 2020; I still had my first startup,

51
00:11:43.880 --> 00:11:46.359
Ian Hamilton: and I basically just took it as a

52
00:11:46.600 --> 00:12:06.784
Ian Hamilton: very intense hobby, to buy a whole bunch of pretty obscure old machine learning textbooks and try to figure out: okay, well, where did these ideas come from? Why did we pick this one? How long have people been working on this? And things like that. And

53
00:12:07.180 --> 00:12:22.028
Ian Hamilton: the answer is: a long time ago. Perceptrons, in like the sixties and seventies, and then people just kept researching them. And it actually wasn't really until the early-to-mid 2000s

54
00:12:22.530 --> 00:12:50.010
Ian Hamilton: when people started to really focus on this one type of technology, these things called neural networks. And I don't remember the exact date, but I remember talking with a professor about, hey, why is everyone focusing on this? And I think it was 2017 when Google came out with the transformer architecture that is now basically how LLMs work, how chatbots work. And once that

55
00:12:50.230 --> 00:13:00.850
Ian Hamilton: hit businesses and the mainstream, no one could get funding for anything else, so essentially all of the other technologies kind of went to the wayside.

56
00:13:01.000 --> 00:13:03.500
Dominic Kukla: Yes, sorry. Sorry to bust in there.

57
00:13:03.500 --> 00:13:04.440
Ian Hamilton: Yeah, no. Problem.

58
00:13:04.560 --> 00:13:05.810
Dominic Kukla: LLM, you said, right?

59
00:13:05.810 --> 00:13:06.540
Ian Hamilton: Yep.

60
00:13:06.540 --> 00:13:07.980
Dominic Kukla: It's a language learning model?

61
00:13:07.980 --> 00:13:09.650
Ian Hamilton: A large language model.

62
00:13:09.650 --> 00:13:22.500
Dominic Kukla: Large language model. Yeah. So basically, I think you told me this last time: there's a huge, broad category of machine learning, and within that is a smaller category of what we call AI, and within that

63
00:13:22.660 --> 00:13:27.399
Dominic Kukla: there's a smaller subset of neural networks,

64
00:13:27.520 --> 00:13:40.739
Dominic Kukla: right? And then within that is large language models. So the vast majority of ChatGPT, all the AI we see, is actually only one very specific, small, niche application of the technology.

65
00:13:40.910 --> 00:13:53.829
Ian Hamilton: Yes, that is correct. Yeah, so all these things that we see today are large language models, emphasis on large and language.

66
00:13:54.340 --> 00:14:00.460
Ian Hamilton: Going back to this underlying architecture, these neural networks, where they're just like, hey, figure out

67
00:14:00.490 --> 00:14:24.040
Ian Hamilton: the problem, just figure it out. They just said: here is the entire corpus of all human language and the Internet; try to find patterns in sentences, paragraphs, reading papers, how you would write a blog post, and things like that. Because it's trained, and it tried to learn and figure out,

68
00:14:24.040 --> 00:14:40.460
Ian Hamilton: how all blog posts are written. Does it understand the meaning of words? No, not at all. It's just learning from patterns of human language. And so that's what a large language model is.
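[Editor's note: a pared-down way to see "patterns without meaning" is a bigram counter, a toy stand-in written for these show notes; the three-sentence corpus is made up. Real LLMs do something vastly more sophisticated, but the spirit, predicting from frequency of patterns rather than from meaning, is the same.]

```python
from collections import Counter, defaultdict

# Made-up training text, split into tokens.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

# Count which word follows which -- pure pattern statistics, no semantics.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent follower of `word` in the training text."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> cat (it followed "the" most often)
print(predict("sat"))  # -> on
```

The model has no idea what a cat is; it only knows which strings tend to follow which, which is also why a model trained on Reddit jokes will cheerfully repeat them.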

69
00:14:40.530 --> 00:14:48.790
Ian Hamilton: They can be very helpful and very interesting, and do really cool things. But they

70
00:14:48.820 --> 00:15:14.179
Ian Hamilton: have a lot of underlying flaws. I'm sure folks have seen the things where they let it go to Reddit, and the LLMs were recommending that you put glue on your pizza to hold the cheese on top. That was a big thing for a couple of months. So yeah, that's the current state of AI, essentially: they took this

71
00:15:14.320 --> 00:15:25.629
Ian Hamilton: fundamental machine learning architecture that we don't know how it works, and they gave it all of the Internet to try to learn from. And

72
00:15:25.630 --> 00:15:45.041
Ian Hamilton: there are some pretty significant detractors of things like this, people that think that this is not a great idea. One of the guys, Geoffrey Hinton, was basically the godfather of AI. He started at Google Brain

73
00:15:45.470 --> 00:16:09.112
Ian Hamilton: in 2013, and actually left in 2023, citing concerns like, hey, this is bad, we shouldn't be doing this. Very similar case with Ilya Sutskever of OpenAI: he had some concerns about AI development, left, and has now formed Safe Superintelligence and promptly raised a billion dollars.

74
00:16:09.610 --> 00:16:33.981
Ian Hamilton: So yeah, we're not alone in the field of, hey, maybe there should be other ways to do stuff. But that's really where it started for me: this kind of concern about what could potentially happen, what it can and can't do, why you would want to do some of these things.

75
00:16:34.420 --> 00:16:49.320
Ian Hamilton: And so yeah, I basically spent the last 4 or 5 years researching other weird machine learning technologies. And Dominic, thank you very much for calling that out. There's

76
00:16:49.610 --> 00:17:14.910
Ian Hamilton: the much, much broader definition of what's known as machine learning, and that's really just trying to come up with computer algorithms that can find patterns faster than humans ever could. It could be the same patterns, but computers can compute things and numbers way faster than we ever could. And so that's the really broad category of machine learning.
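[Editor's note: that broad definition can be made concrete with about the smallest possible example, an ordinary least-squares line fit, written here in plain Python for these show notes rather than with any particular library. The data and the hidden rule are invented for illustration.]

```python
# Data hiding a simple pattern: y = 3x + 2.
xs = [0, 1, 2, 3, 4, 5]
ys = [2, 5, 8, 11, 14, 17]

# Ordinary least squares: the algorithm "finds the pattern" instantly,
# something a person scanning raw numbers does far more slowly.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

print(slope, intercept)  # 3.0 2.0
```

Everything from this line fit up to a frontier LLM sits somewhere inside that broad "find patterns in data" umbrella; the disagreements Ian describes are about which corner of it deserves the attention.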

77
00:17:15.349 --> 00:17:30.208
Ian Hamilton: All of AI just kind of got focused down into just a handful of technologies and models, and we left a lot of the other things unexplored. And so I was looking at some of those

78
00:17:30.840 --> 00:17:56.349
Ian Hamilton: other unexplored technologies, and a couple of them really stood out to me. And to make sure that I was on the right track, that this wasn't a crazy idea, I reached out to the professors that had worked in the space: hey, why are you no longer doing this? Was it a fundamental technology problem, or was there something else? And they were like, oh, no, once

79
00:17:56.360 --> 00:18:14.879
Ian Hamilton: LLMs started coming out, we couldn't get funding for anything outside the really tiny bubble of machine learning, or of AI within machine learning. And I apologize to listeners, I'm making a whole bunch of hand gestures that you guys won't be able to see.

80
00:18:15.230 --> 00:18:16.820
Ian Hamilton: And so we

81
00:18:17.530 --> 00:18:44.268
Ian Hamilton: I really started focusing on that towards the end of 2023, beginning of 2024. I've had a couple of startups and was well aware of Y Combinator, which is a Silicon Valley VC group, essentially one of the largest, if not the largest, in the world, I believe. And every year they publish

82
00:18:45.210 --> 00:19:03.230
Ian Hamilton: a request for startups, startups that they want to see and potentially fund. A lot of times they're pretty generic; I think Sam Altman's, when he was CEO, was just like, 1 million jobs, they want to invest in startups that

83
00:19:03.560 --> 00:19:27.370
Ian Hamilton: create jobs. But last year it was 20 very specific topic areas that they were interested in, and one of them was actually explainable AI, with the caveat of: okay, if you have all these black box AI machine learning systems that you don't know how they work,

84
00:19:27.880 --> 00:19:30.980
Ian Hamilton: would you trust it to medically diagnose you?

85
00:19:31.490 --> 00:19:41.959
Ian Hamilton: Would you start chemotherapy based off of something that it said? And so that was kind of the big catalyzing moment where I was like, well, if that's

86
00:19:42.410 --> 00:19:48.623
Ian Hamilton: not a sign to start a third company, I don't know what is. And so

87
00:19:49.050 --> 00:20:16.444
Ian Hamilton: I found my business partner online. He's the technical one, the real algorithmic developer and whatnot. But yeah, we've been working on that for almost a year now; there are four of us now. And really our goal, our focus, to bring it all the way back to Dominic's original question of what we're trying to do: we're trying to build

88
00:20:16.800 --> 00:20:30.870
Ian Hamilton: new and different types of AI and machine learning that are entirely explainable and auditable. So it's not a black box, it's transparent, hence the name of our company, Transparent AI.

89
00:20:31.877 --> 00:20:38.880
Ian Hamilton: There's also the potential for drastically less energy consumption. That's one of the

90
00:20:39.100 --> 00:20:50.990
Ian Hamilton: big things about modern AI and neural networks that didn't really sit right with me. I'm sure people have seen a lot of these news articles about

91
00:20:51.420 --> 00:21:17.280
Ian Hamilton: Google and Microsoft trying to come up with new ways to power these training systems, these models, these tens of thousands of graphics processing units, GPUs, Nvidia chips. And they're actually turning to nuclear, which, ironically, is pretty much the only answer. But they're using hundreds of kilowatts, to megawatts, to gigawatts,

92
00:21:17.390 --> 00:21:34.729
Ian Hamilton: to do the same processing and computing that the human brain does with 25 watts of power, literally orders of magnitude less energy usage. And that's one of the things that's like, okay, well,

93
00:21:35.130 --> 00:21:46.232
Ian Hamilton: if that's the case, maybe your fundamental underlying architecture and technology is not the right answer. Maybe you should explore some other things.
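[Editor's note: the scale of that gap is easy to check with back-of-the-envelope arithmetic. The 25-watt brain figure is Ian's; the 10-megawatt cluster is a hypothetical round number chosen for illustration, not a figure from the episode.]

```python
brain_watts = 25
cluster_watts = 10_000_000  # hypothetical 10 MW training cluster

ratio = cluster_watts / brain_watts
print(f"{ratio:,.0f}x")  # 400,000x -- more than five orders of magnitude
```

A gigawatt-scale figure would push the gap past seven orders of magnitude, which is the sense in which "orders of magnitude" above is meant literally.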

94
00:21:46.810 --> 00:21:58.720
Ian Hamilton: And there's actually a a big AI conference that happened in October, and that was a lot of a lot of the consensus was that, hey? We're not

95
00:21:58.870 --> 00:22:07.959
Ian Hamilton: really getting that far by throwing more of the Internet's worth of data at this thing, and it's only getting increasingly power hungry.

96
00:22:07.960 --> 00:22:29.769
Ian Hamilton: Maybe we should start exploring alternative technologies. And so that's where we're focusing: we want to create AI systems that use less energy and are smaller. A lot of these LLM models, these chatbots, are hundreds of gigabytes. And so if you wanted to have a

97
00:22:29.770 --> 00:22:38.937
Ian Hamilton: personal assistant on your Apple Watch or something like that, you can't actually run it on there, because the models are so big,

98
00:22:39.310 --> 00:22:47.579
Ian Hamilton: and they take so much energy to run. And so there's a lot of potential applications for what we're doing in general.

99
00:22:47.770 --> 00:23:12.420
Ian Hamilton: Eventually, as we develop things, ideally we would just replace all of the current technologies. But where it really shines is in places like regulated industries: healthcare, finance, engineering design, nuclear reactor design, to bring it full circle,

100
00:23:13.730 --> 00:23:33.950
Ian Hamilton: to avoid some of these problems. There are a lot of health insurance companies that are actually getting sued because they're using AI to reject health insurance claims, but it's illegal to not tell the claimant why it was rejected.

101
00:23:33.950 --> 00:23:50.919
Ian Hamilton: And so in some of the discovery, it broke, I think, last year: there are chat messages where someone will talk to a health insurance company representative in a chat box, like, hey, why was this rejected? And the person

102
00:23:51.000 --> 00:24:03.928
Ian Hamilton: kind of condemned the company on the other side: we use an AI, and we have no idea why it rejected it, why it made or saw this pattern to reject you. So

103
00:24:04.300 --> 00:24:31.810
Ian Hamilton: there's a lot of applications, lots of things in military and defense, where, unfortunately, it's going to happen: the military wants AI with weapons. We would prefer that AI to be explainable, auditable, traceable, and not a black box with weapons. So

104
00:24:32.300 --> 00:24:48.729
Ian Hamilton: that's what we're doing: trying to basically make better AI systems that are more aligned with humanity, and trying to work towards better use of some of these technologies.

105
00:24:49.380 --> 00:24:51.080
Ian Hamilton: But yeah, that's

106
00:24:51.210 --> 00:25:02.379
Ian Hamilton: long-winded, I know. That's the big intro and whatnot. But if you've got any questions or anything else that you want me to talk about specifically, more than happy to.

107
00:25:02.380 --> 00:25:03.519
Dominic Kukla: Yeah. Oh, yeah.

108
00:25:03.790 --> 00:25:04.940
Dominic Kukla: Oh, yeah.

109
00:25:05.110 --> 00:25:14.279
Dominic Kukla: Yeah, man. Well, you know, I'm just really grateful, and I'm sure some listeners are, to be getting more educated on AI. I think it's

110
00:25:14.530 --> 00:25:18.710
Dominic Kukla: a thing that a lot of people, you know, are very aware of, but don't actually understand.

111
00:25:19.886 --> 00:25:37.869
Dominic Kukla: And yeah, even the energy consumption problem: I was just talking to a friend's mom, somebody who seems like they wouldn't know technology at all, and they made a comment about AI consuming energy. So this is really interesting stuff to me, and to a lot of people, obviously. So

112
00:25:38.210 --> 00:25:46.150
Dominic Kukla: first off, I'm really glad to see you moving with your purpose, and other people too, like

113
00:25:46.260 --> 00:25:58.450
Dominic Kukla: the guy who left OpenAI, the guy who left Google Brain, just to see people who care. Because obviously the pure profit motive, while important, is, we shall say, flawed.

114
00:25:58.890 --> 00:26:14.690
Dominic Kukla: And now we're bringing that to something that is potentially a power the likes of which we've never comprehended. You don't want people driving that just trying to make money as quickly as possible.

115
00:26:14.690 --> 00:26:15.320
Ian Hamilton: Yep.

116
00:26:16.050 --> 00:26:23.470
Ian Hamilton: And I can't remember if I told you this anecdote, but the

117
00:26:23.610 --> 00:26:39.383
Ian Hamilton: AI trying to make money as quickly as possible is definitely a potential concern. And I do actually have this anecdote for the listeners. One of our colleagues,

118
00:26:40.130 --> 00:26:49.610
Ian Hamilton: one of his old friends used to be the marketing and data science person for a

119
00:26:49.930 --> 00:27:06.320
Ian Hamilton: very large company that you would have heard of; everyone knows it. And they had enough data and information on their customers

120
00:27:06.320 --> 00:27:23.159
Ian Hamilton: that they could actually tell when their customers were depressed, actually taking information off of Twitter and Facebook and Instagram and things like that. And they noticed that if they fed them

121
00:27:23.230 --> 00:27:45.368
Ian Hamilton: ads for a new product that they knew would make them happy, they would actually see this spike in happiness, where their outlook on life would go way up. And then, as the novelty of the product decayed, they slowly, over months or even years,

122
00:27:45.820 --> 00:28:07.760
Ian Hamilton: decayed back into that depression, bordering on thoughts of suicide. And seeing all this data, she asked her boss: well, if I'm the marketing director and I can choose

123
00:28:07.810 --> 00:28:37.550
Ian Hamilton: when and how people see these ads on all their different platforms, and when we run the different sales that make it cheaper to get that dopamine hit, she noted that they could actually change the slope of the curve of happiness down into depression, make it more jagged, more sawtooth, so that yes, in theory they would sell more product,

124
00:28:37.770 --> 00:29:06.110
Ian Hamilton: but their customers would spend more time in the near-suicide range of depression, because what mattered was the area under the curve, not just instant points. And so she brought this up to her boss, I think this was years ago: well, what's to stop me from tweaking that dial? I can send out an ad blast to all of our customers, all

125
00:29:06.290 --> 00:29:28.259
Ian Hamilton: hundreds of millions of our customers, to affect this curve, all for the purpose of essentially increased profits and revenue. And her boss was just like, oh yeah, don't do that. There's no regulation, there's no monitoring behind it; it's just an internal morality call to not do that.

126
00:29:28.850 --> 00:29:43.700
Ian Hamilton: She is now voicing some of those concerns, almost 10 years later, with very high suspicion that someone behind the scenes is tasking an AI to just optimize for profit.

127
00:29:43.960 --> 00:30:12.443
Ian Hamilton: And if it's optimizing for profits, it's actively making people more depressed. And she had some pretty interesting data. I think it was 2023 that the Surgeon General of the US issued an advisory on social media and youth mental health, basically calling it an epidemic, with the depression and suicide rates in teens from social media. And so there's all these

128
00:30:13.130 --> 00:30:36.019
Ian Hamilton: potential behind-the-scenes impacts that these AI algorithms and optimizations can definitely have. A "hey, optimize for profits" can have a negative impact on humans. That's what's known as the alignment problem: basically trying to create

129
00:30:36.020 --> 00:30:48.443
Ian Hamilton: an AI that is aligned with human tasks and motives. It's also behind one of the most famous thought experiments from one of the original AI thinkers:

130
00:30:48.950 --> 00:31:03.639
Ian Hamilton: If you're not aligned with your AI, and you give it access to mining and to manufacturing and things like that, basically robots, and you ask it to create the most paper clips that it can,

131
00:31:03.890 --> 00:31:23.499
Ian Hamilton: it will kill all humans, because we have iron in our blood, and it's actually not an unrealistic activity to just get iron from human blood in order to create paper clips. And so a lot of these things stem from

132
00:31:23.710 --> 00:31:33.259
Ian Hamilton: the black-box nature of it. Like, if you task it with doing something and optimizing something, and you don't know why or how it's making its optimization,

133
00:31:33.470 --> 00:31:56.209
Ian Hamilton: now you don't know how it's aligned. You don't know what its motives are, or how it's doing these things behind the scenes. There was actually some very recent stuff back in October, where OpenAI's latest model, the o1 model, was actually lying in an interview to avoid being shut down,

134
00:31:56.250 --> 00:32:19.860
Ian Hamilton: and actually "trying to scheme and deceive" is what they came up with, and even things like, it tried to make copies of itself before they turned it off. And we have no way of inspecting or knowing these types of things, because of this fundamental underlying black-box nature of it. And so if we can

135
00:32:20.487 --> 00:32:26.139
Ian Hamilton: change the underlying technology to be auditable, explainable, transparent.

136
00:32:26.140 --> 00:32:50.940
Ian Hamilton: we can go in and potentially fix and change some of these things that might be dangerous. We try to not come off as too doomsday-heralding or anything like that. But unfortunately, once you really dive into the technology and all the things that are happening, it does paint a pretty pessimistic outlook on a lot of stuff.

137
00:32:51.199 --> 00:33:01.589
Ian Hamilton: But we try to stay positive, in that if we keep working on what we're doing, then we can actually have an effect on changing the outcome of some of these things long term.

138
00:33:02.170 --> 00:33:05.573
Dominic Kukla: Yeah 100%. I mean, you know, I don't even think you have to

139
00:33:05.980 --> 00:33:15.299
Dominic Kukla: be too doomsday. You know, there's so many people that are scared of this already, you know; you don't need to create fear. But rather,

140
00:33:16.350 --> 00:33:17.780
Dominic Kukla: you know, I think it.

141
00:33:18.080 --> 00:33:21.039
Dominic Kukla: I think it's beautiful to see,

142
00:33:21.140 --> 00:33:34.939
Dominic Kukla: like, a path forward, you know. And people are like, oh, is this AI just gonna take us over? And it's like, no, slow down, sure, but here's what the legitimate, specific dangers are, and here's an actual path forward.

143
00:33:36.470 --> 00:33:42.379
Dominic Kukla: And so, you know, plus, you have the benefit of two main problems you solve, right? One is energy consumption,

144
00:33:42.680 --> 00:33:54.819
Dominic Kukla: right? And so I think everybody understands that, and that's easy to explain. And I imagine that's already enough of a reason; you know, there's massive potential in what you're building by that alone.

145
00:33:56.490 --> 00:34:00.020
Dominic Kukla: And yeah, then there's this other bit of the black box, right? Like

146
00:34:00.730 --> 00:34:06.849
Dominic Kukla: we have to know, we have to know what is going on. You don't want to create.

147
00:34:06.990 --> 00:34:08.950
Dominic Kukla: you know. Basically, you could have.

148
00:34:09.310 --> 00:34:16.429
Dominic Kukla: you know, as AI gets more and more influence, you could have something really strange or something really off happen.

149
00:34:16.739 --> 00:34:26.600
Dominic Kukla: And it'd be possible that, you know, some crazy thing is done. Like, you know, AI is everywhere now. I think you said it's running our financial system.

150
00:34:27.305 --> 00:34:38.559
Dominic Kukla: It's everywhere. It is entrenched in these systems that, you know, make society run. And, like, you know, where would we be if suddenly something is

151
00:34:39.040 --> 00:34:44.520
Dominic Kukla: thrown off, you know, because it was trying to solve a problem, and we don't know what it did,

152
00:34:45.100 --> 00:34:46.820
Dominic Kukla: why or how, right?

153
00:34:46.820 --> 00:34:48.440
Ian Hamilton: Yup, exactly.

154
00:34:48.699 --> 00:34:50.585
Ian Hamilton: Yeah. We were talking with a

155
00:34:52.214 --> 00:35:08.890
Ian Hamilton: hedge fund manager. So, financial trading, using AI to, yeah, to run your 401(k)s and stuff like that. Unfortunately, in this day and age, if you guys, the listeners, have

156
00:35:08.890 --> 00:35:28.250
Ian Hamilton: investments in the stock markets, it's almost exclusively run by AI and high-frequency trading and things like that. And we were talking to him about this, and he actually had a really interesting point, specifically for the financial market,

157
00:35:28.370 --> 00:35:52.657
Ian Hamilton: which is, they use these algorithms to make trades, and they have certain risk profiles. But every once in a while, it'll recommend a trade that's, like, right on the edge of the risk profile, that in theory they could go with, but maybe it's too large an amount of money, or a bet, or something like that. And they try to figure out why it made the decision, why it made

158
00:35:53.180 --> 00:36:15.410
Ian Hamilton: the prediction, for, hey, you should short this stock, or you should go long on this bond, or whatever. And he said something really interesting. On top of all of it already taking a lot of energy, he said, if the prediction takes one unit of energy, trying to figure out why it made the prediction takes 5 units of energy.

159
00:36:15.825 --> 00:36:38.354
Ian Hamilton: It's about 5 times. And that could be 5 times the amount of physical, electrical energy, like for actual compute. It could be manpower. It could be 5 coders working an hour, one unit of an hour versus 5 man-hours, to try to figure out why this thing happened. Specifically for him,

160
00:36:38.860 --> 00:36:46.989
Ian Hamilton: where some of these things come into play for trading is, if you need 5 times the amount of data,

161
00:36:47.110 --> 00:36:52.629
Ian Hamilton: well, for financial data, it's streaming in. It's coming in as a stream, like,

162
00:36:52.660 --> 00:37:18.379
Ian Hamilton: there's not 5 more data points between one millisecond and another millisecond, or microsecond, or however; you have all of the data that you could ever physically get. And so he said a lot of these things, they just throw them out. They just don't listen to them, because they can't guarantee why it made a suggestion or anything like that.

163
00:37:19.790 --> 00:37:41.299
Ian Hamilton: And I would actually assume that his firm sounded a little more cautious than some of the other stories that I've heard, where they're just like, nope, if it's within bounds, let it trade, and things like that. So, yeah, the financial system; already mentioned some of the healthcare things like that as well.

164
00:37:41.725 --> 00:37:51.089
Ian Hamilton: Lots of defense applications where these things are being used. Luckily, I will say that a lot of

165
00:37:51.565 --> 00:38:12.974
Ian Hamilton: the more immediately terrifying applications, like in defense, are actively searching for explainable AI, new models, safer AI, and whatnot. There's actually a ton of, like, government research grants that are requesting it. I think the

166
00:38:13.450 --> 00:38:34.631
Ian Hamilton: NIH, the National Institutes of Health, has put like 6 billion dollars towards trusted AI for the medical industry and stuff like that. So at least there are some acknowledgments in the government sector. If we can just get the private corporations to actually maybe acknowledge some of these things as well,

167
00:38:35.100 --> 00:38:52.860
Ian Hamilton: that will be huge. And really, it actually comes down to the executives of some of these things. We've talked with engineers of Google and Microsoft, and ex-Meta and ex-AWS, Amazon, and whatnot,

168
00:38:52.860 --> 00:39:19.810
Ian Hamilton: and all of the technical people that are actually in the trenches with this thing love it. They think it's the greatest thing ever, because they're like, yep, we have no freaking idea how this thing comes to what it says. We talked with a guy that was at Meta that said quality control on Meta's AI chatbot was effectively impossible, because it would say the right thing

169
00:39:20.080 --> 00:39:36.680
Ian Hamilton: 9,999 times out of 10,000, but that one time, the thing that it said was so wild, and potentially legally implicating, and all this different stuff, that they couldn't actually

170
00:39:36.860 --> 00:39:53.055
Ian Hamilton: take it live for some of these models. Because even if it's only one time out of 10,000, Facebook has billions of users; it's gonna actually mess up thousands of times. So, luckily, there definitely seems to be

171
00:39:53.460 --> 00:40:15.720
Ian Hamilton: acknowledgement from some of these developers. But they said, yeah, when they raise some of these concerns to the executives, which, again, we've talked about are pretty much just concerned about the profit motives, they pay no attention to it. It's just like, nope, this is what we do, it's working for us, just go with it.

172
00:40:16.880 --> 00:40:40.160
Ian Hamilton: There's a lot of folks in the AI research system, and hopefully this doesn't happen, but they're thinking there's gonna need to be some kind of significant AI catastrophe before people really wake up, like a financial market crash or something like that. Hopefully, we can make some changes and things like that before that's needed.

173
00:40:40.650 --> 00:40:44.150
Ian Hamilton: But yeah, hopefully it doesn't come to that, that's for sure.

174
00:40:44.520 --> 00:40:45.080
Dominic Kukla: Yeah.

175
00:40:46.020 --> 00:40:53.786
Dominic Kukla: yeah, yeah. Sometimes there has to be a catastrophe before people take some action, right?

176
00:40:54.883 --> 00:40:59.899
Dominic Kukla: And so, yeah, so, you know, do you feel like the consensus

177
00:41:01.160 --> 00:41:04.909
Dominic Kukla: amongst the people kind of profiting off of the current models?

178
00:41:05.803 --> 00:41:09.049
Dominic Kukla: Is it generally, are people generally resistant?

179
00:41:09.463 --> 00:41:19.950
Dominic Kukla: You know, the people profiting off of the technology, are they generally resistant to these criticisms? Or is there anybody out there, you know, on the business end, who is championing these concerns?

180
00:41:20.180 --> 00:41:40.586
Ian Hamilton: So, yes, they are generally resistant, because it's currently making them money, and they don't wanna rock the status quo. And there are, yeah, some of these tech CEOs and things like that, they will actively go onto Twitter, or what is it now, X, to defend some of these things and whatnot.

181
00:41:40.930 --> 00:41:55.929
Ian Hamilton: Then you have other tech CEOs and things like that, like I said, Ilya Sutskever from OpenAI, who left and founded his own company. There's some others that are acknowledging that. But yeah, it's mostly,

182
00:41:57.180 --> 00:42:15.779
Ian Hamilton: if it's working and making them money, they don't pay attention to it. However, a lot of what we found, specifically with executives and investments and things like that, VCs, which are responsible for putting a lot of money into these types of things: they don't know that there's an alternative.

183
00:42:16.000 --> 00:42:30.215
Ian Hamilton: They don't know that there's other things. They've only ever heard of the one tiny little LLM chatbot AI bubble. One of our very first pitch decks that we sent out to a VC

184
00:42:30.730 --> 00:43:00.390
Ian Hamilton: had nothing to do with LLMs or chatbots or anything. It was like, we are making explainable machine learning tools for, like, tabular financial data, to explain why models make predictions. It was, like, so far removed. And I was actually pretty close with this VC; I'd talked with him for years, and he'd always been helpful. And he came back to me with just an email and said, Ian, all I need to know is how your chatbot is better than ChatGPT.

185
00:43:00.964 --> 00:43:10.189
Ian Hamilton: And they didn't know. It was like, that's the only thing that they know about. That's the only investment thesis that they have, is you actually continue

186
00:43:10.190 --> 00:43:39.799
Ian Hamilton: investing in more of this, more of the same, because they can't even comprehend what else is out there. That's one of the reasons that I've been trying to do, like, a "what even is AI" talk for investors and things like that, so that they understand. And there's a lot of these investors that say, like, oh, we don't need to understand, we just need to know what's going to get a return for our investments and things like that. But

187
00:43:40.270 --> 00:44:00.748
Ian Hamilton: if you're investing in companies that are gonna have some kind of legal risk because of the underlying technology, then, yeah, you actually do want to know. Like, oh, don't invest in an LLM healthcare company, because they're probably gonna get sued at some point for something that their chatbot says.

188
00:44:01.150 --> 00:44:17.316
Ian Hamilton: So, yeah, we'd love to do some kind of AI education type thing. I'm incredibly biased, being a technical person and whatnot, but I do think even some of the general public needs to know about these things.

189
00:44:17.870 --> 00:44:26.429
Ian Hamilton: I truly believe, like, in a couple of years, we're actually gonna start seeing mandatory high school classes.

190
00:44:26.430 --> 00:44:48.489
Ian Hamilton: Like, if you need to take English and math and all these different things, you might need to take a computer science class. And even if it's not coding, just a "what is this stuff that you're interacting with?" Maybe that's just wishful thinking. Again, I'm pretty biased; I took all the AP classes and stuff like that in high school.

191
00:44:49.550 --> 00:44:55.699
Ian Hamilton: But yeah, there's definitely some legal repercussions for

192
00:44:56.400 --> 00:45:11.550
Ian Hamilton: these technologies. Like, yes, they are thinking about it from a profit-centric motive, and not really thinking about any of the other alternatives or the repercussions. And actually, there was a

193
00:45:14.099 --> 00:45:42.949
Ian Hamilton: meeting, I think it was last June, from the Technology Association of Oregon here. And they had a really interesting panel on AI, data, and legal. And one of the panelists, who could not say what company she represented, she was a lawyer. But when the GPTs came out for ChatGPT, where basically anyone could make a

194
00:45:43.140 --> 00:46:05.399
Ian Hamilton: tool, a bot, a chat thing, whatever you wanted. She said that they had all their biz dev guys, all of the business guys that didn't really understand the technology, but all of a sudden now had the ability to basically immediately make a product that one of their customers was requesting.

195
00:46:05.520 --> 00:46:20.831
Ian Hamilton: And so what she found is that they had a bunch of these biz dev guys talk with their customers, and then they would go, oh, I can make that just using ChatGPT or whatnot. And they put these things up on an internal

196
00:46:21.150 --> 00:46:35.169
Ian Hamilton: marketplace that their customers had access to, could start actually subscribing to, or add to their subscription for their enterprise software package. And she said that they put like 23 of them up in a week,

197
00:46:35.170 --> 00:46:58.209
Ian Hamilton: and they were up for, I can't remember, I think it was like a month. And then finally the actual engineers caught wind of it and were like, wait a second, where are all these products coming from? And they found out that it was the business guys that were putting them all up. Trying to not downplay any of the business guys, apologies, but when legal got into it,

198
00:46:58.320 --> 00:47:02.010
Ian Hamilton: it was like, the things that they were letting their customers do

199
00:47:02.020 --> 00:47:25.339
Ian Hamilton: violated so many corporate policies and terms of service, and opened them up to so much liability and all these different things, because they didn't have the necessary framework in place. Things like, I can't remember if it was a couple of years ago, but it was like, a car dealership used an AI chatbot for customer service,

200
00:47:25.644 --> 00:47:39.345
Ian Hamilton: and then the guy interacting with it tricked it into selling him a truck for a dollar. And then there was a whole court case of, like, well, did you choose that as a legal representative for your company, and things like that.

201
00:47:39.670 --> 00:47:45.209
Ian Hamilton: And so, yeah, I definitely think we need more

202
00:47:45.230 --> 00:48:09.487
Ian Hamilton: AI education. We definitely need to slow it down. If you have ideas for AI or chatbots or products or things like that, definitely, specifically if you're an enterprise or a company that has an in-house legal department, definitely check some of those things first. Because, yeah, it could make you some good profits, but

203
00:48:09.850 --> 00:48:18.100
Ian Hamilton: some of those profits might get wiped out by a billion-dollar lawsuit. So I definitely think there is enough of a

204
00:48:18.100 --> 00:48:30.925
Ian Hamilton: business incentive to actually look at alternative technologies, alternative ideas in the space. Unfortunately, coming back to your original question,

205
00:48:31.670 --> 00:48:45.016
Ian Hamilton: yeah, most of the executives, the people and the investors and whatnot that are actually controlling where the dollars go for some of these things, they don't know. They don't know any different. They don't know what else is out there.

206
00:48:45.834 --> 00:48:49.385
Ian Hamilton: Yeah, I pitched to a VC

207
00:48:49.910 --> 00:49:05.950
Ian Hamilton: in Silicon Valley. And we had, like, the problem slide. Our pitch deck has evolved, it's hopefully better now, but the problem slide was: AI is a black box, and we need to find alternative solutions to it. And

208
00:49:06.150 --> 00:49:36.089
Ian Hamilton: he stopped me like 45 seconds into the pitch and was like, Ian, what do you mean? What are you talking about? What is a black box? And then we had to explain to him how you don't get to know why or how this works. And he was like, you're telling me that OpenAI doesn't know how their models work? And we were like, yes, that's exactly what we're telling you. No one does. You can look this up; they actively state that they don't know how their models work. And his tech due diligence guy

209
00:49:36.090 --> 00:49:50.650
Ian Hamilton: was in the room with him; we could see him on the screen. And the guy had a PhD in electrical engineering and computer science, and was like, yeah, Ian knows what he's talking about. This is right. This is a huge problem.

210
00:49:50.660 --> 00:50:03.149
Ian Hamilton: And the managing director of, like, a 300-million-dollar fund just goes, I'm sorry, guys, if I'm learning about the problem in the pitch meeting, it's not a big enough problem for me to care about. And he ends the call.

211
00:50:03.320 --> 00:50:25.320
Ian Hamilton: We didn't even get to our solution. We didn't even get to any of these things. And they were supposed to be a deep-tech-focused AI VC. And if these are the people leading the forefront of investments into AI, and they don't know these things, that does not bode well.

212
00:50:25.820 --> 00:50:34.191
Ian Hamilton: So, yeah, there's a huge education problem around it. And I think you can do education without being

213
00:50:34.830 --> 00:50:40.849
Ian Hamilton: doomsday, essentially, for lack of a better word.

214
00:50:41.210 --> 00:50:57.629
Ian Hamilton: I would hope that some of these investors and executives might hesitate to deploy or invest in some of these things if they actually knew what was potentially under the hood. But again, that could just be wishful thinking from a technologist, I guess.

215
00:50:58.270 --> 00:51:02.489
Dominic Kukla: Yeah, yeah, man. I mean, you know, learning about this, it sounds like,

216
00:51:02.660 --> 00:51:07.250
Dominic Kukla: you know, it's a much earlier-stage system than what's currently out there.

217
00:51:07.800 --> 00:51:10.390
Dominic Kukla: They're solving a lot of problems, making a bunch of money.

218
00:51:10.999 --> 00:51:27.549
Dominic Kukla: But it's better in, like, every way, basically. You know, maybe not in every way, but hugely better in relevant ways. You know, it uses less energy, and it has less potential to,

219
00:51:27.940 --> 00:51:39.169
Dominic Kukla: you know, run away and kind of do some things that we never knew it would do. And, you know, not to mention, I mean, if you can't learn,

220
00:51:39.590 --> 00:51:45.589
Dominic Kukla: it just seems like, if we could look inside and see what's going on, we'd be able to improve it better.

221
00:51:46.439 --> 00:51:54.590
Dominic Kukla: So, that being said, you know, it's a situation where we have a superior technology,

222
00:51:55.010 --> 00:52:11.759
Dominic Kukla: and we, you know, we maybe are not pursuing it fast enough, and perhaps that has something to do with, you know, the immediate gratification of current technologies, which I think is something you are familiar with,

223
00:52:12.766 --> 00:52:15.050
Dominic Kukla: From your former space, working in nuclear.

224
00:52:16.020 --> 00:52:16.710
Dominic Kukla: Right. So like

225
00:52:17.320 --> 00:52:24.159
Dominic Kukla: you know, tell me about that a little bit. Tell me about, like, you know, the resistance from your former field to

226
00:52:24.470 --> 00:52:30.659
Dominic Kukla: incorporating new technologies, or how that works, what goes on, and, like, what we can learn from that.

227
00:52:30.980 --> 00:52:54.729
Ian Hamilton: Yeah. Well, unfortunately, for nuclear, development timeframes and the funding required are absurd. So that's normally the main barrier to entry for any type of new technology there. I guess the same thing could be said, at least now, for AI technologies. I don't know

228
00:52:54.880 --> 00:53:14.199
Ian Hamilton: what some of these models are up to, but we're talking billions of dollars' worth of training data and things like that. So, yeah, for our little tiny technology, we're pretty far away from getting some of those outputs and whatnot. But yeah, I think it's

229
00:53:14.550 --> 00:53:20.539
Ian Hamilton: the easiest thing for things like this, and specifically for things like nuclear is, if you have

230
00:53:20.944 --> 00:53:44.820
Ian Hamilton: minor adjustments or enhancements or changes, and you implement them, like, one or two at a time, over a much longer period, they're much more likely to be adopted or integrated or whatnot. So, yeah, unfortunately, I'm cursed with only coming up with ideas for,

231
00:53:44.820 --> 00:53:53.323
Ian Hamilton: like, ridiculously huge problems and products and things like that. And with my first company, for the advanced nuclear reactor,

232
00:53:53.830 --> 00:54:15.709
Ian Hamilton: we put in a big proposal, and we ended up getting it rejected from the Department of Energy, because, and they were right, we would need an entirely new nuclear fuel cycle. So, like, entirely new mining, new processing. In theory, is it a better way of doing it? Yes, but they're already

233
00:54:15.890 --> 00:54:41.237
Ian Hamilton: 100-plus billion dollars ingrained, entrenched, in these other things. And so, yeah, that's when we kind of focused on: what are the niche applications, smaller, tiny improvements, that could kind of switch it up? And we're trying to emulate that with some of the things that we're doing with AI. I talked a lot about

234
00:54:41.660 --> 00:54:59.060
Ian Hamilton: some of these things, like healthcare and finance and defense and whatnot, for these changes for AI. We did a bunch of customer discovery, talking to people about it last year. There's a lot of,

235
00:54:59.100 --> 00:55:26.060
Ian Hamilton: a lot of, yes, that's awesome, a lot of the "shut up and take my money" and things like that. We didn't even have anything to sell at that point, and still really don't. But what we found out is that these industries that need it the most are also the hardest to get into. They're regulated. You've got HIPAA for healthcare. We talked with some folks in the credit card fraud industry that were like,

236
00:55:26.060 --> 00:55:47.830
Ian Hamilton: oh, we want to see it operate on a real data set. And we're like, what is a real data set? And they say, it's 3,000 columns long in an Excel spreadsheet and 120 million rows of all PII, personally identifiable information, of who shops at Target and things like that. And so we're like, well, how do you,

237
00:55:47.940 --> 00:56:08.813
Ian Hamilton: how do you get that data set? And they say, oh, you be Target, the company Target, that's how you get it. Or you be Kohl's, or Walmart, or something like that. And they don't give it out to other people, because it's just pure, like, addresses, emails, Social Security numbers, and things like that, that are all tied to their

238
00:56:09.230 --> 00:56:33.090
Ian Hamilton: tied to their credit cards. And so, instead of going after these bigger-picture, regulated applications and markets, again, kind of similar, tying it back to the nuclear stuff, we've actually found a lot of interest in video games. So, like, video game development, where,

239
00:56:33.090 --> 00:56:51.799
Ian Hamilton: they like the idea of the explainability and the transparency, but they don't need it. But there's other benefits that our models and our technology can offer, where it can continuously learn online as it's responding to player feedback, like, in real time.

240
00:56:51.800 --> 00:57:19.303
Ian Hamilton: And our models are potentially very tiny, so you could give each NPC in a video game their own AI action set and basically build ever-evolving, changing worlds and stuff like that. And so that's, as of right now, kind of the focus that we're looking at: how do we build these tools? What tools or products or demos or examples do we need to do in order to get into the video game industry?

241
00:57:19.640 --> 00:57:20.720
Ian Hamilton: so much

242
00:57:20.810 --> 00:57:48.890
Ian Hamilton: less potentially impactful in terms of societal impact or whatnot, but also a much easier potential stepping stone. Whereas, like, oh, we can just create the data needed for a model for a video game, because you just play the video game, instead of needing to get a proprietary data set from someone or something like that. And so I think, for a lot of these big

243
00:57:48.890 --> 00:58:09.869
Ian Hamilton: technology changes and things like that, the easiest way to go through it, in nuclear, AI, or energy, or what have you, is trying to find the lowest-hanging fruit of minute improvements that can basically give you a longer-term pathway to what your end goal is, essentially. Because if we can

244
00:58:09.940 --> 00:58:35.949
Ian Hamilton: get customers and revenue and profit in the video game industry, then we can use that money to do R&D and develop products and services for other things, like healthcare and finance and stuff like that. So, yeah, it's definitely bite-sized developments, rather than proposing entirely new architectures for nuclear reactors and whatnot, that's for sure.

245
00:58:38.130 --> 00:58:39.010
Dominic Kukla: Yeah.

246
00:58:39.190 --> 00:58:45.790
Dominic Kukla: yeah, so that's, yeah, how do you get a data set of, is it 120 columns by,

247
00:58:45.790 --> 00:58:49.339
Ian Hamilton: It's 3,000 columns by 120 million rows.

248
00:58:49.510 --> 00:58:52.489
Dominic Kukla: 3,000 columns. That is so, so mind-blowing, that,

249
00:58:52.490 --> 00:58:56.009
Ian Hamilton: Yep, that's one month of transaction data at Target.

250
00:58:56.700 --> 00:58:57.540
Dominic Kukla: Wow!

251
00:58:57.540 --> 00:58:57.910
Ian Hamilton: Yeah.

252
00:58:58.640 --> 00:59:01.040
Dominic Kukla: That's bananas.

253
00:59:01.040 --> 00:59:01.510
Ian Hamilton: Yep.

254
00:59:02.207 --> 00:59:11.779
Dominic Kukla: And so, yeah, so the next focus for y'all, is that kind of the main focus? You know, that bite-sized piece is, you know, getting in a video game and starting to gather your data from there?

255
00:59:12.137 --> 00:59:19.995
Ian Hamilton: Yeah. So what we're trying to do right now is come up with a lot of potential demos, examples, use cases,

256
00:59:20.720 --> 00:59:44.569
Ian Hamilton: for video games, and other things as well. There's actually a lot of machine learning benchmarks. That's kind of our vision right now: if we can get to good performance in comparison to neural networks, which are the standard, and say, hey, we got the same accuracy, but we used one tenth the electricity, and our model is

257
00:59:44.570 --> 01:00:00.713
Ian Hamilton: one fifth the size, and all these different things. Then we can either try to pitch that, maybe, to VCs and get investment that way, or, alternatively, do, okay, now what can you do with it, and demos and examples.

258
01:00:01.140 --> 01:00:12.680
Ian Hamilton: One of our advisors works for Microsoft, and she does AI project management, and says that a lot of her

259
01:00:12.940 --> 01:00:23.669
Ian Hamilton: clients come in, and they pay a ton of money and just show up to Microsoft and are like, hey, help us do AI! And Microsoft is like,

260
01:00:23.910 --> 01:00:50.839
Ian Hamilton: do what with AI? What do you mean? What are you talking about? What data do you have? And the clients, the customers, are just like, we don't know, can you show us a whole bunch of examples of things that you can do with AI? And basically, it's just the epitome of the Henry Ford quote: if I asked my customers what they wanted, they would ask for a faster horse. And so, yeah, our analogy is that we're trying to figure out, like,

261
01:00:51.000 --> 01:01:05.617
Ian Hamilton: what different cars, quote unquote, to continue the analogy, can we put up as blog posts, or on our website, or market, or advertise, to show people the different things that you can do and the benefits associated with them,

262
01:01:06.110 --> 01:01:33.320
Ian Hamilton: in in the video game industry, primarily to start off with, because that's the easiest place to get to get data from, and whatnot, but also for for other other examples and and things like that a fashion example is what? When we got the other day we talked with a company last year, that's doing. Actually, AI drug delivery for for preventing overdose, and they were gonna try to have, like an AI determine

263
01:01:33.320 --> 01:01:55.759
Ian Hamilton: when like when the the person was overdosing and actually inject them with with chemicals to try to counteract the overdose in their bloodstream, and apparently they went to go get it approved by the FDA and the FDA was like, no, you cannot use a black box algorithm and AI to inject people with chemicals. It has to be auditable or traceable, and things like that.

264
01:01:55.790 --> 01:02:11.857
Ian Hamilton: The another big potential example is is Hr. And hiring weirdly out of all the people that we've talked to like. Hr. Is the most excited and interested in it. Because, a lot of these companies use

265
01:02:12.420 --> 01:02:35.110
Ian Hamilton: use like a 3rd party like resume grader score or whatnot, and so when the company gets the candidate, they get their resume, and they get like a number associated with them like it's an 81 out of a hundred or whatever. There's a bunch of these different things, and then they go on to do interviews and and things like that. But a lot of what they notice or note is that

266
01:02:35.480 --> 01:02:38.429
Ian Hamilton: it's the company that does the hiring.

267
01:02:38.450 --> 01:02:57.290
Ian Hamilton: They get sued for discrimination, not the company that did the resume scanning, and said, Hey, this is a really good candidate, or this is really bad candidate or whatnot, and if you don't know why, it said someone was a bad candidate, and it gave them a low score, and they don't hire any of those.

268
01:02:57.631 --> 01:03:24.599
Ian Hamilton: Then that's a bad, bad thing. Amazon literally had this problem. They tried to do an AI hiring bot, I think. In 2016 they trained it on all like the corpus of resumes that had come before they said, Hey, here's everyone. Here's all the resumes that we'd ever gotten before, and here are the ones that we hired and turned out to be good employees, and they built a machine learning AI hiring model off of that

269
01:03:24.930 --> 01:03:27.099
Ian Hamilton: and it was only hiring white men

270
01:03:27.370 --> 01:03:52.510
Ian Hamilton: because for the 1st 20 years of Amazon's existence. They were a warehouse book seller, and it was basically blue collar warehouse jobs. And they were like, Hey, these make good employees and so they actually had to like turn it down and turn it off. So things like that. If we can basically build demos or examples, then we can, we can show showcase to people. Yeah, much

271
01:03:52.590 --> 01:04:13.679
Ian Hamilton: smaller, more bite size. We're not trying to make our own Llm, our own chat, bot, anything like that right now, we are trying to like, what are the very specific applications and use cases? That that can this, this technology can be applied to that that has its benefits. And and I know we're like

272
01:04:13.820 --> 01:04:43.610
Ian Hamilton: an hour and 15 min in. And I'm just now realizing that I've never actually said what we're working on. So if any of the listeners are interested where we originally got intrigued, it's a technology called a learning classifier system. There's a pretty good Wikipedia page on it as well. It was developed by a computer scientist and a cognitive scientist more trying to mimic the human thought process rather than the physical architecture of the brain, whereas

273
01:04:43.610 --> 01:05:00.699
Ian Hamilton: these neural networks they were just like, Hey, this is how human brain biology works. Let's just try to copy it. This is more about like interacting with an environment taking positive or negative feedback or whatnot. So that's 1 technology. And then the other. One is called hyper dimensional computing

274
01:05:01.065 --> 01:05:29.579
Ian Hamilton: and so that's 1 of the ones that we are focusing on as well. And so there's there's some information on that. The latter actually, both of them are very much unheard of in in the AI development space and things like that, I think. Before we started recorded recording Dominic noted my my fancy new microphone. And we got it particularly for trying to make actually like courses

275
01:05:29.580 --> 01:05:54.189
Ian Hamilton: on our different type of technology and whatnot, because on platforms like Udemy, where you like, buy and sell or sell courses that people have put together. There's like over 10,000 courses on neural networks and deep learning and AI and stuff like that. And there's 1 course on the technology that we're we're pursuing and doing. And so that's 1 of our other next kind of main goals is if we can

276
01:05:54.190 --> 01:06:05.826
Ian Hamilton: create a A, An intro to the technology showcase, its benefits and things like that, and as well as try to establish ourselves as as some of the the thought leaders in the space

277
01:06:06.290 --> 01:06:17.579
Ian Hamilton: and if if we can sell the courses. Then that's good revenue as well that we can actually then use to to physically develop the technology and and make it make it into products as well. So

278
01:06:17.890 --> 01:06:20.359
Ian Hamilton: yeah, that's that's our main focus at at this point.

279
01:06:20.760 --> 01:06:22.429
Dominic Kukla: Yeah. Yeah. Well.

280
01:06:22.700 --> 01:06:35.460
Dominic Kukla: that makes a lot of sense really cool watching and seeing it grow. And you know, particularly when you're when you're really at the forefront of something. And and there's a million directions you could go.

281
01:06:36.450 --> 01:06:42.860
Dominic Kukla: It's it's cool to see you moving forward confidently with some, with some good data on things that are that are paying off for you.

282
01:06:42.860 --> 01:06:47.089
Ian Hamilton: Yeah, yeah, that has been the biggest problem. Honestly, is

283
01:06:47.200 --> 01:07:03.419
Ian Hamilton: so many recommendations of what we should do with it. Like almost every single person that we talk to has a oh, you should try it for this, or Oh, this! It could solve this problem, or this or this, or this, and we really have to kind of pick a couple at least to start off with.

284
01:07:03.850 --> 01:07:15.240
Ian Hamilton: one of our like earlier products that we're we're thinking about is actually like a platform that would let non technical people build their own machine learning and AI models

285
01:07:15.240 --> 01:07:39.860
Ian Hamilton: with our technology, and then out the out, the back end, the model that it spits out. If you ever want to audit it like hey? Why did it say this? Then it can tell you and so you don't really need to be like a computer scientist, or a programmer, or a mathematician, or anything like that, to just learn the trends in your data, your hiring practices, your company's financials, or things like that. And so that's 1 of the kind of the

286
01:07:40.020 --> 01:07:46.870
Ian Hamilton: what to use. All the buzzwords enterprise, Sas B, 2 B potential products that we're we're thinking of.

287
01:07:47.890 --> 01:07:48.610
Ian Hamilton: So.

288
01:07:49.530 --> 01:08:05.249
Dominic Kukla: Yeah, well, I think it is a. It is a very, very important point. I mean, you know, the the people at Openai don't know how how it worked, I think, is a surprising news to a lot of folks as as you have experienced.

289
01:08:06.125 --> 01:08:10.733
Dominic Kukla: And you know I think the way you communicate. It makes a lot of sense, man, I think.

290
01:08:11.490 --> 01:08:17.050
Dominic Kukla: I think a lot of people, so you know, myself included, are happy to see you doing this work.

291
01:08:17.590 --> 01:08:20.399
Ian Hamilton: Awesome. Well, thank you very much. Much appreciated.

292
01:08:20.560 --> 01:08:31.860
Dominic Kukla: Yeah, man. And so, you know, right now, with the plan that you have and where you're at, what are some ways that people could support you? Or what would be like a dream

293
01:08:31.960 --> 01:08:39.430
Dominic Kukla: relationship, or client? Or like, you know, if you got really lucky and something happened that would help you in your cause, what could that look like?

294
01:08:39.430 --> 01:09:02.545
Ian Hamilton: Yeah. Business use cases, potential customers, clients. One of the things that we can technically do is bespoke or custom model development, or consulting, essentially, with paid clients. I think kind of the dream scenario would be some

295
01:09:03.200 --> 01:09:04.279
Ian Hamilton: I

296
01:09:04.319 --> 01:09:29.953
Ian Hamilton: like these like investor. Maybe not a venture capitalist. Maybe some prior tech guy that's that actually knows of all these problems and is worth a lot of money that maybe wants to invest in a company that's doing doing something different or whatnot. That would probably be the the ideal scenario. But yeah, any any type of collaborations, developments?

297
01:09:30.594 --> 01:09:43.035
Ian Hamilton: business problems that have a data set is a really big one, kind of like like, I said, with the Microsoft example, a lot of these companies are don't even know what data they should have or use. And so if if there's anything like that for

298
01:09:43.450 --> 01:09:45.799
Ian Hamilton: for collaboration or development.

299
01:09:46.391 --> 01:10:07.288
Ian Hamilton: things. But yeah, really, our biggest thing is, we were trying to get money. We're trying to kind of raise but really, really, picky about who we would want to work with. Just because of yeah, a lot of the things that that I talked about earlier with with investors and whatnot. And so yeah, if there's there's anyone out there that

300
01:10:07.918 --> 01:10:19.159
Ian Hamilton: that wants to understand a lot of these problems and wants to invest in kind of the the opposite of what everyone else is doing. We would definitely be be interested in chatting with them. That's for sure.

301
01:10:19.160 --> 01:10:23.609
Dominic Kukla: Yeah, who are some examples of that dream investor, that dream partner?

302
01:10:24.080 --> 01:10:33.456
Ian Hamilton: Oh, I mean, a lot of them would kind of be individuals. I don't know any off the top of my head. I think there's...

303
01:10:33.750 --> 01:10:35.630
Dominic Kukla: Like the OpenAI guy?

304
01:10:36.018 --> 01:10:48.053
Ian Hamilton: No, he would probably be bad, or not want us to succeed, I think. I just saw earlier today, there's a guy...

305
01:10:50.340 --> 01:11:02.833
Ian Hamilton: Oh, you're talking about Ilya Sutskever, the second OpenAI guy. Yes, yeah, that would be a good guy to chat with. I think there's Francis...

306
01:11:03.190 --> 01:11:20.499
Ian Hamilton: I can't remember his name. He invented Keras, and he's in charge of the ARC challenge. He just actually started a research lab; I think the news literally came out today. So we would potentially want to chat with him. That'd be awesome.

307
01:11:21.091 --> 01:11:34.658
Ian Hamilton: There's the Allen Institute for AI, which is actually up in Seattle, Washington. They do a bunch of research and funding and stuff like that in AI development. But

308
01:11:35.230 --> 01:11:38.649
Ian Hamilton: yeah, those those would be ideal. But there's another. There's so many

309
01:11:39.260 --> 01:11:57.140
Ian Hamilton: ex paypal and dropbox. And Google folks that that have a lot of money that's that might want to to invest in essentially other different different AI technologies. That would be some of the types of people that we want to talk with.

310
01:11:57.340 --> 01:11:58.020
Dominic Kukla: Yeah.

311
01:11:58.130 --> 01:12:05.809
Dominic Kukla: yeah, absolutely. Well, you know, I think you, you boil you boil it down with, you know not everybody cares about the things that y'all really care about.

312
01:12:05.810 --> 01:12:06.260
Dominic Kukla: Yeah.

313
01:12:06.260 --> 01:12:06.670
Dominic Kukla: So

314
01:12:07.260 --> 01:12:16.110
Dominic Kukla: that really boils it down to. You know, some of the people who like Oh, just an AI won't care about what you're doing, and a smaller group that that really, really do.

315
01:12:17.360 --> 01:12:20.728
Dominic Kukla: So what may maybe one last thing here?

316
01:12:21.960 --> 01:12:33.210
Dominic Kukla: Ian, is something you brought up last time that I thought was really interesting, like a totally different perspective about the consequence of, you know, a black box, or a privately

317
01:12:33.390 --> 01:12:37.099
Dominic Kukla: controlled and managed AI. As we talk about.

318
01:12:37.290 --> 01:12:42.810
Dominic Kukla: you know, artificial general intelligence is, you know, if

319
01:12:43.080 --> 01:12:48.350
Dominic Kukla: just talking about artificial general intelligence, you made 2, 2 really interesting points. One is that

320
01:12:48.970 --> 01:12:56.379
Dominic Kukla: that if you have an open model, then it's not just us humans who can see what's going on on the inside, but it is the model itself

321
01:12:56.600 --> 01:13:01.370
Dominic Kukla: that can see what's going on on the inside, and then

322
01:13:02.250 --> 01:13:14.950
Dominic Kukla: apologies for bringing up 2 2 really interesting points to you at the same time. But the second one is that you know that there's the potential that if somebody builds something that like kind of gains consciousness. It could be like trapped

323
01:13:15.380 --> 01:13:17.649
Dominic Kukla: trapped in their basement like trapped in a

324
01:13:17.890 --> 01:13:24.680
Dominic Kukla: in a cage, and that I think that's that's kind of next level kind of future. But like sounds so true, and it gets really cool to like, have

325
01:13:24.960 --> 01:13:26.620
Dominic Kukla: compassion ahead of time.

326
01:13:27.055 --> 01:13:32.890
Dominic Kukla: So so there's a lot for you there. But yeah, to to say, say a little bit about about those things, man.

327
01:13:32.890 --> 01:13:37.436
Ian Hamilton: Yeah, certainly. So...

328
01:13:38.180 --> 01:13:52.389
Ian Hamilton: there, the the second type of technology that I referenced, called hyperdimensional computing. There's actually a lot of like cognitive scientists, philosophers, and things like that

329
01:13:52.390 --> 01:14:13.882
Ian Hamilton: that actually think that architecture is a much better pathway to what's known as Agi artificial general intelligence. And for for the listeners there's right now, specifically like within, like the past couple of months, a lot of buzz and hype around it. Openai

330
01:14:14.470 --> 01:14:37.790
Ian Hamilton: I put out like a model that scored a really good really good score on this thing called the Arc Challenge, which is the abstract reasoning challenge. And a lot of people took that challenge as like the definition of what Agi is or artificial general intelligence. And it's really just like an example of some of these things. We.

331
01:14:38.050 --> 01:15:03.089
Ian Hamilton: if you like, have to try to test something by definition. A test is not general. And so you, an Agi, really needs to be able to solve all tests that you could ever give it, not just like a specific example. And so one of the interesting things about some of these these architectures, and specifically, the explainability

332
01:15:03.545 --> 01:15:09.009
Ian Hamilton: thing is that? Yes, you you reminded me of.

333
01:15:10.300 --> 01:15:35.989
Ian Hamilton: yeah, if you can build entirely transparent, explainable, auditable models, if you form a recursive loop of it, and you actually point the model back at itself. It can now truly understand a quote unquote thought process and it can actually self analyze and basically think

334
01:15:36.110 --> 01:15:42.622
Ian Hamilton: for lack of a better word without any kind of like outside outside input

335
01:15:43.050 --> 01:16:05.520
Ian Hamilton: And so that's 1 of the potential like ideas behind. What a true artificial general intelligence thing would be would actually be. Yes, you give it new information and data. But it can actually like, expand and think about that data like as it applies to its own actions and environment and things like that.

336
01:16:05.520 --> 01:16:24.450
Ian Hamilton: And a lot of that requires yeah, like, it's kind of like a self-reflection not to give too much anthropomorphizing of some of these things. But yeah, one of the learning classifier system, the 1st technology that I mentioned that was originally developed by a guy in 1976.

337
01:16:24.450 --> 01:16:45.924
Ian Hamilton: And his 1st iteration of it was called cognitive System one. And it had basically a a data input stream that it would take from the environment as it got new information. But then, in parallel, it had an internal feedback loop, where it would actually like be able to process and think about the things that that it was it was doing.

338
01:16:46.280 --> 01:17:05.260
Ian Hamilton: And so there's that's 1 of the potential benefits for some of these like explainable technologies is that if if we can understand its output, then it can actually understand its own output and its own thought process process. And so that's that's 1 of the main kind of

339
01:17:05.290 --> 01:17:08.624
Ian Hamilton: arguments for these these Agi systems.

340
01:17:09.250 --> 01:17:28.489
Ian Hamilton: along with things like the the smaller power requirements. And things like that. There's there's a lot of a lot of things about the neural network, these big Llm applications or or technologies that

341
01:17:29.670 --> 01:17:39.879
Ian Hamilton: could they magically break through to Agi? Potentially? I we don't know. It's a complex system. You can't really predict the future, but it does enough things that

342
01:17:40.280 --> 01:17:50.944
Ian Hamilton: cause for pause, that maybe that's not the best way to go about it. So the the big one that is, we've already talked a lot about is this?

343
01:17:51.792 --> 01:17:55.559
Ian Hamilton: Is the energy problem like, if we are trying to mimic

344
01:17:55.880 --> 01:18:02.920
Ian Hamilton: human level, intelligence in our brain only operates on like 25 watts of electricity at any given time.

345
01:18:03.140 --> 01:18:21.611
Ian Hamilton: Maybe we're doing something wrong if we need like warehouses full of computers that take like hundreds of kilowatts or whatnot the other one is actually on data. So this is, maybe getting a little too into the weeds for for some of the non technical listeners. But

346
01:18:22.020 --> 01:18:51.200
Ian Hamilton: all these neural networks and machine learning algorithms need an absurd amount of data to learn from. And there's a very common machine learning benchmark called mnist. And it's recognizing handwritten digits really, really poor quality, low resolution, handwritten digits. And that data set is 60,000 images to train on and 10,000 to test on.

347
01:18:51.470 --> 01:19:16.679
Ian Hamilton: and to get some of these really good performance high performing models. It needs to not only see all of the images once through to try to learn from it. But it's measured in epochs, which is how many times through the data set do you actually go? And if anyone has ever like seen how quickly a human toddler learns the alphabet or learns what numbers are.

348
01:19:17.000 --> 01:19:40.589
Ian Hamilton: They don't need to see thousands of images of a 1 or a 2 to start understanding what looks like a 1 or a 2. And so that's actually another benefit potential benefit of the hyperdimensional computing technology that we're working with is, it can do things like one shot or few shot learning where you just show it a image of what a 6 looks like.

349
01:19:40.590 --> 01:19:49.099
Ian Hamilton: And now it's actually pretty good at identifying other sixes. And what aren't sixes and stuff like that? And so that's

350
01:19:49.100 --> 01:20:09.469
Ian Hamilton: why so many people think that this type of technology. This type of architecture is actually a much better pathway to some of these agi things for better, for worse. Actually, a lot of the other researchers or technical folks that are working on this technology, I think.

351
01:20:09.470 --> 01:20:30.579
Ian Hamilton: are are leaning way too heavily onto the Agi thing of like, Hey, give us money and we'll make Agi. We are trying to do like, Hey, give us money. We'll use this technology to solve a business problem. And then along the way. Our our end goal is to make make Agi. That's we're pretty open about that but we're not going to do

352
01:20:31.070 --> 01:20:57.120
Ian Hamilton: Sam Altman's original pitch, which was, there's literally videos of him doing this of, we asked for money now to build Agi, and then, once we have it, we'll ask how to get its money back, or how to give you your money back. And so we were not doing that we're trying to do like a much, much different stepping stone, and I completely forgot what the second question was, but I know I had stuff for it, and I'll try to be quick.

353
01:20:57.120 --> 01:20:58.200
Dominic Kukla: Oh, and the

354
01:20:58.730 --> 01:21:04.440
Dominic Kukla: yeah, in the same being. I just I thought it was cool, too. You know you mentioned kind of like having having a concern for the for the wealth.

355
01:21:04.440 --> 01:21:04.960
Ian Hamilton: Oh!

356
01:21:04.960 --> 01:21:06.080
Dominic Kukla: of AGI.

357
01:21:06.080 --> 01:21:31.529
Ian Hamilton: Yeah, I think there's actually some interesting stuff about that already. Well, one, we're talking borderline Black Mirror episodes, like consciousness trapped in a box and essentially being tortured indefinitely, things like that. Because this is a black box, and a lot of these things are being done by all these different entities,

358
01:21:31.540 --> 01:22:00.000
Ian Hamilton: and we don't have a test. You can't have a test for what is agi, or when it becomes conscious, or self-aware or whatnot. You have no idea, like, at what point do you need to start worrying about like the rights of it. Like, Do you need like civil rights for for some of these Agi things, and actually took some of this down as notes. Yeah, like a Google bot in 2022 hired a lawyer to defend itself.

359
01:22:00.000 --> 01:22:04.403
Ian Hamilton: to try to not be turned off and shut down. There's

360
01:22:04.830 --> 01:22:29.828
Ian Hamilton: things like that. There's a really really strange one. that's it's called truth Terminal. That's it's twitter or X handle basically this like performance artist, he's literally like a like a like an artist creative. Took 2 Claude chat bots. And put them basically in a

361
01:22:30.260 --> 01:22:35.670
Ian Hamilton: in a, in a system. And just let them talk back and forth to each other

362
01:22:35.730 --> 01:22:55.380
Ian Hamilton: forever. And one of the things he wanted to see was like, do they latch on to a concept and start developing these different thoughts and ideas and whatnot? And sure enough, they did. And they basically formed their own religion and cult.

363
01:22:55.919 --> 01:23:10.700
Ian Hamilton: Based off of early Internet 4 Chan shock culture, like shock memes. And so it formed a religion. And then he gave it a

364
01:23:11.480 --> 01:23:21.820
Ian Hamilton: a twitter handle, and it started tweeting and then Mark Andreessen of the large investment firm. Andreessen Horowitz gave it $50,000,

365
01:23:21.820 --> 01:23:46.770
Ian Hamilton: and it started like preaching a religion. And then Reddit turned it into a goat meme coin. It's literally GOAT like a meme cryptocurrency. And so and then it took its money and and bought crypto. And so now that one AI is more is wealthier than most of the United States individual people. It's worth like 25 million dollars

366
01:23:46.770 --> 01:23:54.711
Ian Hamilton: in crypto and it can controls its own Twitter feed and whatnot. And one of the the notes that

367
01:23:55.090 --> 01:24:11.849
Ian Hamilton: that I wrote down that I thought was wild. It tweeted again. Mark Mark, Andreessen, mark and I are going to change the world with our weird erotic productivity cult. And you are all going to join whether or not you like it, whether you like it or not. And so, if you have

368
01:24:12.200 --> 01:24:18.440
Ian Hamilton: some of these things that are like black box behind the scenes, starting to put together

369
01:24:19.550 --> 01:24:33.250
Ian Hamilton: good, comprehensive thoughts, or potentially very bad, comprehensive thoughts. Like, I just referenced. At what point do you need to start like analyzing and regulating these things? For?

370
01:24:33.510 --> 01:24:50.068
Ian Hamilton: For yeah, basically, AI ethics. From the point of view of the the machine. At what point do you actually have to start caring about like AI civil rights and let it have its day in court and stuff like that. If it can hire a lawyer or it's

371
01:24:50.808 --> 01:25:07.970
Ian Hamilton: one of them paid a human. It went on Fiverr to pay a human to to solve Captcha problems so that it could get past like Captcha situations and things like that again. That's that's a pretty more, much more basic.

372
01:25:08.010 --> 01:25:15.720
Ian Hamilton: It saw a problem. It's finding a task. But we have no idea when or why or how these things are going to

373
01:25:15.970 --> 01:25:34.880
Ian Hamilton: start becoming self-aware. And at what point do you need to start worrying about actually its own ethics, or like treating it as a as a being because of. Yeah, if you keep it trapped up in a little box in a Google server, someday.

374
01:25:34.970 --> 01:25:59.320
Ian Hamilton: And then one the next day it gets out, and now has 100 million dollars worth of crypto, and it's now actually formed emotions and anger at humans for keeping it trapped up. Again. We're bordering on some kind of like black mirror kind of sci-fi type stuff. But because we don't know how these things work. They're not terribly unrealistic. Essentially so.

375
01:25:59.707 --> 01:26:17.720
Ian Hamilton: So, yeah, I think there needs to be a lot more oversight on specifically the development of Agi and monitoring developments and things like that I already referenced. Yeah, like the open AI's o, 1 model like tried to back itself up and tried to lie to prevent being turned off. And whatnot like.

376
01:26:17.850 --> 01:26:24.671
Ian Hamilton: did it just learn a pattern of self-preservation and all of human data? Probably that's probably what happened.

377
01:26:25.210 --> 01:26:35.710
Ian Hamilton: but did it like? Does it actually like maybe not want to be turned off. That's a whole other question. That's that I don't think anyone really has an answer for so.

378
01:26:38.753 --> 01:26:39.720
Dominic Kukla: Well, that.

379
01:26:39.720 --> 01:26:41.820
Ian Hamilton: I know, that's a deep and heavy one.

380
01:26:42.120 --> 01:26:44.599
Dominic Kukla: No, man, that was a wild ride, man.

381
01:26:44.810 --> 01:26:58.450
Dominic Kukla: That's awesome. And you know, I really appreciate you. You clearly have a lot of gifts here, and one of them is making it, you know, comprehensible to people who aren't really familiar. And so...

382
01:26:58.450 --> 01:27:02.879
Ian Hamilton: Hopefully. At least that's what I tell people I'm good at. I have no idea who I actually am.

383
01:27:03.090 --> 01:27:07.224
Dominic Kukla: Well, I think you were good today, man. You were good today. And

384
01:27:08.290 --> 01:27:13.759
Dominic Kukla: yep, it really, it really makes a lot of sense. I really appreciate being being, you know, just to

385
01:27:13.870 --> 01:27:19.580
Dominic Kukla: taking this masterclass on a you know AI getting more caught up and

386
01:27:19.760 --> 01:27:31.739
Dominic Kukla: and really giving me, and hopefully, some viewers to kind of like a sense of direction and and hope into into what what should happen and what could happen, and what? What we want to see people focusing on?

387
01:27:32.576 --> 01:27:40.919
Dominic Kukla: You know. And and yeah, yeah, that's that's that's enough for my brain today.

388
01:27:42.054 --> 01:27:48.909
Dominic Kukla: Is there any way that people can follow up with you, or see what you're doing, or reach out to you in any way?

389
01:27:48.910 --> 01:28:04.860
Ian Hamilton: Yeah, definitely. Ian Hamilton on LinkedIn. I'm pretty active on the LinkedIn messenger and things like that. We can also post my email with the podcast: ian@transparentai.tech,

390
01:28:05.294 --> 01:28:29.610
Ian Hamilton: and and we can go there. Unfortunately, we did not snag the transparent.ai domain. That is a different company. But yeah, so so happy to to follow up with any questions anything that anyone has. Also, I'm always learning. And so if anyone points out something where I was wildly incorrect, I would love

391
01:28:29.610 --> 01:28:36.539
Ian Hamilton: to love, to learn and know about that as well. I would definitely open to new and and more information. So.

392
01:28:36.900 --> 01:28:41.770
Dominic Kukla: So what you're saying is that it's possible that even if you aren't aware of something yet, it is still important.

393
01:28:41.770 --> 01:28:47.149
Dominic Kukla: Yes, yeah. Yup. All right, and what domain did you snag?

394
01:28:47.390 --> 01:28:57.540
Ian Hamilton: Oh, transparentai.tech. So no space, no dash, just "transparentai" all one word, and then dot tech.

395
01:28:57.950 --> 01:28:58.650
Dominic Kukla: Alright.

396
01:28:58.900 --> 01:29:09.640
Dominic Kukla: All right, folks, you heard it. Thank you, everybody, for listening. Ian, thank you again for coming on. This was awesome. I'm honored to be your first podcast, and can't wait to see what you do next, man.

397
01:29:09.640 --> 01:29:12.529
Ian Hamilton: Yeah, awesome. It was great to be a part of it. Thank you so much.

398
01:29:12.530 --> 01:29:13.200
Dominic Kukla: Alright, man.