Watch on YouTube

Full Transcript

Noah: So I'm here with Caden. I went to an AI event in Cleveland last Tuesday and I heard Caden give an awesome presentation on the state of AI and kind of what he does, but I'll let him tell you a little bit more about himself. Caden, do you want to go ahead and introduce yourself?

Caden: Yeah, sure. Thanks. Yeah, my background... my name's Caden Kacmarynski. I went to Case Western. I did Engineering Physics in undergrad there, and then I'm finishing up a master's program currently in Entrepreneurial Physics. My background was quantum computing, so I did quantum optics research at Case and Argonne National Lab for a few years.

While I was there, actually, I got into AI before ChatGPT came out. So for the last four and a half years I did some of both AI and quantum, and then as I graduated college, I decided to pivot harder into AI. That's where I met the people running the company I'm at right now, Moreland Connect.

We're really a dev shop—custom software for the last 20 years or so. Former Accenture partners really started it and we're doing that type of service. But two and a half years ago, we found ourselves saying, "AI is really good at doing our jobs. Do we have a firm?"

So we really dug in and learned AI. And so now we're more of a tech company. We have in-house AI tools. We sell an AI platform now to businesses and still do a lot of enterprise software. So I'm an AI Architect and Engineer at Moreland Connect, and our company has been on a huge growth curve adopting AI and helping enterprises in the area and beyond adopt AI, as well as building normal custom systems.

Noah: Excellent. So I might ask you a little bit more about that towards the end, but I do want to talk about just a high-level view on the AI space over the past few years. When you think of it, kind of, what do you think of? What do you see as changing, and where's your thought on the current state of AI?

Caden: That's a tough question because I have probably infinite thoughts. I guess at a high level, obviously, the space is advancing really quickly. We're seeing technology that is alien to what we thought was even possible a year ago, two years ago, five years ago. I think the space is both over-hyped and undervalued at the same time.

Some people's expectations are way off, both positively and negatively. I think there's a huge world of opportunity right now, and there are also inflated expectations. Actually, in my talk on Monday, we talked about the Gartner Hype Cycle, but the thing is, AI adoption differs across industries.

So in software, we're seeing gains right now, but knowledge work is lagging a couple of years behind. That's a lot of what we work on. But also, economically, some people think there's a bubble, and in some areas there may be one; in other areas, I don't think there is.

But even where it looks like a bubble, there can be real opportunity around it as well. And I think it's even more opportune than the dot-com era because there's much more infrastructure built out at this point, so we don't have to go actually lay the lines. There's infrastructure to be taken advantage of.

Noah: That makes a lot of sense. Okay. When you're using AI tools, I'm sure you're using some custom solutions like you mentioned, but just in your daily life, of what I call the "NVIDIA Five"—the Claudes, the Geminis, the xAIs—which one of those tools are you using most and why?

Caden: Yeah, I guess I would call the Claudes, the xAIs, and Google the "frontier models." I see those as models, not tools, because I use a lot of tools that use all the models. So I use all the models. As a developer, I really like Anthropic's stack. The things I use every day, no matter what, are Cursor and Claude Code. So I'm building things, but even when I'm doing presentations—like my slides from the talk—I vibe coded a website with Claude and used the HTML as my slides. Although Claude now has good tooling for PowerPoint and Excel... but it's all over the place depending on the use case. I'd say I probably have a list of 50 to a hundred tools that I rotate through for various reasons. But yeah, Claude and Cursor would be the two I use daily.

Noah: Okay. Yeah, Claude Code is awesome. As I said before, I'm not really an engineer, but I was on a two-month binge with Claude Code, trying to build some things out. It is a great tool. Yeah. So let's get a bit more specific. If you can talk to me about some of those workflows that you're using or some of those specific tools that you're using day-to-day, and how you use them and why you use them?

Caden: So basically, I see all the AI tools and workflows as tools in my toolbox as I architect or build different systems. I know you mentioned that a lot of people have template prompts or systems they walk through. I suppose I do if you write it out, but we like to call our approach "expert in the loop."

A lot of people talk about human-in-the-loop workflows, but at Moreland, we have experienced developers. One of the things you'll see on Twitter or X is a lot of different "vibe coders"—people talk about vibe coding and building software. And the classic vibe coder doesn't realize their API keys are public. Their security's not good. They're missing huge steps.

So what we do is we have some level of system prompting. So we set up a good system with guardrails. We also build in a lot of different custom tools and features for our flow on how things are built. Like we have custom Git flows. We have repos with starter kits. I have code blocks that I pull from reference libraries. So a lot of tools are built in-house there and we use them. But then my general flow is I'm just overseeing the models, like a manager.

So oftentimes I'll have five to 10 agents running, and I'll be checking all of them at certain stop points, or I'll use different stop hooks to make sure they stop at points where I think they could go wrong. And a lot of it's AI... I always tell people, think of AI as an overzealous intern. They're going to try really hard at whatever you tell them to do, or do the best they can. They don't know if they're making a mistake, so you have to try to catch them before they start spending tokens and effort down the wrong route.
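To make the stop-hook idea concrete, here's a minimal sketch in the shape of Claude Code's documented hook contract (the hook receives event JSON on stdin, and exit code 2 blocks the agent from stopping, with stderr fed back to the model). The pytest gate is an illustrative assumption, not Moreland's actual setup:

```python
#!/usr/bin/env python3
"""Illustrative Claude Code stop hook: block the agent from stopping
while the test suite is red. Assumes the documented hook contract:
event JSON arrives on stdin; exit code 2 means "block", and stderr
is fed back to the model."""
import json
import subprocess
import sys

event = json.load(sys.stdin)  # hook event payload (unused in this sketch)

# Example gate: run the project's tests before letting the agent stop.
result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)

if result.returncode != 0:
    # Tests failed: surface the tail of the output so the model can fix it.
    print(result.stdout[-2000:], file=sys.stderr)
    sys.exit(2)

sys.exit(0)  # tests green; allow the agent to stop
```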

So a lot of it's just supervision across tools. And to me, too, the switching costs are lower than they've ever been for different software. The only thing a lot of these models have is the memory of what you said. So for me it's super easy to pivot between all these different tools, with my own local database and Docker container with tons of different plugins, MCPs, and tools. Sometimes a lot of that's also overhyped; I think it's not that hard to just straight-up use the tools and learn prompting techniques to carry context between things.

Things like "be concise" will always help you shorten things up; you just add that onto your prompt. Or there's the spec-driven workflow, if you've seen that: "Here's what the business user wants. Let's define these use cases, and then let's get them into a PRD"—a Product Requirement Doc—which is not quite a technical document, but a business-level document for the use cases and features. Then I'll take that to a spec, which is a technical document where we've gone and done some architecture.
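As a rough illustration of that two-stage flow (the wording and names here are assumptions sketched from the description, not Moreland Connect's actual prompt library):

```python
# Illustrative two-stage spec-driven prompts: business request -> PRD -> spec.
# Template text is an assumption, not Moreland Connect's real templates.

PRD_PROMPT = """You are drafting a Product Requirement Doc (PRD).
Here is what the business user wants: {request}
Be concise. Describe the use cases and features in plain business
language. No implementation details yet."""

SPEC_PROMPT = """Here is an approved PRD: {prd}
Turn it into a technical spec: proposed architecture, data model, and
API surface. Flag open questions for a human expert to decide."""
```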

A lot of times I'll ask AI instead of telling AI. Instead of "tell me the answer," I say, "Propose three possible solutions and weigh them on their strengths." Then I'll use my technical expertise, or my team's technical expertise, to figure out which one's right. And that really helps reduce cycles. It's the same with model-based engineering: at Nvidia and Google, I know engineers working on their chips, and they use software to help speed up modeling. A lot of times you run a few different simulations, pick a couple, and then the people who actually understand the systems make the choice, instead of fully autonomous AI agents. So basically, AI-enabled workflows are much better than pure AI workflows, at least right now.
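Here's what that ask-instead-of-tell pattern might look like against Anthropic's Messages API (a minimal sketch; the model name and the rate-limiting task are illustrative assumptions):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Ask for weighed options instead of a single answer; a human expert
# picks among them. Model name and task here are illustrative.
message = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "We need rate limiting for a multi-tenant API. Don't write "
            "code yet. Propose three possible solutions and weigh each "
            "on strengths, weaknesses, and operational cost."
        ),
    }],
)
print(message.content[0].text)  # review the options, then decide yourself
```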

Noah: That makes a lot of sense. And I like what you said about the simulations or giving different options. I've been reading about kind of the possibilities of AI being able to run thousands of simulations or millions of simulations to determine different things. Let's talk a little bit about the future and then we can come back to Quantum as well. Four, five years down the line, 10 years down the line. What are you most excited about when you're building things?

Caden: It's different between different industries, too. To be honest, it's tough to predict 'cause only time will tell. One thing right now is that it's interesting to me, as a scientist, to see where the science is going, and I can see where the technology is going, and I think there are gonna be huge advances.

The thing I'm trying to learn more about, and keep being surprised by, is human adoption curves and human nature. Because if the labs stopped innovating right now, I think we could still have another decade of innovation just off adoption of the current technology. If you take the best model right now, this Opus 4.5 model, and we spend the next decade finding more useful ways for consumers and businesses to use it, we would still see huge growth.

But the labs are gonna keep innovating, keep coming out with things. So, I don't know, it's gonna be quite exciting to see what happens. I'm not a doomsday-scenario person. At one point, I definitely had an existential crisis of "what are we, where are we going, what are we doing?" But I do subscribe to Yann LeCun's idea that LLMs probably aren't the answer overall; I think we'll probably need more innovation from here. We're running toward an energy problem, so that's possibly where quantum could come in. But yeah, it'll be interesting to see what happens with energy and infrastructure and resources. So, lots of thoughts there. Not sure.

Noah: Yeah. No, that's fair. It's hard to narrow a couple of things down. There's so much going on. I agree that Opus 4.5 is the best model. That's the one I use as my main. All right. Let's go back to Quantum for a little bit. We got a couple of questions left.

Quantum. It's hard to explain. My understanding of it is basically just around CPUs, GPUs, TPUs. I was reading about that company Extropic—I don't know if you were following that—with p-bits, probabilistic bits. And then, okay, quantum, from my understanding, can essentially be both at the same time. But you know a heck of a lot more than me on that. So can you give me a 30- or 60-second definition of what quantum computing is, and then we'll talk a little about how it affects AI and that space in general?

Caden: Yeah, sure. It's always tough to pick the analogy 'cause it depends on which part we wanna talk through. A lot of times people talk about qubits versus bits, so I'll take a shot at that. Normal bits in normal computers are ones or zeros, whereas qubits can be in multiple states. You can think of it like a light switch: a normal light switch flips on or off—that's your one and zero—whereas a qubit is like a dimmer. You can be at different points in between.

The other way I've explained quantum computers—and at least where it made me super interested in them—is following Moore's Law and transistors getting smaller and smaller to the point they are a few atoms wide. With a quantum computer, instead of using a transistor, our qubit, our smallest unit, is just an atom. So you're using the physics of atoms and the smallest units that we have available to us to do this compute.

So the thing with quantum is we don't have the "silicon transistor" yet. We don't have the one type of qubit; there are tons of different types. I did a lot with optics, so we used lasers, or photonics. There's trapped ion, superconducting... Most people see the superconducting qubits. That's the big chandelier you normally see from IBM—the big gold chandelier. Most of that chandelier is just cooling; there's just a small chip on there that you send your waves through to do the compute.

But I think the impact of quantum computing—again, this is pretty general and high-level—comes down to abstraction layers. With a normal computer, you start at this zero or one, and it's pretty easy to understand zero, one, binary. But if you want to get up to linear algebra, where a lot of computer science and the power of algorithms happens, it takes a few levels of work to get from zeros and ones to matrices. Whereas the mathematical representation of a qubit is already a matrix, because you're dealing with the different states of this atom. So you really can't go lower than a matrix: a qubit is represented by a two-by-two matrix. That's the simplest way you can put it.
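For readers who want the notation behind that claim (standard quantum-information textbook material, not from the talk): a qubit's state is a two-component vector, and the two-by-two matrix Caden mentions is its density matrix.

```latex
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1,
\qquad
\rho = |\psi\rangle\langle\psi| =
\begin{pmatrix}
  |\alpha|^2    & \alpha\beta^* \\
  \alpha^*\beta & |\beta|^2
\end{pmatrix}
```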

So there is a barrier to entry to understanding that, because you have to understand some linear algebra to model it. That's something my nonprofit, the Quantum Coalition, where I serve and work, is addressing: we're working on bridging that entry point. And I think that at some point in the future, an intro quantum course in high school should cover the ways quantum impacts you. Most people now can use their computers without having to know how the transistors work, so oftentimes you don't have to go down to the bits.

But really, think about AI: AI just uses a bunch of linear algebra. It's just a bunch of matrices. So instead of spending all this effort to get up to the matrix level, with qubits we're at that level already. Now, qubits have their own issues; quantum has its own issues. Two phenomena in quantum are superposition and entanglement. I'm sure you've read about those as you get into it. And the issue with entanglement is that these particles can entangle with things they're not supposed to, so it's a big noise issue. So people oftentimes say quantum has an engineering issue, or quantum is an engineering issue: it's trying to figure out these different systems to control the noise, which has happened with almost any technology we've scaled. I know that was a little longer, but that was the full picture across a few analogies.
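The "AI is just matrices" point is easy to make concrete: one dense layer of a neural network is a matrix multiply plus a nonlinearity (a generic illustration, not tied to any particular model):

```python
import numpy as np

# One dense neural-network layer: the core of "AI is just linear algebra".
rng = np.random.default_rng(0)
x = rng.random(4)               # input vector (4 features)
W = rng.random((8, 4))          # learned weight matrix (8 outputs x 4 inputs)
b = rng.random(8)               # bias vector
h = np.maximum(W @ x + b, 0.0)  # matrix multiply, add bias, ReLU
print(h)
```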

Noah: Thank you for that. That's excellent. And I look forward to breaking that down a little more, too. Give me just one thing that interests you, 'cause we're talking about a lot of exciting things and there's so much going on. It's hard to narrow in on one specific industry, but give me a use case—business-wise or lifestyle-wise—where quantum computing will make an impact in the future.

Caden: I'll answer that first with an analogy and then I'll directly answer the question. I think in terms of quantum impact, one thing that I think should be just more clear is: think of a computer, a regular computer or traditional computer, as a light bulb, and a quantum computer as a laser.

So we all use light bulbs every day. In the rooms you and I are both sitting in, we have light bulbs and they're super useful. And lasers? We don't really need a laser to light up the room. It's too much power. It's not effective. But lasers are extremely powerful for scientific research and different use cases like scanning, MRI, solid-state computing. So lasers are super, super useful for technology and kind of the world as a whole.

Whereas quantum computers—don't expect us all to have quantum laptops anytime soon, or a consumer-grade quantum computer. We didn't start that way with traditional computers either: early machines tackled big cryptography problems, and ENIAC filled a room. Similarly, right now Shor's algorithm targets a cryptography problem, and quantum computers are quite large because of the engineering needed to set up the qubits. So there are a lot of analogies to be drawn there.

But yeah, in terms of what quantum could impact, there are a lot of areas, and a lot of it starts with these bigger problems. People talk about climate change, financial modeling... banks and insurance companies are pretty involved with it. But because I think the biggest issue with AI in the future could be energy, and quantum could help with energy, that draws me to the energy area.

So quantum computers aren't scaled up to enough qubits to run massive algorithms yet. But they're much more energy-efficient, drastically so. That could be helpful just in terms of doing compute, if we were able to get AI that could use QPUs, or, like you mentioned, start separating different kinds of compute out for different tasks. Or just running simulations that help us solve traditional energy problems anyway will be interesting. So I think of the energy sector as a place that needs some innovation. We also just saw progress on fusion, which has always been "20 years away" for a long time. So one of those things should hopefully help us.

Noah: That's fascinating. Cool. I don't have too many other questions. Is there anything I didn't ask you about that you wanted to dive into?

Caden: Yeah. I guess the last thing is I saw you had a question on there about advice to a high school senior and a college senior. I think that's a good question, and it's relevant for me 'cause I'm fresh outta college, have hired some students, and also consult with undergrads a lot. My advice to anyone is: become the best AI-enabled version of yourself.

A lot of people are saying, "AI will take your job." At least in the near term, AI's not gonna take your job, but someone using AI definitely could. So definitely do what you can to learn about AI. And Kevin Roose from The New York Times, who was at Case a while back, had a really good saying that I like to steal: know the difference between "forklifting" and "weightlifting" as you're using AI.

So forklifting would be essentially just automating your work completely, and weightlifting would be using AI to get better, training yourself and training your muscles. So don't let AI do your thinking for you. If you fall into that trap and lose critical thinking skills, you are not gonna do well.

I saw a quote on Twitter as well: "Either you're gonna be delegating to Claude Code, or someone else's Claude Code will be delegating to you." Be the first one. So definitely nurture creative thought. Taste is really important. Right now, we're in this information age where you can get AI to build things for you so quickly, and you can learn things quicker than ever before. And it's democratized to more people than ever before. You have to be willing to ask the right questions and know how to filter information. So: creativity and critical thinking.

And that would be my advice to anyone. Then, for a high school senior... I still think getting a degree in the hard sciences is worth it. As a physics major, I think it's still worth solving physics problems; just doing hard systems-level thinking can really help you supervise this space. I had a physics degree and never did anything with software, and now I'm an architect of enterprise software. I got there quickly because I've solved hard problems before. But you don't have to do it through school. I wouldn't tell anyone they have to go to college or do one specific thing. I think the goal as you're growing should always be to maximize learning and solve the hardest problems you can, and that will lead to growth.

And then for someone graduating college right now... the job market—a lot of people I know are having trouble finding jobs, and one thing is getting yourself over the AI hump, or gap. In CS specifically, there are a lot of developers out there, and companies are in hiring freezes because they're using AI to automate things. But if you actually understand how to use AI and you go to a company and tell 'em why they should use AI, then all of a sudden you've entered this bubble where you're extremely valuable. So there are almost two modes right now: either you're almost unemployable because you're not using AI, or you're highly sought after, with huge demand and pay, because you are using AI. And the gap is actually not that hard to cross. But at least knowing it's there and attempting to cross it will be very fruitful for anyone currently looking for work.

Noah: That's great advice. Where would you say the starting point is for getting into AI and getting over that hump? Where should those people start?

Caden: The best way to start is just to start wherever you're at. Figure it out. You can ask ChatGPT questions; you can ask Anthropic's Claude questions. I think a lot of people are in analysis paralysis: "there are so many models, there are so many things." Just pick something and start.

Think about your daily life. Is there a problem you think you can solve? Is there something you're interested in? ChatGPT or any of these models... I like Anthropic, but that doesn't mean you have to use it. Between all the frontier models now, they're all good enough that if you just pick one and figure it out, you're gonna make a lot of progress.

And then work on... our general advice is don't put super-personal information in any of these models as you start. Ideally, keep your Social Security number and banking information and stuff out of the models while you figure out what you're doing. And then look for projects. I would say it's the age to build. There are even tools like Lovable and Replit out there that can help people just totally build apps. They're not fully complete yet, but if you're able to pick up even a little bit of Claude Code or get into a CLI, that'd be really helpful.

But I've talked to a lot of people who don't even want to go that far. Just use the docs, use the different LLMs, and ask them questions. And as you're doing stuff, question it: "Why did you do this? Why are you thinking that way?" If it doesn't make sense, don't be afraid to start over. In general, but also in your conversations: if your context window—the conversation you're having—gets really confusing or bogged down, the model's gonna get more and more wrong. "Hallucinate" is the term for it. If you get to that point, try to start over with a fresh conversation. The classic engineer and entrepreneur in me is gonna tell you: innovate and solve problems, whatever they are, and that's gonna make you better.

Noah: I agree. I think people trying to get into AI should make sure they understand what's going on behind the scenes. I think that's really important, so they're not just, like the example you used earlier, vibe coding stuff and leaving API keys out in the open and leaving everything open for hackers. I think that same example applies across the AI space: actually understand what you're doing so you can replicate it and poke holes in it if there is an issue or hallucination. All right. Excellent. Caden, I appreciate you coming on, and great conversation.