Webinar #3: AI Technology and the Future

AI Technology and the Future with guest Todd Carrico

AI expert Todd Carrico discusses his views on where AI is today relative to the past, when he was building a multi-agent AI platform to support Defense logistics.

Webinar Transcript

Marv: [00:00:02] So good afternoon, this is Marv Langston with guest speaker Todd Carrico. This is the third in the series of webinars that Jim Pietrocini and I have been doing to talk about future challenges for the planet, technology and its relationship to national security, and other interesting topics as they arise. Todd has been involved in artificial intelligence since his Ph.D. program while he was in the Air Force. And then when he became a DARPA PM, he was putting together a distributed agent-based AI platform for logistics. And Todd has been involved in that platform through his company, Cougaar Software, ever since then. So Todd was willing to come aboard today and give us his thoughts on the state of AI today, now that it's in a new hype cycle. And we can discuss that further as we go. We shoot for these to be about 30-minute podcasts or webinars, and if we go longer, we'll break it up in the moment. Todd, hello. Thanks for joining us today.

Todd: [00:01:20] Glad to be here. Thank you very much. Really appreciate it. I appreciate the opportunity to talk about one of the things that I love the most.

Marv: [00:01:29] So you and I have had a couple of discussions about how AI went through its cold winter after the late 90s, when most of the time back in those days we were working on AI image recognition or AI natural language processing. And today, of course, we see it helping power things like self-driving, or speech recognition in our social media or Google platforms, etc. So what are your thoughts on where AI is today and where it's going to go from here?

Todd: [00:02:09] I think we're at a really exciting point right now. We've got tremendous amounts of data with which to train and to build the support systems. We've got computational power and memory storage sufficient to do some of the more advanced techniques that we haven't been able to do in the past. And we've really got the support of the business community, military community, and research community to figure out how to start putting more of these pieces together. A lot of it really started with variants of deep learning, which are neural networks, which benefited from the massive amounts of data available. But it's really starting to branch out from there as people look at other ways to apply it and ways to combine different techniques. So it's a very exciting time for the field, and I think for the world, because some of what's coming out of that explosion of artificial intelligence applications is really producing some very powerful capabilities that we've never had before. And I think it's opening windows and doors to explore things in ways we haven't been able to in the past.

Marv: [00:03:20] So I know you're aware of this, but the Chinese have stated over the last couple of years that their intention is to create significant AI improvements by 2025 and to be the dominant AI country in the world by 2030. So how do you see our relationship with AI and China? Is that a new Cold War race that we've got going on? Or is it going to be advantageous to both sides of that equation?

Todd: [00:03:53] I think it is very challenging. Clearly, these are very powerful technologies; there are applications for good and applications for ill that AI can be used for very effectively. The Chinese are very effectively using it for surveillance. They're gathering tremendous amounts of data, and there's concern that they're going to continue to grow their capabilities in artificial intelligence and may not use them for the best purposes. And this is where I think the community at large really has some challenges. We want to be open and collaborative and help advance the state of science in a very collegial way. But when members of that community are looking for opportunities to get new technologies and use them in various ways, you're really in the struggle of how much do we keep in the open? How much do we have these open conversations about where technology is going and how best to use it, when there are, you know, sort of bad actors? But this has been true for so many technologies, from genetic engineering to chemical to nuclear. All of these have had tremendous power, but also tremendous risk. And I think artificial intelligence is one of those categories where we've got to try to walk that fine line. We want to develop it as a technology and be collaborative, but I think we also need to put some safeguards in place.

Marv: [00:05:22] So it's interesting to me that we've gotten to a point now where AI has been raised to the level of White House presidential excitement, where they call out the need to continue to bring it to the fore to support our national security and our country's best interests. In the DOD, we have the Joint AI Center, the JAIC, and we have the new command and control effort called Joint All-Domain Command and Control, to be the overall DOD AI-based command and control capability, with each of the services contributing components; in the Navy's case, they call that Project Overmatch. But in your own background, because I know you work a lot with the militaries in Europe with your platform, how has AI evolved since you first started working on your DARPA platform up to now, some 20 years later?

Todd: [00:06:20] Yeah, let me take a step back before I answer that more directly. So artificial intelligence is really this sort of big umbrella over a lot of different advanced algorithmic techniques that are meant to emulate aspects of human reasoning. There are all kinds of different algorithms, some for planning, some for image recognition, some for optimization. That whole field of AI has gone through some fits and starts. When I was at DARPA working on some of the agent systems, which is another vein of the whole field of AI, for a while agents were a very exciting topic. There was lots of work going on in all the universities. And underneath that, we were doing things like planning and optimization, not a lot of image recognition; there were some other programs working on that. But what is now called good old-fashioned AI was very hot and very exciting. We had some computational issues and data issues, but we were able to demonstrate the capabilities of the technology. Shortly thereafter, the scale-up of those technologies really didn't come to pass. And as you mentioned, it sort of went into another dark period in the early 2000s. By the mid-2000s, people started exploring and revisiting some of the older technologies, one of them being flavors of machine learning, neural networks in particular. And because of other things that were happening in the industry, compute power availability, data, broadband everywhere, cloud computing, there were some great opportunities for that particular part of AI

Todd: [00:08:05] to flourish. And they found some good applications, image recognition, for example. And it was very successful. And that sort of reignited the whole area of artificial intelligence, even though, you know, that is really one small vein of artificial intelligence. What's kind of exciting is the other stuff never really went away. It never stopped completely; it just didn't get as much attention, didn't get as much funding. Now, what I think the community as a whole is really wrestling with is how do we start bringing some of these other things together and start revisiting earlier discussions about architecture and control. You know, one of the big challenges with neural networks in all forms is explainability. DARPA started a program on explainable AI, because we ultimately need to be able to build AI systems that can incorporate many different forms of reasoning and that we can trust. And the only way we're going to trust them is if we can sort of understand the whys and hows of them coming up with an answer. So it's really interesting if you look all the way back: the planning initiative that was heavy in planning, and then that sort of died off; then the multi-agent systems came in, and then that kind of died off.

Todd: [00:09:20] Now neural networks are coming in and there's all kinds of attention around that. But they're starting to hit a little bit of a wall. Right? They've found a bunch of very successful application areas, and that's fantastic. They've been able to solve some wonderfully hard problems. But that class of reasoning can't solve everything, right? It can't perform inferential reasoning, it can't perform deduction, it can't generate hypotheses. So there are classes of problems for which that particular type of technology doesn't work particularly well. So as we start hitting the walls in applying it, people are starting to revisit some of the other techniques and saying, well, maybe it's time to start bringing some of those back, or bringing them together with some neural network techniques. And this kind of hybrid approach is really very exciting, because there's a lot of great work going on. And underpinning it, the larger business and government community has really been working hard to make data available and to, as you mentioned, think from a society perspective, from a military perspective, from a government perspective: how do we help the whole field of AI flourish, and how do we find effective applications that can potentially make a dent in some of the big problems the world's facing today?

Marv: [00:10:41] You mentioned the explainability area. I think to the layperson today, most people don't even realize that AI is being applied every time they speak to their Amazon speaker or their Google speaker and it plays music for them, or they talk to it on their telephone. Most people don't even know that nowadays you can get a universal translator through Google to converse across foreign languages. That's all AI that started from the natural language processing we were doing at DARPA back in those days. And that's been wildly successful. In fact, an Amazon department leader told me that you can get context ninety-nine percent of the time listening to language, in almost all languages, which is a phenomenal achievement. The other thing most people don't realize AI has been applied to is search: every time we search for anything on the Web, it registers what we're searching for, and then that keeps showing up in the advertisements we see all the time. Annoying advertisements, but sometimes it turns out to be something we might be interested in, so I'm not all that against it. But for the most part, these are not thinking AIs, as you're pointing out; they're one-trick ponies. They're trained to do something, and they can do that one thing very, very well, as long as you don't ask them to do something different. So what is it going to take to move us towards general AI, or explainable AI, as you were pointing out?

Todd: [00:12:08] That's a really challenging question, and a lot of people are trying to work through that right now. There are some who believe that you can just scale up various neural networks, make them bigger and more sophisticated, use what's called transfer learning to have them effectively apply what they've learned to new domains, and potentially keep combining different pattern recognition systems into one larger network. I personally don't think that's going to work, but there are some folks exploring that as a possibility. I personally believe the right answer is more along the lines of what Marvin Minsky was exploring: how do you bring many of these different techniques together? And his was fundamentally an agent-based approach; he actually helped start that entire field. The idea was you take any very large, complex problem and you break it down into smaller and smaller pieces. And as you get down to the leaf nodes, you use the best technique for that leaf. It might be a neural net, it might be a genetic algorithm, it might be an A-star routing algorithm, whatever it happens to be. The architecture has to break the problem apart, you use all these different techniques, maybe some math techniques, some operations research optimization techniques, everything in your kitbag, and then the architecture helps you put all those pieces back together again.

Todd: [00:13:32] And the reality is there's no reason why we have to have a single way of breaking down the problem. You might break it down ten different ways and produce multiple answers, and then compare the answers for cost and time and resources. So there are lots of really interesting ways of doing that. But ultimately, we need to be able to embrace a wide variety of techniques, and we need to have both the architectural technologies as well as the systems engineering methodologies that will let us design, build, and test those systems. And part of the challenge is those systems effectively start creating, or at least could start creating, emergent properties: behaving in ways that are not clear from the individual pieces. So from a test point of view, which ties back into the whole how-do-we-trust and how-do-we-explain question, we've got to have mechanisms integrated into the architecture that will help us understand how the system created a solution. One of the pieces is that we need to be able to ask questions of that answer, and be able to pull the explanation out of this very complex interaction of components. And the hardest part is we need to have some guardrails on the system, so that we know the answer is within a defined space.
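The decompose-solve-recombine idea Todd describes can be sketched in a few lines. This is purely an illustrative toy, not Cougaar Software's architecture: the problem structure, the `SOLVERS` registry, and the stand-in leaf techniques are all invented here.

```python
# Toy sketch of Minsky-style decomposition: break a problem into leaf
# subproblems, dispatch each leaf to the technique best suited to it,
# then let the architecture recombine the partial answers.

def solve_leaf(kind, data):
    """Dispatch a leaf subproblem to the technique registered for its kind."""
    SOLVERS = {
        "routing": lambda d: min(d),       # stand-in for an A-star router
        "optimization": lambda d: sum(d),  # stand-in for an OR optimizer
    }
    return SOLVERS[kind](data)

def solve(problem):
    """Recursively break a problem apart, solve leaves, recombine answers."""
    if "leaves" not in problem:            # leaf node: apply the best technique
        return solve_leaf(problem["kind"], problem["data"])
    parts = [solve(p) for p in problem["leaves"]]
    return problem["combine"](parts)       # interior node: recombine the pieces

plan = {
    "combine": sum,                        # how this node merges partial answers
    "leaves": [
        {"kind": "routing", "data": [7, 3, 9]},
        {"kind": "optimization", "data": [1, 2, 3]},
    ],
}
print(solve(plan))  # 3 (cheapest route) + 6 (optimized total) = 9
```

As Todd notes, nothing forces a single decomposition: you could build several `plan` trees over the same data and compare the answers for cost, time, and resources.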

Todd: [00:14:52] When you start getting into very complex, dynamic systems, it's almost impossible to do an exhaustive analysis to ensure that you will always get the right answers. So sometimes what you need to do is create guardrails around the solution space and basically say: we are going to ensure that any answer it comes up with can't exceed these bounds. It's much like the fly-by-wire system on an F-16. There are control surface limits that the aircraft enforces, because the aircraft will do what you want it to do, but that would tear it apart; so they put control surface limits on it. We need to be able to do the same thing with our complex systems. We need to accept that they may generate solutions we don't fully understand. They may be able to explain aspects of how they got there, but we won't necessarily be able to compute the stability of the system in advance; instead we need to trust that the system's architectural guardrails will protect us from instabilities, from essentially tearing the wings off the system.
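The fly-by-wire analogy translates naturally into code: whatever a complex planner asks for, the architecture clamps the answer into a known-safe envelope, regardless of why the planner asked. A minimal sketch, with the limits, field names, and planner output invented for illustration:

```python
# Guardrail sketch: rather than proving the planner is always stable,
# enforce hard limits on whatever it commands, like control surface limits.

def guardrail(command, limits):
    """Clamp each commanded value into its allowed (lo, hi) envelope."""
    return {k: max(limits[k][0], min(limits[k][1], v))
            for k, v in command.items()}

# A black-box planner asks for an aggressive maneuver...
raw_command = {"bank_deg": 95.0, "g_load": 12.5}

# ...but the architecture enforces the envelope no matter what.
LIMITS = {"bank_deg": (-80.0, 80.0), "g_load": (-3.0, 9.0)}
print(guardrail(raw_command, LIMITS))  # {'bank_deg': 80.0, 'g_load': 9.0}
```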

Marv: [00:15:59] I love that analogy, because it's easy to understand that you don't want to allow your self-driving AI to tear the wings off your airplane or crash your car. But when you're talking about something like command and control, where you're dealing with deploying forces and deploying weapons, how you get the AI to do the right thing within the right limits for that much more complex problem is very, very interesting and challenging. Let me shift gears for a second. I'm fascinated by some of the work I've learned about with Tesla doing their full self-driving, because they've made a big company decision, or I guess you could say Elon Musk has made a decision, to base that full self-driving only on cameras. So basically by video or imaging, just like our eyes are the way we move around, along with the rest of our sensors. Other car manufacturers are doing similar kinds of applications, but they're applying lidars and radars and cameras in combination to make their full self-driving. But Musk seems to be thinking that if he does it with cameras only, he can basically apply the resulting technology to robotics in general, so that you can have basic robotic vision. What are your thoughts about that?

Todd: [00:17:18] I think that's a really interesting area they're exploring, and I'm kind of torn, because on the one hand, we as humans only have two ocular sensors. They're separated so we can get stereo vision, and we're able to perceive and navigate the world very effectively. So there is an argument for not needing lidar or radar and all these other things in order to navigate the world. But from the cognitive psychology side of me, there's a piece missing in just the image recognition. We as humans form these internal models of ourselves and of our environment. We see the kid at the side of the street with the ball, and we create a mental model that says: which direction is he going? Is he holding or throwing the ball? Is there a chance he's going to go out into the street because of whatever he's playing? We do this very, very fast, right, and our mind is constantly updating these models as we're moving around and driving around. Unless there is an underlying model behind that image recognition, I'm not convinced there'll be the richness of understanding, of being able to anticipate the way we anticipate. And maybe they can build that in, or maybe they can't. But without that piece, I'm not sure imagery alone is going to be able to do what humans do when we drive, which is constantly evaluating and anticipating who's going to cut us off, who's going to swerve on their bike in front of my car. Right? Which is why we are very effective. You're going to need, I think, at least the equivalent of that in a self-driving car, or we're going to see a lot of kids playing ball at the side of the street getting hit unnecessarily.

Marv: [00:19:09] It's a great point, and I like the way you laid that out. And then, of course, if you relate it back to us, what we do is we run all of our sensors all of the time. So we have a model of the ball and the kid and the noise and the sound and the smell and the vision, even though we're dominantly vision animals. So we aren't doing anything with just vision the way Musk seems to be trying to do it for full self-driving. They must, to your point, have at least a model of the objects they think they're seeing, so that they can decide whether or not the dynamics of a car or a kid or a ball are going to intersect with the dynamics of the vehicle at that particular time. So it'll be very interesting to see how this unfolds among the auto manufacturers. And, of course, the other challenge for them is they want to make these systems as low cost as possible. So every time you have to add a lidar or a radar or anything else, you're adding cost to the vehicle, and they worry about that. So let's shift the discussion again: tell me a little bit about your current platform, how you're using that platform to support military AI, and the kinds of military AI you're thinking about.

Todd: [00:20:22] Absolutely. So, you know, I'm a big fan of Marvin Minsky; the work that we did at DARPA was really built around some of his concepts of multi-agent systems. So we've been carrying that forward. Basically, we're building multi-agent systems that are designed for planning and execution, are able to do data curation, are able to do situational reasoning. Some are controlling autonomous systems, the higher brain functions of autonomous systems, where we're building the situational models and trying to understand the behavior of the system over time. We've created a core capability around a hybrid knowledge graph that lets us maintain all of the rich relationships of the data, whether it's for planning, where the knowledge graph is basically the plan, the environment, the resources, or for being able to cleanse complex data, where it's the relationships to contextual reference information. But the key in everything we do is that each of the agents is specialized to do its piece, and we give that agent the processes and the knowledge and the algorithms and everything it needs to be an expert at doing that one thing. So we've got one agent that is an expert at routing, and he's got our routing algorithm, and that's all he does. He's an expert at that one thing. But what makes our systems very powerful is there are other agents doing other parts of the problem. And when they need a route, they request it of him; given some information, he comes up with multiple routes that are optimized for various parameters and provides them back.

Todd: [00:22:05] So this distributed tasking, being able to communicate and decompose these problems and pull them back together again: we're not anywhere close to general AI yet, but I really think that's the path. And if you look at some of the things industry is doing with microservices and RESTful interfaces and all those kinds of things, it really is starting to move us in a direction where we can now have lots and lots of small agents. They might all be within a company, or outsourced, or whatever. But really what we're doing is assembling little chunks of expertise. And if we have a way of breaking a large problem apart, we can apply this expertise to solve various pieces of it and then put it back together. The things we've been able to do are pretty impressive. And one of the things I love is we've not just been developing the agents; we've been developing the architecture and the methodology so that we can build larger and more complex systems. One of the things we're getting pretty excited about is basically turning the technology on itself. If you've got this kind of planning, reasoning, situational understanding technology, could you use it in the design process? Right. Essentially use this kind of technology to design and build technology.
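The specialist-agent pattern Todd describes, one agent that knows nothing but routing and answers requests from other agents, might look something like the toy below. The classes, graph, and method names are invented for illustration and are not Cougaar's actual API; the routing expert here uses a simple Dijkstra search as a stand-in for a real routing algorithm.

```python
# Two cooperating specialists: a RoutingAgent that only finds routes, and a
# LogisticsAgent that plans deliveries and delegates the routing piece.
import heapq

class RoutingAgent:
    """Expert at exactly one thing: cheapest paths over a weighted graph."""
    def __init__(self, graph):
        self.graph = graph  # {node: [(neighbor, cost), ...]}

    def request_route(self, start, goal):
        frontier, seen = [(0, start, [start])], set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, step in self.graph.get(node, []):
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
        return None

class LogisticsAgent:
    """Plans a delivery; asks the routing specialist when it needs a route."""
    def __init__(self, router):
        self.router = router

    def plan_delivery(self, depot, destination):
        cost, path = self.router.request_route(depot, destination)
        return {"path": path, "cost": cost}

graph = {"depot": [("a", 2), ("b", 5)], "a": [("port", 4)], "b": [("port", 1)]}
planner = LogisticsAgent(RoutingAgent(graph))
print(planner.plan_delivery("depot", "port"))  # a route of cost 6
```

Swapping in a different routing algorithm touches only the specialist; the requesting agents never change, which is what makes the decomposition powerful.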

Todd: [00:23:25] So, you know, down the road, it's called model-driven architecture. It's one of the things that I think will ultimately allow us to get through the limits we face as a community, because we continue to suffer from the complexity problem. We design larger and larger systems, and nobody can really understand them. They start getting too complex, and security problems start coming in, and stability problems, and all of those kinds of things. We need to be able to build very large, distributed, complex systems in such a way that we can have the trust, reliability, maintainability, security, and all of those ilities we've always dreamed of. But when you've got, what is it, the F-35 with twenty-five million lines of code or something like that? When you get systems that big, there's just no way a human can understand the complexity of those systems. Well, we have tools to help us, but ultimately it's still the human designing and building. So the ultimate goal is: OK, if you've got this reasoning and automation technology, have it help you build the system, have it find all of the code flaws and the instability points and edge conditions so that you can build a better system. So, you know, we're building decision support systems right now, but I think we're laying a foundation that will ultimately let us build even larger, more complex systems, potentially achieving some degree of model-driven architecture.

Marv: [00:24:55] So that is fascinating, and I know you're working hard to do that. You mentioned in passing the idea of using this to do data cleansing. Using AI and machine learning processes is very dependent on good data, and getting good data, I know, is one of the hardest things any of our systems have to worry about. So say a little bit about what that means, because it seems to me that breaking it down into smaller tasks and making an expert in those smaller tasks may be a better, quicker use of AI than when we try to apply it to these giant complex problems.

Todd: [00:25:27] Yeah, exactly. So we're pretty excited about this, and this is still in sort of the advanced prototype stage, so it's not a full-up product yet; I'm really not trying to be a salesman. But what we've been able to do is basically go back to sort of Claude Shannon first principles of information theory and build a system using these agents where each agent is looking at different aspects of the data, the relationships or the forms of the data. If you look at a large structured data set, even if it's got lots of errors and lots of problems, there are underlying patterns. There are patterns in the identifications, there are patterns in the value ranges, there are patterns in the spread of errors. There are all these interesting underlying patterns. And today most organizations are spending twenty-five to thirty-five percent of an AI project budget having their data scientists work with the SMEs to write a whole bunch of R scripts or Python scripts to try to cleanse the data. It's a very time-consuming, very expensive process. But what they're really doing is understanding the nature of the underlying data and developing pattern-based techniques to cleanse, or filter, or repair it: cross-correlating columns or sets of columns against other reference data and trying to fill things in. Not much of that is very specific to the data, and none of it is something that can't be automated. So we're looking at taking each echelon of that understanding, characterizing the type, characterizing the format, characterizing the relationships between columns, and building up each piece, adding just a little bit of metadata, a little bit of insight into the form, structure, and relationships, until eventually you understand enough to start making some assertions.

Todd: [00:27:21] And once those assertions start getting generated, now you can skip the data scientist completely. Your SME can come in and say: yes, that's a correct assessment; yes, that is an address; yes, there is a correlation between these two fields, that's a part number and that's the price. So these insights don't need to be encoded in complex schemas and all of those kinds of things. The system is basically figuring out the context and semantics of the data itself from first principles, which, if you can take that all the way through, could give us a completely new way of cleansing structured and unstructured data without requiring any scripting. And the neatest part about it is, once you've gone through this process with a large data set, the agents learn. Right? They've learned the patterns and structure of the data. So now you can basically throw away that large corpus of data; you've got the rules. So now you can attach those rules to a streaming set of data, streaming transactional data, sensor data, or whatever it happens to be, and they can keep applying those rules, essentially repairing it in sync, which is really cool.
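As a rough illustration of the learn-rules-then-stream idea (not the actual product), one could imagine agents that derive simple per-column assertions from a sample of the data, such as value ranges and shared value formats, then apply those assertions to streaming records with no scripting. The column names, the sample, and the character-class template trick are all invented here:

```python
# Toy cleansing sketch: learn per-column patterns from a sample, keep only
# the rules, and use them to flag violations in a stream of new records.

def template(value):
    """Reduce a string to a shape template: letters -> 'A', digits -> '9'."""
    return "".join("A" if c.isalpha() else "9" if c.isdigit() else c
                   for c in value)

def learn_rules(sample):
    """Derive simple per-column assertions from sample records."""
    rules = {}
    for col in sample[0]:
        values = [row[col] for row in sample]
        if all(isinstance(v, (int, float)) for v in values):
            rules[col] = ("range", min(values), max(values))
        else:
            shapes = {template(v) for v in values}
            if len(shapes) == 1:           # every value shares one format
                rules[col] = ("format", shapes.pop())
    return rules

def check(record, rules):
    """Flag fields in a streaming record that violate the learned rules."""
    bad = []
    for col, rule in rules.items():
        v = record[col]
        if rule[0] == "range" and not (rule[1] <= v <= rule[2]):
            bad.append(col)
        elif rule[0] == "format" and template(v) != rule[1]:
            bad.append(col)
    return bad

sample = [{"part": "AX-100", "price": 19.5}, {"part": "BQ-205", "price": 42.0}]
rules = learn_rules(sample)                # part: 'AA-999', price: 19.5..42.0
print(check({"part": "ax100", "price": 30.0}, rules))  # ['part']
```

Once `rules` is learned, the sample can be discarded; only the compact assertions follow the stream, which is the point Todd makes about throwing away the corpus and keeping the rules.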

Marv: [00:28:33] So that's interesting. In the past couple of years, I've interacted with some of the big players like Oracle and Amazon and looked at the functions they already offer to do similar kinds of things. You can send a set of data into one of their services, and their service will decide what's the best kind of tool to use on the data, and then tell you things about what is in that data, so you can move further from there. How does that kind of service differ from what you're working on?

Todd: [00:29:12] So I think we're trying to be left of that; we're trying to prepare the data that would go into that kind of activity. What you're talking about is that the different forms of data, retail transactional data versus language data versus text, all have different properties and characteristics, and because of that, different network structures, different weighting techniques, training techniques, and recognition components are more effective for each. So what their system is basically doing is figuring out, given the type of data, the structure, the schema, all of that, what's the best technique, so that you don't have to work that out by trying some stuff and seeing how effective it is; they've already characterized and classified all of that. So it characterizes your data and says: well, here's the best one for you to use, and let me go ahead and set it up with a standard set of weightings and all of that to kind of get you started. Which is fantastic. That's what we do as engineers as we mature these fields, right? We get better and better tools, we get more automation, and it just allows everything to go faster and be more successful, which is wonderful. But up till now, we've still had challenges on just the data prep side, because it requires lots of data scientists and lots of SMEs, and they have to kind of get together and understand each other. The SMEs have to train the data scientists to understand what the data means, and then the data scientists have to write the scripts. And typically that goes through many cycles until they can get the data clean enough to go to the next stage, which is what you're talking about: starting to actually put together the models that will let you do the training and start evaluating the results.

Marv: [00:31:05] So that is very fascinating, and I hope you have great success with it. Given that we're passing our 30-minute mark, let's wrap it up with a couple of thoughts. You and I have talked a little bit about the new book I just finished reading by Jeff Hawkins, A Thousand Brains. It's very interesting to learn, as he talks about it, how much more he thinks we know about how the brain works now than we did ten years ago, when he wrote his first book about the brain. He's now saying that he thinks we can build machines that have consciousness, and he describes what consciousness is. And then he talks about why we don't have to be afraid of these machines taking over. So we'll have to have a follow-on discussion about his ideas and some of the related thoughts about that.

Todd: [00:31:54] I’d love to.

Marv: [00:31:55] I’d love to. So any last thoughts you want to impart to the audience?

Todd: [00:32:00] I think AI, like so many other great technologies, is extremely powerful, but it's not a silver bullet; it doesn't solve all problems. When we talk about artificial intelligence, one of the things I would stress is you've got to remember there's a whole bunch of different flavors of it. If you're really talking machine learning, then there's a broad class of problems that machine learning is fantastic at, but you can't assume that every problem can be solved purely with it. So you've got to peel back that onion just a little bit and understand the nature of the problem and the specific technique. You know, if you're looking at having somebody help you design and build a system, is that technique the right technique, or in the family of right techniques, for the problem you're trying to solve? I've seen lots of times people try to use a hammer on a screw, and it doesn't work particularly well. And AI is an entire toolkit of these things. So make sure you're using the right tool for the right thing, and explore it. It's very, very powerful. I think it's going to transform our entire world. And I'm really hoping that we can get far enough this time that, A, we don't fall back into winter, and B, we actually have a shot at solving some of the big problems we have, you know, pollution, energy, clean water. There are lots of challenges that we as a society have, and maybe AI can help with some of those. I'm excited to be a part of that, and I hope this time we don't start seeing snowflakes.

Marv: [00:33:42] Well, I really appreciate it, and thank you for your time today. I think you're doing phenomenal work with the technologies that you work with every day, and I hope that you have major breakthroughs over the next days, weeks, and months. We'll get you back on to talk about it a little later when we can.

Todd: [00:33:58] Thank you very much, Marv. I really appreciate the opportunity, and have a wonderful day.

Marv: [00:34:02] Right. Thank you.