Nota Bene Episode 141

Artificial Intelligence Technologies: Past, Present, and Forward with Siraj Husain

Thank you for downloading this transcript. 

Listen to the original podcast released September 1, 2021 here: https://www.sheppardmullin.com/notabene-328

Artificial intelligence is growing rapidly and exponentially. As the technology advances, there are many new legal, ethical, and social concerns. Many wonder how patent offices will handle AI-generated inventions, or even how lethal AI weapons will be regulated. As countries try to catch up legally to the latest AI technology, the European Union is taking the lead with a new risk-based approach to regulating AI. Joining me is Siraj Husain, who gives great perspective into the world of artificial intelligence with his expert insight.

Guest:

Siraj Husain was a partner in the Intellectual Property Group of Sheppard Mullin's Palo Alto office. He graduated from the University of California, Riverside with an undergraduate degree in computer science. He obtained his law degree from Loyola Law School, where he wrote for the law review. Siraj advises on IP strategies with a focus on artificial intelligence. He has specialized experience in fields such as machine learning, deep learning, cloud computing, cryptography, and many computer hardware and software technologies. Siraj has also served as a board member and pro bono committee co-chair for the South Asian Bar Association of Northern California.

Transcript:

Michael P.A. Cohen:

Welcome to Sheppard Mullin's Nota Bene, a weekly podcast for the C-suite, where we tackle the current national and international legal headlines affecting multinationals doing business without borders. I'm your host, Michael P.A. Cohen. Let's get started.

Michael P.A. Cohen:

Welcome to episode 141 of the Nota Bene podcast. And thank you so much to all of our listeners in more than 100 nations around the planet for your continued participation in our ongoing conversation, and for your feedback. Please keep it coming. It has influenced our programming over the past three years in almost every episode and will continue to do so. This is our first episode back after a bit of an August hiatus where I really did need some head space. And thank you all for your gracious allowance of giving me a couple weeks off. And I am thrilled to be back today and start our season, I guess the remainder of our 2021 season, here with our first episode in September 2021, with a repeat guest, Siraj Husain, to talk to us today about artificial intelligence technologies in 2021, both past, present and forward as he did in January when he came on the show in episode 108 to talk to us about 2020's developments in AI.

Michael P.A. Cohen:

So Siraj is a repeat guest on the show for those of you who may not hearken back all the way to our January episode, which I do believe may have even been re-aired during my recent hiatus in August. So we're getting a lot of Siraj, but as far as I'm concerned, you can't get too much Siraj. So I'm thrilled all around. Siraj obtained his undergraduate degree in computer science from the University of California, Riverside. He obtained his Juris Doctor degree from Loyola Law School, which I presume is the Los Angeles Loyola, where he wrote for the Law Review, which is indeed a prestigious honor in law school. It is the premier journal for the law school.

Michael P.A. Cohen:

Siraj practices in the Silicon Valley, where he advises clients on IP strategies with an emphasis on software and artificial intelligence. In the artificial intelligence field, specifically Siraj's experience is really too rich to list. But by way of representative sampling at the very least, he has specialized experience in machine learning, deep learning, autonomous vehicle technologies, augmented and virtual reality technologies, cloud computing technologies, cryptography, which I might throw out is probably one of the most complex fields on the planet that hardly ever gets its due, but is likely the only reason we can do what we're doing on the internet, at least in so many consumer transactions and many others across the world as well.

Michael P.A. Cohen:

And he specializes in many other computer hardware and software technologies that, like I said, are really too broad to list, but we will attach his biography to the show notes. And I do encourage our listeners to click through and see the richness and profundity of Siraj's experience in both the academic and practical sense. And like I said, it is rich indeed. Siraj served as a board member and a pro bono committee co-chair for the South Asian Bar Association of Northern California. I think that is a fairly interesting mark to put on his CV. He speaks fluent Hindi and Urdu in addition to American. And he is back with us. Siraj, welcome back to the podcast, and thanks so much for being my first guest back on the show.

Siraj Husain:

Oh yeah. Thanks for the introduction, Michael. It's great to be here.

Michael P.A. Cohen:

Well, it's so great to have you. I so enjoyed our conversation in January when we talked about artificial intelligence technologies and their rapid business adoption and how you sort of translated that landscape for our listeners. And I remember at the time we committed to getting back together at about this time. And we seem to be doing it. I think we may have flagged sort of mid-year or early summer. And we're kind of into the later side of summer, but I almost feel that serves us better in some ways. So tell me, where should we start our conversation today? Let's talk about artificial intelligence technology 2021, and present and forward. Where should we start, Siraj?

Siraj Husain:

Oh yeah, there's been a lot of development so far in 2021. I would say we should start with the technological developments that have occurred so far in this space. From what I've been seeing, we last talked about a transformer model called GPT-3, which was able to translate language into all sorts of different things, like computer code, for example. And now we're seeing an evolution of those models this year. GPT-3 was a model with about 175 billion parameters, and these are all little parameters that learn from lots and lots and lots of data to provide some sort of predictive output. 175 billion parameters was considered a milestone achievement when GPT-3 was released. But earlier this year, the Beijing Academy of Artificial Intelligence released their own model. It's called WuDao 2.0. And they say it has about 1.75 trillion parameters that are learned and trained and used to predict things in both Chinese and English. So that's about 10 times bigger than GPT-3.
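[As a concrete aside for readers of the transcript: a "parameter" is just a learnable number inside the model, and the totals discussed here are counts of those numbers. The sketch below uses a toy fully connected network, not a transformer and not any real model's architecture, purely to show how the counts add up.]

```python
# Toy illustration of what "parameters" means: every learnable weight
# or bias in a model is one parameter. Models like GPT-3 reach billions
# of parameters by stacking many very large layers of this kind.

def count_parameters(layer_sizes):
    """Count weights + biases in a simple fully connected network."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out  # weight matrix entries
        total += n_out         # bias vector entries
    return total

# A tiny three-layer network: 784 inputs -> 512 hidden -> 10 outputs
print(count_parameters([784, 512, 10]))  # 407050
```

Scaling the layer widths and depth up by orders of magnitude is, in essence, how parameter counts climb from millions to billions.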

Michael P.A. Cohen:

I think this is actually worth talking about because I worry about America. If there's anything, I've lived a life in this nation that has just so accelerated and enjoyed a position of technological superiority in the world. And I listen to the debate in America and it's largely by people who are talking about things that just don't matter anymore. So you're telling me that when we talked in January, the milestone was artificial intelligence that could cover 175 billion parameters, which itself is mind-blowing, just mind-blowing. Think back to when you were in school. If somebody had told you that that would happen in 2020, you might actually get into a fistfight about whether that is even possible. But what you're telling me right now, and this is the first time I've heard it too, is that in 2021 the Beijing Academy did this in China, a nation, by the way, that has more math majors in graduate programs than all students in college in the United States, period. So just think about that.

Michael P.A. Cohen:

And I have been saying for years that that is going to start to produce results. And here we are in 2021, where the Beijing Academy has bested the GPT-3 technology by tenfold, and now has artificial intelligence with 1.75 trillion parameters. This is, I think, worth marking globally because we're no longer in an era, it seems to me, Siraj, and I'd like your reaction to this, we're no longer in an era of US technological superiority. I mean, we are literally now in an era of whether or not we can even keep up. What's your reaction to that?

Siraj Husain:

I think you hit on a very important point there, Michael. The rate at which this technology is being developed globally is astounding. And there are a lot of nations that are interested in perfecting this technology, for obvious reasons. I don't quote Vladimir Putin often, but he's famous for saying that whoever masters this technology will rule the world. And I don't think he's exaggerating there. Once you can build a brain that can basically do anything you tell it to do in any space, whether that's a civilian space or a military space, it's a powerful thing to have.

Michael P.A. Cohen:

Yeah. Really incredible, and it's a super important point. I'm glad we paused on it. Thanks for starting us out of the gate in an extraordinary way, Siraj. What's next? And I'm almost afraid to ask, frankly. I should know.

Siraj Husain:

Well, these language models are being applied in all sorts of interesting ways, not just in generating different types of text, but in building new products and companies. Members of the public can request access to GPT-3 and build their own products and services on top of it. And people have used it to take some text a user provides and generate advertisements, basically taking the human out of the loop as much as possible. So we're seeing all sorts of interesting developments there. One really fascinating development, also from OpenAI, is called Codex. And this model is able to take natural language and turn it into executable computer code. So you basically tell the computer what kind of program you want to build, and it produces source code that it's learned from lots and lots of different types of code that it's trained on. That's making development a lot easier for people. And once it's out and goes into the mainstream, anyone can really build programs just by speaking.
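[To make the natural-language-to-code idea concrete, here is a deliberately toy sketch. It does not call Codex or any real API; the canned `templates` mapping and the `generate_code` function are hypothetical stand-ins for what a trained model learns to do at enormous scale.]

```python
# Toy stand-in for the natural-language-to-code idea described above.
# This is NOT the Codex API: a real system learns the mapping from huge
# corpora of code, while this sketch just pattern-matches a request
# against canned phrases and returns runnable source text.

def generate_code(request: str) -> str:
    """Return Python source for a handful of canned requests."""
    templates = {
        "sum a list": "def solution(xs):\n    return sum(xs)",
        "reverse a string": "def solution(s):\n    return s[::-1]",
    }
    for phrase, source in templates.items():
        if phrase in request.lower():
            return source
    raise ValueError("request not understood")

# "Tell the computer what you want," then execute what comes back:
namespace = {}
exec(generate_code("Please sum a list of numbers"), namespace)
print(namespace["solution"]([1, 2, 3]))  # 6
```

The leap in the real systems is that the mapping is open-ended rather than canned: the model generalizes to requests it has never seen.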

Michael P.A. Cohen:

That's worth pausing on for a moment, because I remember back to my first computer science class of sorts. To put this in some perspective, I'm 56 years old. So this was likely when I was in high school. I think it was 1982, and we were programming in DOS, okay. So not BASIC, not BASIC anything. I mean, it was just raw, green-screen programming, so to speak. And I remember how long it took me to create a single loop command. And now you're telling me that a seven-year-old will be able to sit down at a computer, tell it what it wants to create, and the computer will create it, and create it without all of the human error that I put in when it took me three days to create whatever it was.

Michael P.A. Cohen:

This is literally magic, or as close to it as folks can come. That's fairly extraordinary. And when I think about something like that, what immediately comes to mind is the human displacement. As we're seeing more and more rapid business adoption, we are seeing human displacement faster than we can reckon with it. But this is human displacement of pretty sophisticated jobs.

Michael P.A. Cohen:

Now I mean, I've always wondered how long the law job will actually be in business. And in many parts of this field, I really do wonder about that. But I wonder how many general counsel just search Google for the answer to their question right now, rather than picking up the phone and calling an attorney who is one of their outside counsel. But this hits computer programmers. I mean, this hits just all kinds of things. How many voices do you need talking to a Codex to tell it to create? Am I way off the mark here? Am I way ahead of myself? But just how seismic this is, I think is what I'm trying to ask, Siraj. You have this way of delivering these things to me in this most modest of tone. And this seems pretty huge, again, as I'm hearing about it for the first time. So kind of give me some parameters of how big this is.

Siraj Husain:

It is a significant achievement. Think about the implications for businesses dealing with development costs. It's time and cost and how to get a product out the door. And if they can find ways to speed that process up and reduce the cost, which I think this technology would allow them to do, it's quite an achievement.

Michael P.A. Cohen:

That puts it in all of its practical terms. That's why we are seeing the adoptions. It leads to all kinds of other interesting discussions about whether or not universal basic income is actually an important goal. And we need to stop the (beep) around that conversation and start talking about how much and a lot of other things. But I think it is absolutely another seismic and quite fascinating and enabling and empowering development in a whole lot of ways. What's next on your list?

Siraj Husain:

I will add though, to your point though, that from what I've been seeing in the development of different industries, I don't think anybody's really safe. I think there's a very huge economic plus associated with artificial intelligence and deploying it in virtually every industry possible. And yeah, I don't think computer science or computer programmers are safe either from having some of their responsibilities outsourced.

Michael P.A. Cohen:

Well, you think about just basic, the kind of the evolution of legal tasks. Let's take our own profession for a second. You got a bunch of people too clever by a half over here coming up with an argument and you got people too clever by a half over here coming up with an argument and the judge goes back and he's got a clerk who tries to find the neutral grounds of the law in between. And he tries to make the right decision. Pretty soon, how long is it going to be before the judge goes back and has a technology where he can talk into a computer and get the answer, at least in the legal landscape, just really rattled out to the judge in a pretty quick amount of time and take much of the rhetoric and argument off the table. Because at the end of the day, Solomon's job isn't really based on what the argument is. It's based on making the right decision for the facts in front of the court.

Michael P.A. Cohen:

And that is something that judges have to work through now, and almost work through in an area where lawyers are in the way, because they are presenting their view of the facts in a way that is most favorable to their arguments. And what courts try to do a lot of the time is cut through that crap and find out what's going on, and then make a decision. It does seem to me that a lot of things in the world that we think of as the human province, the art of Socratic examination and so many other things, are really subject to improvement or displacement. And whether or not that's an improvement is more of a characterization. I'm sure people could be on all sides of that debate.

Michael P.A. Cohen:

And you can talk about, well, how fast will it be adopted? All these other kinds of things. But you made this point moments ago. This is about business. There's a bottom line here. There's an economic driver. And in American society, business is king. We take a hands-off approach, generally speaking, and we let the markets sort themselves out. We'll definitely have rapid adoption, but really? We're going to let a market sort that out in these kinds of contexts? It's going to be a wild time for sure. We'll see what happens. But it is probably worth a whole episode where we could talk about that. And many people are. You can go listen to podcasts with folks from MIT, from Caltech, from all the leading AI research centers, talking and forewarning American society about the human displacement that is to come, as well as other things we may discuss today. But just like so many things, nobody's paying any attention to it. Certainly not the government. And so we'll just have to continue our rollercoaster ride in this part of the world and see what happens.

Siraj Husain:

Yeah. We are at an inflection point. I think AI could be used to generate massive amounts of wealth that could take care of every human being on the planet. Everyone can have a house to live in. It is possible, but it remains to be seen which direction we go.

Michael P.A. Cohen:

Right. How will we use it? What will we do? From what I've seen about the debate on topics that don't matter in this country over the past five years, it's impossible for me to understand even today's subject matter that I talked about earlier where I have to take an hour out of my day. I mean, it's impossible for me to have any kind of present clarity that we're capable of in any way of dealing with what's coming, but we'll see. What's next on your technology update?

Siraj Husain:

Well, there's an issue in intellectual property law that seemed like it was pretty much dealt with, but there's been some recent developments that reignited the issue. In intellectual property, there are typically two ways to protect AI technology. You use trade secrets or patents. Trade secrets just require a secret that has economic value. And you take efforts to maintain its secrecy. And patents, you can patent all kinds of things like the model's architecture or the training and validation processes for the model or the outputs of the model.

Siraj Husain:

Well, now we have an inventor by the name of Stephen Thaler, who created an AI system called DABUS. DABUS stands for Device for the Autonomous Bootstrapping of Unified Sentience. And he filed patent applications for a couple of inventions that were generated, not created, generated by the AI system DABUS that he created. So the issue here is that the patent applications name an inventor that is not a human, and whether the law allows for that kind of accommodation.

Siraj Husain:

And the group behind these patent applications filed them all over the world and were rejected by the major patent jurisdictions, like the United States, the UK, even Australia, for failing to have a human inventor. Well, they recently got a patent in South Africa, which people thought was an achievement, but it turns out that there's really no substantive examination in South Africa, only a formalities examination. So this patent is ripe for a challenge. And then, shortly after that, a federal decision came out of Australia where a judge set aside the decision rejecting the application for failing to have a human inventor. And this justice recognized a scenario where an AI system can be an inventor, but not necessarily the owner, leaving open the possibility that an organization that created this AI could have ownership rights to those patents. And it would still be okay within the Australian patent regime.

Siraj Husain:

That's a pretty significant decision, because from what we were seeing, most of the patent offices were in alignment, saying, look, we're not going to go down this route. It sets a pretty dangerous precedent where you can have a single entity that creates an AI and then just patents inventions all day long, created by machine. And that's a big concern.

Michael P.A. Cohen:

Yeah. So let's go back to that word choice, that you're carefully distinguishing between creation and generation. When we talked about DAVIS creating... well, I don't want to get caught up in it. When DAVIS made an invention, let me say that. You used the word generated and said not created. How do you distinguish that? The invention didn't exist before DAVIS. It did exist after DAVIS. DAVIS was responsible for it. What's the difference between creation and generation by the DAVIS AI when it comes to that invention, Siraj? I'm trying to find the way to formulate this question, but is that making any sense? Am I getting somewhere close to understanding the question?

Siraj Husain:

Yeah. Yeah. I mean, it's a good point. It's actually DABUS, D-A-B-U-S, the system.

Michael P.A. Cohen:

Oh, sorry, DABUS.

Siraj Husain:

No problem. In our system, we typically associate invention with human effort. It's something where the mind comes up with a solution to a problem, and it's something to be rewarded. That's just how this country works. But what this system really does is take in a bunch of random noise, having been trained on all sorts of data, and put things together in interesting combinations. And then one-

Michael P.A. Cohen:

But stop right there. Let's just stop right there. That's all the human mind does.

Siraj Husain:

That's true. Yeah. Yeah.

Michael P.A. Cohen:

It's a processor. It sees almost nothing. The only difference is that the human mind simply processes a bunch of inputs from senses, neurons, electricity, through plasma rather than through conduction. But it's still electrically generated neurons that are sensors picking up inputs from the environment that goes through the processor and comes out and generates an idea, an invention or whatever it is. I mean, isn't DABUS just doing the same thing?

Siraj Husain:

It's a model that's pretty similar to what the human mind does. It spits out inventions that it thinks would be interesting. And it criticizes them for some sort of value like, does this invention make sense? So I could see humans taking the same thought process. You come up with all kinds of brainstorming ideas and they're not all great. And then we hone in on the ones that are great. So there is some of that. But I think the key question is just that to walk down the line of letting a machine generate these things, and then have someone else claim ownership of them, it sets a dangerous precedent because what reward is there for everyday humans to come up with solutions to problems if there's an autonomous agent that can do it 24 hours a day without taking a break. And it's just a policy issue at that point.

Michael P.A. Cohen:

So let's say that happens. Let's say it's a policy issue. But in an America where a government can't even agree on the need to replace bridges and roads in an infrastructure bill, because everything is, quote unquote, about somebody's view of whether or not their liberty is unlimited or infringed, that policy question is never going to get decided. It's just not. We're not even talking about it. We're talking about it here, but it's not even on the radar. And it's happening. So there's no way policy is going to catch up to that. And from a productivity standpoint, it makes all the sense for the owners of DABUS to have it out there doing just what you described. So what happens when the "human" invents something that DABUS generated eons ago and has already been put to practical use in the marketplace by its owner? Aren't we going to be displacing human innovation in the same way? I mean, it's just incredible, right?

Siraj Husain:

Yeah. I see that as, yeah, exactly. Yeah. It's a displacement of human innovation. It's basically the machines overriding any effort humans could make. This world, what it is, is all built on human effort. And now we're replacing that with machine effort. I'm not sure where we go from there.

Michael P.A. Cohen:

How do you think that may pressure intellectual property law and policy, kind of returning to where you started here? We have this interesting intellectual property question about an artificial intelligence innovating, or generating innovations, and they're unpatentable. But if this kind of technology flourishes, which it's likely to do because of the massive practical business application for this kind of generation of innovations, and these things see practical use in the marketplace but are unpatentable, does that put pressure on rethinking intellectual property law? And can you foresee anything in that respect? I mean, Rob Masters and I did an episode many moons ago, it feels like now, where we talked about the wild swings in the pendulum of American patent policy. Do you have any thoughts on American patent policy keeping up with this world of AI-generated invention?

Siraj Husain:

Yeah. The policies have evolved to keep up with this technology over the years. I mean, it wasn't too long ago that the patent office was not a big fan of machine learning technologies and the patenting of those technologies. And it took years and different guidelines put out by them to allow people to go in that direction. But this is a whole new direction, allowing machines to generate ideas or inventions. From what I've been seeing, I expect stiff resistance from patent offices. And what we're seeing come out of Australia, that decision is still up for appeal. So I feel like it's going to go back in the direction of everyone being on the same page, because a distortion of intellectual property law in one major jurisdiction like Australia will have effects in other jurisdictions as well. And it'll change business strategy, patent strategy. I have a feeling it'll come back. So we'll see where it goes.

Michael P.A. Cohen:

So let me ask you this question. Let's go back to China for a second, which already has an AI language technology that is 10 times more powerful than the technology we talked about at the beginning of 2021. By the way, that is just going to increase exponentially at this point. So if it's 10 times now, what's it going to be in six months? Off the chart. What do you think China is going to do with AI that generates inventions? Are we going to be running around the world trying to patent things that China's AI invented and put to use a year ago, two years ago, six years ago? I mean, there's a practical upshot here.

Michael P.A. Cohen:

This is what I mean by displacement. The Chinese don't give a shit about Australian or American patent policy, and they may disdain it for interesting reasons now, meaning that they can generate the "invention" that the Western world gives protection to well ahead of that invention by the Western human mind. And so I guess that's kind of my question. The pressure I see on patent policy is whether it can remain relevant if it's confined to human creation.

Siraj Husain:

Yeah. Well, I mean, patents are territorial. Things that are invented in China with AI, I think they'll have some effect on the marketplace, but the remaining jurisdictions will still stand by the rejection of those patents, so that they won't be able to enter new markets or enforce the rights associated with them. What you're saying makes sense. I mean, the divergence of policy between different nations is probably inevitable because of the value associated with this technology. I mean, what is the incentive for nations to cooperate here? I think of AI like the nuclear bomb. It's really such a powerful thing, and only select nations have it once they've perfected it. And they really don't want it to go into the wrong hands. Could an AI treaty exist in the future? I'm not sure it's out of the realm of possibility. I think it's possible.

Michael P.A. Cohen:

Imagine a nice treaty with Putin on AI. It's no wonder he's cozying up to the Chinese so quickly. I mean, they seem to be more aligned, at least in certain thinking. And that's certainly true when it comes to computer science, where Russia has remained relevant, frankly, and put a lot of eggs into doing so, as their constant attacks on the United States make perfectly clear. Well, that is fascinating. I think this is one to watch for sure, Siraj. And I look forward to your own thoughts on it as we track it forward. What a super interesting development. What's next on your list of things we need to know?

Siraj Husain:

There have been a number of regulatory developments in this space. For a long time, we didn't see much regulation of AI. And that was because countries wanted companies to develop AI and make it what it is today. But more countries and jurisdictions are starting to realize that this is something that has a lot of power and it can be used for good or bad. They're trying to think of ways to draw some lines around what people can do with artificial intelligence, especially as it relates to the marketplace.

Siraj Husain:

And so in the United States, we saw the introduction of the Algorithmic Justice and Online Platform Transparency Act earlier this year. And this is legislation to prohibit online algorithms from discriminating on the basis of race, gender, and age, among other protected classes. It comes with a number of transparency requirements, such as disclosure of what processes are used and how they are used to analyze personal information, and record keeping of information such as how the AI system was trained and is being used. It's intended to grant the FTC the power to enforce it. So it's going to have some teeth to it.
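[As one illustration of what auditing an algorithm for discrimination can look like in practice, here is a minimal sketch computing a standard fairness metric, the demographic parity gap. The data, the group labels, and the choice of metric are illustrative assumptions, not anything specified in the bill itself.]

```python
# One transparency check a law like this might motivate: compare a
# model's positive-outcome rate across protected groups. The metric
# below (demographic parity difference) is a common fairness measure;
# the toy decisions and groups are invented for illustration.

def selection_rates(decisions, groups):
    """Positive-decision rate per group, e.g. loan approvals."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

def parity_gap(decisions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

decisions = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = approved
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(parity_gap(decisions, groups))   # 0.5 (75% vs 25% approval)
```

A record-keeping regime would pair numbers like this with documentation of the training data and deployment context.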

Michael P.A. Cohen:

Wow. That's a whole new field in essence. That's literally going to be a whole new field of law being created overnight. It's hard even to understand how to keep up with that, but that's actually amazing. What an amazing development, and an interesting one in the sense that we've been talking about whether law and policy can keep up. And the answer to that is probably no, but it doesn't mean it's nonexistent. Even if it lags, there are certain folks who are certainly thinking about this, and that seems super important. Where do you see enforcement occurring in this area, if you had to look into a crystal ball of early cases? Which is a totally unfair thing to ask of mathematicians and computer scientists, because you're going to tell me that there is no crystal ball, and that's just correct. But if I had to ask you, where would you see this headed?

Siraj Husain:

I would actually look at some draft regulations set forward by the European Union. They set forward these regulations; it's called the AI Act. And it's basically a risk-based approach to regulating artificial intelligence. Think of it as a pyramid. At the bottom of the pyramid, you have minimal-risk AI. Then you have limited risk, then high risk. And at the top of the pyramid are unacceptable uses of AI. And from what the European Union sets forward, at least, what it will regulate are areas that involve AI in the employment sector, like making hiring decisions; AI in critical infrastructure, such as the maintenance of critical infrastructure, and that's obviously a big one from what we've seen in terms of cybersecurity incidents this year; and biometric identification as well.
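[The risk pyramid described here can be sketched as a simple lookup. The tier assignments below paraphrase examples from this discussion; they are illustrative, not a complete or authoritative reading of the draft AI Act.]

```python
# Toy sketch of the AI Act's risk pyramid. The example use cases are
# drawn loosely from the discussion above and from common descriptions
# of the draft; they are assumptions for illustration only.

RISK_TIERS = {
    "unacceptable": ["social scoring by governments"],
    "high": ["hiring decisions", "critical infrastructure maintenance",
             "biometric identification"],
    "limited": ["chatbots"],
    "minimal": ["content recommendation", "spam filtering"],
}

def risk_tier(use_case: str) -> str:
    """Return the pyramid tier an example use case falls into."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unclassified"

print(risk_tier("hiring decisions"))  # high
print(risk_tier("spam filtering"))    # minimal
```

The regulatory point is that obligations scale with the tier: minimal-risk uses face little or nothing, while high-risk uses carry conformity and documentation duties.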

Siraj Husain:

So there are going to be certain areas that are given much more scrutiny versus the more benign uses of AI that we see, for example, a recommendation for a piece of content or filtering spam email. I mean, those are less dangerous uses. But in terms of enforcement, I would expect it around the use of AI in those sensitive areas, and of course, anything that involves the use of AI on consumers can't be unfair or deceptive.

Michael P.A. Cohen:

Well, that's huge. Europe tends to move fast, relatively speaking, in the regulatory environment. What I mean by that is that they pass things. I mean, the American Congress doesn't pass things. There is no legislation anymore. There's budget reconciliation and there are bypasses to the bicameral process. But the American Congress is a failure right now. And the European Union is not. I mean, they have the regulatory capacity and ability to pass things. And a lot of that stuff actually comes out of the member nations, particularly Germany, where they're already ahead of the Union. They have a kind of laboratory in Europe, the way our states still serve, I think, in America, because we do see states pass a lot of things. Most of it is really bad stuff, but they do pass things. So it will be interesting to follow what happens in Europe. What kind of timetable do you see for this European Union artificial intelligence act? Is it a law now? Do you know where it is in the process, Siraj?

Siraj Husain:

It's still at the draft stage. Even to get to this draft, the commission spent years consulting experts on how to strike the right balance between incentivizing innovation and regulating its use. It's probably still maybe two years away from actually being implemented, and then I think there's another two-year grace period to ensure compliance. So it's not going to happen soon, but it's in the works, and Europe has a way of ensuring companies comply, even ones outside the European market. This law would apply to AI models that are put onto the European market or whose outputs are used in the European market. So the model doesn't have to be in the European Union for the law to apply. I think American and worldwide developers in this space will eventually need to pay careful attention to these rules. And the proposed penalties are pretty stiff. They're talking between 20 and 30 million euros per violation, or four to six percent of global annual revenue, which seems like a lot.
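The penalty structure Siraj describes can be made concrete with a little arithmetic. The figures below use the upper ends of the ranges he mentions, and the whichever-is-greater rule is an assumption based on how such provisions are typically drafted, not a quotation of the proposal.

```python
def proposed_penalty(global_annual_revenue_eur: float,
                     fixed_fine_eur: float = 30_000_000,
                     revenue_pct: float = 0.06) -> float:
    """Return the larger of a fixed fine and a revenue-based fine,
    using illustrative figures from the draft proposal as discussed."""
    return max(fixed_fine_eur, revenue_pct * global_annual_revenue_eur)
```

For a company with 10 billion euros in annual revenue, the revenue-based fine (600 million euros) dwarfs the fixed fine, which is why the percentage prong gets large companies' attention.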

Michael P.A. Cohen:

Wow. Yeah, no, that'll get people's attention. That's sort of above and beyond the normal cartel fines or competition law fines, which have really led the way in Europe in the past. So it's a whole different level of thing. We'll have to see. It sounds like Europe's out front and may create a model that at least becomes the basis for discussion around the world. And for our multinationals, that's certainly something they need to know. What's next on your list for this show, Siraj? And by no means, do I ask that question with reluctance, but with all enthusiasm and eagerness.

Siraj Husain:

Well, no. I thought I would round out the show by touching on some societal and ethical issues and business use cases. As far as the use of AI goes, the technological development has been beyond impressive. We're seeing tenfold increases in model size, like we just discussed. It seems that as long as companies put more hardware, more computing power, and more data into these models, simply making them bigger, they produce better and better predictions as they scale. So it doesn't even require a huge insight into how to build these models better; we can just make them bigger and they produce better outputs. And that's raising a concern, highlighted by some Stanford researchers, about these models being put into the marketplace, either as APIs, application programming interfaces, that can be used by the public, or as open-source models that can be deployed.

Siraj Husain:

The people that use them for downstream applications are going to inherit all the flaws associated with those foundational models, like biases that may be in the model or any security vulnerabilities associated with them. And that creates a real risk for future applications, because if only a handful of people have the resources to build these huge models and put them out on the marketplace, and everyone else just builds on top of those, there might be issues later on that propagate everywhere. So that's a big concern.

Michael P.A. Cohen:

It's a huge concern. In some of the podcasts I referred to earlier, where there have been technologists in the AI space, I should have included Stanford in the list of universities I rattled off. And by no means do I mean to exclude many other great colleges and universities; well, mostly universities, because we're really talking about research centers at the PhD level. There are many around the world focused on AI, and incredible scholars across the globe on this issue. And they are uniformly raising a flag on ethics, saying, "Hey man, we have got to slow down our approach. We need to get a lot of people involved before things go out." The capability is already kind of beyond the ethical considerations, and the potential risk of harm is huge.

Michael P.A. Cohen:

But are we doing that? If we take what you just described, Siraj, I mean, I think you hit an alarm bell correctly. It's just a fact that if there is a problem with the AI and it is used as a foundational structure in downstream programming, that problem persists all the way through it. But are we doing anything about that or is that just the world we're in?

Siraj Husain:

There are definitely people working on technological solutions, like filtering what goes into the model, or filtering the outputs, so it doesn't produce undesired results. But in terms of legal efforts, beyond the act we just discussed, something at the federal level in the United States would help combat these types of issues, but we don't really have much else at this point.
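The output-filtering approach Siraj mentions can be illustrated with a minimal sketch: wrapping a model so undesired outputs are suppressed before they reach a downstream application. The model interface and the blocklist here are hypothetical placeholders, not any real API; production systems typically use learned classifiers rather than keyword lists.

```python
# Hypothetical blocklist of terms a deployer never wants to emit.
BLOCKED_TERMS = {"slur_example", "credential_leak"}

def filtered_generate(model, prompt: str) -> str:
    """Call a text-generation callable, then withhold any output that
    contains a blocked term. `model` is any function prompt -> text."""
    output = model(prompt)
    if any(term in output.lower() for term in BLOCKED_TERMS):
        return "[output withheld by content filter]"
    return output
```

The sketch also shows why downstream users inherit a foundation model's flaws: the filter only catches what its deployer anticipated, while whatever biases remain in `model` pass straight through.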

Michael P.A. Cohen:

Yeah. Well, I think that leads us back to the law and policy side of a lot of this and the need to have you on the show. The AI developments over the past eight months almost dwarf the AI developments we talked about in January. And eight months from now, what could we possibly be talking about? It's interesting to see the regulatory and policy sides try to keep up with this. But the fact of the matter, it seems to me, and this is just my view, is that the rapid capacity and incentive for business adoption of artificial intelligence is going to, at least in the near term, almost always leave the legal and policy side of the fence playing catch-up. We're really going to be a worldwide natural experiment in new brains, with more capacity than our own processors, unleashed on the world.

Michael P.A. Cohen:

There's only so long you can control that, man. This is the stuff of Westworld. It really is. I'm always mindful when I think about science fiction, because I really think science fiction is actually our own processor's ability to predict the future. I just can't get over Jules Verne and the nuclear submarine and the many other inventions in his science fiction novels of the century in which he lived that materialized not long after. Now I watch a science fiction show on Amazon Prime or something about folks mining the asteroid belts and people living on Mars, and it just doesn't seem like science fiction. It seems so real. Westworld might be the most dangerous science fiction we've ever created. The whole premise behind Westworld is that artificial intelligence realizes humans are inferior, and humans try to contain it. Who decides who wins?

Michael P.A. Cohen:

Some of the things that you're talking about don't seem that farfetched to me when we kind of just talk about those basic themes. So I thank you for being on the podcast and making it fun and interesting, as well as so factually important, for our multinational business audience, because every one of these areas is something they are likely already employing and thinking about as they map the world from a regulatory standpoint, an intellectual property protection standpoint, an innovation standpoint, and so many other things.

Michael P.A. Cohen:

One thing I guess we should touch on right before we go, on the same lines we've been discussing, it's a corollary to the ethics of bias. But what about the weaponization of AI? What about the use of AI as a weapon? I mean, I'm not sure we're not seeing some of that already directed against the United States. And I have no idea what the United States is directing against others. Israel has openly kind of given a wink and a smile for many years now about the weaponization of its AI and capabilities. And what do you see in that area going forward, Siraj?

Siraj Husain:

Oh yeah. That's a great point, because this year there was actually a UN report documenting the first instance of an AI attack drone used in combat, in Libya. This AI was basically trained to hunt down humans without any human involvement. Unlike the drones used today by most nations, there's no human at the other end of the wire making sure the AI is doing what it's supposed to do and not doing what it isn't. That really triggered this big concern that the weaponization of AI is inevitable. Given the huge advantage this provides to militaries around the world, it's a no-brainer for them to want to invest in it.

Siraj Husain:

And we're already seeing it. There was actually news of an autonomous attack helicopter released by a company in China, and it's already being exported around the world. It basically comes with three different payloads: you can attach a machine gun to it, or a bunch of bombs, or something laser-guided. And there's a demand for it. There's always a market for arms, and for arms that do things on their own, the demand is even bigger.

Siraj Husain:

So the concern here is with these AI autonomous weapons, known as lethal autonomous weapons systems. There's really no agreement among the world's nations about how to regulate and control them. A number of nations, probably the majority, want to ban them completely, similar to nuclear weapons. But there are also a number of nations opposed to a universal ban, because they see that AI in a military context could have positive effects, like humanitarian missions or improving the precision with which bombs are dropped, so as to minimize civilian casualties. But the main countries making these arguments in opposition to a ban are, of course, the United States, Russia, and China, which are also among the largest arms manufacturers in the world. So there's a huge market for AI weapons and no consensus.

Michael P.A. Cohen:

Yeah. And it's also worth noting that for all three of those nations, military investment dollars have, over time, driven an enormous amount of technology. So now you're talking about a technology arms race. We've always been in a technology arms race between those three nations, and that has almost no ability to slow down, because no treaty between them could be a treaty of trust. The game theory becomes: whether you call it offense or defense, it's the same end. I can't not have it. That's the bottom line you reach, no matter how many logic models you create. We'll continue to keep an eye on that, too. Thanks for sharing that fact about the UN report. That's super interesting.

Michael P.A. Cohen:

Anything else we should know before we wrap up? I've kept you for my full allotted time; I don't think I ever let you go early. And my sincerest gratitude, as always, for such a wonderful overview of where we are in August and September, I suppose, of 2021, and in what I'll call the near term, the next quarter. Because once again, I have learned from you that, if anything, we need to do this again sooner rather than later. We'll talk in the next week about when to get you back on the show, because eight months, nine months, whatever it is, that's way too long, man. There's way too much going on here. But any final thoughts, Siraj?

Siraj Husain:

I think you're right. The pace at which this is moving is remarkable. There's a concept known as Moore's law, an observation that the number of transistors on an integrated circuit doubles about every two years. But recent studies have shown that the pace of artificial intelligence development is actually outpacing Moore's law; we're seeing model sizes double in a matter of months, not years. So yeah, this is an exciting time. There are obviously a lot of risks associated with this technology, but I'm hopeful that, as a society, we can find ways to address those risks while still benefiting from it. I think people who are interested in building a business based on AI, or injecting AI into their businesses, have a lot of opportunities here with GPT-3 and with the pre-trained open-source models that are available. And especially with this Codex tool, which lets you write code just by describing what you want in plain language. The opportunities are just continuing to grow, so it's an exciting time.
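The doubling-rate comparison can be made concrete with a little arithmetic. Moore's law implies a roughly 24-month doubling period; the 3-month AI doubling period below is an illustrative assumption consistent with "months, not years," not a figure from the conversation.

```python
def growth_factor(months_elapsed: float, doubling_period_months: float) -> float:
    """How many times larger something becomes after `months_elapsed`,
    given exponential growth with the stated doubling period."""
    return 2 ** (months_elapsed / doubling_period_months)
```

Over two years, Moore's law yields a 2x increase, while a 3-month doubling period yields 256x, which is why "outpacing Moore's law" compounds so dramatically.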

Michael P.A. Cohen:

It is exciting. I look at all of this in this kind of Orwellian way, I suppose, because I think it is probably more literary than I am analytical. But I love your message here, which is, look, folks, there's practical business going on here that is super exciting and moving really, really quickly. And you're right in the center of it there, Silicon Valley. Where are we now? Are we in Palo Alto or Menlo Park? Have we moved yet, Siraj? I know that there's a big move planned.

Siraj Husain:

There's a big move. Yeah. We'll be in Menlo Park next year.

Michael P.A. Cohen:

Next year. And of course, if we're not still in our homes.

Michael P.A. Cohen:

But you are. You're right in the middle of it, right in the middle of everything from large to small going on in that noise. So to our audience, if I might add anything, Siraj Husain is what I will call not only tuned in, but plugged into this world of artificial intelligence. Anybody out there with questions or thoughts about what is happening ought to click through on his bio and give him a call. And Siraj, as the host of this show, I can't thank you enough for your second appearance and can't wait for your next one. Thank you so much for being here.

Siraj Husain:

It's been a pleasure, Michael. Thanks for having me. I look forward to the next time.

Michael P.A. Cohen:

Great. Well, that's it for this week, folks. Next week, we'll turn our attention, interestingly, to the world of cybercrime and cybersecurity with noted scholar Lisa Thomas, who has published a new book we plan to discuss. Siraj's conversation today may help me ask her whether that book will remain relevant. We'll see. But she is keeping up, at least, with law and policy in this area and will give us an overview of what to look for and where things stand in that world from a law and policy standpoint, presently. Until then, as always, be well.

Contact Information:

Siraj’s bio: https://www.sheppardmullin.com/shusain

* * *

Thank you for listening! Don’t forget to FOLLOW the show to receive every new episode delivered straight to your podcast player every week.

If you enjoyed this episode, please help us get the word out about this podcast. Rate and Review this show in Apple Podcasts, Amazon Music, Stitcher Radio, Google Podcasts, or Spotify. It helps other listeners find this show.

Be sure to connect with us and reach out with any questions/concerns:

LinkedIn

Facebook

Twitter 

Sheppard Mullin website

This podcast is for informational and educational purposes only. It is not to be construed as legal advice specific to your circumstances. If you need help with any legal matter, be sure to consult with an attorney regarding your specific needs.

