Alumni Spotlight

Radding ’75 Speaks on Cybersecurity, AI at International Conference

Rory J. Radding ’75

Rory J. Radding ’75 has been assisting Saudi Arabia as it moves into the future under a plan called Vision 2030, in which international experts help the country pursue strategic economic and social goals, reduce its reliance on the oil and gas industry, and move into the digital age.

Radding, a partner in the technology law boutique Mauriel Kapouytian Woods LLP, spoke earlier this year at the Saudi International Conference on Justice as a guest of the Saudi Arabian government, covering artificial intelligence, bias, and legal analytics. An international IP strategist with over 45 years of experience, Radding is also an accomplished first-chair trial lawyer with extensive experience representing clients before the federal courts and the United States International Trade Commission (ITC). Before attending Albany Law School, he was a scientist. He is an emeritus member of the Albany Law School Board of Trustees.

Radding spoke on bias in judicial decision-making and was one of only 25 speakers before nearly 4,000 conference guests.

“It was an interesting conference from a cultural standpoint and also from a legal standpoint, because the focus was the justice system in the digital age. The speakers and audience were an eclectic group with varying points of view from many different countries,” he said.

Rory Radding

His session, “Data Analytics for Justice Enhancement,” focused on different forms of AI, including, “generative AI, deep and machine learning, neural networks, prediction, and probabilistic reasoning.”

He spoke about the ways machine learning and AI are being introduced in the legal field. Technology that is commonplace in many American courtrooms, like online records or virtual hearings, is new to some places like Saudi Arabia. The country’s government hopes to model its own legal system on modernized systems around the world, Radding said.

“From a legal standpoint, there will always be changes made to how we do things. As change happens in our profession, what we will have to do as lawyers is continue to develop a strong ability to have sound judgment. That's one thing that we have that a machine doesn't have right now,” he said.

As technologies like ChatGPT take on tasks like writing briefs, rendering decisions based on the information they are given, and creating first drafts of documents, humans ultimately have the final say.

“AI can render a decision, but it's up to the judge whether he or she wants to accept it. Judges can modify it and do whatever they want. The AI decision is not the final word,” he said.

Radding also presented “Artificial intelligence and bias in the field of justice: causes and possible cures,” in which he provided an overview of how AI bias may be introduced.

“It turns out that bias in this artificial intelligence world generally is not intentional. It can be, but it often is not, because humans are creating all the parts that ultimately result in AI. You've got the hardware, which is the computer itself. Someone had to create it and figure out which components to use, so there is hardware bias. Then you have the software. Someone had to write the software and make selections, so there is software bias. Then there is the selection of data. Someone has to tell the computer what it should do, what data to use, and how to train the computer with that data. So there is data bias. Then there is the culture in which the AI is used. So you have cultural bias that can be introduced into the AI, because sometimes the same things have different meanings depending on where in the world you are.”

Radding gave the example of a car insurance rate study involving red cars. In the United States, he said, drivers were given rates based on statistics showing that people in red cars are risky drivers, speed often, and may be in more frequent accidents. Thus, their rates were high. However, those rates were not applicable elsewhere. In some Asian cultures, driving a red car is seen as less risky, and a red car is a safe, desirable vehicle because red is a lucky color. In the U.S., red might be associated with aggressiveness; in Asia, it's the opposite, he said. If the AI is not trained to recognize these cultural differences, an Asian driver would pay a higher rate.

Many conversations around AI are negative or focus on what it will replace, but Radding predicts that, done right, it will enhance our capabilities. Much like the introduction of the modern computer or even the internet, he says, once humans learn to work with it, AI can advance what’s possible.

“A human being needs to be involved in every step of the process and the use of AI. My view, as a former technologist, is that we're not going to get replaced. We will have to change, but we are constantly changing,” he said. “Look at things like email and online shopping, which changed how we live. Similarly, AI will be a tool that we will adapt, adopt, and come to use.”
