Global Finance: FICO, formerly Fair, Isaac and Company, has been a pioneer in the use of predictive analytics and data science in financial services. Why do you use blockchain technology to track the end-to-end provenance of your firm’s machine-learning models?
Scott Zoldi: We have about 300 data scientists, all very bright, all allowed to choose their algorithms and choose their approaches. You cannot manage it using Google Sheets or typical Agile [project management] methods. So we had the idea to create a responsible AI model development standard and then employ a blockchain tool to help enforce it.
GF: You’ve written that blockchain technology “essentially records an immutable instance of the contract between my data scientists, managers and me.” What did you mean?
Zoldi: The blockchain records the entire journey of building these AI models, including mistakes, corrections and improvements. I must be assured that every step around ethics testing is followed. A model trained on data from an existing applicant pool may not perform well on a new applicant population with different characteristics, though this may not be apparent initially. For each scientist who develops a model, another checks the work, and a third approves that it has all been done appropriately. By the end, three scientists have reviewed the work and verified that it meets the standard.
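The process Zoldi describes amounts to an append-only, tamper-evident record of development steps with three-role sign-off. The following is a minimal sketch of that idea, not FICO's implementation; the class, field names, and roles are illustrative assumptions, and a real system would use an actual permissioned blockchain rather than an in-memory hash chain.

```python
import hashlib
import json

class ModelDevLedger:
    """Minimal hash-chain sketch (hypothetical, not FICO's system):
    each entry is hashed together with the previous entry's hash,
    so a past record cannot be altered without breaking the chain."""

    def __init__(self):
        self.entries = []

    def record(self, model_id, step, scientist, role):
        # Link this entry to the previous one via its hash
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = {
            "model_id": model_id,
            "step": step,           # e.g. "ethics_testing"
            "scientist": scientist,
            "role": role,           # "developer" | "checker" | "approver"
            "prev_hash": prev_hash,
        }
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(payload)
        return payload["hash"]

    def verify(self):
        """Recompute every hash; any tampering breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Three-role sign-off for one development step (illustrative names)
ledger = ModelDevLedger()
ledger.record("credit-risk-v2", "ethics_testing", "dev_a", "developer")
ledger.record("credit-risk-v2", "ethics_testing", "dev_b", "checker")
ledger.record("credit-risk-v2", "ethics_testing", "dev_c", "approver")
```

After the three entries are recorded, `ledger.verify()` returns `True`; editing any earlier entry invalidates every hash downstream of it, which is what makes the record an enforceable "contract" between the scientists and their managers.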
GF: AI is becoming controversial. Some countries, such as Canada, are already taking action against OpenAI. Regulations are coming, no?
Zoldi: Demonstrating that your product and AI meet corporate standards for ethics and regulations will be necessary—whether in the UK, the US, China or Brazil. We did a survey with Corinium recently of 100 North American financial services companies. Only 8% said their responsible AI development practices were fully mature, with model development standards consistently scaled. So looking ahead, we need to demonstrate to our managers, executives, boards and regulators that we have a process that is adhered to within the organization.
GF: In February, you were granted a patent for your “use of blockchain to advance responsible AI.” But you began using a permissioned Ethereum blockchain for model development in 2017, when blockchain technology was in its infancy. What motivated you?
Zoldi: Banks told us that most of their AI models—between 70% and 80%—never made it into production. Why? For one thing, the models had to go through all sorts of governance processes post hoc. Maybe it was discovered that not all relevant data was captured at development time, that inappropriate algorithms were used, or that ethics and stability testing was lacking. There had to be a better way than, say, having Sally build a model, then finding six months later, when she’s no longer with us, that she didn’t record the right information. We want to know while the model is being developed that it meets the thresholds for an unbiased model, not later.
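The "thresholds for an unbiased model" Zoldi mentions can be made concrete as a development-time gate on a fairness metric. Below is a hedged sketch of one such check; the function names, the disparate-impact metric, and the 0.8 ("four-fifths") threshold are illustrative assumptions, not FICO's actual tests.

```python
def approval_rate(decisions):
    """Fraction of approvals (1 = approve, 0 = decline) in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower group approval rate to the higher one;
    1.0 means the two groups are treated identically."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) / max(ra, rb)

def bias_gate(group_a, group_b, threshold=0.8):
    """Fail fast at development time if the model misses the
    fairness threshold, instead of discovering it post hoc."""
    ratio = disparate_impact(group_a, group_b)
    if ratio < threshold:
        raise ValueError(
            f"bias check failed: ratio {ratio:.2f} < {threshold}"
        )
    return ratio

# Hypothetical model decisions for two applicant groups:
# approval rates 0.8 and 0.75, ratio 0.9375 -> passes the gate
ratio = bias_gate([1, 1, 0, 1, 1], [1, 1, 0, 1])
```

Wiring a gate like this into the model-build pipeline is what moves the bias check from a post-hoc governance hurdle to a condition the model must satisfy before it ever leaves development.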
GF: What sort of reaction did you get internally from your data scientists? Did they initially resist having to enter everything on a digital ledger?
Zoldi: Initially, it was a change. Some felt that it would impact their creativity. We said: We’re not here to hamper innovation; we’re building a gold standard of model development. These models perform better because we follow the standard. We have fewer customer complaints and better quality, and our scientists aren’t called in on weekends to fix things, which reduces the pressure on them. The reaction has been really good.
GF: What lessons have you learned that may interest other financial innovators looking to build responsible AI projects?
Zoldi: Getting the proper organizational support is critical. There must be a conversation about responsible AI at an executive or board level. If the issue is buried very low in your organization, you must raise it. Also, it takes time. If you have these analytics “city-states” and five different leaders or ideas, there will be some intellectual friction. You just have to have the patience to work through it.