As AI and automation technologies mature, the need for inherently interpretable, explainable and responsible models has become a key focus. Moreover, with the changing regulatory landscape, organizations must explain to regulators how their systems work, demonstrating compliance with applicable regulations.
The Reserve Bank of India recently firmed up a regulatory framework to support the orderly growth of credit delivery through digital lending while mitigating regulatory concerns. Among the recommendations accepted for immediate implementation are:
i) REs (Regulated Entities) are to ensure that the algorithms used for underwriting are based on extensive, accurate and diverse data to rule out prejudices. Further, the algorithms should be auditable, so that minimum underwriting standards and any potential discrimination factors used in determining credit availability and pricing can be identified.
ii) Digital lenders should adopt ethical AI, which focuses on protecting customer interests and promotes transparency, inclusion, impartiality, responsibility, reliability, security and privacy.
*As mentioned in the RBI press release 'Recommendations of the Working Group on Digital Lending - Implementation' (link)
Organizations need to be aware of the regulatory and ethical implications of these guidelines for their business.
About the workshop:
The primary motivations for explainable AI (XAI) are gaining insight into model behaviour, meeting regulatory and compliance requirements, reducing the risk of using models in production (especially for core functions) and achieving trustworthy AI. In this interactive workshop, we discuss:
- Impact of the guidelines on AI/ML in Lending
- Overview and understanding of XAI in underwriting
- How institutions can achieve responsible, transparent and trustworthy AI
- Steps to formulating ‘AI Governance’ in Lending
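To make the idea of explainable underwriting concrete, the sketch below shows the simplest possible form of XAI: a linear (logistic) scoring model whose per-feature contributions decompose the score exactly, making it fully auditable. The feature names, coefficients and applicant values are illustrative assumptions for this example, not a real underwriting model.

```python
import math

# Hypothetical, hand-set logistic credit-scoring model.
# Feature names and coefficients are illustrative assumptions only.
FEATURES = ["income_to_debt_ratio", "years_credit_history", "recent_defaults"]
COEFFICIENTS = {
    "income_to_debt_ratio": 1.2,
    "years_credit_history": 0.4,
    "recent_defaults": -2.0,
}
INTERCEPT = -0.5

def score(applicant):
    """Probability of repayment under the toy logistic model."""
    z = INTERCEPT + sum(COEFFICIENTS[f] * applicant[f] for f in FEATURES)
    return 1.0 / (1.0 + math.exp(-z))

def explain(applicant):
    """Per-feature contribution (coefficient * value) to the log-odds.

    For a linear model these contributions sum, together with the
    intercept, exactly to the log-odds -- an auditable decomposition
    of how each factor moved the decision."""
    return {f: COEFFICIENTS[f] * applicant[f] for f in FEATURES}

applicant = {
    "income_to_debt_ratio": 2.0,
    "years_credit_history": 5.0,
    "recent_defaults": 1.0,
}
print("score:", score(applicant))
print("contributions:", explain(applicant))
```

For this applicant, `recent_defaults` pulls the log-odds down by 2.0 while the income-to-debt ratio pushes it up by 2.4, so an auditor can see exactly which factors drove the outcome. Real underwriting models are rarely this simple, and post-hoc attribution methods (such as SHAP-style explanations) play the analogous role for complex models.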