As AI and automation technologies mature, the need for inherently interpretable, explainable, and responsible models has become a critical focus. Alongside this development, there has been increased emphasis on managing the risks associated with these technologies. The AI/ML regulatory landscape in the US is changing rapidly, making it imperative for organizations to adapt their business processes and explain to regulators how their systems work in order to demonstrate compliance with applicable regulations.
The US government has stepped up its ongoing efforts on ‘Responsible AI’, emphasizing the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and society. In May 2023, the Biden-Harris Administration announced new actions to further promote responsible American innovation in artificial intelligence (AI) and protect people’s rights and safety.
While there is palpable excitement around Responsible AI and AI governance, much of this work remains conceptual. Establishing AI governance will help organizations manage AI risk and scale their AI initiatives while complying with the growing body of AI regulations.
In this whitepaper, we summarize the emerging regulatory framework for AI in the US and propose concrete steps companies can take to comply with these regulations. The paper covers:
- Introduction and components required for ‘AI Governance’
- Designing an AI Governance framework:
  - Fairness and Bias
  - ML Explainability
  - Algorithmic Auditing
  - Data Privacy
  - Responsible AI
  - AI Usage Risk