The Indian Institute of Technology Madras (IIT Madras) has established the Center for Responsible AI (CeRAI), an interdisciplinary research center aimed at ensuring ethical and responsible development of AI-based solutions in the real world. CeRAI is geared towards becoming a premier research center at the national and international level for both fundamental and applied research in responsible AI with immediate impact in deploying AI systems in the Indian ecosystem. The Center was formally inaugurated on April 27, 2023 by Shri Rajeev Chandrasekhar, Minister of State for Electronics and Information Technology and Skill Development and Entrepreneurship.
Google is the first platinum consortium member and has contributed $1 million to the Center.
CeRAI conducted its first workshop, on ‘Responsible AI for India’, on May 15, 2023.
“I am sure that the deliberations that will happen today (15th May 2023) in this workshop and the various panel discussions will go a long way in helping us evolve our framework, our guidelines and our policies for responsible AI,” said Abhishek Singh, Managing Director and Chief Executive Officer, Digital India Corp. “AI is playing a major role in all our lives. Whether we know or not, every day we are using AI-based technologies in some part of our life. It is very important that those at the policymaking level and those who are working at the cutting-edge of developing technologies are aware of the risks and challenges that remain while we are using the same technologies for solving societal problems, ensuring access to healthcare, making healthcare more affordable and making education more inclusive and making agriculture more productive… There is a need for non-biased and non-discriminatory AI framework as we have unique requirements that require customization as per our requirements.”
One of the primary objectives of CeRAI will be to produce high-quality research outputs, such as research articles in high-impact journals and conferences, white papers, and patents. It will also work towards creating technical resources for Responsible AI, such as curated datasets (both universal and India-specific), software, and toolkits.
“As India’s digital ecosystems increasingly adopt and leverage AI, we are committed to sharing the best practices we have been developing since 2018 when we began championing responsible AI. To help build a foundation of fairness, interpretability, privacy, and security, we are supporting the establishment of a first-of-its-kind multidisciplinary Center for Responsible AI with a grant of $1 million to the Indian Institute of Technology, Madras,” said Sanjay Gupta, Google’s Country Head and Vice President, India.
A panel discussion on ‘Responsible AI for India’ was also held during the workshop. CeRAI aims to foster various partnerships and collaborations with government organizations, academic institutions and industries.
CeRAI is actively engaged with several partners: NASSCOM’s Responsible AI initiative, to build course material, skilling programs, and toolkits for Responsible AI; Vidhi Legal, to develop a Participative AI framework; CMC Vellore, to explore areas of mutual interest in responsible AI; SICCI, to help its members better understand the implications of Responsible AI; TIE, to mentor startups in this space; and RIS, a think tank of the Ministry of External Affairs, Government of India.
“We have now reached a stage where we have to assign responsibility to AI tools and interpret the reasons for the output the AI gives. Aspects of human augmentation, biased data sets, risk of leakage of collected data and the introduction of new policies besides substantial research must be addressed. There is a growing need for trust to be built around AI and it is crucial to bring about the notion of privacy. AI will not take away jobs as long as domain interpretation exists,” said Prof. V. Kamakoti, Director, IIT Madras.
“It is important for the AI model and its predictions to be explainable and interpretable when they are to be deployed in various critical sectors/domains such as Healthcare, Manufacturing, and Banking/Finance, among other areas,” said Prof. Balaraman Ravindran, Head, CeRAI, IIT Madras. “AI models need to provide performance guarantees appropriate to the applications they are deployed in. This covers data integrity, privacy, robustness of decision making, etc. We need research into developing assurance and risk models for AI systems in different sectors.”
Drawing on its research outputs, CeRAI will also formulate sector-specific recommendations and guidelines for policymakers, and will provide all stakeholders with the toolkits necessary for ethical and responsible management and monitoring of AI systems being developed and deployed.
CeRAI also plans to conduct specialized sensitization and training programs to help all stakeholders better understand the issues of ethical and responsible AI, enabling them to contribute meaningfully to solving problems in their respective domains. It will hold a series of technical events, in the form of workshops and conferences, on specialized themes of deployable AI systems, with a strong focus on the ethics and responsibility principles that need to be followed.