Received: 13 November 2025; Revised: 25 December 2025; Accepted: 25 December 2025; Published Online: 27 December 2025.
J. Collect. Sci. Sustain., 2025, 1(3), 25411 | Volume 1 Issue 3 (December 2025) | DOI: https://doi.org/10.64189/css.25411
© The Author(s) 2025
This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license
Responsible AI in the Age of Generative Media: A Comparative Study of Ethical and Transparent AI Frameworks in India and the USA
Bhavya S. Tripathi,¹,* Shrawani C. Gaonkar¹ and Anagha Dhavalikar²
¹ Department of Artificial Intelligence and Data Science, Vasantdada Patil Pratishthan’s College of Engineering & Visual Arts, Mumbai, Maharashtra, 400022, India
² Department of Electronics and Computer Science, Vasantdada Patil Pratishthan’s College of Engineering & Visual Arts, Mumbai, Maharashtra, 400022, India
*Email: bhavyatripathi02@gmail.com (B. S. Tripathi)
Abstract
Advances in generative artificial intelligence, such as Google Gemini, ChatGPT and DALL·E, are opening new possibilities for creativity in digital media while raising pressing concerns about misinformation, deepfakes and declining trust in media outlets. This paper examines Responsible Artificial Intelligence (RAI) efforts at the policy level in India and the United States, comparing their distinct and shared approaches to governing generative media, where policy is oriented toward balancing innovation with transparency, accountability and fairness. India's official AI for All strategy emphasizes inclusivity and social development, though concrete enforcement mechanisms and safeguards against the misuse of generative media are currently lacking. The US relies on the National Institute of Standards and Technology (NIST) AI Risk Management Framework, which emphasizes risk assessment, technical robustness and accountability, but lacks an overarching regulatory mechanism that aligns its varied state- and sector-specific initiatives. Alongside the policy analysis, we designed and ran a pilot survey of college students and working professionals through September 2025 to capture awareness, trust and concerns related to Responsible AI. Our preliminary findings show relatively low exposure to national RAI policy frameworks but high concern about the potential misuse of generative AI, including deepfakes, manipulated images and the lack of checks on content authenticity. Respondents expressed particularly strong support for mandatory labeling of AI-generated media. We conclude that India and the USA display parallel yet diverging paths on RAI for generative media, but that both are marked by a gap between policy aspirations and public understanding. Moving ahead, we see a need for greater policy clarity, cross-border coordination and public outreach to foster transparency and responsibility in the adoption of AI for media.
Keywords: Generative artificial intelligence; Responsible AI; Ethical AI adoption; AI transparency and accountability; Policy frameworks in India and the USA; Deepfakes and media trust.
1. Introduction
Generative AI is transforming how machines create content by producing text, visuals, and videos at scale.[1] Systems such as ChatGPT, Gemini, and DALL·E now shape a significant share of digital output; approximately 12% of false online material includes machine-generated media.[2] While access to creative tools has expanded, trust in information has weakened due to growing authenticity risks and ethical concerns.[3,4] Ethical oversight frameworks based on fairness, accountability, transparency, and explainability (FATE) aim to align technological progress with societal values.[5] Organizations such as UNESCO and IEEE advocate people-centric approaches to AI;[1] however, country-level implementations vary. India’s “AI for All” initiative (launched in 2018) emphasizes inclusivity and accessibility,[6] whereas the United States’ National Institute of Standards and Technology (NIST) AI Risk Management Framework 1.0 (2023) prioritizes accountability and structured risk management.[7] Despite these efforts, public awareness of such frameworks remains limited, and practical trust in AI systems is correspondingly low. Research examining public perceptions of responsible AI use in contexts such as synthetic images and voice cloning remains scarce.

This study combines policy analysis with findings from a survey of 40 individuals in India, including students and working professionals, to explore awareness, trust, and value perceptions related to generative AI, and to examine the alignment between policy objectives and public understanding. Unlike prior studies that focus primarily on theoretical ethics, this work draws on empirical evidence to identify gaps between policy intentions and user experience, and proposes approaches to strengthen transparency and responsibility in AI systems across both India and the United States.