Ankit Bansal

Srikanth Bharadwaz Samudrala

Shalmali Patil

Swathi Suddala

Praneeth Reddy Amudala Puchakayala

Abstract

This paper navigates accountability in "Responsible Generative AI" (RNAI). Before incorporating state-of-the-art models that can generate human-like text, images, and sounds into products, it is essential to consider the ethical implications of their for-profit use. RNAI offers practical strategies for developers who are not AI ethics experts to become more informed about, and potentially manage, the societal impacts of their models. In this paper, ethics tools are realigned to address copyright over generated outputs and to manage the publication of outputs that could further misinformation. The motivation for this exploration stems from a principal finding of a recent deep dive into expert raters' ethical interpretations of generated outputs: generated song lyrics and musical compositions received a median rating of Likely Inappropriate, with a 1st-99th percentile range spanning Not Readily Inappropriate to Blatantly Inappropriate. Though there are praiseworthy RNAI initiatives, many raters who gave negative ratings called for responsibilities and restrictions to be placed on model developers. Addressing this rater feedback necessitated an exploration of existing strategies and norms for responsibility over original work, perhaps the closest ethical parallel to generated work, namely, copyright. Additionally, in the case of song lyrics, model developers may need to curate the publication of outputs that could, accidentally or not, lead to misinformation; in practice, this is not feasible without a new approach to handling generated outputs. To these ends, the paper develops and navigates a set of strategies and suggestions, as well as a new perspective on misinformation, that should be useful to any developer considering their responsibilities.