Sunday, May 26, 2024
Protecting scientific integrity in an age of generative AI
https://www.pnas.org/doi/10.1073/pnas.2407886121
Scientists call for a strategic council to guide AI use
The advent of increasingly powerful AI algorithms has scientists both excited and nervous. Artificial intelligence offers new research opportunities and problem-solving abilities — but it also opens the door to new kinds of ethical violations, as a new editorial in the Proceedings of the National Academy of Sciences points out.
The authors discuss five ways to help maintain scientific integrity in the context of AI. For one, scientists should be accountable for the content of, and inferences drawn from, generative models, and AI-generated work or data should be clearly documented. AI should also be vetted to avoid causing harm, including perpetuating biases. To lead the way, the authors suggest that the National Academies create a "strategic council on the responsible use of artificial intelligence in science."