Observatorio IA

Emily Bender, YouTube (12/08/2023)
Ever since OpenAI released ChatGPT, the internet has been awash in synthetic text, with suggested applications including robo-lawyers, robo-therapists, and robo-journalists. I will give an overview of how language models work and why they can seem to be using language meaningfully, despite only modeling the distribution of word forms. This leads into a discussion of the risks we identified in the Stochastic Parrots paper (Bender, Gebru et al., 2021) and how they are playing out in the era of ChatGPT. Finally, I will explore what must hold true for a use case of text synthesis to be appropriate.
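The claim that language models only model the distribution of word forms can be made concrete with a toy example. The sketch below is my own illustration, not material from the talk: it trains a tiny bigram model in Python on an assumed toy corpus, counting which word forms follow which, and then samples from those counts. The output can look fluent even though nothing in the model represents meaning.

```python
# A minimal sketch (illustrative only, not from Bender's talk): a bigram model
# that merely counts word-form co-occurrences, yet can emit fluent-looking text.
import random
from collections import defaultdict

# Toy corpus; any larger text corpus could be substituted here.
corpus = (
    "the model predicts the next word form the model has no notion of meaning "
    "yet the output can read as if it meant something"
).split()

# Learn the distribution of word forms: how often each form follows another.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample(start="the", length=12):
    """Generate text by repeatedly sampling from the learned distribution."""
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        forms, weights = zip(*followers.items())
        word = random.choices(forms, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(sample())  # plausible word sequences, produced with no model of meaning
```

Scaled up by many orders of magnitude, the same principle of sampling from a learned distribution over word forms is what makes synthetic text seem meaningful.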
(01/01/2023)
The pace of progress in learning sciences, learning analytics, educational data mining, and AI in education is accelerating. We are launching this new publication, Learning Letters, to speed the movement of learning science from the lab to dissemination. The traditional publication process often takes 12 to 18 months from article submission to publication. We are interested in developing a new approach to publishing research in the “educational technology, learning analytics, and AI in learning” space. In particular, we want to reduce time to publication and put a sharper focus on the results and outputs of studies. Learning Letters features innovative discoveries and advanced conceptual papers at the intersection of technology, learning sciences, design, psychology, computer science, and AI. Our commitment is a two-week turnaround from submission to notification. Once revisions have been made, the article will be published within a week of final editing. As a result, an article could move from submission to publication in less than four weeks while having undergone rigorous peer review.
The use of AI-powered educational technologies (AI-EdTech) offers a range of advantages to students, instructors, and educational institutions. While much has been achieved, several challenges in managing the data underpinning AI-EdTech are limiting progress in the field. This paper outlines some of these challenges and argues that data management research has the potential to provide solutions that enable responsible and effective learner-supporting, teacher-supporting, and institution-supporting AI-EdTech. Our hope is to establish common ground for collaboration and to foster partnerships among educational experts, AI developers, and data management researchers in order to respond effectively to the rapidly evolving global educational landscape and drive the development of AI-EdTech.
This article examines the potential impact of large language models (LLMs) on higher education, using the integration of ChatGPT in Australian universities as a case study. Drawing on the experience of the first 100 days of integration, the authors conducted a content analysis of university websites and quotes from spokespeople in the media. Despite the potential benefits of LLMs in transforming teaching and learning, early media coverage has primarily focused on the obstacles to their adoption. The authors argue that the lack of official recommendations for Artificial Intelligence (AI) implementation has further impeded progress. Several recommendations for successful AI integration in higher education are proposed to address these challenges. These include developing a clear AI strategy that aligns with institutional goals, investing in infrastructure and staff training, and establishing guidelines for the ethical and transparent use of AI. The importance of involving all stakeholders in the decision-making process to ensure successful adoption is also stressed. This article offers valuable insights for policymakers and university leaders interested in harnessing the potential of AI to improve the quality of education and enhance the student experience.
A new set of principles has been created to help universities ensure students and staff are ‘AI literate’ so they can capitalise on the opportunities technological breakthroughs provide for teaching and learning. The statement, published today (4 July) and backed by the 24 Vice-Chancellors of the Russell Group, will shape institution- and course-level work to support the ethical and responsible use of generative AI, new technology and software like ChatGPT. Developed in partnership with AI and educational experts, the new principles recognise the risks and opportunities of generative AI and commit Russell Group universities to helping staff and students become leaders in an increasingly AI-enabled world. The five principles set out in today’s joint statement are:
1. Universities will support students and staff to become AI-literate.
2. Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience.
3. Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access.
4. Universities will ensure academic rigour and integrity are upheld.
5. Universities will work collaboratively to share best practice as the technology and its application in education evolve.
This study aims to develop an AI education policy for higher education by examining the perceptions and implications of text generative AI technologies. Data was collected from 457 students and 180 teachers and staff across various disciplines in Hong Kong universities, using both quantitative and qualitative research methods. Based on the findings, the study proposes an AI Ecological Education Policy Framework to address the multifaceted implications of AI integration in university teaching and learning. This framework is organized into three dimensions: Pedagogical, Governance, and Operational. The Pedagogical dimension concentrates on using AI to improve teaching and learning outcomes, while the Governance dimension tackles issues related to privacy, security, and accountability. The Operational dimension addresses matters concerning infrastructure and training. The framework fosters a nuanced understanding of the implications of AI integration in academic settings, ensuring that stakeholders are aware of their responsibilities and can take appropriate actions accordingly.
Enrique Dans (30/07/2023)
We are like in Poltergeist: sitting in front of an untuned television and saying "they're already here"... No, it is not bad, it is not negative, and we are not going to ban it or regulate it so that it does not happen. It is simply inevitable. Either we develop a way of distributing wealth that adapts to these new times that are already here, or we will have a very serious problem. And it will not be a problem with the technology: it will be entirely our own.


About the observatory

A section collecting noteworthy publications on artificial intelligence that are not specifically devoted to the teaching of ELE (Spanish as a foreign language). The effects of AI on the teaching and learning of languages are going to be, and indeed already are, very significant, and it is important to stay informed and up to date on this topic.
