Generative AI is transforming teaching and learning experiences for both students and faculty. This webpage is designed to familiarize faculty with generative AI and to support and guide them in matters that affect both their practice and the student experience. Here, you will find resources that provide comprehensive insights into the principles of generative AI, its applications, and its ethical considerations. All faculty and staff have access to Microsoft Copilot, your AI assistant for education.
Visit NSU's AI website to read about the University's AI policy and gain access to other support resources, tools, and services related to AI.
Syllabus Statement - Faculty are encouraged to create a generative AI use policy for their courses. View considerations for your AI policy and see some example statements.
Having Conversations with Students - Talking with students early and often is important to establish clear expectations on how AI will and will not be used in your courses. Learn some tips on how to have these important conversations.
AI Use in Assignments and Assessments - Explore opportunities to integrate AI into teaching and learning activities.
Generative AI can enhance faculty productivity by automating routine tasks, such as creating lecture materials or drafting research proposals. It can also assist in personalizing learning experiences for students by generating tailored content or assessments. By integrating generative AI into their workflow, faculty members can allocate more time to other areas that require critical thinking and creativity.
Faculty Assistant for Teaching and Learning - Generative AI can serve as a faculty assistant in various aspects of teaching and learning. It can collaborate on brainstorming, help with organization, and significantly save time.
AI for Productivity - Faculty can use generative AI for a variety of administrative tasks, saving time and effort on tasks such as writing emails and creating presentations so they can focus more on teaching and engaging students.
AI for Research - Explore different tools and use cases to advance your research skills with AI.
AI detection tools have varying degrees of reliability and are not foolproof. They can sometimes identify AI-generated content correctly, but not always, so their accuracy cannot be depended on. AI detectors can produce false positives (human-written text marked as AI-generated) and false negatives (AI-generated text marked as human-written). They also tend to show bias, particularly against non-native English writers. While AI detection tools can be helpful, they should not be relied upon as the sole method for determining the origin of a text. It is important to use them in conjunction with other methods and to be aware of their limitations.
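The base-rate problem behind false positives can be made concrete with a short calculation. The rates below are hypothetical assumptions chosen for illustration, not measurements of any particular detector:

```python
# Hypothetical illustration (all three rates are assumptions): even a
# seemingly accurate AI detector can flag many human-written submissions.
true_positive_rate = 0.90   # assumed: flags 90% of AI-written texts
false_positive_rate = 0.05  # assumed: flags 5% of human-written texts
prevalence = 0.10           # assumed: 10% of submissions are AI-generated

# Probability that any given submission gets flagged
p_flagged = (true_positive_rate * prevalence
             + false_positive_rate * (1 - prevalence))

# Bayes' rule: probability a flagged submission is actually human-written
p_human_given_flagged = (false_positive_rate * (1 - prevalence)) / p_flagged

print(f"Share of flagged submissions that are human-written: "
      f"{p_human_given_flagged:.0%}")
```

Under these assumed rates, roughly a third of flagged submissions would be human-written, which is why a flag alone is not evidence of misconduct.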
Generative AI tools present various ethical dilemmas and considerations. One of the most obvious is the potential for academic or professional dishonesty. Bias is also a major concern, since these models are trained on a wide range of online information that itself includes biases. There are also intellectual property concerns: these models are often trained on content whose authors never approved its use, and the tools can generate content in the style of writers, artists, and other creators without their permission.
As you consider these ethical challenges, it is also valuable to educate students about them so that they are aware of them and can develop their genAI literacy skills. Let students know that while there are appropriate uses for genAI, there are also inappropriate and unethical uses, and share with them concerns about bias and intellectual property rights. Express the value of what you are teaching them so they can be motivated to use the tools appropriately when allowed, and not use them in a way that hinders their learning. Be clear with students, both verbally and in written policies, as to when it is appropriate to use these tools for your courses and whether and how they should disclose that they used them.
Generative AI tools use information from prompts and results to further train their models. For this reason, be aware that anything you put into a tool could be shared with or found by others in the future. Refrain from putting any personal data or proprietary information into these tools to protect data privacy for you, our students, and NSU as a whole. All FERPA guidelines should be followed when it comes to data privacy and genAI: identifiable information, such as names and personal data, should be removed from any inputs, and proprietary content should not be put into genAI tools.
An exception to this standard comes with NSU-approved tools. For these, you may consider allowing course content or other NSU content, but steps should still be taken to remove identifiable information if possible, and information such as financial data, social security numbers, etc. should never be put into any tool whether NSU-approved or not. Please check NSU guidance and policy regularly to keep track of which genAI tools have been officially approved for use.
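As a concrete illustration of removing identifiable information before submitting text to a genAI tool, here is a minimal Python sketch. The regex patterns and placeholder labels are illustrative assumptions, not an NSU-endorsed redaction policy, and pattern matching will not catch everything (names, for example), so manual review is still needed:

```python
import re

# Illustrative patterns only; a real redaction workflow needs broader
# coverage and a human check of the output before anything is submitted.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each pattern match with its placeholder label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(label, text)
    return text

sample = "Contact Jane Doe at jdoe@nova.edu or 954-555-1234."
print(redact(sample))
```

Note that the example still leaks the name "Jane Doe": simple pattern matching only handles predictable formats, which is why inputs should be reviewed by a person before being sent to any tool, approved or not.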
Recent trends show a significant increase in job postings that are looking for people with experience in artificial intelligence, and particularly generative AI (LinkedIn, 2023). High percentages of jobs worldwide continue to be changed by genAI (World Economic Forum, 2023), and by 2024, over 60% of workers were already utilizing genAI at their jobs (McKinsey & Company, 2024). At the same time, more than half of recent college graduates say they feel unprepared for their careers when it comes to genAI, with a supermajority saying genAI training should be incorporated into their courses (Cengage Group, 2024).
Giving students guided exposure to genAI in their coursework builds the genAI literacy skills that will help them know how to use these tools ethically, appropriately, and effectively. Career-specific training will help students to be better prepared for their professional future. It is useful to consider this when designing courses and making curricular decisions that will affect students’ career readiness. Students graduating with genAI skills will be more sought after, and more successful, in the job market.
References:
Cengage Group. (2024). 2024 employability survey report. https://cengage.widen.net/s/bmjxxjx9mm/cg-2024-employability-survey-report
LinkedIn. (2023). Future of work report AI at work. https://economicgraph.linkedin.com/content/dam/me/economicgraph/en-us/PDF/future-of-work-report-ai-august-2023.pdf
McKinsey & Company. (2024). The state of AI in early 2024: Gen AI adoption spikes and starts to generate value. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
World Economic Forum. (2023). Future of jobs report 2023.
Generative AI has the potential to significantly improve accessibility for people with disabilities. For example, AI can generate accessible text formats like captions, descriptions, or summaries for people with visual impairments. It can also adapt to individual needs by providing tailored support based on user preferences and disability types. AI can also facilitate communication by translating languages or providing assistive features for individuals with speech difficulties, and it can be used to identify potential accessibility issues in digital content.
While generative AI offers many benefits for accessibility, it can also produce inaccessible content. For example, AI may generate visual content that lacks alt text or descriptions. Automatic speech recognition is another area of concern: these systems sometimes fail to accurately interpret the speech patterns of individuals with speech impairments. Furthermore, AI systems may not be trained on a broad spectrum of disability types and communication styles, limiting their ability to provide effective support for all users. While training data may become more inclusive over time, users can be intentional with their inputs to help ensure the output is accessible.
Use the F.A.S.T.E.R. principle to help ensure generative AI is being used responsibly.
Examine these Responsible Use Scenarios for generative AI. These scenarios can be used with students as an activity or to generate discussions on the appropriate use of generative AI.
Questions/Comments/Suggestions? We welcome your feedback. If you would like to propose content to be added to this page or discuss a specific issue related to using generative AI for teaching and learning, please contact us at lec@nova.edu.
Last updated February 2025