Artificial Intelligence Guidelines

Policy code: CG2102
Policy owner: Chief Operating Officer
Approval authority: Vice-Chancellor and President
Approval date: 01 October 2024
Next review date: 08 March 2027

Purpose

These guidelines provide transparency on the use and interpretation of Artificial Intelligence (AI) for the purpose of learning, teaching and assessment practice and for conducting and maintaining the operational functions of Federation University.

Scope

These guidelines apply to all students, staff and other individuals associated with the University who are responsible for Artificial Intelligence (AI) usage in the context of research, learning, teaching and/or operational functions.

These guidelines consider:

  • Awareness, literacy and usage of AI in the learning and teaching approaches conducted across the University
  • Assessment artefacts and modes applied to assess student learning
  • Awareness, literacy and usage of AI in research
  • Development, approval, use and management of AI software, systems or platforms developed or created for use by Federation University
  • Approval, use and management of AI software, systems or platforms procured for use by Federation University

AI is a rapidly evolving technology which provides a significant opportunity for enhanced productivity, automation, and improvement. The University acknowledges, however, that this ongoing and rapid advancement also presents risks, which the University will seek to monitor and mitigate as appropriate. These guidelines will be updated regularly to reflect anticipated changes to the application and regulation of AI.

These guidelines are not standalone and must be read in the context of other applicable University governance documents (including policy, procedures and work instructions).

Guidelines statement

AI may be used at Federation University for functions that support the University’s Strategic Plan, including for administrative and operational functions and learning and teaching activities.

Ethical principles of AI usage

Federation University acknowledges the voluntary Australian Artificial Intelligence Ethics Framework as a guide to the use of AI by staff and students of the University. These principles encourage and guide usage in a manner which is:

  1. Focused on human, societal, and environmental wellbeing: AI should bring beneficial outcomes for individuals, society and the environment, both for Federation University and the broader community. Positive and negative impacts are considered and accounted for throughout the AI lifecycle.
  2. Based on human-centred values: AI integration and usage will support human rights, the autonomy of individuals and diversity.
  3. Fair: AI systems will be accessible and inclusive at Federation University and will be monitored to ensure they do not perpetuate injustice or disparity, particularly for underrepresented and marginalised groups.
  4. Protective of privacy and security: AI systems and outputs will respect the privacy rights of staff and students at Federation University. Appropriate mitigation measures will be enforced to reduce the risk of potential security vulnerabilities, unintended application and abuse risks.
  5. Reliable and safe: AI system usage at Federation University will be reliable and operate according to the intended purpose of the application. Appropriate risk mitigation measures will be in place to ensure users are not subject to unreasonable risk.
  6. Transparent: Federation University staff and students can understand when they are being significantly impacted by AI and when an AI system is engaging with them.
  7. Contestable: where any AI system significantly impacts a member of the Federation University community, there will be a timely process to allow people to challenge the use or outcomes of the AI system.
  8. Accountable: Human oversight will apply to all AI systems and their implementation.
  9. Sustainable: AI systems are developed, deployed, and managed to minimise environmental impact, optimise resource utilisation, and foster long-term societal well-being.
  10. Explainable: AI system usage has understandable explanations for decisions and actions so users can comprehend the rationale behind AI-generated outputs, aiding in trust-building, error detection, and accountability.
  11. Trusted: AI users can have confidence in the capabilities, decisions, and outcomes of artificial intelligence systems.

The use of AI at Federation University must follow all applicable state and federal laws and meet regulatory requirements for both Higher Education and Vocational Education and Training.

Operational functions

Embracing the potential of AI, the University seeks to leverage this technology across its activities, extending beyond learning and teaching into operations, procurement, and support. Rapid advances in AI have created numerous opportunities to engage with tools that can have a profound impact on productivity.

1. Operational efficiency, competitiveness and adaptability

Federation University supports a culture of continuous improvement, with the use of AI potentially enhancing operational efficiency across the University in line with its operational and strategic goals.

AI technology may be used for a wide variety of operations to enhance consistency, speed, and organisational agility. It can also be implemented to identify issues, risks, opportunities, and control improvements.

2. Student wellbeing and support

Federation University encourages the use of AI technology to enhance student wellbeing and support, with appropriate strategies for maintaining student privacy and obtaining consent to engage with AI content (including the ability to opt out if requested). AI-powered systems may support the provision of course and career advice, academic assistance, and wellbeing support. AI systems will be utilised to enhance the student experience and to help ensure the regulatory requirements of the University are continually met.

3. Appropriate protections and responsibilities

The University will ensure that appropriate protections are in place to maintain data security and privacy. Robust protection measures will be included in operational plans relating to data collection, retention and sharing, and the associated privacy considerations, in line with legislation and regulatory requirements. Where necessary, consent will be sought for the use of AI in the use and analysis of data.

Information Technology Systems will be regularly updated and enhanced to ensure the suitability, reliability and protection of infrastructure for AI.

Staff and students are responsible for any content produced or published that includes AI-generated material.

4. Development and procurement of AI tools

Higher Education institutions, including Federation University, will play an important role in the advancement of AI technology, fostering innovation, and facilitating the exchange of knowledge and technology. The University supports both the development and procurement of AI tools to provide educational and/or commercial opportunities.

The University supports the purchase of AI technologies to enhance research capabilities, streamline administrative processes, and enrich educational experiences. The selection and purchase of AI tools require rigorous evaluation, considering factors like cost-effectiveness, scalability, ethical considerations, and compatibility with existing systems. The purchase of AI tools is subject to the Finance Governance Procedural Manual - Procurement of Goods & Services, Corporate Purchasing Card, Travel, and Motor Vehicles.

Learning and teaching

AI provides significant benefits to learning and teaching in both higher and vocational education. The inherent risks to academic integrity will be mitigated by the University.

1. Building awareness, understanding and Generative AI literacy

Federation University is committed to building AI literacy in all staff and students.

Federation University academic and TAFE teaching staff have a crucial role to play in supporting learning and assessing the capability and competency of students. It is anticipated that the role of staff will progressively change and adapt to meet the ongoing changes AI will bring; however, their role will remain vital in student education.

Students are encouraged to explore and use AI technologies as tools for learning and research under the guidance of teaching staff. Course and unit coordinators must clearly define the extent to which AI tools can be utilised in coursework. Trainers and assessors will provide guidance to VET students on when AI technology can be utilised in learning and assessment activities.

Staff are encouraged to stay informed about the ethical implications of AI and integrate responsible AI use into their teaching and assessment methods. Staff will have access to training tools, relevant to their needs.

2. Learning, teaching and assessment design and practice

Federation University requires staff to provide clear communication to students regarding the use of artificial intelligence as part of assessment practices. Authorised use, and limits of use, need to be clearly stipulated and communicated to students within the Unit Description (HE) / Unit Outline (VET) (as part of learning, teaching and assessment details). Assessments should consider the potential use of AI tools, ensuring that the assessment measures students' understanding and critical thinking skills.

Where AI is permitted, it is likely to affect the expectations made of students to demonstrate learning, or attainment of learning, at various stages of their studies. Regular review of learning outcomes and associated assessment is required to ensure that learning and assessment are focused on tasks not readily performed by AI.

3. Student academic achievement

Federation University supports the use of AI to identify students who are ‘at-risk’ early in their education. The University will use the data collected to report to funding and regulatory bodies as required. Students will be provided with information on available student support services as appropriate, and may access these services as they see fit.

AI may also be used to identify high-achieving students, who may be encouraged to undertake further education, placements, or award applications as appropriate to their circumstance.

4. Academic integrity and misconduct

Academic Integrity is the cornerstone of our educational community. The responsible use of AI technologies can enhance learning and research opportunities, but it must be done with integrity and transparency. Where questions of Academic Integrity arise, guidance can be found in the Academic Integrity Procedure. Where academic misconduct is suspected, the Student Misconduct Procedure provides guidance on the appropriate processes to follow.

Research

AI may provide significant support for researchers, but it is important that ethical research is maintained. Researchers must be alert to inherent risks to research integrity associated with the use of AI in research.

Federation University encourages the ethical use of AI technology in research. Staff and students involved in research play a crucial role in maintaining integrity in methodology, collaboration, innovation and publication. The role of researchers will be affected by ongoing changes in the AI landscape; however, research integrity must remain the cornerstone of research output at the University.

1. Research integrity, bias, and data sources

Research integrity encompasses the principled conduct of scientific inquiry, ensuring honesty, transparency, and ethical considerations throughout the research process and is crucial to the quality and reputation of the research conducted at Federation University.

When using AI, researchers are expected to maintain research integrity, considering AI-specific challenges such as bias mitigation, transparency in algorithmic decision-making, and ensuring the fairness and reliability of AI-generated outcomes. AI-generated content must be cross-verified with credible sources. As GenAI lacks the ability to perceive, rationalise or critique information, it can produce inaccurate or biased content, known as “AI hallucination” or confabulation; researchers must ensure the content's accuracy and avoid propagating such AI-generated errors.

There are strong concerns about bias introduced by generative AI tools. For example, using generative AI as a brainstorming tool can allow bias, such as gender-specific information from the AI, to creep into human judgement. It is the researcher’s responsibility to consciously identify, critically interrogate, and mitigate these biases, and to verify the accuracy of any content developed with the help of GenAI.

Whilst AI serves as a versatile tool that assists exploration, comprehension, and innovation, researchers are expected to exercise their own critical judgement and remain responsible for the integrity of the resulting work.

2. Ethics, privacy and confidentiality in research

It is researchers’ responsibility to ensure compliance with research ethics requirements, particularly for projects involving sensitive or confidential data. AI tools are typically designed so that the information or data fed into them is captured by the tool and may be released into the public domain or used to train future products without the researcher’s explicit knowledge. This may result in the transmission of sensitive data beyond the intended audience or outside of Australia, even if restricted, as AI services may be hosted on servers located in other countries.

Authors should use AI tools responsibly, ensuring that their use does not violate privacy. This includes adhering to data governance and informed consent protocols, especially when AI is involved in processing personal or sensitive data. Uploading this information may violate ethical principles such as those outlined in the National Statement on Ethical Conduct in Human Research and other regulations. It could also pose risks to participants and society, as generative AI could produce harmful or malicious data or content that may be used for unethical or illegal purposes such as theft, fraud, discrimination and misinformation. Researchers must implement robust measures to protect the privacy of individuals whose data are analysed or generated by AI.

Researchers must ensure the use of AI tools is clearly disclosed in their methodologies and ethics application, detailing the AI's role in the research process, the kind of data it processed, and how decisions were influenced by AI outputs. This is particularly true in studies involving human participants, where the protection of these individuals must be weighed against potential benefits to the wider population. The rapid advancement of AI adds layers of complexity to these issues, introducing concerns about data governance, consent, and accountability.

University governance documents will support University Research Schools in their guidance of ethical AI use in research across the University. The extent to which AI can be used to aid research must be clearly defined by each Research School in relation to its disciplinary expertise and the specificity of its research. As AI is continually evolving, it is important that the University Research Schools remain well informed of changes and their potential impacts on ethical research within the sector.

3. Accountability

Accountability is an important aspect of research at Federation University. Authors are fully responsible for research integrity, including the accuracy of their publications. Note that a generative AI tool, being non-human, cannot be considered a co-author and cannot be assumed to be responsible for content. Researchers are encouraged to be aware of the ethical implications and potential biases of their AI use in research, which can be mitigated through a commitment to transparent processes.

4. Research grants and publications

Researchers must check granting agencies’ guidelines when submitting a grant application. The National Health and Medical Research Council and the Australian Research Council each have policies on the use of GenAI for the purpose of crafting and reviewing grant proposals.

Researchers must not use GenAI to assess peer review material (e.g. grants, manuscripts, HDR student theses, ethics applications), as this may breach the requirement to maintain confidentiality of the content.

Authors must be fully aware of, and familiar with, the AI-related rules of Federation University as well as those of the specific journal in which they wish to publish. While some journals allow the use of text and images generated by AI, others do not allow its use unless express approval is obtained from the publisher. For further information on scholarly publishing and GenAI, please refer to the Federation University Library website.

5. Copyright, ownership and intellectual property

All creative content by a researcher must be their own work, unless otherwise acknowledged. When generative AI software tools are used in research for generating content, analysing data, or providing insights, their role must be clearly and explicitly acknowledged.

To avoid the risk of information being reused, researchers must only input data into commercial GenAI tools that would also be appropriate to share with external organisations, companies, and competitors. Researchers must refrain from submitting the following categories of data into commercial and external generative AI platforms or services:

  1. private/personal information, such as names, email addresses, student ID numbers, phone numbers, images, audio recordings, financial information such as bank account details and credit card numbers, login credentials, health and personal information, and other sensitive research data.
  2. data or information that is inherently confidential, commercial-in-confidence, human data, collectively owned as Indigenous Cultural and Intellectual Property, or that is protected by copyright or Federation University’s intellectual property. Researchers are advised not to share data or information with GenAI models that they would not normally make publicly available.

6. Higher Degree Research

HDR candidates are reminded that the purpose of a doctoral degree is to develop new knowledge and that the thesis reflects this original contribution to knowledge. GenAI may be used to support the writing process but should not be used entirely to interpret data or to draft the thesis. Students may use more advanced GenAI models to repurpose published scientific manuscripts into presentations that translate the research for a different audience; however, under no circumstances may confidential information be uploaded into GenAI models.

GenAI tools must not be used to process examiner reports or to subsequently write up reports to the Thesis Examination Committee.

Prior to using GenAI, students must discuss this with their supervisor and/or their supervisory panel or other researchers with experience in GenAI to ensure adherence to the principles of responsible research. 

If there is any doubt, HDR candidates are advised to contact the HDRCs and the Graduate Research School. Further, they are advised to carefully read the comprehensive information provided in the TEQSA guidelines.

7. Referencing

All AI-generated material used in the preparation of research content must be appropriately acknowledged and cited in accordance with the University’s research policies. This may include the name of the AI tool, the version, and the date of interaction. Refer to the instructions from the Federation University Library for citing content from AI tools. Knowingly using generative AI tools, either directly or via a third party, to generate, write or produce any work (paid or unpaid) without proper acknowledgement of the source is tantamount to deliberate cheating and may be treated as academic and research misconduct.

University defence mechanisms

Federation University is committed to meeting legal, ethical and regulatory requirements. To maintain these obligations, several defence mechanisms are in place to ensure data safety, the privacy of staff and students, and educational and research integrity.

Defence mechanisms include critical review of tools to understand:

  • Purpose and use: utilising tools that have a clear purpose and match the desired use requirements.
  • Reliability and trustworthiness: utilising tools developed by reputable, trustworthy providers, with verification of the accuracy and reliability of data outputs. This includes the provision of adequate support and training to ensure usability.
  • Legal and ethical considerations: ensuring the products chosen for use across the University maintain the privacy and security of staff and student data, meet clear intellectual property and copyright requirements, and support equity of access and sustainable practices.

Accountability and risk mitigation

Federation University has robust governance structures and practices to ensure the ethical use of AI across all aspects of learning, teaching and operations. The Artificial Intelligence Working Group has been established to address all aspects of University AI usage. This group will provide academic and operational guidance and recommendations.

The Federation University Council serves as the ultimate accountable body, overseeing the strategic direction and ethical governance of AI initiatives. University governance documents, including policies and guidelines, will be regularly reviewed and adapted to evolving AI technologies and regulatory changes.

Through established governance structures, such as Council and Academic Board sub-committees, the University ensures that appropriate approvals are obtained for AI projects, including considerations for data privacy, fairness, and potential societal impacts. These structures provide a framework for evaluating the ethical implications of AI applications and ensuring alignment with institutional values and regulatory requirements. Transparency mechanisms, including clear reporting and documentation, will be used to demonstrate and support clear decision-making. Training and awareness programs for staff and students will be used to build a culture of responsible AI use across all learning, teaching and operational activities.

Continuous monitoring and auditing practices will be utilised across university departments to assess AI systems for bias, errors, and performance issues. These evaluations should inform ongoing improvements and adaptations.

Breaches and complaints

Breaches of these Guidelines will be addressed under the Staff Code of Conduct or the Student Code of Conduct as appropriate. For complaints regarding AI usage please follow the Complaints Management Procedure.

Legislative context

  • Federation University Act 2010
  • Federation University Statute 2021
  • Federation University Regulations 2021
  • Australian Skills Quality Authority (ASQA) Standards for Registered Training Organisations 2015
  • Australian Qualifications Framework (AQF)
  • National Vocational Education and Training Regulator Act 2011
  • The Higher Education Standards Framework (Threshold Standards) 2021
  • Other Federal and State legislation and regulations as appropriate

Responsibility

  • The Vice-Chancellor and President (the Approval Authority) is responsible for monitoring the implementation and outcomes of this guideline.
  • The Chief Operating Officer (as the Document Owner) or delegate is responsible for maintaining the content of this guideline document and scheduling its review.

Promulgation

The Artificial Intelligence Guidelines will be communicated throughout the University via:

  • A FedNews announcement and on the ‘Recently Approved Documents’ page on the University’s Policy Central website.
  • Distribution of emails to Deans / Directors / Managers / University staff.
  • Documentation distribution, e.g. posters, brochures.
  • Notification to Institutes, Schools and Business Units.

Implementation

The Artificial Intelligence Guidelines will be implemented throughout the University via:

  • An Announcement Notice via the FedNews website; and
  • Staff and student induction/training sessions.