Artificial Intelligence Guidelines

Policy code: CG2102
Policy owner: Chief Operating Officer
Approval authority: Vice-Chancellor and President
Approval date: 16 January 2026
Next review date: 15 January 2029

Purpose

These guidelines provide transparency on the use and interpretation of Artificial Intelligence (AI) for the purpose of learning, teaching, and research and for conducting and maintaining the operational functions of Federation University.

The rapid advancements in the field of AI, especially the recent emergence of Generative AI (GenAI), have created numerous opportunities for all to engage with AI tools that have a profound impact on productivity (Calvino et al., 2025). However, along with these benefits come significant risks to academic integrity, data security, and the role of universities in ensuring that graduates develop future employability skills (WEF, 2025) and can function as contributing members of the community. AI tools are also susceptible to bias and hallucinated responses, and many “are not transparent about how they collect and use inputted data” (Venaruzzo et al., 2023, p. 4). These AI guidelines will be updated continually to reflect the rapid changes to application and regulation that are anticipated; as such, they are not standalone and must be read in the context of other applicable University governance documents (including policies, procedures and work instructions).

Scope

These guidelines apply to all students, staff and other individuals associated with the University who are responsible for AI use in the context of research, learning, teaching and/or operational functions. They therefore take into consideration:

  • Awareness, literacy and usage of AI in the learning and teaching approaches conducted across the University
  • Assessment artefacts and modes applied to assess student learning 
  • Awareness, literacy and usage of AI in research
  • Development, approval, use and management of AI software, systems or platforms developed or created for use by Federation University
  • Approval, use and management of AI software, systems or platforms procured for use by Federation University

Definitions

Academic integrity: The honest and respectful engagement with the scholarship of learning, teaching, research and community. It is an essential moral code to be upheld by the academic community, inclusive of staff and students.

Artificial Intelligence (AI): Computer systems capable of performing tasks that typically require human intelligence, such as reasoning and decision-making. AI encompasses a wide range of technologies, including machine learning, deep learning, and natural language processing.

Acknowledgement: A short note appended to staff or student work specifying whether and how GenAI tools were used.

Generative Artificial Intelligence (GenAI): AI systems and tools that can create content such as text, images, audio, video, code, or other outputs in response to user prompts.

Hallucination: A response created by GenAI that, whilst presented as fact, is a fabrication.

Large Language Model (LLM): A neural network with a multitude of parameters, trained on vast amounts of data, often scraped from the internet.

Prompt Crafting/Engineering: The process of creating clear and concise statements for use with GenAI in order to garner the best possible response.

Research Integrity: Conducting research ethically, responsibly, and honestly. It includes a commitment to the generation of genuine knowledge, adhering to recognised research standards, and sharing results transparently and openly.

Guidelines statement

AI may be used at Federation University for functions that support the University’s Strategic Plan, including for administrative and operational functions and learning and teaching activities (Federation University Australia, 2021).

Ethical principles of AI usage

Federation University acknowledges the Australian Government’s voluntary Artificial Intelligence Ethics Principles, which encourage and guide AI use with a focus on the following:

  1. Human, societal, and environmental wellbeing: AI should bring beneficial outcomes for individuals, society, and the environment, both within Federation University and for the broader community. Positive and negative impacts are considered and accounted for throughout the AI lifecycle.
  2. Human-centred values: AI integration and use will support human rights, the autonomy of individuals, and diversity.
  3. Fair: AI systems will be accessible and inclusive at Federation University and will be monitored to ensure they do not perpetuate unfair discrimination, particularly for underrepresented and marginalised groups.
  4. Protective of privacy and security: AI systems and outputs will respect the privacy rights of staff and students at Federation University. Appropriate mitigation measures will be enforced to reduce the risk of potential security vulnerabilities, unintended application, and abuse.
  5. Reliable and safe: AI system usage at Federation University will be reliable and operate according to the intended purpose of the application. Appropriate risk mitigation measures will be in place to ensure users are not subject to unreasonable risk.
  6. Transparent: AI use will be disclosed responsibly to all Federation University staff and students so they understand when they are being significantly impacted by AI and when an AI system is engaging with them.
  7. Contestable: Where any AI system significantly impacts a member of the Federation University community, there will be a timely process to allow people to challenge the use or outcomes of the AI system.
  8. Accountable: Responsibility for the outcomes of an AI system will be acknowledged, and human oversight of all AI systems and their implementation will occur.
  9. Sustainable: AI systems are developed, deployed, and managed to minimise environmental impact, optimise resource utilisation, and foster long-term societal wellbeing.
  10. Explainable: AI systems provide understandable explanations for decisions and actions so users can comprehend the rationale behind AI-generated outputs, aiding trust-building, error detection, and accountability.
  11. Trusted: AI users can have confidence in the capabilities, decisions, and outcomes of AI systems.

The use of AI at Federation University must adhere to all applicable state and federal laws and meet regulatory requirements for both Higher Education and Vocational Education and Training.

Operational functions

Embracing the potential of AI, Federation University seeks to leverage this technology across its functions, extending beyond learning and teaching into operations, procurement, and support. The acceleration of AI development has expanded the range of available tools, offering significant opportunities to enhance efficiency and overall organisational productivity.

1. Operational efficiency, competitiveness and adaptability

Federation University supports a culture of continuous improvement with the usage of AI potentially enhancing operational efficiency across the University, in line with the University’s operational and strategic goals (Federation University Australia, 2021).

The adoption of AI offers the benefit of enhanced operational efficiency through streamlined processes, improved decision-making, and more responsive service delivery. Furthermore, it can also be implemented to identify issues, risks, opportunities, and to manage initiatives.

2. Student wellbeing and support

Federation University encourages the use of AI technology to enhance student wellbeing and support, with appropriate strategies for maintaining student privacy and obtaining consent to engage with AI content (including the ability to opt out if requested). AI-powered systems may also support the provision of course and career advice, academic assistance, and wellbeing support. AI systems will be used to enhance the student experience and to ensure the regulatory requirements of the University are continually met.

3. Appropriate protections and responsibilities

The University will put in place robust safeguards to protect data and uphold privacy. Robust protection measures will be included in operational plans relating to data collection, retention and sharing, and the associated privacy considerations will align with legislation and regulatory requirements. Where necessary, ethical consent will be sought for the collection and analysis of data.

Information Technology Systems will be regularly updated and enhanced to ensure the suitability, reliability and protection of infrastructure for AI.

Staff and students are responsible for any content produced or published that includes AI-generated material.

4. Development and procurement of AI tools

Higher Education (HE) institutions, including Federation University, play an important role in the advancement of AI technology, fostering innovation, and facilitating the exchange of knowledge and technology. The University supports both the development and procurement of AI tools to provide educational and/or commercial opportunities.

The University supports the purchase of AI technologies to enhance research capabilities, streamline administrative processes, and enrich educational experiences. The selection and purchase of AI tools demands a rigorous evaluation process that considers cost-effectiveness, scalability, ethical implications and alignment with existing systems. The purchase of AI tools is subject to the Finance Governance Procedural Manual - Procurement of Goods & Services, Corporate Purchasing Card, Travel, and Motor Vehicles.

Learning and teaching

The AI guidelines also align with the Federation University Artificial Intelligence Learning and Teaching ASSURE Framework expectations for the responsible integration of AI into learning and teaching.

1. Building awareness, understanding and Generative AI literacy

Federation University is committed to building AI literacy in all staff and students.

Federation University academic, HE and TAFE teaching staff have a crucial role to play in supporting learning and assessing the capability and competency of students. The evolving influence of AI is expected to reshape academic work, requiring teaching staff to adapt their practices as new technologies become embedded across the sector. However, human agency remains central to high-quality learning and teaching, in line with Federation University’s ASSURE Framework, which states that AI must enhance, not replace, human creativity, academic ownership, academic integrity, and critical thinking. AI works best when embedded responsibly within the teaching environment.

Students are encouraged to explore and use AI technologies as tools for learning and research under the guidance of teaching staff. Course and unit coordinators must clearly define the extent to which AI tools can be utilised in coursework. Trainers and assessors will provide guidance to VET students on when AI technology can be utilised in learning and assessment activities. Independent Capability assessment tasks must be invigilated for assurance of learning. For Integrated Application tasks, staff must consider how students can acknowledge AI use and explain this in the Course/Unit Description.

Staff are encouraged to stay informed about the ethical implications of AI and integrate responsible AI use into their teaching and assessment methods, and they will have access to training tools, relevant to their needs.

2. Learning, teaching and assessment design and practice

Federation University requires staff to provide clear and transparent communication to students regarding the use of AI as part of assessment practices. Staff may use AI tools to support the development of learning material and assessment design. They must also comply with institutional and legal requirements, ensuring that no copyright-restricted material or confidential information is uploaded into any external tools. AI tools can assist with improving workflow, generating examples and drafting resources; however, final decisions remain firmly with staff.

Integrated and ethical uses of AI tools need to be clearly stipulated and communicated to students within the Unit Description (HE) / Unit Outline (VET), as part of learning, teaching and assessment details. Assessments should consider the potential use of AI tools, ensuring that the assessment measures students' understanding and critical thinking skills. Assessment design at the course level will include a balance between Independent Capability and Integrated Application tasks. Learning activities will model ethical AI use, ensuring HE and VET students demonstrate transparent, reflective, human-centred judgement. Regular review cycles will use analytics and student feedback to refine AI-integrated pedagogy.

Where AI use is expected it is likely to impact the expectations made of students to demonstrate learning or attainment of learning at various stages of their studies. Students need to be reflective and critical users of the information provided by GenAI tools.

3. Student academic achievement

Federation University supports the use of AI tools to identify students who are ‘at risk’ early in their education. The University will use the data collected to report to funding and regulatory bodies as required. Students will be provided with information on available student support services and may access these as appropriate.

AI tools may also be used to identify high-achieving students for targeted intervention. High-achieving students may be encouraged to undertake further education, placements, or award applications as appropriate to their circumstances. AI analytics will contribute to continuous improvement cycles within student success programs.

Support strategies will be customised and reviewed annually to ensure equity and inclusivity. As a supported alternative to AI, Federation University provides human feedback via Peer Support Tutors, assessment and writing support through Learning Skills Advisers, and other programs.

4. Academic integrity and misconduct

Academic Integrity is the cornerstone of our educational community. AI tools, which can enhance learning and research opportunities, must be used with integrity and transparency. Where questions of Academic Integrity arise for staff or students, guidance can be found in the Academic Integrity Procedure. Where academic misconduct is suspected, the Student Misconduct Procedure provides guidance on the appropriate processes to undertake.

Disclosure and attribution of permitted AI use is recommended for students. It is recommended that students retain drafts, including AI prompts, as evidence of authentic learning. Integrity reviews will focus on learning intent and process, emphasising transparency and student explanation. These AI Guidelines will be reviewed regularly to ensure alignment with technological developments, sector standards, and the University’s Academic Integrity processes.

Research

AI may provide significant support for researchers, but it is important that ethical research is maintained. Researchers must be alert to the inherent risks to research integrity associated with the use of AI in research.

Federation University encourages the ethical use of AI technology in research. Staff and students involved in research play a crucial role in maintaining rigorous methodology, collaboration, innovation and publication. The role of researchers will be affected by ongoing changes in the AI landscape; however, research integrity must remain the cornerstone of research outputs at the University.

1. Research integrity, bias, and data sources

Research integrity encompasses the principled conduct of scientific inquiry, ensuring honesty, transparency, and ethical considerations throughout the research process and is crucial to the quality and reputation of the research conducted at Federation University.

When using AI, researchers are expected to maintain research integrity, considering AI-specific challenges such as bias mitigation, transparency in algorithmic decision-making, and ensuring the fairness and reliability of AI-generated outcomes. AI-generated content must be cross-verified with credible sources, as GenAI lacks the ability to perceive, rationalise or critique information and could produce inaccurate or biased content, i.e. “AI hallucination”, also known as confabulation. Thus, researchers must ensure the accuracy and reliability of all material generated.

Significant concerns exist about the potential for AI tools to introduce or amplify bias in research and decision-making. When used for brainstorming, these systems may reproduce gendered assumptions, cultural stereotypes and other distortions embedded in their training data, subtly influencing human judgement. Researchers must therefore recognise and interrogate these risks, critically evaluate AI-generated content against reliable sources, and take steps to mitigate embedded bias. Such vigilance ensures that AI-assisted work remains rigorous, inclusive and ethically sound. Whilst AI serves as a versatile tool, assisting exploration, comprehension, and innovation, researchers are expected to mitigate the risks of AI use through transparent use and acknowledgement.

2. Ethics, privacy and confidentiality in research

It is researchers’ responsibility to ensure compliance with research ethics requirements, particularly for projects involving sensitive or confidential data. AI tools are designed such that information or data fed into them is captured by the tool and may be released into the public domain, and potentially used for training future products, without researchers’ explicit knowledge or permission. This may result in the transmission of sensitive data outside the intended audience or outside Australia, even if restricted, as AI services may be hosted on servers located in other countries.

Authors should use AI tools responsibly, ensuring that their use does not violate privacy. This includes adhering to data governance and informed consent protocols, especially when AI tools are involved in processing personal or sensitive data. Uploading this information may violate ethical principles such as those outlined in the National Statement on Ethical Conduct in Human Research (2023) and other regulations. It could pose risks to participants and society, as AI could produce harmful or malicious data or content that could be used for unethical or illegal purposes such as theft, fraud, discrimination and misinformation. Researchers must implement robust measures to protect the privacy of individuals whose data is analysed or generated by AI.

Researchers must ensure the use of AI tools is clearly disclosed in their methodologies and ethics applications, detailing the AI's role in the research process, the kind of data it processed, and how decisions were influenced by AI outputs. This is particularly true in studies involving human participants, where the protection of these individuals must be weighed against potential benefits to the wider population. The rapid advancement of AI tools adds layers of complexity to these issues, introducing concerns about data governance, consent, and accountability.

University governance documents will support University Research Schools in their guidance of ethical AI use in research across the University. The extent to which AI can be used to aid research must be clearly defined by each Research School in relation to its disciplinary expertise and specific requirements. As AI is continually evolving, it is important that the University Research Schools remain well informed of the changes and potential impacts on ethical research within the sector.

3. Accountability

Accountability is an important aspect of research at Federation University. Authors are fully responsible for research integrity, including the accuracy of their publications. Note that a GenAI tool, being non-human, cannot be considered a co-author and cannot be assumed to be responsible for content. Researchers are encouraged to disclose their use of AI tools and remain attentive to the ethical implications and potential biases these tools may introduce. Such risks can be reduced through a strong commitment to transparency in research processes.

4. Research grant and publications

Researchers must check granting agencies’ guidelines when submitting a grant. The National Health and Medical Research Council and Australian Research Council each have policies on the use of GenAI for the purpose of crafting and reviewing grant proposals.

Researchers must not use GenAI to assess peer review material (e.g. grants, manuscripts, HDR student theses, ethics applications), as this may breach the requirement to maintain confidentiality of the content.

The authors must be fully aware of, and familiar with, the AI related rules of Federation University as well as the specific journal in which they wish to publish. While some journals might allow use of text and images generated by AI, some others do not allow its use unless express approval is obtained from the publishers. For further information on scholarly publishing and GenAI, please refer to the Federation University Library website.

5. Copyright, ownership and intellectual property

All creative content by a researcher must be their own work, unless otherwise acknowledged. When GenAI tools are used in research for generating content, analysing data, or providing insights, their role must be clearly and explicitly acknowledged.

Researchers must only input data into commercial GenAI tools that would also be appropriate to share with external organisations, companies, and competitors, to avoid the risk of reuse of the information. Researchers must refrain from submitting the following categories of data into commercial and external generative AI platforms or services:

  1. private/personal information, such as names, email addresses, student ID numbers, phone numbers, images, audio recordings, financial information such as bank account details and credit card numbers, login credentials, health and personal information, and other sensitive research data.
  2. data or information that is inherently confidential, commercial-in-confidence, human data, collectively owned as Indigenous Cultural and Intellectual Property, or that is protected by copyright or Federation University’s intellectual property. Researchers are advised not to share data or information with GenAI models that they would not normally make publicly available.

6. Higher Degree Research

HDR candidates are reminded that the purpose of a doctoral degree is to develop new knowledge and that the thesis reflects this original contribution to knowledge. GenAI tools may be used to support the writing process but must not be relied upon wholly to interpret data or to draft the thesis. Students may use more advanced GenAI models to repurpose published scientific manuscripts into presentations, translating the research for a different audience; however, under no circumstances may confidential information be uploaded into GenAI models.

GenAI tools must not be used to process examiner reports or to subsequently write up reports to the Thesis Examination Committee.

Prior to using GenAI tools, students must discuss this with their supervisor and/or their supervisory panel or other researchers with experience in GenAI to ensure adherence to the principles of responsible research.

If there is any doubt, HDR candidates are advised to contact the HDRCs and the Graduate Research School. Further, they are advised to carefully read the comprehensive information provided by Federation University’s ASSURE Framework (2025), which is aligned with TEQSA guidelines (2025).

7. Referencing

All AI-generated material used in the preparation of research content must be appropriately acknowledged in accordance with the University’s research policies. This may include the name of the AI tool, the version, and the date of interaction. Refer to the instructions from the Federation University Library for acknowledging content from AI tools. Knowingly using GenAI tools, either directly or via a third party, to generate, write or produce any work (paid or unpaid) without proper acknowledgement of the source is tantamount to deliberate cheating and may be treated as academic and research misconduct.

University defence mechanisms

Federation University is committed to meeting legal, ethical and regulatory requirements. To maintain its obligations, several defence mechanisms ensure data safety, privacy of staff and students, and educational and research integrity.

Defence mechanisms include the critical review of tools to understand their:

Purpose and Use: utilising tools that have a clear purpose and match the desired use requirements.

Reliability and trustworthiness: utilising tools developed by reputable, trustworthy providers, with verification of the accuracy and reliability of data outputs. This includes the provision of adequate support and training to ensure usability.

Legal and ethical considerations: ensuring the products chosen for use across the University maintain the privacy and security of staff and student data; clear intellectual property and copyright requirements; ensuring equity of access and sustainable practices.

Accountability and risk mitigation

Federation University has robust governance structures and practices to ensure the ethical use of AI across all aspects of learning, teaching, assessment, and operations. The Artificial Intelligence Working Group has been established to address all aspects of university AI usage. This group will provide academic and operational guidance and recommendations.

The Federation University Council serves as the ultimate accountable body, overseeing the strategic direction and ethical governance of AI initiatives. University governance documents, including policies and guidelines, will be regularly reviewed, and adapted to evolving AI technologies and regulatory changes.

Through established governance structures, such as Council and Academic Board sub-committees, the University ensures that appropriate approvals are obtained for AI projects, including considerations for data privacy, fairness, and potential societal impacts. These structures provide a framework for evaluating the ethical implications of AI applications and ensuring alignment with institutional values and regulatory requirements. Transparency mechanisms, including clear reporting and documentation, will be used to demonstrate and support clear decision-making. Training and awareness programs for staff and students will be used to build a culture of responsible AI use in all learning, teaching and operational aspects.

Continuous monitoring and auditing practices will be utilised across university departments to assess AI systems for bias, errors, and performance issues. These evaluations should inform ongoing improvements and adaptations.

Breaches and complaints

Breaches of these Guidelines will be addressed under the Staff Code of Conduct or the Student Code of Conduct as appropriate. For complaints regarding AI usage please follow the Complaints Management Procedure.

Legislative context

  • Federation University Statute 2021
  • Federation University Regulations 2021
  • Australian Skills Quality Authority (ASQA) Standards for Registered Training Organisations 2025
  • Australian Qualifications Framework (AQF)
  • Federation University Act 2010
  • National Vocational Education and Training Regulator Act 2011
  • The Higher Education Standards Framework (Threshold Standards) 2021
  • Other Federal and State Legislation and Regulations as appropriate

Responsibility

  • The Vice-Chancellor and President (the Approval Authority) is responsible for monitoring the implementation and outcomes of this guideline.
  • The Chief Operating Officer (as the Document Owner) or delegate is responsible for maintaining the content of this guideline document and scheduling its review.

Promulgation

The Artificial Intelligence Guidelines will be communicated throughout the University via:

  • A FedNews announcement and on the ‘Recently Approved Documents’ page on the University’s Policy Central website.
  • Distribution of emails to Deans / Directors / Managers / University staff.
  • Documentation distribution, e.g. posters, brochures.
  • Notification to Institutes, Schools and Business Units.

Implementation

The Artificial Intelligence Guidelines will be implemented throughout the University via:

  • An Announcement Notice via FedNews website; and
  • Staff and student induction/training sessions.

References

Australian Government Department of Industry, Science and Resources. (n.d.). Australia’s AI Ethics Principles. https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-principles/australias-ai-ethics-principles

Calvino, F., Reijerink, J., & Samek, L. (2025). The effects of generative AI on productivity, innovation and entrepreneurship. OECD Artificial Intelligence Papers, No. 39. OECD Publishing, Paris. https://doi.org/10.1787/b21df222-en

Federation University Australia. (2025). Federation University Artificial Intelligence Learning and Teaching ASSURE Framework (Version 4.4). Federation University Australia. Restricted access: https://federationuniversity.sharepoint.com/teams/ArtificialIntelligence  

Federation University Australia. (2021). Strategic plan 2021–2025. https://federation.edu.au/strategy/strategic-plan

National Health and Medical Research Council. (2023). National Statement on Ethical Conduct in Human Research 2023. https://www.nhmrc.gov.au/about-us/publications/national-statement-ethical-conduct-human-research-2023

Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381, 187–192. https://doi.org/10.1126/science.adh2586

Tertiary Education Quality and Standards Agency. (2025). Gen AI knowledge hub (Higher Education Good Practice Hub). https://www.teqsa.gov.au/guides-resources/higher-education-good-practice-hub/gen-ai-knowledge-hub  

Thompson, K., Corrin, L., & Lodge, J. M. (2023). AI in tertiary education: progress on research and practice. Australasian Journal of Educational Technology, 39(5), 1–7. https://doi.org/10.14742/ajet.9251 

Universities Australia. (2024). Submission to the House of Representatives inquiry into the digital transformation of workplaceshttps://universitiesaustralia.edu.au/wp-content/uploads/2024/07/UAS-SUBMISSION-TO-THE-HOUSE-OF-REPRESENTATIVES-INQUIRY-INTO-THE-DIGITAL-TRANSFORMATION-OF-WORKPLACES.pdf 

Venaruzzo, L., Ames, K., & Leichtweis, S. (2023). Embracing AI for student and staff productivity. Australasian Council on Open Distance and eLearning (ACODE) White Paper. Canberra, Australia. https://www.acode.edu.au/pluginfile.php/13426/mod_resource/content/5/ACODE88-Whitepaper.pdf

World Economic Forum. (2025, January 7). The future of jobs report 2025: 3. Skills outlook. https://www.weforum.org/publications/the-future-of-jobs-report-2025/in-full/3-skills-outlook/