
Universities can’t block AI use in applications–but they can ensure fairness

Schools need to establish clear, consistent guidelines that integrate AI from the moment applicants apply through graduation day–and beyond.

By Kelly Dore, Acuity Insights | May 29, 2025


There’s no question that applicants are using AI to write their college applications. As of 2024, more than one-third of recent applicants said they used AI tools when applying to schools–and that’s just the ones willing to admit it. We can only expect these numbers to increase in future application cycles.

But here’s a question that’s more difficult to answer: Is using AI for college applications fair?

According to the applicants we recently surveyed, the answer is no–at least not as it currently stands. In fact, 69 percent of applicants said AI tools give certain applicants an unfair advantage.

But their perspective is a bit more nuanced. Applicants also don’t believe that AI tools should be banned entirely (or that schools would even be able to do so). The vast majority (74 percent) said applicants should be able to use AI during the application process to some extent; most just believe that some guidelines and rules are needed.

The AI era is here–and institutions will need to evolve alongside it. As AI becomes embedded in the application process, university leaders will need to reexamine traditional academic and admissions practices to strike a better balance between innovation and integrity.

How are applicants leveraging AI?

Applicants can ask ChatGPT to brainstorm ideas for essays or double-check their grammar and spelling with Grammarly. They can draft potential interview questions and develop a rough outline of responses to help practice their answers. Or they may simply use AI tools to research universities, find scholarships, and help keep track of deadlines.

Ultimately, applicants use AI tools for a simple reason: They make the complex, highly competitive application process easier and more accessible. Eight in 10 applicants who used AI said these tools were very effective in helping them throughout the admissions process.

For institutions, the potential benefits are also compelling. AI tools enable applicants to apply to more schools and aim higher: six in 10 applicants who used AI said they applied to more schools and to a higher caliber of schools. The result is that admissions teams gain a broader and more qualified pool of candidates, enabling them to attract top talent to their institutions.

In addition, universities can leverage AI to create efficiencies, such as updating the language on their website and adapting their mission statement to ensure better online discoverability when students use AI to search.

Yet, while there’s no shortage of possibilities, AI raises important ethical and practical questions. For example, if applicants submit AI-generated essays and materials, how can admissions teams effectively evaluate their writing and critical thinking skills? More fundamentally, how can they capture information that is less influenced by AI to gain a more authentic understanding of the applicant?

As university leaders work through these challenges, they will need actionable steps and clear guidelines that maintain the integrity of the admissions process while giving applicants access to the advanced tools they need to succeed in today’s AI landscape.

3 ways to adapt admissions processes for the AI era

Most applicants still do not use AI in the application process (or at least won’t admit that they do). But institutions face a pivotal moment as a growing number of applicants turn to AI tools, with even more likely to follow in the coming years.

It’s clear that both applicants and institutions will need to adapt to ensure that AI tools enhance the admissions process rather than detract from it. Based on our findings, there are three critical areas where institutions will need to focus their efforts:

1. Develop clear, practical policies and guidelines.

Many institutions lack clearly defined rules about AI tools in the application process. In fact, six in 10 applicants were confused about the extent to which they could use AI tools when applying to schools.

Applicants are craving practical guidelines and clearly defined boundaries around what constitutes acceptable and unacceptable AI usage. It’s up to admissions leaders to establish clear, practical policies and ensure applicants fully understand these rules. That may entail providing real-world examples of how to properly integrate AI outputs, instructions on how to disclose AI usage, and a list of approved or recommended AI tools, among other guidance.

Universities should also learn from one another. One way to do this is to come together as a community, develop best practices, and openly share with peers at other institutions what works, where they struggle, and how they’re evolving.

2. Establish consistency across academic programs.

Admissions is an important first opportunity to set expectations and educate applicants about academic integrity in the context of AI. But that’s only the start: Institutions also need to ensure consistent guidelines and expectations for how AI can be used in the application process, in the classroom, and throughout students’ entire academic careers.

As they work to establish guidelines, admissions teams can follow the lead of faculty and academic leaders who have experience implementing AI policies and best practices. AI guidelines in admissions should closely resemble the rules and guidelines that applicants will encounter in classes and coursework.

3. Reexamine the admissions process.

Even before ChatGPT, Bard, and other AI tools entered the mix, personal statements, essays, and recommendation letters offered limited reliability. Some applicants could rely on family or friends to help them draft and refine materials, while others lacked that built-in network of support.

The rise of AI presents an opportunity for institutions to reexamine their admissions practices and develop metrics that go beyond traditional written materials, testing, and recommendation letters. Many institutions find that defensible holistic admissions practices can provide a more comprehensive approach to assessing an applicant’s life experiences, diverse capabilities, and the full potential they bring to campus.

Approaching AI as a learning opportunity

AI tools aren’t going away in higher education. In fact, 82 percent of applicants believe their peers will continue using AI regardless of institutional policies.

But instead of banning AI outright–or leaving applicants to figure out best practices on their own–schools need to rethink their institutional practices and establish clear, consistent guidelines that integrate AI from the moment applicants apply through graduation day and beyond.

With the right rules and guidelines in place, AI can provide an opportunity to make admissions fairer and more beneficial for institutions and applicants alike. It will just take a commitment to learning and adapting–and that’s something higher education is uniquely positioned to do.

About the Author:

Kelly Dore is Co-Founder and VP of Science and Innovation at Acuity Insights.
