With CSU's $17 Million OpenAI Contract Up for Renewal, a Faculty Rebellion Spreads Across Higher Education
A California State University campus. The system-wide $17 million OpenAI contract is up for renewal in June 2026, and thousands of faculty and students are urging administrators not to sign again, citing budget priorities and academic freedom concerns. WIKIMEDIA COMMONS / Daderot

It arrived, as so many institutional decisions do, in the form of an email. In February 2025, faculty at California State University campuses across the state opened their inboxes to find an announcement: the system had signed a $17 million deal with OpenAI, giving all 460,000 students and 63,000 faculty and staff access to ChatGPT Edu. Nobody had asked them. Nobody had warned them. And for many, the news landed like a slap.

"In February 2025, we all got an email out of the blue announcing the AI-Empowered CSU initiative that we hadn't heard anything about," said Martha Kenney, a professor of women's and gender studies at San Francisco State University. "In the middle of the budget crisis, it's best to invest in the humans that make the CSU system great, rather than buy in to Silicon Valley's hype."

San Francisco State had, by that point, already eliminated 615 lecturer positions over two years and offered buyouts to all tenured and tenure-track faculty. The CSU system had been staring down a potential $375 million state budget cut when the OpenAI contract was signed. To many on campus, the juxtaposition was impossible to ignore: colleagues losing their jobs while administrators wrote an eight-figure cheque to a Silicon Valley tech company.

Now, with the CSU contract up for renewal in June, a growing movement of thousands of faculty and students is urging the system not to sign again — and the California case is just one front in a widening national and international rebellion against top-down AI procurement decisions in higher education.

"From my perspective, the impacts on teaching and learning are beyond significant; they have the potential to unsettle the entire purpose of higher education."— Lori Emerson, Professor of Media Studies, University of Colorado Boulder

THE COLORADO STANDOFF

Hundreds of faculty, staff, and students at the University of Colorado system signed a formal letter of dissent earlier this month after the university entered a three-year, $2 million-per-year agreement with OpenAI in February, committing to provide ChatGPT Edu to more than 100,000 people across its four campuses, with a planned rollout by March 31.

Critics argued that the deal lacked transparency and technical oversight, and that campus leaders had not adequately addressed serious concerns about student privacy, academic integrity, corporate influence, and environmental sustainability. Faculty said they had not been meaningfully consulted before the agreement was finalised.

The privacy concerns were particularly pointed. While CU's contract states that student data remains university property and that OpenAI cannot use it to train its public models, faculty worry that anonymised or aggregated versions of student and faculty interactions could still be used by OpenAI for product development — effectively commercialising university activity. There were also concerns that Colorado's public records laws could expose private chat logs to law enforcement requests, potentially ending any reasonable expectation of academic privacy.

The faculty pushback won a partial victory. Under pressure from the Faculty Council, CU has delayed student access to ChatGPT Edu until at least August 14, the start of the fall semester, giving professors space to finish the academic year without being required to redesign their courses around AI access. Faculty and staff access proceeded on March 31 as originally planned.

For faculty critic Lori Emerson, a media studies professor at CU Boulder, the delay was meaningful but insufficient. The underlying issue, she argued, was not logistical but philosophical. The university's move, she said, "suggests that the integration of these AI products, particularly into higher education, is inevitable and that we must prepare our students for this world. But nothing is inevitable."

A PATTERN REPEATING ACROSS CAMPUSES

The CSU and CU situations are not isolated incidents. They reflect a pattern emerging across dozens of institutions as universities rush to announce AI partnerships — often framed in the language of workforce readiness and digital equity — while faculty and students find themselves sidelined from the decisions.

At the University of Southern California, a letter signed by 12 professors and sent to the student newspaper in November 2025 pointedly criticised the university's institutional ChatGPT subscription. "USC has told students it can't afford to pay the real people they trusted," the letter read. "Instead, it's buying them a pretty toy." Faculty who signed said they had not been consulted before the deal was made, raising what they described as fundamental concerns about shared governance. Writing professor Patti Taylor, one of the co-authors, said she had once been enthusiastic about generative AI's potential — but changed her view after studying its classroom effects. She cited emerging research on "deskilling," in which professionals who relied heavily on AI tools experienced measurable skill deterioration within months.

Across the Atlantic, more than 350 staff at the University of Edinburgh signed an open letter demanding their institution end its partnership with OpenAI entirely, describing the company's products as "unsafe" and "insecure." The letter cited multiple data breach incidents, ongoing litigation, concerns about OpenAI's labour practices, and a recently announced partnership between the company and the US Pentagon, which critics argue is fundamentally at odds with a public university's values. "We wish to express our concerns and ask that the relationship with OpenAI does not continue," the letter stated plainly.

THE DEEPER ARGUMENT: GOVERNANCE, NOT JUST TECHNOLOGY

What unites these protests across vastly different campuses and continents is less a blanket rejection of artificial intelligence than a pointed demand for democratic process. Faculty are not simply saying AI is bad. They are saying: we were not asked, we were not heard, and that matters.

According to a 2025 survey by the American Association of University Professors, 15 percent of faculty reported that their institution mandates the use of AI — and 81 percent said they are required to use learning management systems and other educational technology embedded with AI tools that they cannot turn off. The survey paints a picture of a profession in which technological choices are being made above and around faculty, not with them.

The Conference on College Composition and Communication — the world's largest professional organisation of writing educators — passed a formal resolution this month affirming the rights of students and faculty to refuse the use of generative AI in writing classrooms. The vote was overwhelming. "This is an academic freedom issue, and students and teachers should be able to make a choice," said Jennifer Sano-Franchini, an associate professor of English at West Virginia University and immediate past chair of the organisation. "Those claims — 'It's here to stay,' 'Students need it for their careers' — are all things we can unpack more. I'm not particularly convinced."

Faculty critics have also raised concerns that extend beyond the classroom, pointing to the significant environmental cost of running large language models at scale: water usage, power demands, and carbon footprint. Universities that champion sustainability goals, they argue, cannot square that commitment with mass AI adoption contracts.

"No policy document can reflect the way in which these corporate AI products are participating in dismantling the principles of public education that our universities were built on."— Faculty dissent letter, University of Colorado

WHAT FACULTY ARE ASKING FOR

Across the institutions where resistance has been most organised, faculty critics are not simply calling for contracts to be cancelled. They are calling for a different process — one in which faculty councils, student representatives, and academic governance bodies are meaningfully involved before deals are signed, not informed about them afterward.

Specific demands have included faculty-led ethics frameworks for AI use; clear, enforceable policies governing how AI tools interact with grading and assessment; transparent data agreements that specify exactly how student and faculty information will and will not be used; and a shift away from "productivity" metrics in favour of learning outcomes as the measure of educational success.

At CU, the Faculty Council's success in delaying student rollout offered a proof of concept: organised faculty pressure can move institutional decisions. Whether it can affect the terms of contracts — or prevent them from being signed in the first place — remains to be seen.

THE ROAD AHEAD

The CSU's June renewal deadline will be the next major test. With thousands of faculty and students on record opposing the deal, and with the system's financial pressures still very much present, CSU administrators face a choice between a well-resourced tech partnership and a faculty body that has made its views unmistakably clear.

For Martha Kenney in San Francisco, the answer is straightforward. The CSU system's identity — and its academic integrity — depends on investing in people, not platforms. Whether the people who sign the contracts agree remains, for now, an open question.