Artificial Intelligence (AI) technology is rapidly making its way onto campuses. The New York City Department of Education announced that its latest guidance on AI use in public schools will be released this month, with parents given an opportunity to provide feedback. However, some parents and education professionals say the regulations are coming too slowly, contributing to a rise in plagiarism and heightened risks to student privacy.
At a meeting of the Panel for Educational Policy (PEP) held last week, Miatheresa Pate, the Chief Academic Officer of the Department of Education, stated that the city government is developing the “guardrails for the next steps” to regulate the use of AI on campuses, emphasizing that parents can participate in providing feedback.
Several parents have pointed out that, in the absence of clear citywide policies, individual schools have adopted inconsistent standards for AI use, fueling concern in the community.
Some parents fear that technology companies may have access to and retain their children’s biometric data, posing risks of data leaks. Therefore, they are urging the government to set clear regulations, notify parents in advance, and provide an opt-out option.
A group of parents has jointly signed a petition advocating a two-year moratorium on all AI technologies in schools. The petition argues that, as the largest school district in the United States, New York City public schools should use their purchasing power and moral influence to protect students, rather than letting children become subjects of a “surveillance-style experiment.”
The education sector and parent groups have also criticized the Department of Education for its inconsistent stance and sluggish pace regarding AI policies. The Department initially prohibited the use of ChatGPT on campus shortly after its release, but later lifted the ban. Meanwhile, the teachers’ union collaborated with major tech companies to promote a training program on “responsible AI usage.”
In recent months, the Panel for Educational Policy has rejected numerous contracts due to concerns about AI technology. Members stated that tech companies are actively pitching products to the Department of Education, with “significant funds driving these tools into schools.” They believe that without a concrete policy in place, related technology contracts should not be approved.
Last week, the Panel for Educational Policy narrowly approved a contract allowing the educational software company Kiddom to provide online materials for supplemental reading and math courses. The contract had previously been rejected; it passed only after the company and Education Director Kamar Samuels assured the panel that the version being purchased included no AI features.
Samuels emphasized that this assurance was “crucial,” as the current priority is to avoid activating AI platforms on campuses. Abbas Manjee, co-founder of Kiddom and a former city public school teacher, noted that while AI can assist teachers, this version of the product does not use AI and includes privacy protection mechanisms.
The Department of Education has established two working groups, one focused on privacy issues and the other on comprehensive AI policies.
In a WNYC interview in January, Education Director Samuels expressed cautious optimism about AI, stating that the immediate task is to alleviate public fears about the technology while putting necessary safeguards in place. He stressed that, used appropriately, AI has the potential to reshape teaching methods and accelerate student learning.
