By: Tessa Moreland
As artificial intelligence continues to weave itself into nearly every aspect of modern life, one voice stands out for its clarity and humanity. Marisa Zalabak, an educational psychologist, AI ethicist, and futurist, has spent her career exploring how technology can enhance human potential without eroding the qualities that make us uniquely human. Her work, spanning collaborations with global organizations like IEEE, the United Nations, and the World Economic Forum, focuses on creating a framework for responsible innovation rooted in empathy, ethics, and education.
When asked how she sees AI shaping the future of learning, Zalabak explains that the transformation has already been underway for years—long before the public explosion of generative AI tools like ChatGPT. “AI had already shaped learning prior to the big jump in 2022,” she says. “Generative AI has impacted schools and education in both positive and negative ways.” The problem, she adds, is not with the technology itself, but rather with the lack of preparation for those who use it.
“The real potential benefits are hampered by a lack of real training for school leaders, educators, families, and students in the ethical use of AI,” she notes. Without this foundation, institutions risk deepening the social and psychological challenges already emerging from technology use. Concerns such as plagiarism, privacy breaches, digital addiction, and even unhealthy emotional attachments to AI systems are becoming increasingly urgent.
Yet, Zalabak remains optimistic about what is possible when technology is used responsibly. “There are fantastic uses of these technologies when used ethically,” she emphasizes. “Educational leaders, teachers, and families need better information and training on how to implement responsible uses of technology that can optimize learning while protecting themselves and the youth they serve.”
Her experience with organizations like IEEE has also given her a front-row seat to the complex process of building global standards for AI ethics. While many might assume these challenges are primarily technical, Zalabak believes the true difficulties lie in human diversity. “There is no one set of ethics because our world is beautifully diverse,” she explains. “Ethical standards are influenced by cultural, economic, geographic, environmental, and political beliefs.” The goal, she says, is to find alignment around human rights and dignity.
Today, many countries and alliances worldwide have formal AI policies or strategies in place, with more emerging each year. But Zalabak argues that progress depends on greater collaboration and education to harmonize standards that serve humanity collectively. “We need to align and harmonize standards globally that can help all humans thrive amid the constant evolution of AI technologies.”
A major focus of Zalabak’s work involves integrating social-emotional intelligence into the design of AI systems. She advocates for what she calls transdisciplinary collaboration, where technologists work alongside ethicists, psychologists, social scientists, and educators to design systems that prioritize human well-being from the start. “This work must happen at every phase,” she says. “It should never be an afterthought following deployment when we are forced to repair damage that could have been avoided.”
She also cautions against the trend of designing AI systems that mimic human beings too closely. “Programmed responses in chatbot-based systems should be designed differently, with more transparency,” she explains. “Users need reminders that they are interacting with a machine, not a person.” Without such boundaries, people can develop what she calls “artificial-human relationships,” which have already been linked to serious mental health issues, including reality distortion and even self-harm. Her team is now developing ethical practices and social-emotional education programs to teach individuals how to engage with AI in ways that improve, rather than diminish, quality of life.
When asked what innovations most excite her, Zalabak points to the transformative power of technology in healthcare, crisis prevention, education, and environmental restoration. “The innovations in these areas are inspiring,” she says. However, she is equally clear-eyed about the risks. “The greatest dangers are privacy breaches, the use of AI in weaponry and warfare, and the anthropomorphizing of AI systems—making machines seem human.” She warns that these trends enable the rise of unvetted technologies such as so-called “AI therapists” or “AI companions,” deepfakes, and other exploitative applications. For Zalabak, the antidote lies in what she calls “conscious, adaptive leadership” that insists on humanity-centered development.
Her message to leaders, educators, and policymakers is straightforward but profound: ask better questions. “Ask first, ‘Why are we using this technology? Who potentially benefits, and who could be harmed?’” she says. “Education, education, education.” She urges organizations to invest in advisors who can translate complex ethical issues into actionable insights and to establish AI ethics teams capable of continuous assessment and oversight.
Zalabak also highlights the importance of staying informed about the lesser-known consequences of AI use, including its environmental impact on global electrical grids and carbon emissions, as well as the depletion of precious water resources. “Until recently, many people didn’t realize how energy-intensive generative AI systems are,” she points out. “We need to remain aware of the ecological footprint of these technologies.”
Ultimately, Zalabak believes that building a responsible future for AI begins with rethinking how we teach and lead. “Learning to ask better questions, learning how to think instead of what to think, is essential for leaders, educators, and policymakers,” she says. “We are navigating a world of constantly emerging and changing capabilities. Our best tool is our capacity for reflection, empathy, and wisdom.”
Through her work, Marisa Zalabak reminds us that the true potential of technology lies not in what it can do, but in how consciously we choose to use it. The future of AI, she insists, must be as much about ethics and education as it is about innovation. And in that balance, humanity can find both safety and possibility.