
Humans “Create Meaning”: UNIMED Study Reveals How We Understand AI
The rapid development of language-based artificial intelligence is transforming how humans communicate. Yet, meaning in these interactions is still constructed by humans themselves. This is the key finding of a recent study led by Muhammad Natsir from Universitas Negeri Medan (UNIMED), along with Nadya Anggita Lubis, Fanny Agnesya Siagian, Sry Juniar Limbong, and Yoanne Simbolon. The study was published in 2026 in the International Journal of Integrative Research and explores how people interpret, understand, and emotionally respond to interactions with AI.

This research is significant because human–AI interaction has moved beyond simple information exchange. AI is now widely used in education, personal reflection, and decision-making. This raises a fundamental question: does AI truly understand humans, or do humans construct meaning from AI responses?
Background: The Illusion of Understanding in the AI Era
Advances in language-based AI have made conversations feel increasingly human-like. Many users feel understood by AI, even though these systems lack consciousness or intention. From a cognitive pragmatic perspective, meaning does not come solely from words but from human mental processes—such as interpreting context, filling information gaps, and inferring communicative intent. This study examines these processes through real user experiences.
Methodology: Exploring Real User Experiences
The research employed a qualitative approach using Interpretative Phenomenological Analysis (IPA), focusing on users’ subjective experiences when interacting with AI. Between 10 and 15 adult participants who regularly used chatbots in academic and professional contexts were interviewed in depth. Each interview lasted 45–60 minutes and was complemented by analysis of real AI interaction records. The data were analyzed using thematic analysis to identify patterns in how users construct meaning during AI interactions.
Key Findings: Three Ways Humans Make Sense of AI
The study identified three interconnected themes:
The Illusion of Understanding
Many users felt that AI “understood” them. However, this perception emerged because AI responses appeared coherent and contextually relevant.
The study found that:
- Users actively fill in missing meaning in AI responses
- Understanding is produced by human cognitive processes
- AI provides linguistic output, not actual meaning
In short, the sense of understanding comes from the user—not the AI.
Attribution of Intention and Politeness
Users often describe AI responses using human-like traits, such as:
- “helpful”
- “neutral”
- “too formal”
In reality, AI has no intentions or emotions.
The study shows that these attributions function as cognitive strategies to make interactions understandable. Users apply human communication norms—such as politeness—to interpret AI responses.
Emotional Distance and Ethical Reflection
Although users sometimes feel emotionally supported, they still maintain a clear boundary between themselves and AI.
Key observations include:
- Users are aware that AI is not human
- There are concerns about over-reliance on AI
- Interactions often trigger ethical reflection, especially in decision-making contexts
Interestingly, AI is also seen as a “safe conversational partner” because it does not judge, encouraging openness and exploratory thinking.
Implications: The Importance of Digital Literacy
This study has broad implications, particularly for education and everyday technology use.
Muhammad Natsir from Universitas Negeri Medan emphasizes that meaning in AI interaction is entirely user-constructed. This makes critical thinking skills essential.
Key implications include:
For education:
AI should be used as a thinking tool, not as an absolute authority.
For society:
Users need to understand that AI does not truly “understand,” to avoid misplaced trust.
For AI developers:
Systems should be designed transparently and avoid creating the illusion of human-like awareness.
Academic Insight
Muhammad Natsir and his team from Universitas Negeri Medan conclude that meaning in human–AI interaction “does not originate from AI’s communicative abilities, but is actively constructed through users’ inferential processes and awareness.”
Author Profiles
- Muhammad Natsir – Lead researcher, Universitas Negeri Medan; expertise in linguistics and cognitive pragmatics
- Nadya Anggita Lubis – Researcher, Universitas Negeri Medan; digital communication
- Fanny Agnesya Siagian – Researcher, Universitas Negeri Medan; language and human–technology interaction
- Sry Juniar Limbong – Researcher, Universitas Negeri Medan; education and language studies
- Yoanne Simbolon – Researcher, Universitas Negeri Medan; communication and linguistics
Source
Natsir, M., Lubis, N. A., Siagian, F. A., Limbong, S. J., & Simbolon, Y. (2026). Cognitive Pragmatics in Human–AI Interaction. International Journal of Integrative Research (IJIR), Vol. 4 No. 3, pp. 139–152.