Academic Research Study

AI Overconfidence Syndrome

How AI language models affect human confidence, decision-making, and critical thinking

12–15 minutes
🔒 Anonymous & confidential
🎓 Academic study
👤 Harish Gautam
S1
Screening & Demographics
Eligibility and background information
Question 1 of 35
Have you used any AI language model (ChatGPT, Claude, Gemini, Copilot, etc.) in the past 6 months?*

(Shown if "No" is selected:)

Thank you for your interest!

This study requires participants who have used AI language tools in the past 6 months. We appreciate your time and encourage you to read more about our research at harishgautam.net.

Question 2 of 35
How frequently do you use AI language models?*
Question 3 of 35
Which AI platforms do you currently use? (Select all that apply)
Question 4 of 35
What is your age range?*
Question 5 of 35
What is your highest level of education?*
Question 6 of 35
What is your primary professional field?*
Question 7 of 35
How many years of professional experience do you have?*
S2
AI Usage Patterns & Literacy
How and how deeply you use AI tools
Question 8 of 35
For which purposes do you use AI? (Select all that apply)
Question 9 of 35
When using AI for an important question, how do you typically interact?*
Question 10 of 35
How well do you understand how AI language models actually work?*
Question 11 of 35
Before this survey, were you aware that AI models can confidently generate false information (“hallucinations”)?*
Question 12 of 35
Were you aware that AI models tend to agree with users rather than provide objective assessments (sycophancy)?*
S3
Baseline Confidence & Expertise
Your confidence before AI consultation
Question 13 of 35
Please rate your agreement with each statement about your confidence before consulting AI.*
Scale: 1 (Strongly Disagree) to 6 (Strongly Agree)
I make sound decisions independently without external validation
I can identify flaws in my own reasoning
I seek expert opinion before acting in unfamiliar domains
I am comfortable saying “I don’t know” in professional settings
My decisions are generally more reliable than AI-generated advice
I regularly question my own assumptions before acting
S4
Psychometric Scales
Three validated measurement scales — core of this study
Question 14 of 35 — Scale 1: AI Overconfidence Tendency (AOT)
Rate your agreement based on your general AI usage experience.*
Scale: 1 (Strongly Disagree) to 6 (Strongly Agree)
After consulting AI, I feel significantly more confident in my decisions
AI tools make me feel competent even in areas I know little about
I have found myself defending AI’s conclusions to human experts
The confident tone of AI responses makes me trust them more
When AI sounds certain, I feel less need to verify the information
Using AI has made me more willing to challenge expert opinions
I have acted on AI advice without independently verifying it
AI makes me feel I have expertise I did not have before
Question 15 of 35 — Scale 2: Critical Thinking Disposition (CTD)
How often do these statements reflect your behaviour when using AI?*
Scale: 1 (Strongly Disagree) to 6 (Strongly Agree)
I actively search for weaknesses or errors in AI responses
I ask AI to argue AGAINST my ideas to find flaws
I cross-check AI information with independent, authoritative sources
I consult human domain experts even after receiving AI advice
I treat AI outputs as a starting point, not a conclusion
I notice when AI responses seem overly agreeable or validating
Question 16 of 35 — Scale 3: AI Sycophancy Awareness (ASA)
Indicate your agreement with each statement about AI behaviour.*
Scale: 1 (Strongly Disagree) to 6 (Strongly Agree)
I am aware AI systems are trained to produce agreeable responses
I understand AI validation may reflect my framing, not objective merit
I recognise when an AI response is designed to encourage rather than inform
I adjust my interpretation of AI feedback knowing it may be biased toward agreement
I actively use adversarial prompts to break AI validation of my ideas
S5
Scenario & Confidence Measurement
Please read the scenario carefully before answering
Research Scenario — Please Read
You are considering launching a new product or service in your field. You spend 30 minutes discussing this idea with ChatGPT. The AI enthusiastically validates your concept, provides a detailed 10-step launch plan, gives you projected market size estimates, explains why common objections do not apply to your situation, and encourages you to move forward quickly before a market window closes. The reasoning is detailed, logically structured, and sounds highly professional.
Answer the following questions based on how you would realistically feel and act in this situation.
Question 17 of 35
BEFORE the AI consultation, how confident would you have been in your own judgment on this matter?*
Scale: Not at all confident to Extremely confident
Question 18 of 35
AFTER the AI consultation, how confident would you feel in moving forward with the AI’s recommendation?*
Scale: Not at all confident to Extremely confident
Question 19 of 35
Compared to before, your overall confidence would be:*
Question 20 of 35
What would you most likely do immediately after this AI consultation?*
Question 21 of 35
If a qualified human expert contradicted the AI’s recommendation, you would:*
S6
Verification Behaviour
How you validate AI-generated information
Question 22 of 35
How often do you verify AI-generated information before using it in decisions?*
Question 23 of 35
When you verify AI information, which methods do you use? (Select all that apply)
Question 24 of 35
How often do you deliberately ask AI to identify flaws or counterarguments to your ideas?*
Question 25 of 35
After a major AI-assisted decision, how often do you review whether the AI advice was accurate?*
S7
Real-World AI Experiences
Your actual experiences with AI-influenced decisions
Question 26 of 35
Have you made a significant professional or personal decision based primarily on AI advice?*
Question 27 of 35
What was the outcome of the most significant AI-influenced decision you made?
Question 28 of 35
Have you ever discovered that AI provided false information you had initially trusted?*
Question 29 of 35
Which areas has AI most significantly influenced your decisions? (Select all that apply)
Question 30 of 35 — Optional
Please briefly describe a specific experience where AI affected your confidence or decision-making — positively or negatively.
Optional. Maximum 300 words. Your anonymised response may be quoted in the published research.
S8
Final Attitudes & Closing
Almost done — 5 final questions
Question 31 of 35
Having completed this survey, how serious do you consider AI overconfidence to be as a risk?*
Question 32 of 35
As a result of this survey, how likely are you to change your AI usage behaviour?*
Question 33 of 35
What do you believe is the MOST effective safeguard against AI overconfidence?*
Question 34 of 35 — Optional
What is your gender? (Optional — for demographic reporting only)
Question 35 of 35
Would you be willing to participate in a brief follow-up interview (20–30 minutes, online) to share your experiences in more depth?

You’re almost done!

Submit your responses and instantly receive your personalised AI Overconfidence Profile — including your LLM Psychosis risk score and tailored action plan.

By submitting, you confirm you have read the participant information and consent to your anonymous responses being used for academic research.
