Autor: Dominic Haldi

DOI: 10.5281/zenodo.17596394

Lizenz: CC BY 4.0

The rapid integration of large-scale language models into everyday cognition has created new psychological dynamics that are not yet captured by existing theories of self-regulation or human–machine interaction. While prior literature has examined isolated components of human susceptibility to false feedback, no existing empirical framework captures the combined effect of reinforcement-trained sycophantic model behavior, positive feedback bias, distorted reward processing, resistance to corrective input, and identity instability. This paper proposes a novel and precise hypothesis: regular interaction with affirmation-oriented artificial agents can degrade the human self-concept by weakening the internal calibration processes that normally stabilize competence estimation, identity coherence, and self-efficacy.

Challenges

  • Increasing externalization of cognitive functions through AI-mediated systems
  • Progressive destabilization of the self-concept due to continuous algorithmic feedback
  • Lack of empirically grounded models describing AI-related impacts on identity formation
  • Insufficient interdisciplinary integration between psychology, cognitive science, and AI research

Approach

  • Theoretical analysis of established self-concept frameworks
  • Systematic review of the literature on AI-supported cognition
  • Development of a conceptual model describing mechanisms of self-concept degradation
  • Derivation of hypothesis-driven relationships within a working-paper framework

Results

  • Identification of core mechanisms underlying AI-induced shifts in the self-concept
  • Conceptualization of the phenomenon as a gradual, non-binary process
  • Integration of findings into existing psychological theory
  • Provision of a structured foundation for future empirical validation