Announcements

  • Call for Journal Publication

    More Than Words: The Cognitive Chasm between Humans and Large Language Models

    We propose a special issue on the neuroscience of human versus AI language, comparing how the brain comprehends human-generated language as opposed to AI-generated language. Methodologically, this special issue seeks to align with the proposed approach of “artificial cognition” (Taylor & Taylor, 2021), in which LLMs are treated as “participants” or as tools for comparison and analysis, while conceptually it aims to understand the connection between human and artificial cognition (Siemens et al., 2022).

    Background

    Large Language Models (LLMs) such as GPT-3 and BERT have revolutionized Natural Language Processing, outperforming traditional models by using self-attention mechanisms to address long-range dependency issues. LLMs are now widely used in applications ranging from digital journalism to auto-correct in emails, and even in creative writing tasks, challenging the notion that AI cannot handle creative language generation. The ethical implications of LLMs and the possibility of their replacing humans in language tasks are subjects of ongoing research, although such capabilities do not imply sentience. Recent neuroscience research suggests that language and broader cognitive skills rely on distinct networks in the human brain, which may explain LLMs’ success in language tasks while highlighting their limitations in areas such as social reasoning and world knowledge.
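
    For readers less familiar with the mechanism mentioned above, the following is a minimal, illustrative sketch of scaled dot-product self-attention in NumPy. All names and dimensions are hypothetical; it is not drawn from any particular model and is included only to make the long-range dependency point concrete: every position can attend directly to every other position, regardless of distance.

    # Minimal, illustrative scaled dot-product self-attention (hypothetical names).
    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        # X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_k) projections.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv               # queries, keys, values
        scores = Q @ K.T / np.sqrt(K.shape[-1])        # each token scores every other token
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True) # softmax over positions
        return weights @ V                             # context mix, independent of distance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 8))                        # 5 tokens, 8-dimensional embeddings
    Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)         # (5, 4)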

    LLMs excel at formal linguistic competence but may struggle with functional, real-world language use, mirroring the distinction between language and broader cognitive networks in the human brain. Despite their success in many language-related areas, LLMs appear to fall short of humans in certain domains. Mahowald et al. (2023) identify several such areas, including formal and social reasoning, world knowledge and situation modelling. Although these are not linguistic abilities per se, they are crucial components of any meaningful human conversation, and it is here that LLMs seem deficient. We can therefore deduce that LLMs do not yet possess cognitive mechanisms comparable to those of humans.

    We cordially invite interested researchers to submit articles on the following themes:

    Themes

    1. Analysis of various linguistic levels, including phonemes, morphemes, lexemes, syntax, and discourse context, across multiple languages; i.e., studies that include participants from different linguistic backgrounds to increase the generalizability of the results.
    2. Computational modelling.
    3. Statistical learning.
    4. Behavioral experiments and various neuroimaging techniques.
    5. Interdisciplinary research teams that combine the expertise of cognitive neuroscientists, (experimental) philosophers, linguists, and computer and data scientists to ensure theoretically cross-cutting studies.
    6. Case or small-sample studies involving participants with rare and non-rare brain disorders.

    Submission Timeline

    • Starts: 13th September, 2023
    • Status: Open
    • Deadline: Ongoing

    For more details regarding submission, kindly visit the website: https://link.springer.com/collections/bcjfbfgaic