PRIN 2026 Research Dossier

TECHNE

Neurocognitive Investigation of how Humans Grasp Technologies

A research agenda seeking to define how the human brain enables the comprehension, manipulation, and cultural transmission of technologies.

Duration 36 months
ERC Domains SH4 The Human Mind and Its Complexity
Operational units 4
Core facilities 4
Flagship experiments 4
Project at a Glance

Immediate orientation for scientific, operational, and evaluative review.

36 months

Integrated delivery horizon

Three years with milestone-driven oversight

4 units

An interdisciplinary national consortium

Operational roles mapped to experiments and outputs

4 platforms

Core facilities in continuous use

Behavioural, eye-tracking, HD-EEG, and neuroimaging pipelines

12 outputs

Planned scientific and translational deliverables

Protocols, datasets, papers, and open resources

Consortium budget EUR 920,000
Committed person-months 132
Planned participant observations 396
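The headline figures above can be cross-checked against the per-unit footprints and per-experiment sample sizes quoted later in this dossier; a minimal consistency sketch:

```python
# Cross-check of the headline figures against the unit footprints and
# experiment sample sizes reported elsewhere in this dossier.

unit_budgets_eur = {            # from the "Unit footprint" blocks
    "UNISOB Naples": 285_000,
    "Padua Imaging": 235_000,
    "Bologna EEG": 210_000,
    "Rome Modelling": 190_000,
}
unit_person_months = {
    "UNISOB Naples": 42,
    "Padua Imaging": 34,
    "Bologna EEG": 30,
    "Rome Modelling": 26,
}
experiment_samples = {          # EXP-04 is an integration stream with no new sample
    "EXP-01": 180,
    "EXP-02": 96,
    "EXP-03": 120,
}

assert sum(unit_budgets_eur.values()) == 920_000   # consortium budget
assert sum(unit_person_months.values()) == 132     # committed person-months
assert sum(experiment_samples.values()) == 396     # planned observations
```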
Research Logic

The scientific narrative is organised as evidence, not ornament.

Why This Project, Why Now

A missing account of how humans reason about technology

The project addresses a gap between cognitive theories of tool use and the rapidly changing reality of digital, hybrid, and distributed technologies.

Current research explains fragments of technological behaviour, yet lacks a unified account of how people build causal models of tools, transfer know-how between physical and digital settings, and adapt when technologies become collaborative, opaque, or algorithmically mediated.

TECHNE positions this question at the intersection of cognitive neuroscience, experimental psychology, and computational modelling. It asks how technological reasoning is represented, which neural systems sustain it, and how it can be measured across ecologically meaningful tasks.

Scientific Logic

Four objectives connect theory, methods, and measurable outputs

The scientific programme is structured as a chain from theoretical specification to experimentation, modelling, and translational release.

  • Objective 1. Define the cognitive architecture of technological reasoning across physical and digital tools.
  • Objective 2. Identify behavioural and neural markers of transfer, adaptation, and failure in complex tool-mediated tasks.
  • Objective 3. Model cross-unit evidence using shared protocols, harmonised datasets, and comparative pipelines.
  • Objective 4. Deliver reusable experimental assets, open materials, and policy-facing outputs for research and training.

Each objective is tied to specific experiments, facilities, milestones, and measurable outputs, reducing ambiguity around feasibility and expected contribution.

Governance and Feasibility

Coordination is designed as an operational research office, not a symbolic layer

Leadership, risk review, ethics, data stewardship, and timeline management are embedded in the work plan.

The coordinating unit manages a shared protocol calendar, a quarterly steering cycle, milestone reviews, and quality checks on data, preregistration, and recruitment progress. Unit leaders participate in monthly implementation meetings and structured decision gates at each major milestone.

Risk mitigation is tracked at the experiment and work-package level, with escalation paths for recruitment delays, equipment access, and protocol drift. This governance model is designed to reduce operational uncertainty while preserving scientific agility.

Open Science and Legacy

Impact is framed as scientific gain, methodological reuse, and territorial capacity building

The project is not only expected to publish; it is expected to leave behind reusable scientific infrastructure.

TECHNE will release protocol packages, harmonised metadata standards, curated experimental materials, analysis templates, and dissemination assets. The objective is to improve reproducibility and enable later work across cognitive neuroscience, educational technology, and human-technology interaction.

Public-facing outputs are complemented by training activity, cross-unit mentoring, and targeted dissemination for academic, clinical, and territorial stakeholders.

Units and Complementarity

Each unit has a visible operational role, resource footprint, and research contribution.

UNISOB Naples · Principal Investigator

Unit 01. Coordinating Unit for Behavioural and Translational Cognition

Università Suor Orsola Benincasa · Naples, Italy

Lead: Giovanni Federico, PhD

This unit coordinates the consortium and leads its behavioural and eye-tracking programme, anchoring the causal-model and tool-use paradigms that form the project's shared behavioural backbone and its translational outputs.

Expertise

  • Behavioural experimentation and high-precision eye-tracking
  • Tool-use, affordance, and causal reasoning paradigms
  • Consortium coordination and translational cognition

Unit footprint

  • EUR 285,000 budget
  • 42 person-months
  • 1 facility
  • 1 linked experiment
Facility profile

Behavioural and eye-tracking suite supporting causal inference tasks, rapid task prototyping, and integrated multimodal response logging.
Padua Imaging · Neuroimaging and structural modelling lead

Unit 02. Neuroimaging and Connectomics Core

University of Padua · Padua, Italy

Lead: Associated PI

This unit brings MRI-informed modelling, structural connectivity estimation, and cross-modal integration to the programme. Its role is to connect behavioural signatures of technological reasoning with neurobiological constraints and individual-difference structure.

Expertise

  • fMRI-informed modelling
  • Structural connectivity and lesion-informed inference
  • Cross-modal harmonisation pipelines

Unit footprint

  • EUR 235,000 budget
  • 34 person-months
  • 1 facility
  • 1 linked experiment
Facility profile

3T MRI access, connectivity processing pipelines, reproducible analysis workstations, and advanced statistical support.

Bologna EEG · Temporal dynamics and mechanistic markers

Unit 03. HD-EEG and Neural Dynamics Laboratory

University of Bologna · Bologna, Italy

Lead: Associated PI

This unit measures the fast temporal structure of technological reasoning, focusing on planning, monitoring, and error recovery during tool-mediated tasks. It strengthens the proposal by making dynamic evidence directly comparable across experiments.

Expertise

  • High-density EEG acquisition and analysis
  • Error monitoring and adaptive control
  • Multimodal integration with behavioural markers

Unit footprint

  • EUR 210,000 budget
  • 30 person-months
  • 1 facility
  • 1 linked experiment
Facility profile

256-channel HD-EEG, synchronised stimulus control, behavioural logging, and protocol quality assurance.

Rome Modelling · Computational integration and reproducibility

Unit 04. Computational Modelling and Open Infrastructure

Sapienza University of Rome · Rome, Italy

Lead: Associated PI

The modelling unit operationalises project-wide inference, builds shared data standards, and produces open analytical resources that make the consortium outputs reusable after project completion.

Expertise

  • Bayesian and comparative modelling
  • Open-science infrastructure
  • Reusable software and metadata standards

Unit footprint

  • EUR 190,000 budget
  • 26 person-months
  • 1 facility
  • 1 linked experiment
Facility profile

Secure compute, version-controlled pipelines, harmonised repositories, and dissemination-oriented documentation.

Facilities and Infrastructure

Infrastructure is presented as enabling capacity with clear methodological value.

Behavioural platform

Behavioural and Eye-Tracking Suite

Naples

A flexible suite for causal inference, tool-use simulation, and multimodal response capture, designed to support tightly controlled behavioural protocols and ecologically rich tasks.

  • High-precision eye-tracking
  • Rapid task prototyping
  • Integrated behavioural logging
Imaging facility

MRI and Connectomics Platform

Padua

The imaging platform supports structural and functional inference related to technological reasoning, transfer, and network-level organisation.

  • 3T MRI access
  • Connectivity estimation
  • Cross-modal data fusion
Electrophysiology

HD-EEG Core Facility

Bologna

An advanced electrophysiology core for capturing rapid planning, monitoring, and adaptation signals during complex task execution.

  • 256-channel acquisition
  • Time-frequency analysis
  • Performance-linked event coding
Computational infrastructure

Open Modelling and Data Backbone

Rome

A project-wide infrastructure for harmonised datasets, model comparison, reproducible pipelines, and release-ready resources.

  • Reproducible compute environment
  • Metadata governance
  • Reusable analysis templates
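The harmonised-dataset and metadata-governance roles above can be pictured as a single shared record type used by every unit. The sketch below is illustrative only: its field names are assumptions for the sake of example, not the consortium's actual standard.

```python
# Illustrative sketch of a harmonised per-session metadata record.
# Field names are hypothetical examples, not the project's real schema.

from dataclasses import dataclass, asdict

@dataclass
class SessionRecord:
    unit: str            # e.g. "Bologna EEG"
    experiment: str      # e.g. "EXP-03"
    participant_id: str  # pseudonymised identifier
    modality: str        # "behavioural" | "eye-tracking" | "hd-eeg" | "mri"
    project_month: int   # M1-M36, for milestone reporting

record = SessionRecord("Bologna EEG", "EXP-03", "P-042", "hd-eeg", 15)
print(asdict(record))  # plain dict, ready for a version-controlled repository
```

A shared record type of this kind is what makes cross-site comparability checkable by machine rather than by convention.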
Experiments

Methodological seriousness is made legible through questions, methods, timing, and outputs.

EXP-01 M2-M10 · Q1-Q4

How do participants construct causal models when tool affordances are visible, hidden, or algorithmically mediated?

This experiment compares physical-tool, hybrid-tool, and interface-based tasks to identify where transfer succeeds and where reasoning collapses. The design creates a common behavioural backbone for the full project.

  • Lead unit: UNISOB Naples
  • Sample size: 180
EXP-02 M7-M18 · Q3-Q6

Which neurobiological systems support abstraction from specific tool interactions to general technological structure?

This stream connects behavioural measures to neuroimaging-informed modelling, focusing on abstraction, planning, and domain transfer.

  • Lead unit: Padua Imaging
  • Sample size: 96
EXP-03 M13-M23 · Q5-Q8

Which temporal markers predict successful adaptation when technological plans fail or must be revised?

The HD-EEG stream examines rapid monitoring and recovery under increasing technological complexity, linking behavioural disruptions to temporally precise neural signatures.

  • Lead unit: Bologna EEG
  • Sample size: 120
EXP-04 M17-M31 · Q6-Q11

How can heterogeneous evidence be integrated into reusable models of technological reasoning and transfer?

This stream integrates behavioural, imaging, and electrophysiological evidence into reusable models and project-wide dissemination assets.

  • Lead unit: Rome Modelling
Timeline and Delivery Evidence

A reviewer-grade plan linking work packages, deadlines, and milestone review points.

Project Schedule

4 work packages are visualised directly from relative project months and quarters.

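The paired month and quarter labels used throughout this plan (M4 · Q2, M13 · Q5, M20 · Q7, M30 · Q10) follow one rule: each quarter spans three project months. A minimal sketch of the conversion:

```python
import math

def month_to_quarter(month: int) -> int:
    """Map a project month (M1-M36) to its quarter (Q1-Q12)."""
    return math.ceil(month / 3)

# Milestone dates as quoted in the work-package plan
for month, quarter in {4: 2, 13: 5, 20: 7, 30: 10}.items():
    assert month_to_quarter(month) == quarter
```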
WP1 M1-M7 · Q1-Q3

Theory, harmonisation, and shared protocols

WP1 establishes the conceptual and operational baseline for all later experiments.

Objectives

Consolidate conceptual architecture, shared measures, governance rules, and protocol standards across the consortium.

Deliverables

  • Protocol handbook
  • Governance playbook
  • Shared metadata template

KPIs

  • All protocols approved
  • Common task structure released
  • Data standards adopted by all units

Execution Frame

  • Lead unit: UNISOB Naples
  • M1-M7 · Q1-Q3
M1. Shared protocol charter approved M4 · Q2
WP2 M5-M17 · Q2-Q6

Behavioural and multimodal experimentation

WP2 delivers the empirical backbone of the project, including eye-tracking, behavioural signatures, and shared task variants.

Objectives

Run the core behavioural and multimodal experiments that operationalise technological reasoning across contexts.

Deliverables

  • Benchmark dataset
  • Experiment reports
  • Cross-unit protocol update

KPIs

  • Recruitment on schedule
  • Cross-site comparability confirmed
  • Primary behavioural outcomes locked

Execution Frame

  • Lead unit: UNISOB Naples
  • M5-M17 · Q2-Q6
M2. Behavioural benchmark dataset locked M13 · Q5
WP3 M10-M25 · Q4-Q9

Neural systems and computational integration

WP3 tests neural and computational accounts of abstraction, monitoring, and transfer.

Objectives

Explain behavioural results through imaging, HD-EEG, and comparative modelling.

Deliverables

  • Imaging derivatives
  • EEG marker framework
  • Integrated modelling release

KPIs

  • Cross-modal models compared
  • Shared inference pipeline complete
  • Mechanistic manuscript drafted

Execution Frame

  • Lead unit: Padua Imaging
  • M10-M25 · Q4-Q9
M3. Imaging and EEG integration checkpoint M20 · Q7
WP4 M18-M36 · Q6-Q12

Impact, open science, and dissemination

WP4 packages the project into open-science outputs, training resources, and reviewer-ready impact evidence.

Objectives

Transform project outputs into reusable scientific, educational, and territorial resources.

Deliverables

  • Open resources portal
  • Dissemination kit
  • Legacy and sustainability plan

KPIs

  • Resources publicly released
  • Training events delivered
  • Impact documentation complete

Execution Frame

  • Lead unit: Rome Modelling
  • M18-M36 · Q6-Q12
M4. Open infrastructure release candidate M30 · Q10
Automatic Figures

Charts are generated from structured project records, not manually redrawn.

  • Budget distribution
  • Person-month commitment
  • Participant observations by experiment
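As a concrete illustration of figures derived from records rather than redrawn by hand, the budget-distribution data can be recomputed directly from the unit budgets quoted in this dossier (the plotting backend itself is omitted here):

```python
# Sketch: derive the "Budget distribution" figure data from structured
# unit records, as quoted in the unit footprints of this dossier.

unit_budgets_eur = {
    "UNISOB Naples": 285_000,
    "Padua Imaging": 235_000,
    "Bologna EEG": 210_000,
    "Rome Modelling": 190_000,
}

total = sum(unit_budgets_eur.values())
shares = {unit: round(100 * b / total, 1) for unit, b in unit_budgets_eur.items()}
print(shares)  # percentage share per unit, ready for a pie or bar chart
```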
Impact Pathways

Scientific, societal, territorial, and open-science value are articulated separately.

science

A unified account of technological reasoning

The project integrates behavioural, neural, and computational evidence into a coherent explanation of how people reason about tools and technology.

4 Integrated empirical streams
society

Transferable evidence for training and human-technology adaptation

Outputs will inform educational and applied contexts where technological understanding, error monitoring, and adaptive planning matter.

3 Target stakeholder clusters
territory

Infrastructure growth across the national research network

The project builds interoperable methods and assets that strengthen capacity across participating sites and beyond.

4 Sites aligned through shared standards
open-science

Reusable protocols, datasets, and release-ready pipelines

TECHNE is designed to leave behind a tangible methodological legacy that can be inspected, reused, and extended.

12 Planned public-facing outputs
Project Updates

Operational signals and delivery checkpoints.

Platform Apr 2026

Review-ready platform scaffold initialised

The dossier has been configured to expose project logic, facilities, experiments, impacts, and editable evidence blocks in a reviewer-oriented structure.

Governance Nov 2026

Cross-unit protocol handbook drafted

A unified protocol handbook aligns terminology, metadata rules, and reporting standards across all operational units.

FAQ

Quick answers for reviewers and project editors.

Can the dossier content be edited?

Yes. The admin panel supports structured editing of narrative sections, units, facilities, experiments, work packages, milestones, impacts, gallery content, updates, and general project metadata.

Are the figures generated automatically from project data?

Yes. Charts are generated from database records such as unit budgets, person-months, work-package dates, and experiment sample sizes.

Can images be attached to content?

Yes. Each major content type includes an image field with upload support and live preview within the admin panel.