Curricular Internships

Open Calls

This page lists the internship projects currently available in the Center for Cybersecurity of Fondazione Bruno Kessler (FBK). Please note that these are curricular internship projects (which do not include financial compensation) intended specifically for bachelor's and master's university students; they are not employment contracts. Please refer to jobs.fbk.eu/ for job offers and open positions.

Procedure

  1. Application: submit your application for the internship project you are interested in using the designated online form and providing the required information. Make sure to apply before the specified deadline. You are advised not to apply to more than two projects at the same time.
  2. Selection: project supervisors will review the applications and choose the most suitable candidate. If needed, they may request an oral interview during the selection process. Each project is evaluated independently.
  3. Results: once the selection process is complete, all applicants (both selected and not selected) will be notified of the outcome for the specific project.

For general inquiries, you can email internships-cs@fbk.eu. If you have specific questions about a project, please reach out to the project supervisor directly.

Please note that applications sent via email will not be considered.

Projects are listed starting with those that have the earliest submission deadlines.

Large Vision-Language Models for face-swapping detection ST

ID: p-2025-st-9

Published on: Monday, 15 December 2025

Deadline for Applications: Wednesday, 25 February 2026 at 23:59 (extended from Friday, 23 January 2026 at 23:59)

Description:

The generation and manipulation of facial images and videos have achieved increasingly hyper-realistic results in recent years. Modern face-swapping techniques can operate in real time and produce highly convincing results, making them one of the most widespread attack vectors for perpetrating impersonation-based scams and frauds online [1,2]. Technical means allowing for an early detection of compromised video evidence are therefore essential, and are now required by technical standards for sensitive applications such as remote identity proofing (see [3, Sec. 8.4.2]). A large body of research on face manipulation detection has led to the development of numerous datasets and detection approaches. Despite these efforts, deep-learning-based detectors often struggle to generalize to real-world scenarios that differ from the data distributions seen during training. This generalization problem remains one of the primary challenges in deploying reliable detection systems in practice [4].
A recent research direction explores the use of Large Vision-Language Models (LVLMs) for deepfake detection. These models possess strong generalization capabilities and can provide natural language explanations of their predictions, enhancing interpretability [5]. Such properties make LVLMs compelling candidates for robust and explainable face-swap detection.
This research aims to investigate the performance of LVLMs in detecting facial deepfakes on two novel datasets [6,7], assessing both their visual predictions and their textual explanations. Particular focus will be placed on the model's ability to identify and localize different levels of face-swapping artifacts, from clearly visible artifacts to more subtle, high-fidelity manipulations.
The student will survey related work, become familiar with LVLM architectures, and experiment with selected models on two novel deepfake detection datasets. Depending on time and interest, the analysis may also be extended from static images to videos, allowing the evaluation of LVLM performance in more dynamic scenarios.

Type: Internship + Thesis

Level: MSc

Supervisors: Riccardo Ziglio (rziglio@fbk.eu), Cecilia Pasquini (c.pasquini@fbk.eu)

Time frame: Starting in February/March 2026.

Prerequisites:

  • Proficiency in Python
  • Experience with PyTorch or other deep learning frameworks [preferred]
  • Basic knowledge of LLM architectures and related practical tools (e.g., LM Studio) [preferred]

Objectives:

  • Survey the existing literature on LVLMs for deepfake detection
  • Experiment with a selection of LVLMs
  • Investigate the generalization capabilities of LVLMs on novel datasets [6,7] in both the visual and textual domains

Topics: Artificial Intelligence, Large Language Models, Face Manipulation Detection

Notes: Depending on the student's needs, the activity may be carried out solely as a Thesis.

References:

  • [1] Threat Intelligence Report 2025: Remote Identity Under Attack • Link
  • [2] Finance worker pays out $25 million after video call with deepfake 'chief financial officer' • Link
  • [3] ETSI Technical Standard 119 461 "Electronic Signatures and Trust Infrastructures (ESI); Policy and security requirements for trust service components providing identity proofing of trust service subjects" • Link
  • [4] I. Amerini et al., "Deepfake Media Forensics: State of the Art and Challenges Ahead". arXiv:2408.00388, 2024. • Link
  • [5] Z. Huang, B. Xia, Z. Lin, Z. Mou, W. Yang, and J. Jia, "FFAA: Multimodal Large Language Model based Explainable Open-World Face Forgery Analysis Assistant", arXiv [cs.CV]. 2024. • Link
  • [6] C. Hegde, G. Mittal and N. Memon. "Gotcha: Real-time video deepfake detection via challenge-response". In IEEE European Symposium on Security and Privacy (EuroS&P), 2024. • Link
  • [7] R. Ziglio, C. Pasquini and S. Ranise, "Spotting Tell-Tale Visual Artifacts in Face Swapping Videos: Strengths and Pitfalls of CNN Detectors". In 13th International Workshop on Biometrics and Forensics (IWBF), Munich, Germany, 2025, pp. 1-6. • DOI, Complementary material

Automatic Security Testing Tool for Identity Management Protocols CLEANSE DAISY ST

ID: p-2026-st-1

Published on: Wednesday, 21 January 2026

Deadline for Applications: Friday, 20 February 2026 at 23:59

Description:

Identity Management (IdM) protocols support Single Sign-On (SSO), an authentication scheme that allows a user to access different services using the same set of credentials. Two of the best-known IdM protocols are SAML 2.0 SSO and OAuth 2.0/OpenID Connect. Several solutions for corporations (such as Google and Facebook) and for public administration (such as eIDAS and SPID) are based on IdM protocols. We propose to investigate and develop methodologies and tools for assessing the security and robustness of IdM implementations. This activity may include the definition of reusable testing patterns, the design and implementation of extensions or plugins for existing security testing tools—such as Micro-Id-Gym (MIG)—and the execution of automated security and conformance tests on IdM implementations.

Type: Internship + Thesis

Levels: BSc, MSc

Supervisors: Andrea Bisegna (a.bisegna@fbk.eu), Laura Cristiano (l.cristiano@fbk.eu)

Prerequisites: Basic knowledge of Python

Objectives:

  • Assess the security and robustness of IdM implementations, with a focus on SSO protocols such as SAML 2.0 and OAuth 2.0/OpenID Connect;
  • Develop methodologies and automated tools for security and conformance testing of IdM implementations, including extensions of MIG;
  • Identify vulnerabilities, misconfigurations, and non-conformities in real IdM implementations, providing actionable recommendations for their remediation.

Topics: Security testing, Identity management protocols, Security testing tools, Conformance testing