Publications

Interactive Engineering Training Framework in Mixed Reality Environments

Christina-Georgia Serghides, Andreas Aristidou

Presented at: International Workshop on eXtended Reality for Industrial and Occupational Supports (XRIOS), part of the IEEE VR 2026 conference, Mar 2026

Disaster evacuation of the old city of Nicosia

Marios Stylianou, Marios Demetriou, Andreas Aristidou

Progress in Disaster Science, Special Issue: AI, Emerging Technologies, and Immersive Solutions for Disaster and Emergency Response, Elsevier, Mar 2026

Presented at: 8th International Disaster and Risk Conference, IDRC 2025, Nicosia, Cyprus

This paper presents a digital twin–based simulation of evacuation scenarios in Nicosia’s historic walled Old Town. It shows how dense urban morphology, gate blockages, and infrastructure changes affect evacuation efficiency, and highlights the need for coordinated, adaptive flow-management strategies to reduce disaster risk in historic cities.

DOI paper bibtex project page

DRUMS: Drummer Reconstruction Using MIDI Sequences

Theodoros Kyriakou, Panayiotis Charalambous, Andreas Aristidou

Presented at: 18th Annual ACM SIGGRAPH Conference on Motion, Interaction and Games, MIG 2025, Zurich, Switzerland, Dec 2025

DRUMS is a MIDI-driven system that generates expressive, full-body drumming performances, combining precise rhythmic accuracy with realistic hand, stick, upper-body, and facial movements. By integrating BiLSTM-based hand motion prediction, phrase-matched upper-body and facial expressions, and procedural foot control through a modular IK framework, our method produces visually convincing and musically aligned drummer animations for applications in digital performance.

DOI paper video bibtex project page

MPACT: Mesoscopic Profiling and Abstraction of Crowd Trajectories

Marilena Lemonari, Andreas Panayiotou, Theodoros Kyriakou, Nuria Pelechano, Yiorgos Chrysanthou, Andreas Aristidou, Panayiotis Charalambous

Computer Graphics Forum, Volume 44, Issue 6, Sep 2025

MPACT is a framework that transforms unlabeled crowd data into controllable simulation parameters using image-based encoding and a parameter-prediction network trained on synthetic image–profile pairs. It enables intuitive crowd authoring and behavior analysis, achieving high scores for believability, plausibility, and behavioral fidelity in evaluations and user studies.

DOI paper video code bibtex project page

Interactive Media for Cultural Heritage

Fotis Liarokapis, Maria Shehade, Andreas Aristidou, Yiorgos Chrysanthou

Springer Nature Switzerland, Jul 2025

This edited book explores the latest advances in interactive media for Digital Cultural Heritage research, covering topics from visual data acquisition to immersive experiences such as extended reality and digital storytelling. Structured into four sections, it offers theoretical discussions and diverse case studies, making it a valuable resource for researchers, scholars, and students interested in interdisciplinary approaches to cultural heritage preservation and exploration through emerging technologies.

DOI paper bibtex

Motion labelling and recognition: A case study on the Zeibekiko dance

Maria Skublewska-Paszkowska, Pawel Powroznik, Marilena Lemonari, Andreas Aristidou

Interactive Media for Cultural Heritage, Jul 2025

This work employs Spatial Temporal Graph Convolutional Networks (ST-GCN) to recognize and annotate folk dance movements, using the Greek Zeibekiko dance as a case study to address the challenges of motion segmentation in complex, expressive dances. By enabling accurate classification and annotation of motion-captured dance data, the method supports cultural heritage preservation, dance analysis, and educational applications.

DOI paper bibtex

DragPoser: Motion Reconstruction from Variable Sparse Tracking Signals via Latent Space Optimization

Jose Luis Pontón, Eduard Pujol, Andreas Aristidou, Carlos Andújar, Nuria Pelechano

Computer Graphics Forum, Volume 44, Issue 2, Apr 2025

Presented at: Eurographics 2025, EG'25 proceedings

DragPoser is a deep-learning-based motion reconstruction system that takes a variable set of sparse sensors as input and achieves high end-effector position accuracy in real time through pose optimization within a structured latent space. By incorporating a Temporal Predictor network built on a Transformer architecture, DragPoser surpasses traditional methods in precision, produces natural and temporally coherent poses, and remains robust and adaptable to dynamic constraints and varied input configurations.

DOI paper video code project page

CEDRL: Simulating Diverse Crowds with Example-Driven Deep Reinforcement Learning

Andreas Panayiotou, Andreas Aristidou, Panayiotis Charalambous

Computer Graphics Forum, Volume 44, Issue 2, Apr 2025

Presented at: Eurographics 2025, EG'25 proceedings

This paper introduces CEDRL (Crowds using Example-driven Deep Reinforcement Learning), a framework that models diverse and adaptive crowd behaviors by leveraging multiple datasets and a reward function aligned with real-world observations. The approach enables real-time controllability and generalization across scenarios, showcasing enhanced behavior complexity and adaptability in virtual environments.

DOI paper video code project page

Multi-Modal Instrument Performance: A musical database

Theodoros Kyriakou, Andreas Aristidou, Panayiotis Charalambous

Computer Graphics Forum, Volume 44, Issue 2, Apr 2025

Presented at: Eurographics 2025, EG'25 proceedings

This paper introduces the Multi-Modal Instrument Performances (MMIP) database, the first dataset to combine synchronized high-quality 3D motion capture, audio, video, and MIDI data for musical performances involving guitar, piano, and drums. It highlights the challenges of capturing and managing such multimodal data, offering an open-access repository with tools for exploration, visualization, and playback.

DOI paper video database

LEAD: Latent Realignment for Human Motion Diffusion

Nefeli Andreou, Xi Wang, Victoria Fernández Abrevaya, Marie‐Paule Cani, Yiorgos Chrysanthou, Vicky Kalogeiton

Computer Graphics Forum, e70093, Apr 2025

This work proposes LEAD, a method that combines latent diffusion with a realignment mechanism to create a semantically structured space for generating realistic human motion from natural language. It enables both high-quality motion synthesis and motion textual inversion, outperforming modern methods in realism, diversity, and alignment with textual input.

DOI

A novel multidisciplinary approach for reptile movement and behavior analysis

Savvas Zotos, Marilena Stamatiou, Sofia-Zacharenia Marketaki, Duncan J. Irschick, Jeremy A. Bot, Andreas Aristidou, Emily L. C. Shepard, Mark D. Holton, Ioannis N. Vogiatzakis

Integrative Zoology, Feb 2025

This paper introduces a multidisciplinary approach to studying reptile behavior, combining tri-axial accelerometers, video recordings, motion capture systems, and 3D reconstruction to create detailed digital archives of movements and behaviors. Using two Mediterranean reptiles as case studies, it highlights the potential of this method to advance research on complex and understudied behaviors, offering ecological insights and tools for behavioral analysis.

DOI paper bibtex

Deep convolutional generative adversarial networks in retinitis pigmentosa disease images augmentation and detection

Paweł Powroźnik, Maria Skublewska-Paszkowska, Katarzyna Nowomiejska, Andreas Aristidou, Andreas Panayides, Robert Rejdak

Advances in Science and Technology Research Journal, Volume 19, No. 2, pages 321–340, Nov 2024

This study leverages Deep Convolutional Generative Adversarial Networks (DCGAN) and a hybrid VGG16-XGBoost technique to enhance medical datasets, focusing on retinitis pigmentosa, a rare eye condition. The proposed method improves image clarity, dataset augmentation, and detection accuracy, achieving over 90% on key performance metrics and a 19% increase over baseline classification accuracy.

DOI paper bibtex

Identifying and Animating Movement of Zeibekiko Sequences by Spatial Temporal Graph Convolutional Network with Multi Attention Modules

Maria Skublewska-Paszkowska, Paweł Powroźnik, Marcin Barszcz, Krzysztof Dziedzic, Andreas Aristidou

Advances in Science and Technology Research Journal, Volume 18, No. 8, pages 217–227, Nov 2024

This study employs optical motion capture technology to document and translate the Zeibekiko dance into a 3D virtual environment. Using a Spatial Temporal Graph Convolutional Network with Multi Attention Modules (ST-GCN-MAM), the system accurately captures and classifies essential dance sequences by focusing on key body regions, enabling precise, realistic virtual animations with applications in gaming, video production, and digital heritage preservation.

DOI paper bibtex

Underwater Virtual Exploration of the Ancient Port of Amathus

Andreas Alexandrou, Filip Skola, Dimitrios Skarlatos, Stella Demesticha, Fotis Liarokapis, Andreas Aristidou

Journal of Cultural Heritage, Volume 70, pages 181–193 (November–December 2024), Sep 2024

This work focuses on the digital reconstruction and visualization of underwater cultural heritage, providing a gamified virtual reality (VR) experience of Cyprus' ancient Amathus harbor. Utilizing photogrammetry, our immersive VR environment enables seamless exploration and interaction with this historic site. Advanced features such as guided tours, procedural generation, and machine learning enhance realism and user engagement. User studies validate the quality of our VR experiences, highlighting minimal discomfort and demonstrating promising potential for advancing underwater exploration and conservation efforts.

DOI paper video bibtex project page

Design and Implementation of an Interactive Virtual Library based on its Physical Counterpart

Christina-Georgia Serghides, Giorgos Christoforidis, Nikolas Iakovides, Andreas Aristidou

Virtual Reality, Volume 28, Article Number 124, Springer, Jun 2024

This work explores the creation of a digital replica of a physical library using photogrammetry and 3D modelling. A Virtual Reality (VR) platform was developed to immerse users in a virtual library experience that can also serve as a community and knowledge hub. A perceptual study was conducted to understand how physical libraries are currently used, examine the user experience in VR, and identify the requirements and expectations for a virtual library counterpart. Five key usage scenarios were implemented as a proof of concept, with emphasis on 24/7 access, functionality, and interactivity. A user evaluation study confirmed its key attributes and future viability.

DOI paper database bibtex project page