Publications

MPACT: Mesoscopic Profiling and Abstraction of Crowd Trajectories

Marilena Lemonari, Andreas Panayiotou, Theodoros Kyriakou, Nuria Pelechano, Yiorgos Chrysanthou, Andreas Aristidou, Panayiotis Charalambous

Computer Graphics Forum, May 2025

MPACT is a framework that transforms unlabelled crowd data into controllable simulation parameters using image-based encoding and a parameter prediction network trained on synthetic image–profile pairs. It enables intuitive crowd authoring and behavior analysis, achieving high scores in believability, plausibility, and behavioral fidelity across evaluations and user studies.

DOI paper video code project page

DragPoser: Motion Reconstruction from Variable Sparse Tracking Signals via Latent Space Optimization

Jose Luis Pontón, Eduard Pujol, Andreas Aristidou, Carlos Andújar, Nuria Pelechano

Computer Graphics Forum, Volume 44, Issue 2, Apr 2025

Presented at: Eurographics 2025, EG'25 proceedings

DragPoser is a deep-learning-based motion reconstruction system that takes a variable set of sparse sensors as input, achieving high end-effector position accuracy in real time through pose optimization within a structured latent space. By incorporating a Temporal Predictor network with a Transformer architecture, DragPoser surpasses traditional methods in precision, producing natural and temporally coherent poses while remaining robust and adaptable to dynamic constraints and varied input configurations.

DOI paper video code project page

CEDRL: Simulating Diverse Crowds with Example-Driven Deep Reinforcement Learning

Andreas Panayiotou, Andreas Aristidou, Panayiotis Charalambous

Computer Graphics Forum, Volume 44, Issue 2, Apr 2025

Presented at: Eurographics 2025, EG'25 proceedings

This paper introduces CEDRL (Crowds using Example-driven Deep Reinforcement Learning), a framework that models diverse and adaptive crowd behaviors by leveraging multiple datasets and a reward function aligned with real-world observations. The approach enables real-time controllability and generalization across scenarios, showcasing enhanced behavior complexity and adaptability in virtual environments.

DOI paper video code project page

Multi-Modal Instrument Performance: A musical database

Theodoros Kyriakou, Andreas Aristidou, Panayiotis Charalambous

Computer Graphics Forum, Volume 44, Issue 2, Apr 2025

Presented at: Eurographics 2025, EG'25 proceedings

This paper introduces the Multi-Modal Instrument Performances (MMIP) database, the first dataset to combine synchronized high-quality 3D motion capture, audio, video, and MIDI data for musical performances involving guitar, piano, and drums. It highlights the challenges of capturing and managing such multimodal data, offering an open-access repository with tools for exploration, visualization, and playback.

DOI paper video database

A novel multidisciplinary approach for reptile movement and behavior analysis

Savvas Zotos, Marilena Stamatiou, Sofia-Zacharenia Marketaki, Duncan J. Irschick, Jeremy A. Bot, Andreas Aristidou, Emily L. C. Shepard, Mark D. Holton, Ioannis N. Vogiatzakis

Integrative Zoology, Feb 2025

This paper introduces a multidisciplinary approach to studying reptile behavior, combining tri-axial accelerometers, video recordings, motion capture systems, and 3D reconstruction to create detailed digital archives of movements and behaviors. Using two Mediterranean reptiles as case studies, it highlights the potential of this method to advance research on complex and understudied behaviors, offering ecological insights and tools for behavioral analysis.

DOI paper bibtex

Deep convolutional generative adversarial networks in retinitis pigmentosa disease images augmentation and detection

Paweł Powroźnik, Maria Skublewska-Paszkowska, Katarzyna Nowomiejska, Andreas Aristidou, Andreas Panayides, Robert Rejdak

Advances in Science and Technology Research Journal, Volume 19, No. 2, pages 321–340, Nov 2024

This study leverages Deep Convolutional Generative Adversarial Networks (DCGAN) and a hybrid VGG16-XGBoost technique to enhance medical datasets, focusing on retinitis pigmentosa, a rare eye condition. The proposed method improves image clarity, dataset augmentation, and detection accuracy, achieving over 90% in key performance metrics and a 19% improvement over baseline classification accuracy.

DOI paper bibtex

Identifying and Animating Movement of Zeibekiko Sequences by Spatial Temporal Graph Convolutional Network with Multi Attention Modules

Maria Skublewska-Paszkowska, Paweł Powroźnik, Marcin Barszcz, Krzysztof Dziedzic, Andreas Aristidou

Advances in Science and Technology Research Journal, Volume 18, No. 8, pages 217–227, Nov 2024

This study employs optical motion capture technology to document and translate the Zeibekiko dance into a 3D virtual environment. Using a Spatial Temporal Graph Convolutional Network with Multi Attention Modules (ST-GCN-MAM), the system accurately captures and classifies essential dance sequences by focusing on key body regions, enabling precise, realistic virtual animations with applications in gaming, video production, and digital heritage preservation.

DOI paper bibtex

Underwater Virtual Exploration of the Ancient Port of Amathus

Andreas Alexandrou, Filip Skola, Dimitrios Skarlatos, Stella Demesticha, Fotis Liarokapis, Andreas Aristidou

Journal of Cultural Heritage, Volume 70, pages 181–193, November–December 2024

This work focuses on the digital reconstruction and visualization of underwater cultural heritage, providing a gamified virtual reality (VR) experience of Cyprus' ancient Amathus harbor. Utilizing photogrammetry, our immersive VR environment enables seamless exploration and interaction with this historic site. Advanced features such as guided tours, procedural generation, and machine learning enhance realism and user engagement. User studies validate the quality of our VR experiences, highlighting minimal discomfort and demonstrating promising potential for advancing underwater exploration and conservation efforts.

DOI paper video bibtex project page

Design and Implementation of an Interactive Virtual Library based on its Physical Counterpart

Christina-Georgia Serghides, Giorgos Christoforidis, Nikolas Iakovides, Andreas Aristidou

Virtual Reality, Volume 28, Article Number 124, Springer, Jun 2024

This work explores the creation of a digital replica of a physical library, using photogrammetry and 3D modelling. A Virtual Reality (VR) platform was developed to immerse users in a virtual library experience, which can also serve as a community and knowledge hub. A perceptual study was conducted to understand the current usage of physical libraries, examine users' experience in VR, and identify the requirements and expectations for a virtual library counterpart. Five key usage scenarios were implemented as a proof-of-concept, with emphasis on 24/7 access, functionality, and interactivity. A user evaluation study endorsed all of its key attributes and its future viability.

DOI paper database bibtex project page

Overcoming Challenges of Cycling Motion Capturing and Building a Comprehensive Dataset

Panayiotis Kyriakou, Marios Kyriakou, Yiorgos Chrysanthou

Presented at: The Creating Lively Interactive Populated Environments (CLIPE 2024) Workshop, The Eurographics Association, Apr 2024

This article outlines a methodology for capturing cyclist motion using motion capture (mocap) hardware and creating a publicly available comprehensive dataset. It features a modular system with innovative marker placement, and the resulting dataset is used to produce 3D visualizations and various data representations, which are shared in an online library for public access and collaborative research.

DOI database

LexiCrowd: A Learning Paradigm towards Text to Behaviour Parameters for Crowds

Marilena Lemonari, Nefeli Andreou, Nuria Pelechano, Panayiotis Charalambous, Yiorgos Chrysanthou

Presented at: The Creating Lively Interactive Populated Environments (CLIPE 2024) Workshop, The Eurographics Association, Apr 2024

This work uses a pre-trained Large Language Model (LLM) to generate pseudo-pairs of text and behavior labels, then trains a variational auto-encoder (VAE) on this synthetic dataset, constraining the latent space with a latent label loss for interpretable behavior parameters. Our model, tested with human-provided textual descriptions of crowd datasets, can parameterize unseen sentences to produce novel behaviors compatible with simulator parameters, demonstrating its potential for text-to-crowd generation and full sentence generation from behavior profiles.

DOI

Virtual Instrument Performances (VIP): A Comprehensive Review

Theodoros Kyriakou, Mercè Álvarez, Andreas Panayiotou, Yiorgos Chrysanthou, Panayiotis Charalambous, Andreas Aristidou

Computer Graphics Forum, Volume 43, Issue 2, Apr 2024

Presented at: Eurographics 2024, EG'24 STAR papers

The evolving landscape of performing arts, driven by advancements in Extended Reality (XR) and the Metaverse, presents transformative opportunities for digitizing musical experiences. This comprehensive survey explores the relatively unexplored field of Virtual Instrument Performances (VIP), addressing challenges related to motion capture precision, multi-modal interactions, and the integration of sensory modalities, with a focus on fostering inclusivity, creativity, and live performances in diverse settings.

DOI paper bibtex

Digitizing Traditional Dances Under Extreme Clothing: The Case Study of Eyo

Temi Ami-Williams, Christina-Georgia Serghides, Andreas Aristidou

Journal of Cultural Heritage, Volume 67, pages 145–157, February 2024

Presented at: The International Council for Traditional Music (ICTM) 2023 and the Cyprus Dance Film Festival (CDFF) 2023

This work examines the challenges of capturing movements in traditional African masquerade garments, specifically the Eyo masquerade dance from Lagos, Nigeria. By employing a combination of motion capture technologies, the study addresses the limitations posed by "extreme clothing" and offers valuable insights into preserving cultural heritage dances. The findings lead to an efficient pipeline for digitizing and visualizing folk dances with intricate costumes, culminating in a visually captivating animation showcasing an Eyo masquerade dance performance.

DOI paper video project page

SparsePoser: Real-time Full-body Motion Reconstruction from Sparse Data

Jose Luis Pontón, Haoran Yun, Andreas Aristidou, Carlos Andújar, Nuria Pelechano

ACM Transactions on Graphics, Volume 43, Issue 1, Article No. 5, pages 1–14, Oct 2023

Presented at: SIGGRAPH Asia 2023.

SparsePoser is a novel deep learning-based approach that reconstructs full-body poses using only six tracking devices. The system uses a convolutional autoencoder to generate high-quality human poses learned from motion capture data and a lightweight feed-forward neural network IK component to adjust hands and feet based on the corresponding trackers.

DOI paper video code bibtex project page

Collaborative VR: Solving riddles in the concept of escape rooms

Afxentis Ioannou, Marilena Lemonari, Fotis Liarokapis, Andreas Aristidou

Presented at: International Conference on Interactive Media, Smart Systems and Emerging Technologies, IMET, Oct 2023

This work explores alternative means of communication in collaborative virtual environments (CVEs) and their impact on users' engagement and performance. Through a case study of a collaborative VR escape room, we conduct a user study to evaluate the effects of nontraditional communication methods in computer-supported cooperative work (CSCW). Despite the absence of traditional interactions, our study reveals that users can effectively convey messages and complete tasks, akin to real-life scenarios.

DOI paper bibtex