Virtual Library in the concept of digital twins

Nikolas Iakovides, Andreas Lazarou, Panayiotis Kyriakou, Andreas Aristidou

Presented at: International Conference on Interactive Media, Smart Systems and Emerging Technologies, IMET, Oct 2022

In this work, we reconstruct the Limassol Municipal University Library as a digital twin. To do so, we conducted a perceptual survey to understand the current use of physical libraries, examine users' experience with VR, and identify potential use cases of VR libraries. Based on the outcomes, we designed five use-case scenarios that demonstrate the potential uses of virtual libraries.

DOI paper video project page

Pose Representations for Deep Skeletal Animation

Nefeli Andreou, Andreas Aristidou, Yiorgos Chrysanthou

Computer Graphics Forum, Volume 41, Issue 8, Sep 2022

Presented at: ACM SIGGRAPH/ Eurographics Symposium on Computer Animation, SCA'22. Eurographics Association

In this work, we present an efficient method for training neural networks that is specifically designed for character animation. We use dual quaternions as the mathematical framework and take advantage of the skeletal hierarchy to avoid rotation discontinuities, a common problem with Euler angle or exponential map parameterizations, as well as motion ambiguities, a common problem with positional data. Our method does not require re-projection onto skeleton constraints to avoid bone-stretching violations and invalid configurations, and the network learns from both rotational and positional information.

DOI paper video code project page
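A dual quaternion packs a joint's rotation and translation into a single eight-number algebraic object, which is part of what lets the representation above stay on the skeletal hierarchy without re-projection. As a minimal illustration (not the paper's implementation), the following sketch converts a unit rotation quaternion and a translation vector into a dual quaternion and recovers the translation back:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def to_dual_quat(q_rot, t):
    """Combine a unit rotation quaternion and a translation into (real, dual) parts.
    The dual part is 0.5 * t_quat * q_rot, with t embedded as a pure quaternion."""
    t_quat = np.array([0.0, t[0], t[1], t[2]])
    q_dual = 0.5 * qmul(t_quat, q_rot)
    return q_rot, q_dual

def translation_from(q_rot, q_dual):
    """Recover the translation: t = 2 * q_dual * conj(q_rot)."""
    conj = q_rot * np.array([1.0, -1.0, -1.0, -1.0])
    return 2.0 * qmul(q_dual, conj)[1:]
```

Because both parts live in one algebra, a chain of joint transforms reduces to dual-quaternion multiplication, avoiding the discontinuities of Euler angles.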

CCP: Configurable Crowd Profiles

Andreas Panayiotou, Theodoros Kyriakou, Marilena Lemonari, Yiorgos Chrysanthou, Panayiotis Charalambous

Presented at: SIGGRAPH ’22 Conference Proceedings, Aug 2022

In this paper, we present an RL-based framework for learning multiple agent behaviors concurrently. We optimize the agent by varying the importance of the selected behaviors (goal seeking, collision avoidance, interaction with the environment, and grouping) during training; essentially, the reward function changes dynamically while training. The importance of each sub-behavior is also added as input to the policy, resulting in a single model that both captures and enables dynamic run-time manipulation of agent profiles, thus allowing configurable profiles.

DOI paper video code
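The core trick in the abstract above is that the per-behavior weights serve double duty: they scale the reward terms and are appended to the policy's observation, so one network learns the whole family of profiles. A minimal sketch of that idea (function names and the two-behavior setup are illustrative, not the paper's code):

```python
import numpy as np

def composite_reward(sub_rewards, weights):
    """Weighted sum of sub-behavior rewards (e.g. goal seeking, collision
    avoidance). The weights can be re-sampled each episode, so the effective
    reward function changes dynamically during training."""
    return sum(weights[name] * r for name, r in sub_rewards.items())

def augment_observation(obs, weights, order):
    """Append the profile weights to the observation, in a fixed order,
    so the policy is conditioned on the active behavior profile."""
    return np.concatenate([obs, [weights[name] for name in order]])
```

At run time, changing the appended weights steers the same trained model toward a different behavior profile without retraining.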

Digitizing Wildlife: The case of reptiles 3D virtual museum

Savvas Zotos, Marilena Lemonari, Michael Konstantinou, Anastasios Yiannakidis, Georgios Pappas, Panayiotis Kyriakou, Ioannis N. Vogiatzakis, Andreas Aristidou

IEEE Computer Graphics and Applications, Jun 2022

In this paper, we design and develop a 3D virtual museum with holistic metadata documentation and a variety of captured reptile behaviors and movements. Our main contribution lies in the procedure of rigging, capturing, and animating reptiles, as well as the development of a number of novel educational applications.

DOI paper video project page

Let's All Dance: Enhancing Amateur Dance Motions

Qiu Zhou, Manyi Li, Qiong Zeng, Andreas Aristidou, Xiaojing Zhang, Lin Chen, Changhe Tu

Computational Visual Media, Jun 2022

In this paper, we present a deep model that brings professional quality to amateur dance movements, improving motion quality in both the spatial and temporal domains. We illustrate the effectiveness of our method on real amateur and artificially generated dance movements. We also demonstrate that our method can synchronize 3D dance motions with any reference audio under non-uniform and irregular misalignment.

DOI paper video code database project page

Authoring Virtual Crowds: A Survey

Marilena Lemonari, Rafael Blanco, Panayiotis Charalambous, Nuria Pelechano, Marios Avraamides, Julien Pettré, Yiorgos Chrysanthou

Computer Graphics Forum, Volume 41, Issue 2, Pages 677-701, May 2022

Presented at: Eurographics 2022 - STAR

In this survey, we provide a review of the most relevant methods in authoring virtual crowds, emphasizing the amount and nature of influence that the users have over the final result. We discuss the currently available authoring tools (e.g., graphical user interfaces, drag-and-drop), identifying the trends of early and recent work, and we suggest promising directions for future research that mainly stem from the rise of learning-based methods, and the need for a unified authoring framework.


Safeguarding our Dance Cultural Heritage

Andreas Aristidou, Alan Chalmers, Yiorgos Chrysanthou, Celine Loscos, Franck Multon, Joseph E. Parkins, Bhuvan Sarupuri, Efstathios Stavrakis

Presented at: Eurographics 2022 - Tutorials, May 2022

In this tutorial, we show how the European project SCHEDAR exploited emerging technologies to digitize, analyze, and holistically document our intangible heritage creations, which is critical for the preservation and continuity of our identity as Europeans.

DOI paper video project page

Rhythm is a Dancer: Music-Driven Motion Synthesis with Global Structure

Andreas Aristidou, Anastasios Yiannakidis, Kfir Aberman, Daniel Cohen-Or, Ariel Shamir, Yiorgos Chrysanthou

IEEE Transactions on Visualization and Computer Graphics (Early Access), Mar 2022

Presented at: ACM SIGGRAPH/ Eurographics Symposium on Computer Animation, SCA'22. Eurographics Association

In this work, we present a music-driven neural framework that generates realistic human motions, which are rich, avoid repetitions, and jointly form a global structure that respects the culture of a specific dance genre. We illustrate examples of various dance genres, demonstrating choreography control and editing in a number of applications.

DOI paper video database project page

Virtual Dance Museums: the case of Greek/Cypriot folk dancing

Andreas Aristidou, Nefeli Andreou, Loukas Charalambous, Anastasios Yiannakidis, Yiorgos Chrysanthou

Presented at: EUROGRAPHICS Workshop on Graphics and Cultural Heritage, GCH'21, Nov 2021

This paper presents a virtual dance museum developed to widely educate the public, especially the youngest generations, about the story, costumes, music, and history of our dances. The museum is publicly accessible and also enables motion data reusability, facilitating dance-learning applications through gamification.

DOI paper bibtex project page

Emotion Recognition from 3D Motion Capture Data using Deep CNNs

Haris Zacharatos, Christos Gatzoulis, Panayiotis Charalambous, Yiorgos Chrysanthou

Presented at: 3rd IEEE Conference on Games, Aug 2021


Background segmentation in multicolored illumination environments

Nikolas Ladas, Paris Kaimakis, Yiorgos Chrysanthou

The Visual Computer, 37, 2221–2233, Aug 2021

We present an algorithm for the segmentation of images into background and foreground regions. The proposed algorithm utilizes a physically based formulation of scene appearance which explicitly models the formation of shadows originating from colored light sources. This formulation enables a probabilistic model to distinguish between shadows and foreground objects in challenging images.

DOI bibtex

A 3D digitisation workflow for architecture-specific annotation of built heritage

Marissia Deligiorgi, Maria I.Maslioukova, Melinos Averkiou, Andreas C.Andreou, Pratheba Selvaraju, Evangelos Kalogerakis, Gustavo Patow, Yiorgos Chrysanthou, George Artopoulos

Journal of Archaeological Science: Reports, Volume 37, June 2021, 102787, Jun 2021


Adult2Child: Motion Style Transfer using CycleGANs

Yuzhu Dong, Andreas Aristidou, Ariel Shamir, Moshe Mahler, Eakta Jain

Presented at: ACM SIGGRAPH Conference on Motion, Interaction, and Games, MIG'20, Oct 2020

This paper presents an effective style translation method that transfers adult motion capture data to the style of child motion using CycleGANs. Our method allows training on unpaired data, using a relatively small number of child and adult motion sequences that are not required to be temporally aligned. We have also captured high-quality adult2child 3D motion capture data that are publicly available for future studies.

DOI paper video code database bibtex project page
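What makes the unpaired training above possible is CycleGAN's cycle-consistency objective: translating adult-to-child and back should reproduce the original motion. A schematic sketch of that loss term (the generator callables and L1 form are illustrative assumptions, not the paper's exact losses):

```python
import numpy as np

def cycle_consistency_loss(x, g_a2c, g_c2a):
    """L1 cycle loss: map a motion sample adult->child->adult and penalize
    the deviation from the original, so no paired examples are needed."""
    reconstructed = g_c2a(g_a2c(x))
    return np.mean(np.abs(reconstructed - x))
```

The symmetric child->adult->child term and the adversarial losses on each domain would be added on top of this in a full training loop.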

Using Epistemic Game Development to Teach Software Development Skills

Christos Gatzoulis, Andreas S. Andreou, Panagiotis Zaharias, Yiorgos Chrysanthou

International Journal of Game-Based Learning (IJGBL), 10(4), Oct 2020


MotioNet: 3D Human Motion Reconstruction from Monocular Video with Skeleton Consistency

Mingyi Shi, Kfir Aberman, Andreas Aristidou, Taku Komura, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen

ACM Transactions on Graphics, 40(1), Article 1, Sep 2020

Presented at: SIGGRAPH Asia 2020

MotioNet is a deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video. It decomposes sequences of 2D joint positions into two separate attributes: a single, symmetric skeleton, encoded by bone lengths, and a sequence of 3D joint rotations associated with global root positions and foot contact labels. We show that enforcing a single consistent skeleton along with temporally coherent joint rotations constrains the solution space, leading to a more robust handling of self-occlusions and depth ambiguities.

DOI paper video code bibtex project page
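The skeleton/rotation decomposition described above relies on forward kinematics: fixed bone lengths plus per-joint rotations fully determine joint positions, so predicting the two factors separately keeps the skeleton consistent across frames. A toy 2D forward-kinematics sketch of that principle (a simplification for illustration, not MotioNet's 3D pipeline):

```python
import numpy as np

def fk_chain(bone_lengths, joint_angles, root=(0.0, 0.0)):
    """2D forward kinematics along a single chain: the static attribute
    (bone lengths) and the dynamic attribute (per-joint rotations, radians,
    relative to the parent bone) together reproduce joint positions."""
    positions = [np.array(root, dtype=float)]
    angle = 0.0
    for length, theta in zip(bone_lengths, joint_angles):
        angle += theta  # accumulate rotation down the hierarchy
        step = length * np.array([np.cos(angle), np.sin(angle)])
        positions.append(positions[-1] + step)
    return positions
```

Because the bone lengths are shared by every frame, only the rotations vary over time, which is exactly the constraint that makes the reconstruction robust to depth ambiguity.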