Publications

Digitizing Wildlife: The case of reptiles 3D virtual museum

Savvas Zotos, Marilena Lemonari, Michael Konstantinou, Anastasios Yiannakidis, Georgios Pappas, Panayiotis Kyriakou, Ioannis N. Vogiatzakis, Andreas Aristidou

IEEE Computer Graphics and Applications, May 2022

In this paper, we design and develop a 3D virtual museum with holistic metadata documentation and a variety of captured reptile behaviors and movements. Our main contribution lies in the procedure of rigging, capturing, and animating reptiles, as well as the development of a number of novel educational applications.

paper project page

Let's All Dance: Enhancing Amateur Dance Motions

Qiu Zhou, Manyi Li, Qiong Zeng, Andreas Aristidou, Xiaojing Zhang, Lin Chen, Changhe Tu

Computational Visual Media, May 2022

In this paper, we present a deep model that adds professionalism to amateur dance movements, improving movement quality in both the spatial and temporal domains. We illustrate the effectiveness of our method on real amateur and artificially generated dance movements. We also demonstrate that our method can synchronize 3D dance motions with any reference audio under non-uniform and irregular misalignment.

paper video project page

Safeguarding our Dance Cultural Heritage

Andreas Aristidou, Alan Chalmers, Yiorgos Chrysanthou, Celine Loscos, Franck Multon, Joseph E. Parkins, Bhuvan Sarupuri, Efstathios Stavrakis

Presented at: Eurographics 2022 - Tutorials, Apr 2022

In this tutorial, we show how the European project SCHEDAR exploited emerging technologies to digitize, analyze, and holistically document our intangible heritage creations, a critical necessity for the preservation and continuity of our identity as Europeans.

paper video project page

Rhythm is a Dancer: Music-Driven Motion Synthesis with Global Structure

Andreas Aristidou, Anastasios Yiannakidis, Kfir Aberman, Daniel Cohen-Or, Ariel Shamir, Yiorgos Chrysanthou

IEEE Transactions on Visualization and Computer Graphics (Early Access), Mar 2022

Presented at: ACM SIGGRAPH/ Eurographics Symposium on Computer Animation, SCA'22. Eurographics Association

In this work, we present a music-driven neural framework that generates realistic human motions, which are rich, avoid repetitions, and jointly form a global structure that respects the culture of a specific dance genre. We illustrate examples of various dance genres, demonstrating choreography control and editing in a number of applications.

DOI paper video database project page

A Hierarchy-Aware Pose Representation for Deep Character Animation

Nefeli Andreou, Andreas Lazarou, Andreas Aristidou, Yiorgos Chrysanthou

arXiv.org > cs > arXiv:2111.13907, Nov 2021

In this work, we present an efficient method for training neural networks, specifically designed for character animation. We use dual quaternions as the mathematical framework and take advantage of the skeletal hierarchy to avoid rotation discontinuities, a common problem when using Euler angle or exponential map parameterizations, and motion ambiguities, a common problem when using positional data. Our method does not require re-projection onto skeleton constraints to avoid bone-stretching violations and invalid configurations, and the network learns from both rotational and positional information.

paper video

Virtual Dance Museums: the case of Greek/Cypriot folk dancing

Andreas Aristidou, Nefeli Andreou, Loukas Charalambous, Anastasios Yiannakidis, Yiorgos Chrysanthou

Presented at: EUROGRAPHICS Workshop on Graphics and Cultural Heritage, GCH'21, Nov 2021

This paper presents a virtual dance museum developed to educate the public, especially the younger generations, about the story, costumes, music, and history of our dances. The museum is publicly accessible and also enables motion data reusability, facilitating dance learning applications through gamification.

DOI paper bibtex project page

Emotion Recognition from 3D Motion Capture Data using Deep CNNs

Haris Zacharatos, Christos Gatzoulis, Panayiotis Charalambous, Yiorgos Chrysanthou

Presented at: 3rd IEEE Conference on Games, Aug 2021

paper

Background segmentation in multicolored illumination environments

Nikolas Ladas, Paris Kaimakis, Yiorgos Chrysanthou

The Visual Computer, 37, 2221–2233, Aug 2021

We present an algorithm for the segmentation of images into background and foreground regions. The proposed algorithm utilizes a physically based formulation of scene appearance which explicitly models the formation of shadows originating from color light sources. This formulation enables a probabilistic model to distinguish between shadows and foreground objects in challenging images.

DOI bibtex

A 3D digitisation workflow for architecture-specific annotation of built heritage

Marissia Deligiorgi, Maria I.Maslioukova, Melinos Averkiou, Andreas C.Andreou, Pratheba Selvaraju, Evangelos Kalogerakis, Gustavo Patow, Yiorgos Chrysanthou, George Artopoulos

Journal of Archaeological Science: Reports, Volume 37, Article 102787, Jun 2021

DOI

Adult2Child: Motion Style Transfer using CycleGANs

Yuzhu Dong, Andreas Aristidou, Ariel Shamir, Moshe Mahler, Eakta Jain

Presented at: ACM SIGGRAPH Conference on Motion, Interaction, and Games, MIG'20, Oct 2020

This paper presents an effective style translation method that transfers adult motion capture data to the style of child motion using CycleGANs. Our method allows training on unpaired data, using a relatively small number of child and adult motion sequences that are not required to be temporally aligned. We have also captured high-quality adult2child 3D motion capture data that are publicly available for future studies.

DOI paper video code database bibtex project page

Using Epistemic Game Development to Teach Software Development Skills

Christos Gatzoulis, Andreas S. Andreou, Panagiotis Zaharias, Yiorgos Chrysanthou

International Journal of Game-Based Learning (IJGBL), 10(4), Oct 2020

DOI

MotioNet: 3D Human Motion Reconstruction from Monocular Video with Skeleton Consistency

Mingyi Shi, Kfir Aberman, Andreas Aristidou, Taku Komura, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen

ACM Transactions on Graphics, 40(1), Article 1, Sep 2020

Presented at: SIGGRAPH Asia 2020

MotioNet is a deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video. It decomposes sequences of 2D joint positions into two separate attributes: a single, symmetric skeleton, encoded by bone lengths, and a sequence of 3D joint rotations associated with global root positions and foot contact labels. We show that enforcing a single consistent skeleton along with temporally coherent joint rotations constrains the solution space, leading to a more robust handling of self-occlusions and depth ambiguities.

DOI paper video code bibtex project page

Salsa dance learning evaluation and motion analysis in gamified virtual reality environment

Simon Senecal, Niels A. Nijdam, Andreas Aristidou, Nadia Magnenat-Thalmann

Multimedia Tools and Applications, 79 (33-34): 24621-24643, Sep 2020

We propose an interactive learning application in the form of a virtual reality game that aims to help users improve their salsa dancing skills. The application consists of three components: a virtual partner to dance with under interactive control, visual and haptic feedback, and a game mechanic with dance tasks. Learning is evaluated and analyzed using Musical Motion Features and the Laban Movement Analysis system, before and after training, showing convergence of the profiles of non-dancers toward those of regular dancers, which validates the learning process.

DOI paper video bibtex project page

Digital Dance Ethnography: Organizing Large Dance Collections

Andreas Aristidou, Ariel Shamir, Yiorgos Chrysanthou

ACM Journal on Computing and Cultural Heritage, 12(4), Article 29, Nov 2019

This paper presents a method for contextual motion analysis that organizes dance data semantically, to form the first digital dance ethnography. The method is capable of exploiting the contextual correlation between dances and distinguishing fine-grained differences between semantically similar motions. It illustrates a number of different organization trees and portrays the chronological and geographical evolution of dances.

DOI paper video database bibtex project page

Why did the human cross the Road?

Panayiotis Charalambous, Yiorgos Chrysanthou

Presented at: ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG ’19), Article 47, 1–2. Best Poster Award, Oct 2019

"Humans at rest tend to stay at rest. Humans in motion tend to cross the road" – Isaac Newton. Even though this response is meant as a joke, indicating that the answer is quite obvious, this important feature of real-world crowds is rarely considered in simulations. Answering this question involves several factors, such as how agents balance reaching goals against avoiding collisions with heterogeneous entities, and how the environment is modeled. As part of a preliminary study, we introduce a reinforcement learning framework to train pedestrians to cross streets with bidirectional traffic. Our initial results indicate that, using a very simple goal-centric representation of the agent state and a simple reward function, we can simulate interesting behaviors such as pedestrians crossing the road at crossings or waiting for cars to pass.

DOI paper video database