A 3D virtual character playing a role in digital storytelling has a unique style in both its appearance and motion. Since this style reflects the character's personality, preserving it consistently is crucial. However, when a character's motion is directly controlled by a user wearing motion sensors, its unique style may be lost. We present a novel character motion control method that preserves the character's motion style using only a small amount of animation data created specifically for that character. Instead of relying on machine learning approaches that require a large training dataset, we propose a search-based method that directly finds the character pose in the animation data most similar to the user's current pose. To demonstrate the usability of our method, we conducted experiments using a character model and its animation data created by an expert designer for a virtual reality game. To validate that our method effectively preserves the character's original motion style, we compared our results with those obtained using general human motion capture data. Additionally, to illustrate the scalability of our method, we present experimental results with varying numbers of motion sensors.
@article{Bae2019CharacterMotionControl,
  title     = {Character Motion Control by Using Limited Sensors and Animation Data},
  author    = {Bae, Tae Sung and Lee, Eun Ji and Kim, Ha Eun and Park, Minji and Choi, Myung Geol},
  journal   = {Journal of the Korea Computer Graphics Society},
  volume    = {25},
  number    = {3},
  pages     = {85--92},
  year      = {2019},
  publisher = {Korea Computer Graphics Society}
}
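
The search step described in the abstract can be pictured as a nearest-neighbor lookup over the character's small, authored animation set: given readings from a few sensors, retrieve the full-body frame whose tracked joints are closest to the user's current pose. The Python sketch below illustrates that idea under stated assumptions; the joint indices, positional data layout, and Euclidean distance metric are placeholders for illustration, not the authors' actual implementation.

# A minimal sketch of search-based pose matching, assuming poses are stored as
# per-joint 3D positions and that the user wears sensors on only a few joints.
# Joint indices, data layout, and the distance metric are illustrative assumptions.
import numpy as np

# Hypothetical animation data: F frames x J joints x 3 (x, y, z) positions,
# authored specifically for the character.
F, J = 200, 20
rng = np.random.default_rng(0)
animation_poses = rng.standard_normal((F, J, 3))

# Indices of the joints covered by the limited motion sensors
# (e.g., head and both hands) -- an assumption for illustration.
sensor_joints = [0, 7, 11]

def find_most_similar_pose(sensor_readings: np.ndarray) -> np.ndarray:
    """Return the full-body frame from the animation data whose sensed joints
    best match the user's current sensor readings (Euclidean distance)."""
    # Compare only the joints the sensors actually observe.
    candidates = animation_poses[:, sensor_joints, :]            # (F, S, 3)
    distances = np.linalg.norm(candidates - sensor_readings, axis=(1, 2))
    best_frame = int(np.argmin(distances))
    # The retrieved frame keeps the character's authored style for all joints,
    # including those the sensors do not track.
    return animation_poses[best_frame]

# Example query: current readings from the three sensors.
current_readings = rng.standard_normal((len(sensor_joints), 3))
matched_pose = find_most_similar_pose(current_readings)
print(matched_pose.shape)  # (20, 3)

Because the lookup returns complete frames from the designer-authored data, untracked joints inherit the character's original style by construction, which is the property the paper's comparison against general motion capture data is meant to highlight.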