● Research proposes a new method to drastically reduce the workload of motion capture analysis, an area of growing interest in the metaverse domain
TOKYO – December 20, 2022 – LINE Corporation is pleased to announce that its research paper on motion capture data analysis has been selected for presentation at AAAI-23, the 37th AAAI Conference on Artificial Intelligence.
Hosted by the Association for the Advancement of Artificial Intelligence (AAAI), the AAAI conference series is the world's top international conference on artificial intelligence. The research paper is one of 1,721 papers (19.6%) out of a total of 8,777 submissions that have been selected for oral presentation at the conference, which will be held in Washington, D.C., USA, from February 7 to 14, 2023.
By focusing on human motion to localize action segments, the proposed method only requires information about which labels are included in a video to label all of the video's frames
LINE's research paper proposes a novel method to drastically reduce the workload required to create a character's movements using motion capture data, a digital record of human or animal movements. As characters like human avatars play an important role in the metaverse and other virtual worlds, analysis of motion capture data has been drawing considerable attention along with advances in metaverse technology.
Human-like motion for characters can either be drawn by hand or created using skeletal motion capture data obtained by recording real human movement, and the process must be repeated for every new motion. To reduce this workload, applying machine learning to existing motion capture data has become a popular research direction in recent years. The methods proposed so far require labeled video frames, that is, the individual images (in the context of this research, skeleton data) that make up a video, to train a machine learning model. The labels describe which movements the character is performing in the video, and currently these frames are labeled manually by human annotators.
LINE's research paper proposes a completely new approach named Skeleton-based Weakly-supervised Temporal Action Localization (S-WTAL) to reduce the labeling workload. By focusing solely on human motion, the proposed method only requires annotators to list which actions appear in a motion capture video; from this information alone, the trained model detects the frames that fall under each action (e.g. "walk," "sit," and "kick") with high accuracy. Furthermore, inspired by existing studies on handling incorrect labels, the method reduces erroneous predictions by reusing predictions generated at an early stage of model training as pseudo labels. Experimental results show that this method localizes action frames with much higher accuracy than conventional methods designed for regular, image-based videos.
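To make the pseudo-labeling idea concrete, the sketch below illustrates (in simplified form, not the paper's actual implementation) how per-frame predictions from an early-stage model could be filtered into frame-level pseudo labels: scores are restricted to the actions known from the video-level label, and only confident frames receive a pseudo label. The function name, threshold, and toy data are illustrative assumptions.

```python
import numpy as np

def refine_pseudo_labels(frame_scores, video_labels, threshold=0.8):
    """Illustrative sketch of weakly-supervised pseudo labeling:
    frame_scores is a (num_frames, num_classes) array of per-frame
    class probabilities from an early-stage model; video_labels lists
    the action classes known to occur in the video."""
    # Zero out scores of classes absent from the video-level label set.
    masked = np.zeros_like(frame_scores)
    masked[:, video_labels] = frame_scores[:, video_labels]
    # -1 marks frames left unlabeled (model not confident enough).
    pseudo = np.full(frame_scores.shape[0], -1)
    confident = masked.max(axis=1) >= threshold
    pseudo[confident] = masked[confident].argmax(axis=1)
    return pseudo

# Toy example: 4 frames, 3 action classes; the video-level label says
# only classes 0 and 2 occur in this video.
scores = np.array([[0.9, 0.05, 0.05],
                   [0.2, 0.6,  0.2 ],
                   [0.1, 0.05, 0.85],
                   [0.4, 0.3,  0.3 ]])
print(refine_pseudo_labels(scores, video_labels=[0, 2]))  # [ 0 -1  2 -1]
```

These pseudo labels would then supervise later training iterations in place of the manual frame labels that conventional methods require.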
Figure 1) Research overview
The study proposes S-WTAL, a novel problem setting that aims to predict which actions are performed at various points of a motion capture video given only labels assigned to the entire video (not to each frame).
・Frame-Level Label Refinement for Skeleton-Based Weakly-Supervised Action Recognition
Qing Yu* and Kent Fujiwara
*Qing is a PhD student at the Graduate School of Information Science and Technology, University of Tokyo, who co-authored the research paper while working as a research intern at LINE Corporation.
LINE's focus on basic research
LINE conducts basic research in various technologies, from those that help create AI-driven services and features to those that ensure proper handling of data, including user information, for privacy protection. In particular, the company has been focusing on speech, language, and image processing based mainly on machine learning. Recognition of LINE's research work includes the following:
- ICCV 2021, the international conference on computer vision, accepted two of LINE's research papers.*1
- INTERSPEECH 2021, the international conference on speech processing, accepted six of LINE's research papers.*2
- ICLR 2022, the international conference on deep learning, accepted one of LINE's research papers.*3
*1 Press release published on July 28, 2021: https://linecorp.com/ja/pr/news/ja/2021/3843 (Japanese only)
*2 Press release published on August 30, 2021: https://linecorp.com/ja/pr/news/en/2021/3919
*3 Press release published on February 10, 2022: https://linecorp.com/ja/pr/news/en/2022/4123
Going forward, LINE will continue to develop and improve its businesses and services to grow and expand its vast potential as a communication platform.