Topics of Interest: The TriFusion workshop seeks original research at the intersection of computer graphics, AI, and robotics, focusing on the interaction of human users, virtual avatars, and humanoid robots, and on the joint development of these virtual and physical embodiments. Topics of interest include, but are not limited to, the following technical areas:
• Motion Capture, Tracking, and Retargeting for Graphics and Embodiment
Systems and algorithms for capturing high-fidelity full-body human motion (from optical motion capture and markerless depth sensing to wearable IMUs) and retargeting that motion to diverse virtual characters, game/VR avatars, and physically simulated humanoids. Emphasis is placed on graphics-oriented challenges such as robust pose estimation under occlusion, seamless skeletal and non-skeletal tracking across varied rig structures, and visually consistent kinematic mapping to differently proportioned models; a minimal retargeting sketch appears after this list. Of particular interest are scalable, real-time, low-latency pipelines for live human–avatar–robot motion transfer in immersive and interactive environments.
• Data-Driven Human Motion Generation and Animation
Generative and physics-based techniques for synthesizing visually realistic and expressive human motion in computer graphics contexts, from natural locomotion to complex, task-driven sequences. Methods include neural generative models such as GANs, VAEs, and diffusion models, as well as reinforcement learning, producing stylized, context-aware motion driven by environmental and narrative constraints; a skeletal diffusion sampling loop is sketched after this list. We encourage approaches that bridge character animation for games/VR with controller generation for physically embodied agents, ensuring stylistic coherence and physical plausibility across both synthetic and real embodiments.
• Cross-Embodiment Learning and Sim-to-Real for Graphics-Driven Agents
Learning frameworks that connect virtual avatar simulations and physically embodied humanoid robots, enabling bidirectional transfer of motion skills. Topics include domain adaptation and sim-to-real techniques that bridge high-fidelity physics simulations and real-world motion capture; shared latent spaces for human, robot, and character animation; and multi-domain imitation learning that unifies graphics animation data and robotics datasets. Of special interest are co-training methods that optimize jointly over heterogeneous motion sources, facilitating vision-based skill learning that generalizes across avatars, animated characters, and real robots, advancing the creation of consistent, believable motion in both simulated graphics environments and the physical world; a domain-randomization sketch for sim-to-real transfer follows this list.
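To make the retargeting challenge in the first topic concrete, the following minimal Python sketch copies local joint rotations captured on a source skeleton onto a target skeleton with different bone lengths and replays them through forward kinematics, rescaling the root translation so the differently proportioned character lands at a plausible height. The toy chain skeleton, names, and numbers are all hypothetical stand-ins, not a reference implementation; production pipelines would add IK-based contact cleanup and handle full rig topologies.

```python
# Minimal motion-retargeting sketch (illustrative only).
import numpy as np

def rot_z(theta):
    """3x3 rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def forward_kinematics(parents, offsets, local_rots, root_pos):
    """World-space joint positions: parents[i] is the parent joint index
    (-1 for the root); offsets[i] is the bone vector from parent to joint i
    in the parent's rest frame."""
    n = len(parents)
    world_rot = [None] * n
    world_pos = np.zeros((n, 3))
    for i in range(n):
        if parents[i] < 0:
            world_rot[i] = local_rots[i]
            world_pos[i] = root_pos
        else:
            p = parents[i]
            world_rot[i] = world_rot[p] @ local_rots[i]
            world_pos[i] = world_pos[p] + world_rot[p] @ offsets[i]
    return world_pos

# Hypothetical 4-joint chain: root -> hip -> knee -> foot.
parents = [-1, 0, 1, 2]
src_offsets = np.array([[0.0, 0.0, 0.0], [0.0, -0.1, 0.0],
                        [0.0, -0.45, 0.0], [0.0, -0.45, 0.0]])
tgt_offsets = src_offsets * np.array([1.0, 1.3, 1.0])  # 30% longer bones

# One captured frame: local rotations plus root position.
local_rots = [rot_z(a) for a in (0.0, 0.3, -0.6, 0.2)]
src_root = np.array([0.0, 1.0, 0.0])

# Retarget: reuse the local rotations unchanged, rescale root height by the
# leg-length ratio so the longer-legged target is not sunk into the floor.
scale = np.abs(tgt_offsets[1:, 1]).sum() / np.abs(src_offsets[1:, 1]).sum()
tgt_root = src_root * np.array([1.0, scale, 1.0])

print(forward_kinematics(parents, src_offsets, local_rots, src_root))
print(forward_kinematics(parents, tgt_offsets, local_rots, tgt_root))
```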
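The diffusion-based motion synthesis named in the second topic can be summarized by its sampling loop. The sketch below shows a DDPM-style reverse-diffusion pass over a pose-trajectory array; the `denoiser` stub is a hypothetical placeholder for a trained, conditioned network (e.g., a transformer over text or environment features), so with this zero-returning stub the output is only structured noise. The point is the control flow, and every parameter choice is an illustrative assumption.

```python
# Skeletal DDPM-style sampling loop for motion synthesis (illustrative only).
import numpy as np

T = 100                               # diffusion steps
betas = np.linspace(1e-4, 0.02, T)    # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t):
    """Placeholder for a learned noise-prediction network (hypothetical)."""
    return np.zeros_like(x_t)

def sample_motion(frames=60, dims=66, rng=np.random.default_rng(0)):
    """Ancestral sampling: start from Gaussian noise, iteratively denoise.
    dims=66 assumes, say, 22 joints x 3 rotation channels (an assumption)."""
    x = rng.standard_normal((frames, dims))
    for t in reversed(range(T)):
        eps_hat = denoiser(x, t)
        # Posterior mean of x_{t-1} given the predicted noise.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:  # add noise at every step except the final one
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x  # (frames, dims) pose trajectory

clip = sample_motion()
print(clip.shape)
```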
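For the sim-to-real techniques in the third topic, domain randomization is one common recipe: each training episode samples perturbed dynamics so a controller learned in simulation cannot overfit a single physics configuration. The toy 1-D point-mass rollout, the parameter ranges, and the proportional "policy" below are hypothetical placeholders for a full humanoid simulation and a learned controller.

```python
# Domain-randomization sketch for sim-to-real transfer (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sample_dynamics():
    """Draw one randomized physics configuration per episode (toy ranges)."""
    return {
        "mass":        rng.uniform(0.8, 1.2),   # kg, +/-20% around nominal
        "friction":    rng.uniform(0.5, 1.5),   # damping coefficient
        "motor_gain":  rng.uniform(0.9, 1.1),   # actuator strength scaling
        "obs_latency": rng.integers(0, 3),      # sensor delay in sim steps
    }

def rollout(policy, dyn, steps=200, dt=0.01):
    """Toy 1-D point-mass episode under the sampled dynamics."""
    pos, vel, reward = 1.0, 0.0, 0.0            # start displaced from the goal
    obs_buf = [0.0] * dyn["obs_latency"]        # delayed observation queue
    for _ in range(steps):
        obs_buf.append(pos)
        action = policy(obs_buf.pop(0)) * dyn["motor_gain"]
        accel = (action - dyn["friction"] * vel) / dyn["mass"]
        vel += dt * accel
        pos += dt * vel
        reward -= pos ** 2                      # objective: stay near the origin
    return reward

policy = lambda obs: -2.0 * obs                 # placeholder proportional controller
returns = [rollout(policy, sample_dynamics()) for _ in range(5)]
print(returns)
```

A policy co-trained across many such randomized configurations (and, in the cross-embodiment setting, across heterogeneous motion sources) is encouraged to acquire robust behavior rather than exploit one simulator's quirks.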