5-2 Please describe your current main research theme(s). (approx. 100–300 characters per theme)

1. In-Context 3D Craniofacial Shape Completion for Orthognathic Surgical Planning
Face-driven orthognathic surgical planning requires a patient-specific reference facial appearance that represents the intended postoperative outcome.
A clinically important subproblem in this setting is upper-face to lower-face prediction: inferring a normal and anatomically coherent lower facial geometry from an intact upper facial region.
This task is challenging due to the nonlinear, region-dependent relationship between facial anatomy and skeletal structure, and is not well addressed by existing approaches that primarily focus on reference bone model estimation or rely on linear statistical shape models.
In this work, we propose a retrieval-guided in-context learning framework for 3D craniofacial shape completion.
Our method treats complete facial shapes from a population as exemplars and predicts the missing lower face of a query subject by reasoning over these examples.
A self-supervised retrieval encoder first selects anatomically relevant support examples based on upper-face geometry.
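For illustration only, this retrieval step can be viewed as a top-k nearest-neighbor lookup over the self-supervised upper-face embeddings; the function name, tensor shapes, and value of k in the sketch below are assumptions made for the example, not the actual implementation.

import torch
import torch.nn.functional as F

def retrieve_support(query_emb, bank_embs, k=4):
    # query_emb: (d,) upper-face embedding of the query subject (assumed shape)
    # bank_embs: (N, d) upper-face embeddings of the population exemplars (assumed shape)
    sims = F.normalize(bank_embs, dim=-1) @ F.normalize(query_emb, dim=-1)  # cosine similarity, (N,)
    return torch.topk(sims, k).indices  # indices of the k most anatomically similar exemplars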
A transformer-based in-context completion network then jointly processes the retrieved exemplars and the query upper face, decomposing each support shape into upper- and lower-face tokens and synthesizing the corresponding lower-face geometry for the query.
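A minimal PyTorch sketch of one way such an in-context completion backbone could look is given below; the token dimensions, the learnable lower-face query slots, and the per-token point head are illustrative assumptions rather than the paper's architecture.

import torch
import torch.nn as nn

class InContextCompleter(nn.Module):
    def __init__(self, d=256, heads=8, layers=6, n_lower_tokens=128):
        super().__init__()
        block = nn.TransformerEncoderLayer(d, heads, dim_feedforward=4 * d, batch_first=True)
        self.backbone = nn.TransformerEncoder(block, layers)
        # learnable slots standing in for the query's missing lower-face tokens (assumption)
        self.lower_queries = nn.Parameter(torch.randn(n_lower_tokens, d))
        self.point_head = nn.Linear(d, 3)  # each output token decodes to a 3D point (assumption)

    def forward(self, support_upper, support_lower, query_upper):
        # support_upper / support_lower: (B, S, T, d) tokens of the S retrieved exemplars
        # query_upper: (B, T, d) tokens of the query's intact upper face
        B = query_upper.size(0)
        ctx = torch.cat([
            support_upper.flatten(1, 2),           # exemplar upper-face tokens
            support_lower.flatten(1, 2),           # exemplar lower-face tokens
            query_upper,                           # query upper-face tokens
            self.lower_queries.expand(B, -1, -1),  # placeholders for the query lower face
        ], dim=1)
        out = self.backbone(ctx)
        return self.point_head(out[:, -self.lower_queries.size(0):])  # (B, n_lower_tokens, 3)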
Unlike existing 3D in-context learning methods that rely on joint sampling or overlapping geometry, our approach enables completion across anatomically disjoint regions.
Experiments on craniofacial datasets demonstrate that the proposed method produces realistic, anatomically consistent lower-face predictions and outperforms statistical and deep learning baselines in both geometric accuracy and clinical relevance.
Our framework introduces a new paradigm for exemplar-based facial prediction and provides a flexible foundation for face-driven and soft-tissue–aware surgical planning.
2. Training-Free Style Transfer with Position-Bias Removal and Semantic-Guided Attention
Recent diffusion-based style transfer methods have achieved notable progress, with StyleID attracting wide attention for its training-free attention-based design. However, directly injecting style features into self-attention introduces implicit positional bias, where content tokens tend to over-attend to spatially adjacent style regions, preventing proper global correspondence matching. Moreover, without semantic guidance, neighboring content pixels that belong to the same region may attend to different style areas, producing fragmented textures and disrupting the coherence of region-level appearance. We propose StyleID++, a semantic-aware enhancement framework that addresses these two issues through two simple yet effective components. A circular padding strategy eliminates positional bias and restores the attention mechanism’s global receptive field, while self-guided attention refinement steers each content query by incorporating the attention patterns of its semantically similar neighbors, ensuring that style selection is supported by consistent regional semantics. This leads to cleaner and more coherent stylization. Extensive experiments show that StyleID++ improves style consistency and visual fidelity while remaining fully training-free, outperforming prior methods.
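A minimal sketch of how the two components might be realized on top of a StyleID-style key/value injection is given below. The post-hoc switch of the UNet convolutions to circular padding, the tensor shapes, and the temperature tau are assumptions made for illustration; this is not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def enable_circular_padding(unet):
    # Assumed realization of the circular padding strategy: replace zero padding in the
    # diffusion UNet's convolutions so features carry no absolute-position cue that could
    # bias content tokens toward spatially adjacent style regions.
    for m in unet.modules():
        if isinstance(m, nn.Conv2d):
            m.padding_mode = 'circular'
    return unet

def refined_style_attention(q_content, k_style, v_style, feat_content, tau=0.1):
    # q_content: (B, N, d) content queries of a self-attention layer
    # k_style, v_style: (B, N, d) injected style keys/values (as in StyleID)
    # feat_content: (B, N, d) content features used to find semantically similar neighbors
    d = q_content.size(-1)
    attn = torch.softmax(q_content @ k_style.transpose(1, 2) / d ** 0.5, dim=-1)  # (B, N, N)
    # self-guided refinement: average each query's attention map with those of its
    # semantically similar content neighbors, so pixels of the same region select
    # consistent style areas
    f = F.normalize(feat_content, dim=-1)
    neighbor_w = torch.softmax(f @ f.transpose(1, 2) / tau, dim=-1)               # (B, N, N)
    attn = neighbor_w @ attn
    return attn @ v_style                                                          # (B, N, d)

In this reading, the neighbor-weighted averaging acts as a smoothing prior over the attention maps rather than a change to the diffusion model itself, which is consistent with the method remaining fully training-free.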