MonoAvatar: Learning Personalized High Quality Volumetric Head Avatars from Monocular RGB Videos

Tags
3D
Skimming date
2023/04/05
They use a short (1-2 minute) monocular RGB video as training data to optimize a 3DMM-anchored NeRF.
1. They define an additional feature at each 3DMM vertex, computed by feeding rasterized vertex displacements (between the neutral and the expressed mesh) through a U-Net.
2. For each query point in the NeRF formulation, they find the K nearest 3DMM vertices.
3. Using the additional feature from step 1 and the displacement between each nearest vertex and the query point from step 2, they compute an intermediate feature for each of these nearest vertices.
4. A weighted sum of the intermediate features gives the input to the NeRF.
5. Train the NeRF!
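Steps 2-4 can be sketched roughly as below. This is a toy NumPy illustration under my own assumptions: the function name, the brute-force nearest-neighbor search, the concatenation as the "intermediate feature", and the inverse-distance weighting are all placeholders, not the paper's actual learned networks or weighting scheme.

```python
import numpy as np

def aggregate_vertex_features(query, verts, feats, k=8, eps=1e-8):
    """Toy version of steps 2-4: find K nearest 3DMM vertices, build an
    intermediate feature per neighbor, and blend them into one NeRF input.
    The paper uses learned MLPs here; this sketch just concatenates."""
    d = np.linalg.norm(verts - query, axis=1)           # (N,) distances to all vertices
    idx = np.argsort(d)[:k]                             # indices of the K nearest vertices
    disp = query - verts[idx]                           # (K, 3) vertex-to-query displacements
    # Intermediate feature per neighbor: vertex feature + displacement
    # (stand-in for the paper's learned per-vertex processing).
    inter = np.concatenate([feats[idx], disp], axis=1)  # (K, D + 3)
    w = 1.0 / (d[idx] + eps)                            # inverse-distance weights (assumed)
    w = w / w.sum()
    return (w[:, None] * inter).sum(axis=0)             # (D + 3,) input to the NeRF MLP

rng = np.random.default_rng(0)
verts = rng.standard_normal((500, 3))   # 3DMM vertex positions
feats = rng.standard_normal((500, 16))  # per-vertex features from the U-Net (step 1)
nerf_input = aggregate_vertex_features(np.zeros(3), verts, feats)
print(nerf_input.shape)  # (19,)
```

In practice this lookup runs per sample along every camera ray, so a KD-tree or GPU nearest-neighbor search would replace the `argsort` above.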
They claim the learned avatar can be driven by the same subject under different capture conditions (e.g., wearing glasses, as in the image I attached).
The main point is that they build an avatar model that can be rendered under a user-defined expression and viewpoint.