YIN Bao-cai, ZHANG Si-guang, WANG Li-chun, TANG Heng-liang. 3D Visible Speech Animation Driven by Prosody Text[J]. Journal of Beijing University of Technology, 2009, 35(12): 1690-1696.

    3D Visible Speech Animation Driven by Prosody Text

    Abstract: This paper proposes a new approach for generating realistic three-dimensional speech animation. The basic idea is to synthesize animated faces using prosodic information edited by the user with a text markup language. By capturing characteristic trajectories of utterances from video clips, our technique builds a parametric model based on an exponential formula, which extends the static viseme to a dynamic one. To relate the prosody text to the 3D animation, the input attributes are mapped to the values of the formula parameters. Experimental results show that the proposed technique synthesizes animations with different effects depending on the available prosodic information.
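
    To illustrate how a static viseme might be extended to a dynamic one by an exponential formula whose parameter is driven by a prosodic attribute, the following Python sketch is a minimal, assumption-laden example. The function names (dynamic_viseme, alpha_from_prosody), the specific exponential form, and the mapping from an emphasis attribute to the decay rate are hypothetical and are not taken from the paper.

        import numpy as np

        def dynamic_viseme(t, v_target, t_peak, alpha):
            # Hypothetical exponential trajectory: the static viseme value
            # v_target is reached at t_peak and decays away from it at rate alpha.
            return v_target * np.exp(-alpha * np.abs(t - t_peak))

        def alpha_from_prosody(emphasis):
            # Illustrative mapping of a prosodic emphasis attribute in [0, 1]
            # to the exponential rate parameter (an assumption, not the paper's mapping).
            return 2.0 + 8.0 * (1.0 - emphasis)  # stronger emphasis -> slower decay

        # Sample one characteristic-point trajectory over 0.5 s of speech.
        times = np.linspace(0.0, 0.5, 50)
        trajectory = dynamic_viseme(times, v_target=1.0, t_peak=0.25,
                                    alpha=alpha_from_prosody(0.7))

    In such a scheme, richer prosodic markup would simply supply more attributes to map onto the formula's parameters, while plain text without markup would fall back to default parameter values.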
