비젼 기반의 3차원 아바타 얼굴 애니메이션
- Alternative Title
- A Vision-based Approach to 3D Avatar Facial Animation
- Abstract
- To preserve anonymity in cyberspace while enabling realistic communication, many researchers have worked on 3D facial animation, and there have been many attempts to transfer real human expressions to 3D avatars. However, because the human face has a very complex muscle structure and conveys a wide range of expressions through minute variations, it is difficult to build a model for facial animation.
Since Parke's pioneering work in the 1970s, researchers have focused on personalized facial modeling and the generation of realistic animation. Real-time performance is another important requirement in HCI and cyberspace applications. Recently, many studies have attempted to capture actual human expressions and transfer them to 3D virtual characters.
Research on realistic facial animation falls largely into two groups: gathering human expression data, and adapting that data to a facial model. The most practical way to obtain expression data is to analyze facial movements in a video of actual expressions. Accordingly, recent studies extract expression data from video and apply it to 3D virtual avatars; this is called the vision-based approach.
In this thesis, we propose an integrated system that extracts facial animation data from video and applies it to a 3D avatar face model. For real-time operation, each step of the system must use information that is easy to obtain and must rely on algorithms with low time complexity. We use color video to extract human facial expression data.
To detect the human face in video, we propose a 3D TSL skin color model. Skin color is a distinctive facial characteristic that is easy to extract, but it has the disadvantage of being sensitive to lighting conditions. To address this, we build a skin color model that considers both color tone and brightness, which yields a robust skin color detection algorithm. However, because skin color alone cannot sufficiently distinguish the face from similarly colored background regions, we additionally employ a Haar-based classifier to achieve high classification performance. The positions of major features such as the eyes, mouth, and nose are then tracked using a color probability distribution.
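The TSL (Tint, Saturation, Lightness) conversion underlying such a skin model can be sketched as below. The conversion formulas are the standard normalized-chromaticity form of TSL; the threshold ranges in `is_skin` are illustrative placeholders only, not the fitted 3D model of the thesis.

```python
import math

def rgb_to_tsl(r, g, b):
    """Convert an RGB pixel (components 0-255) to TSL.

    T and S come from the normalized chromaticities shifted by 1/3;
    L is the usual luma, rescaled to [0, 1].
    """
    total = r + g + b
    if total == 0:
        return 0.0, 0.0, 0.0
    rn = r / total - 1.0 / 3.0  # shifted normalized red
    gn = g / total - 1.0 / 3.0  # shifted normalized green
    if gn > 0:
        t = math.atan(rn / gn) / (2.0 * math.pi) + 0.25
    elif gn < 0:
        t = math.atan(rn / gn) / (2.0 * math.pi) + 0.75
    else:
        t = 0.0
    s = math.sqrt(9.0 / 5.0 * (rn * rn + gn * gn))
    l = (0.299 * r + 0.587 * g + 0.114 * b) / 255.0
    return t, s, l

def is_skin(r, g, b,
            t_range=(0.45, 0.65),
            s_range=(0.05, 0.35),
            l_range=(0.10, 0.90)):
    """Classify a pixel as skin if all three TSL components fall in range.

    The ranges are hypothetical examples; a real detector would fit them
    (jointly, as a 3D region) to labeled skin samples.
    """
    t, s, l = rgb_to_tsl(r, g, b)
    return (t_range[0] <= t <= t_range[1]
            and s_range[0] <= s <= s_range[1]
            and l_range[0] <= l <= l_range[1])
```

Because the brightness component L is thresholded together with tone (T) and saturation (S), the classifier forms a box in 3D TSL space rather than ignoring illumination, which is the motivation for a "3D" skin color model.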
When a generalized 3D face model is used, animation data cannot be applied to it directly, because the actual face in the video and the 3D face model have different facial proportions. The animation data must therefore be transformed into the measures of the 3D face model. For this we use the FAPUs (Facial Animation Parameter Units) defined in MPEG-4, a standardized basis for facial expressions.
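FAPU-based retargeting can be sketched as follows. The division by 1024 follows MPEG-4, which expresses FAPs in 1/1024ths of a base distance such as ES0 (eye separation); the landmark coordinates and function names here are hypothetical, not taken from the thesis.

```python
import math

def dist(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def fapu_es(left_eye, right_eye):
    """ES0 eye-separation FAPU: the eye distance in MPEG-4's 1/1024 units."""
    return dist(left_eye, right_eye) / 1024.0

def retarget(displacement, video_fapu, model_fapu):
    """Map a feature displacement measured on the video face onto the avatar.

    Normalizing by the video's FAPU gives a proportion-free FAP value;
    re-scaling by the model's FAPU adapts it to the avatar's geometry.
    """
    fap = displacement / video_fapu
    return fap * model_fapu
```

The same two-step pattern (normalize by the source face's unit, re-scale by the target model's unit) applies to each FAPU, so differently proportioned faces drive the same avatar consistently.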
The parameter set describing the animation data consists of the displacements and rotations of distinctive features, and is divided into motion data and expression data. To control the whole face model with a small number of parameters, we define the parameter set from the FDPs in MPEG-4's FAP that relate to the positions of the eyes and mouth.
Driving the mouth by morphing alone produces unnatural motion, because mouth movement depends largely on the jaw joint; that is, opening the mouth is a rotational movement. In this thesis we therefore create a bone model that represents mouth opening and describes the rotational movement of the jaw. This bone structure controls the mesh points of the 3D face model.
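A minimal sketch of jaw-driven mouth opening, assuming a single rotation about a jaw pivot in the face's side-view plane and per-vertex skinning weights; the pivot location, weights, and function names are illustrative assumptions, not the thesis's actual bone model.

```python
import math

def rotate_about_pivot(point, pivot, angle_rad):
    """Rotate a 2D point about the jaw pivot by angle_rad."""
    x, y = point[0] - pivot[0], point[1] - pivot[1]
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (pivot[0] + c * x - s * y,
            pivot[1] + s * x + c * y)

def open_jaw(vertices, weights, pivot, angle_rad):
    """Move mesh vertices for a jaw opening of angle_rad.

    Each vertex blends between its rest position and the fully rotated
    position using a skinning weight (0 = fixed to skull, 1 = fixed to jaw),
    so the lower lip follows the bone while the upper face stays put.
    """
    out = []
    for v, w in zip(vertices, weights):
        rx, ry = rotate_about_pivot(v, pivot, angle_rad)
        out.append(((1 - w) * v[0] + w * rx,
                    (1 - w) * v[1] + w * ry))
    return out
```

Because the vertices rotate about the joint instead of translating linearly, the chin traces an arc, which is what makes the bone-driven mouth look more natural than morphing alone.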
The proposed facial animation system controls facial expressions in real time, making it well suited to 3D avatar applications such as HCI, cyber education, and chat.
- Author(s)
- 이윤정
- Issued Date
- 2008
- Awarded Date
- 2008. 8
- Type
- Dissertation
- Keyword
- Facial animation, 3D avatar, image-based
- Publisher
- 부경대학교 대학원
- URI
- https://repository.pknu.ac.kr:8443/handle/2021.oak/11066
http://pknu.dcollection.net/jsp/common/DcLoOrgPer.jsp?sItemId=000001955501
- Alternative Author(s)
- Lee, Yun-Jung
- Affiliation
- 부경대학교 대학원
- Department
- 대학원 전자계산학과
- Advisor
- 김영봉
- Table Of Contents
- List of Tables = iii
List of Figures = iv
I. Introduction = 1
1. Research Background = 2
2. Research Contents = 4
3. Thesis Organization = 6
II. Related Work = 7
1. Face Detection = 7
A. Feature-based Methods = 8
B. Image-based Methods = 10
2. Facial Feature Detection = 11
3. Face Tracking = 12
4. Expression Control of 3D Face Models = 13
A. Key-frame Methods = 14
B. Muscle-based Methods = 15
C. Parameter-based Methods = 17
5. Vision-based Facial Animation Systems = 20
III. Overview of the Proposed System = 23
IV. Face and Feature Detection = 29
1. Face Detection = 29
A. 3D TSL Skin Color Model = 31
B. Face Candidate Region Detection and Face Verification = 38
2. Facial Feature Detection = 41
A. Eye Region Detection = 41
B. Mouth Detection = 46
C. Nose Detection = 50
3. Face Tracking = 53
4. Experimental Results = 60
A. Performance of the 3D TSL Skin Color Model = 60
B. Face Detection = 63
C. Facial Feature Region Detection = 72
D. Face Tracking = 77
V. Face Model Control = 80
1. Facial Animation Parameters = 80
2. Animation Parameters for Facial Motion = 85
3. Expression Control of the 3D Avatar Face = 91
VI. Conclusion = 98
[References] = 101
- Degree
- Doctor
Appears in Collections:
- 과학기술융합전문대학원 > 기타 학과