
3D Image Reconstruction Algorithm Based on Depth Map Information of a Kinect Camera Sensor

Abstract
3D image reconstruction is one of the active and challenging research topics in computer vision, artificial intelligence, virtual reality, and related fields.
To implement a 3D image reconstruction system, this thesis proposes a 3D image reconstruction algorithm using the depth information obtained from a Kinect camera sensor. To do this, the following steps are carried out. First, the structure of the Kinect camera sensor and the principle of acquiring depth information are introduced, together with the principle of camera calibration, the geometrical relationships in an image, the camera distortion model, and the geometrical relationship between two cameras. Second, a calibration method for the Kinect camera sensor as a dual-camera system is designed. Third, because the projected laser dot pattern interferes with the infrared image, the preliminary calibration results for the infrared camera contain a large error. To reduce this calibration error, this thesis proposes two methods: one is shielding the infrared transmitter to eliminate the interference source, and the other is capturing the images in a bright room to increase the brightness of the infrared image. Fourth, the radial distortions of both the RGB color camera and the infrared camera inside the Kinect camera sensor are calibrated; because the tangential distortions are very small, their calibration is neglected. The RGB color camera and the infrared camera are translated along one direction and their relative rotation is very small, which can be confirmed from the physical layout of the Kinect camera sensor. After comparing four typical 3D image reconstruction algorithms, a real-time 3D image reconstruction algorithm based on the depth information of the Kinect camera sensor is described.
Fifth, three sources of noise, originating from the Kinect equipment, the measurement environment, and the surface properties of the object to be measured, are introduced, and an experiment is carried out to verify the effect of random noise. Based on the signal structure of the depth map and the random noise, five filters are applied: the median filter, the Gaussian filter, the bilateral filter, the joint bilateral filter, and an improved joint bilateral filter. For each filter applied to the original depth map, its effect on the 3D image reconstructed from the 3D point cloud and the depth map is analyzed. Among these filters, the Gaussian filter has the shortest running time but cannot remove the noise completely, whereas the joint bilateral filter removes the noise very well but has the longest running time. To compensate for the drawbacks of both filters by combining their strengths, the improved joint bilateral filter is proposed.
Finally, the effectiveness of the proposed algorithms is verified through simulation and experiment.
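The core back-projection step of the reconstruction algorithm, computing 3D coordinates from the depth map and the calibrated intrinsic parameters, can be illustrated with a minimal sketch. The function name, the placeholder intrinsics fx, fy, cx, cy, and the synthetic depth map below are assumptions for illustration only, not the calibration results reported in the thesis.

import numpy as np

def depth_to_point_cloud(depth_mm, fx, fy, cx, cy):
    """Back-project a depth map (millimetres) to an N x 3 array of 3D points (metres)."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))    # pixel column (u) and row (v) indices
    z = depth_mm.astype(np.float64) / 1000.0          # depth in metres
    x = (u - cx) * z / fx                             # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy                             # pinhole model: Y = (v - cy) * Z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                   # discard pixels with no valid depth

# Placeholder intrinsics and a synthetic depth map, for illustration only
depth = np.random.randint(500, 4000, size=(480, 640)).astype(np.uint16)
cloud = depth_to_point_cloud(depth, fx=580.0, fy=580.0, cx=320.0, cy=240.0)

Likewise, the joint bilateral filter used for denoising the depth map can be sketched as follows, assuming a registered grey-scale guide image; the window radius and the two standard deviations are illustrative choices, and the improved joint bilateral filter proposed in the thesis, which further combines this idea with Gaussian filtering to shorten the running time, is not reproduced here.

import numpy as np

def joint_bilateral_filter(depth, guide, radius=3, sigma_s=3.0, sigma_r=10.0):
    """Smooth the depth map while preserving edges indicated by the guide image.
    Written for clarity rather than speed."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.float64)
    pad_d = np.pad(depth.astype(np.float64), radius, mode='edge')
    pad_g = np.pad(guide.astype(np.float64), radius, mode='edge')

    # Spatial (domain) Gaussian weights of the filter window
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma_s ** 2))

    for i in range(h):
        for j in range(w):
            d_win = pad_d[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            g_win = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights come from the guide image, not from the noisy depth map
            rng = np.exp(-((g_win - pad_g[i + radius, j + radius]) ** 2) / (2.0 * sigma_r ** 2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * d_win) / np.sum(weights)
    return out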

Keywords: Real-time 3D image reconstruction, Kinect camera sensor, Camera calibration, Mean square error, Denoising, Improved joint bilateral filter.
Author(s)
Tian Shui Gao
Issued Date
2017
Awarded Date
2017. 2
Type
Dissertation
Keyword
3D Image Reconstruction, Camera Sensor
Publisher
Pukyong National University Graduate School
URI
https://repository.pknu.ac.kr:8443/handle/2021.oak/13444
http://pknu.dcollection.net/jsp/common/DcLoOrgPer.jsp?sItemId=000002333923
Affiliation
Pukyong National University Graduate School
Department
Graduate School, Department of Mechanical Engineering, Major in Mechanical Design
Advisor
김상봉
Table Of Contents
Chapter 1: Introduction
1.1 Background and motivation
1.2 Problem statements
1.3 Objective and research method
1.4 Outline of thesis and summary of contributions
Chapter 2: System Description and Calibration
2.1 Selection of 3D image reconstruction methods
2.2 Principle of Kinect camera sensor
2.3 Coordinate systems
2.3.1 Image coordinate system
2.3.2 Camera coordinate system
2.3.3 World coordinate system
2.4 Camera model
2.5 Camera calibration
2.5.1 Perspective projection
2.5.2 Determination of camera parameters
2.6 Lens distortion
2.6.1 Radial distortion
2.6.2 Tangential distortion
2.7 Geometric relation of dual cameras
2.8 Realization of calibration module
2.9 Calibration result and analysis
2.9.1 Calibration of two cameras
2.9.1.1 Analysis of calibration of the color camera
2.9.1.2 Analysis of calibration of the infrared camera
2.9.2 Calibration improvement
2.9.2.1 Improved color camera calibration
2.9.2.2 Improved infrared camera calibration
2.9.2.3 Geometric relations of two cameras
2.10 Summary of the chapter
Chapter 3: 3D Image Reconstruction Algorithm Analysis
3.1 3D image reconstruction algorithm
3.1.1 Kinect data acquisition
3.1.2 Coordinate transformation
3.1.3 XY coordinate calculation
3.1.4 3D point cloud display
3.2 Noise analysis
3.2.1 Noise sources
3.2.2 Mathematical model of random noise
3.3 Denoising algorithm
3.3.1 Algorithm evaluation criteria
3.3.2 Denoising algorithm
3.3.3 Median filter
3.3.4 Gaussian filter
3.3.5 Bilateral filter
3.3.5.1 Comparison with Gaussian filter
3.3.5.2 Parameter selection
3.3.5.3 Filtering results
3.3.6 Joint bilateral filter
3.4 Improved denoising algorithm
3.4.1 Filter profiling
3.4.2 Improved joint bilateral filter
3.5 3D reconstruction using the point cloud library
3.6 Summary
Chapter 4: Conclusions and Future Works
4.1 Conclusions
4.2 Future works
References
Publications and Conferences
Appendix
Degree
Master
Appears in Collections:
Graduate School > Department of Mechanical Engineering - Major in Mechanical Design
Authorize & License
  • Authorize: Open access