CN103389042A - Ground automatic detecting and scene height calculating method based on depth image - Google Patents

Ground automatic detecting and scene height calculating method based on depth image

Info

Publication number
CN103389042A
CN103389042A CN2013102895596A CN201310289559A
Authority
CN
China
Prior art keywords
ground
image
depth
camera
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013102895596A
Other languages
Chinese (zh)
Inventor
夏东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN2013102895596A priority Critical patent/CN103389042A/en
Publication of CN103389042A publication Critical patent/CN103389042A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

Provided is a method for automatic ground detection and scene height calculation based on a depth image. If the value h computed at a point satisfies H1 < h < H2 and d < Th_smooth, the point is judged to lie on the ground. All points satisfying this condition are extracted, the region of largest area is found by a binary image labeling method, and that region is identified as the ground. Once the ground has been determined, the l3 vectors of the ground region stored in the L_matrix are averaged to obtain the representation of the ground-plane normal vector in the camera coordinate system, and the mean value of h over the ground region gives the installation height of the camera. The height of any image point is then computed from a closed-form formula. With this method, a depth image is used to detect the ground automatically, and the height above the ground of every point in the scene is calculated from the ground detection result. This height information allows intelligent surveillance systems and intelligent robots to understand the scene more deeply and thoroughly.

Description

Method for automatic ground detection and scene height calculation based on depth images
Technical field
The present invention relates to a ground detection and height measurement method, and specifically to a method for automatic ground detection and scene height calculation based on depth images.
Background technology
In modern intelligent surveillance systems and intelligent robots, ground detection is of considerable importance for target detection and for robot path planning, but traditional optical and acoustic sensors cannot detect the ground reliably.
Summary of the invention
The technical problem solved by the present invention is to provide a method for automatic ground detection and scene height calculation based on depth images. The method uses a depth image to detect the ground automatically and then, from the ground detection result, calculates the height above the ground of every point in the scene. This height information allows intelligent surveillance systems and intelligent robots to understand the scene more deeply and thoroughly.
The technical problem is solved by the following technical solution:
A method for automatic ground detection and scene height calculation based on depth images, with the following concrete steps:
Step (1), background modeling
First, for a completely static scene, a depth camera collects M depth frames depth_m, m = 1, 2, ..., M. The M frames are accumulated to obtain a cumulative sum image S, and at the same time the number of times a valid depth value appears at each pixel over the M frames is counted, giving an image Valid that records the number of valid observations per pixel [counting formula image not reproduced]. Letting the background depth at each pixel be the accumulated sum S divided by the valid-observation count Valid [formula image not reproduced], a multi-frame-smoothed background depth map is obtained;
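A minimal Python/NumPy sketch of this background-modeling step, reconstructed from the surrounding text (the patent's own formulas appear only as images); the function name and the convention that a depth value of 0 marks an invalid measurement are assumptions of this sketch, not taken from the patent:

```python
import numpy as np

def build_background(depth_frames):
    """Accumulate M depth frames and average over the valid observations.

    depth_frames: iterable of (H, W) arrays; 0 is assumed to mark an
    invalid depth measurement. Returns (depth_bg, valid): the smoothed
    background depth map and the per-pixel valid-observation count.
    """
    frames = [np.asarray(f, dtype=np.float64) for f in depth_frames]
    S = np.zeros_like(frames[0])        # cumulative sum image S
    valid = np.zeros_like(frames[0])    # per-pixel valid-observation count (Valid)
    for f in frames:
        mask = f > 0                    # valid depth values in this frame
        S[mask] += f[mask]
        valid[mask] += 1
    depth_bg = np.zeros_like(S)
    nonzero = valid > 0
    depth_bg[nonzero] = S[nonzero] / valid[nonzero]   # multi-frame smoothing
    return depth_bg, valid
```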
Step (2): using the depth camera intrinsic parameter matrix, compute the coordinates of the background under the depth camera coordinate system. For pixel (u, v), the background coordinates in the camera coordinate system are computed as shown in the original formula image [not reproduced], and x_c(u, v), y_c(u, v), z_c(u, v) are saved, giving three images x_c, y_c, z_c of the same size as the background depth map;
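The patent's back-projection formula is not reproduced here, but given the intrinsic parameters named in the definitions below (normalized focal lengths f/dx, f/dy and optical center (u0, v0)), a standard pinhole back-projection would look like the following sketch; the exact formula in the patent may differ:

```python
import numpy as np

def backproject(depth_bg, ax, ay, u0, v0):
    """Back-project the background depth map into camera coordinates.

    ax, ay are the normalized focal lengths f/dx, f/dy; (u0, v0) is the
    optical center. Returns the three images x_c, y_c, z_c, each the same
    size as depth_bg. The pinhole formulas below are an assumption based
    on the intrinsic parameters named in the patent.
    """
    h, w = depth_bg.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinate grids
    z_c = depth_bg
    x_c = (u - u0) * z_c / ax
    y_c = (v - v0) * z_c / ay
    return x_c, y_c, z_c
```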
Step (3): in the lower half of the background depth image, for each pixel (u', v'), choose three points located at pixel positions offset from (u', v') [coordinate image not reproduced], where bias denotes the image distance between the chosen pixels and the pixel currently being judged;
Step (4): in the images x_c, y_c, z_c, read the image values of the four pixels (u', v') and the three offset pixels, and arrange them into the point coordinates p1, p2, p3, p4, with p1 taken at (u', v') [arrangement image not reproduced];
Step (5): fit a plane through p2, p3, p4. Form difference vectors between these points [definition image not reproduced], use their cross product to compute the unit normal vector l3 of the fitted plane, and compute the distance from p1 to this plane, d(u, v) = (p1 - p2) · l3, as well as the distance from the camera coordinate origin to the plane, h(u, v) = -p2 · l3, where '·' denotes the dot product of two vectors. At the same time, save l3 as a vector in image-coordinate order, i.e. L_matrix(u', v') = l3;
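A sketch of steps (3) to (5) for one pixel, under two stated assumptions: the three auxiliary points are taken at offsets of `bias` pixels to the right, below, and diagonally from (u', v') (the patent allows any selection rule), and the difference vectors fed to the cross product are p3 - p2 and p4 - p2:

```python
import numpy as np

def plane_test(x_c, y_c, z_c, u, v, bias):
    """Fit a plane through p2, p3, p4 and measure p1 against it.

    Images are indexed [row, col] = [v, u], and (u + bias, v + bias) is
    assumed to stay inside the image. Returns (d, h, l3): distance of p1
    to the fitted plane, distance of the camera origin to the plane, and
    the unit normal l3.
    """
    def point(uu, vv):
        return np.array([x_c[vv, uu], y_c[vv, uu], z_c[vv, uu]])

    p1 = point(u, v)                    # the pixel being judged
    p2 = point(u + bias, v)             # assumed offset positions
    p3 = point(u, v + bias)
    p4 = point(u + bias, v + bias)

    l1 = p3 - p2
    l2 = p4 - p2
    n = np.cross(l1, l2)
    l3 = n / (np.linalg.norm(n) + 1e-12)   # unit normal of the fitted plane

    d = np.dot(p1 - p2, l3)   # distance from p1 to the plane
    h = -np.dot(p2, l3)       # distance from the camera origin to the plane
    return d, h, l3
```

Looping this function over the pixels of the lower half of the image yields the per-pixel d and h maps used in steps (6) and (7).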
Step (6): apply the operations of steps (3), (4) and (5) to all pixels, obtaining d and h for every point;
Step (7): using prior knowledge, input an estimated camera height range H1, H2. If the h computed at a point satisfies H1 < h < H2 and d < Th_smooth, the point is judged to lie on the ground, where Th_smooth is the threshold for deciding whether p1, p2, p3, p4 lie on the same plane;
Step (8): extract all points satisfying the condition of step (7), find the region of largest area by a binary image labeling method, and identify that region as the ground;
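Steps (7) and (8) could be sketched as follows; `scipy.ndimage.label` stands in for the unspecified binary image labeling method, and the literal condition d < Th_smooth is used as stated in step (7):

```python
import numpy as np
from scipy import ndimage

def detect_ground(d_img, h_img, H1, H2, th_smooth):
    """Classify candidate ground pixels and keep the largest region.

    d_img, h_img: per-pixel d and h from step (6).
    Returns a boolean mask of the detected ground region.
    """
    # Literal condition from step (7); using abs(d_img) would give a
    # stricter planarity test if d can take negative values.
    candidates = (h_img > H1) & (h_img < H2) & (d_img < th_smooth)
    labels, num = ndimage.label(candidates)       # binary image labeling
    if num == 0:
        return np.zeros_like(candidates, dtype=bool)
    areas = np.bincount(labels.ravel())[1:]       # label 0 is background
    largest = np.argmax(areas) + 1
    return labels == largest
```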
Step (9): after the ground has been determined, average the l3 vectors of the ground region stored in the L_matrix to obtain a mean normal vector [symbol image not reproduced], which is the representation of the ground-plane normal vector in the camera coordinate system; the mean value of h over the ground region is the installation height of the camera;
Step (10): for any image point [symbol image not reproduced], its height above the ground can be calculated according to the formula shown in the original [formula image not reproduced], using the ground normal vector and the camera installation height obtained in step (9).
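The step-(10) formula exists only as an image, so the following is a hedged reconstruction: averaging l3 over the ground region gives the ground normal n_bar and averaging h gives the camera installation height H_bar, after which the height of a point p above the ground plane is taken as H_bar + p · n_bar, which is zero for ground points and H_bar at the camera origin; the patent's exact expression may differ:

```python
import numpy as np

def ground_normal_and_height(L_matrix, h_img, ground_mask):
    """Average l3 over the ground region to obtain the ground normal
    n_bar and the camera installation height H_bar.

    L_matrix: (H, W, 3) array of per-pixel l3 vectors;
    h_img: (H, W) array of per-pixel h; ground_mask: boolean (H, W).
    """
    n_bar = L_matrix[ground_mask].mean(axis=0)
    n_bar = n_bar / np.linalg.norm(n_bar)        # renormalize after averaging
    H_bar = h_img[ground_mask].mean()
    return n_bar, H_bar

def point_height(x_c, y_c, z_c, n_bar, H_bar):
    """Height of every image point above the ground plane.

    The closed-form expression height = H_bar + p . n_bar is a
    reconstruction, not the patent's own (unreproduced) formula.
    """
    p_dot_n = x_c * n_bar[0] + y_c * n_bar[1] + z_c * n_bar[2]
    return H_bar + p_dot_n
```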
As a preferred option, in step (7) Th_smooth is set to the value shown in the original [value image not reproduced].
Compared with the prior art, the beneficial effect of the invention is that the method uses a depth image to detect the ground automatically and then, from the ground detection result, calculates the height above the ground of every point in the scene. This height information allows intelligent surveillance systems and intelligent robots to understand the scene more deeply and thoroughly.
Embodiment
In order to make the technical means, creative features, workflow, usage, objectives and effects of the present invention easy to understand, the invention is further described below.
For a better understanding of the present invention, some concepts are first defined or explained.
1. Depth image: an image output by a depth camera that contains the depth information of the scene. Unlike an ordinary image, which represents the gray level or color of the scene, a depth image represents the distance between scene points and the depth camera.
2. Depth camera: a camera that obtains scene depth information using one of the following three technologies, or a fusion of them: (1) structured light coding, (2) binocular stereo vision, (3) time of flight.
3. Depth camera intrinsic parameter matrix: this matrix is expressed in terms of the focal length f of the depth camera, the pixel sizes dx and dy, the corresponding normalized focal lengths [image not reproduced], and the coordinates (u0, v0) of the optical center in the image coordinate system [matrix image not reproduced].
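Written out from the quantities listed above, the standard pinhole intrinsic matrix takes the following form (a reconstruction, since the patent's matrix appears only as an image):

```latex
K =
\begin{pmatrix}
  a_x & 0   & u_0 \\
  0   & a_y & v_0 \\
  0   & 0   & 1
\end{pmatrix},
\qquad
a_x = \frac{f}{dx}, \quad a_y = \frac{f}{dy}.
```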
4. Ground: the plane in the scene that occupies the largest area in the lower half of the depth image. When mounting the depth camera, the camera installation angle should be adjusted so that the lower half of the depth image reflects the depth information of the ground well.
A method for automatic ground detection and scene height calculation based on depth images, with the following concrete steps:
Step (1), background modeling
First, for a completely static scene (containing no moving foreground targets), a depth camera collects M depth frames depth_m, m = 1, 2, ..., M. The M frames are accumulated to obtain a cumulative sum image S, and at the same time the number of times a valid depth value appears at each pixel over the M frames is counted, giving an image Valid that records the number of valid observations per pixel [counting formula image not reproduced]. Letting the background depth at each pixel be the accumulated sum S divided by the valid-observation count Valid [formula image not reproduced], a multi-frame-smoothed background depth map is obtained;
Step (2): using the depth camera intrinsic parameter matrix, compute the coordinates of the background under the depth camera coordinate system. For pixel (u, v), the background coordinates in the camera coordinate system are computed as shown in the original formula image [not reproduced], and x_c(u, v), y_c(u, v), z_c(u, v) are saved, giving three images x_c, y_c, z_c of the same size as the background depth map.
Step (3): in the lower half of the background depth image, for each pixel (u', v'), choose three points located at pixel positions offset from (u', v') [coordinate image not reproduced], where bias denotes the image distance between the chosen pixels and the pixel currently being judged. It should be noted that the arrangement of the three points is not strictly required to follow the coordinates given here; the three points may also be chosen according to any other rule, and this variation in the point-selection scheme does not change the essence of the present invention.
Step (4): in the images x_c, y_c, z_c, read the image values of the four pixels (u', v') and the three offset pixels, and arrange them into the point coordinates p1, p2, p3, p4 [arrangement image not reproduced].
Step (5): fit a plane through p2, p3, p4. Form difference vectors between these points [definition image not reproduced], use their cross product to compute the unit normal vector l3 of the fitted plane, and compute the distance from p1 to this plane, d(u, v) = (p1 - p2) · l3, as well as the distance from the camera coordinate origin to the plane, h(u, v) = -p2 · l3, where '·' denotes the dot product of two vectors. At the same time, save l3 as a vector in image-coordinate order, i.e. L_matrix(u', v') = l3.
Step (6): apply the operations of steps (3), (4) and (5) to all pixels, obtaining d and h for every point.
Step (7): using prior knowledge, input an estimated camera height range H1, H2. If the h computed at a point satisfies H1 < h < H2 and d < Th_smooth, the point is judged to lie on the ground, where Th_smooth is the threshold for deciding whether p1, p2, p3, p4 lie on the same plane and is generally set to the value shown in the original [value image not reproduced].
Step (8): extract all points satisfying the condition of step (7), find the region of largest area by a binary image labeling method, and identify that region as the ground.
Step (9): after the ground has been determined, average the l3 vectors of the ground region stored in the L_matrix to obtain a mean normal vector [symbol image not reproduced], which is the representation of the ground-plane normal vector in the camera coordinate system; the mean value of h over the ground region is the installation height of the camera.
Step (10): for any image point [symbol image not reproduced], its height above the ground can be calculated according to the formula shown in the original [formula image not reproduced], using the ground normal vector and the camera installation height obtained in step (9).
The above shows and describes the basic principles, principal features and advantages of the present invention. Those skilled in the art should understand that the present invention is not limited to the embodiment described above; the embodiment and the description merely illustrate the principles of the invention. Various changes and improvements may be made without departing from the spirit and scope of the invention, and all such changes and improvements fall within the scope of protection claimed. The scope of protection of the present invention is defined by the appended claims and their equivalents.

Claims (2)

1. A method for automatic ground detection and scene height calculation based on depth images, characterized in that the concrete steps are as follows:
Step (1), background modeling
First, for a completely static scene, a depth camera collects M depth frames depth_m, m = 1, 2, ..., M. The M frames are accumulated to obtain a cumulative sum image S, and at the same time the number of times a valid depth value appears at each pixel over the M frames is counted, giving an image Valid that records the number of valid observations per pixel [counting formula image not reproduced]. Letting the background depth at each pixel be the accumulated sum S divided by the valid-observation count Valid [formula image not reproduced], a multi-frame-smoothed background depth map is obtained;
Step (2): using the depth camera intrinsic parameter matrix, compute the coordinates of the background under the depth camera coordinate system. For pixel (u, v), the background coordinates in the camera coordinate system are computed as shown in the original formula image [not reproduced], and x_c(u, v), y_c(u, v), z_c(u, v) are saved, giving three images x_c, y_c, z_c of the same size as the background depth map;
Step (3): in the lower half of the background depth image, for each pixel (u', v'), choose three points located at pixel positions offset from (u', v') [coordinate image not reproduced], where bias denotes the image distance between the chosen pixels and the pixel currently being judged;
Step (4): in the images x_c, y_c, z_c, read the image values of the four pixels (u', v') and the three offset pixels, and arrange them into the point coordinates p1, p2, p3, p4 [arrangement image not reproduced];
Step (5): fit a plane through p2, p3, p4. Form difference vectors between these points [definition image not reproduced], use their cross product to compute the unit normal vector l3 of the fitted plane, and compute the distance from p1 to this plane, d(u, v) = (p1 - p2) · l3, as well as the distance from the camera coordinate origin to the plane, h(u, v) = -p2 · l3, where '·' denotes the dot product of two vectors; at the same time, save l3 as a vector in image-coordinate order, i.e. L_matrix(u', v') = l3;
Step (6): apply the operations of steps (3), (4) and (5) to all pixels, obtaining d and h for every point;
Step (7): using prior knowledge, input an estimated camera height range H1, H2. If the h computed at a point satisfies H1 < h < H2 and d < Th_smooth, the point is judged to lie on the ground, where Th_smooth is the threshold for deciding whether p1, p2, p3, p4 lie on the same plane;
Step (8): extract all points satisfying the condition of step (7), find the region of largest area by a binary image labeling method, and identify that region as the ground;
Step (9): after the ground has been determined, average the l3 vectors of the ground region stored in the L_matrix to obtain a mean normal vector [symbol image not reproduced], which is the representation of the ground-plane normal vector in the camera coordinate system; the mean value of h over the ground region is the installation height of the camera;
Step (10): for any image point [symbol image not reproduced], its height above the ground can be calculated according to the formula shown in the original [formula image not reproduced], using the ground normal vector and the camera installation height obtained in step (9).
2. The method for automatic ground detection and scene height calculation based on depth images according to claim 1, characterized in that in step (7) Th_smooth is set to the value shown in the original [value image not reproduced].
CN2013102895596A 2013-07-11 2013-07-11 Ground automatic detecting and scene height calculating method based on depth image Pending CN103389042A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013102895596A CN103389042A (en) 2013-07-11 2013-07-11 Ground automatic detecting and scene height calculating method based on depth image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2013102895596A CN103389042A (en) 2013-07-11 2013-07-11 Ground automatic detecting and scene height calculating method based on depth image

Publications (1)

Publication Number Publication Date
CN103389042A true CN103389042A (en) 2013-11-13

Family

ID=49533391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013102895596A Pending CN103389042A (en) 2013-07-11 2013-07-11 Ground automatic detecting and scene height calculating method based on depth image

Country Status (1)

Country Link
CN (1) CN103389042A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08263658A (en) * 1995-03-20 1996-10-11 Fujitsu Denso Ltd Registering method and collating device of fingerprint
JPH1196355A (en) * 1997-09-18 1999-04-09 Olympus Optical Co Ltd Method and device for labeling image
EP1280072A2 (en) * 2001-07-25 2003-01-29 Nec Corporation Image retrieval apparatus and image retrieving method
US20040032494A1 (en) * 2002-08-13 2004-02-19 Wataru Ito Object-detection-condition modifiable object detection method and object detection apparatus using the method
CN101013028A (en) * 2006-01-31 2007-08-08 欧姆龙株式会社 Image processing method and image processor
CN102693539A (en) * 2012-03-13 2012-09-26 夏东 Rapid three-dimensional calibration method for wide baselines for intelligent monitoring systems

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104361575B (en) * 2014-10-20 2015-08-19 湖南戍融智能科技有限公司 Automatic floor in depth image detects and video camera relative pose estimation method
CN104361575A (en) * 2014-10-20 2015-02-18 湖南戍融智能科技有限公司 Automatic ground testing and relative camera pose estimation method in depth image
WO2017152529A1 (en) * 2016-03-09 2017-09-14 京东方科技集团股份有限公司 Determination method and determination system for reference plane
US10319104B2 (en) 2016-03-09 2019-06-11 Boe Technology Group Co., Ltd. Method and system for determining datum plane
CN105832336B (en) * 2016-03-18 2019-01-22 京东方科技集团股份有限公司 height measuring system and method
CN105832336A (en) * 2016-03-18 2016-08-10 京东方科技集团股份有限公司 Body height measurement system and method
US10424078B2 (en) 2016-03-18 2019-09-24 Boe Technology Group Co., Ltd. Height measuring system and method
WO2017156894A1 (en) * 2016-03-18 2017-09-21 京东方科技集团股份有限公司 Height measurement system and method
US10402633B2 (en) 2016-05-23 2019-09-03 Intel Corporation Human detection in high density crowds
WO2017201638A1 (en) * 2016-05-23 2017-11-30 Intel Corporation Human detection in high density crowds
CN107016697B (en) * 2017-04-11 2019-09-03 杭州光珀智能科技有限公司 A kind of height measurement method and device
CN107016697A (en) * 2017-04-11 2017-08-04 杭州光珀智能科技有限公司 A kind of height measurement method and device
CN107220632A (en) * 2017-06-12 2017-09-29 山东大学 A kind of pavement image dividing method based on normal direction feature
CN107220632B (en) * 2017-06-12 2020-02-18 山东大学 Road surface image segmentation method based on normal characteristic
CN107633210A (en) * 2017-08-28 2018-01-26 上海欧忆能源科技有限公司 Building site ascend a height monitoring system, method, apparatus, medium and equipment violating the regulations
CN108460333A (en) * 2018-01-19 2018-08-28 北京华捷艾米科技有限公司 Ground detection method and device based on depth map
US10776932B2 (en) 2018-03-12 2020-09-15 BeiJing Hjimi Technology Co., Ltd Determining whether ground is to be re-detected
CN108399641B (en) * 2018-03-12 2019-10-11 北京华捷艾米科技有限公司 Again the determination method and device on ground are detected
CN108399641A (en) * 2018-03-12 2018-08-14 北京华捷艾米科技有限公司 Again the determination method and device on ground are detected
CN112739975B (en) * 2018-09-28 2023-06-13 松下知识产权经营株式会社 Dimension measuring device and dimension measuring method
CN112739975A (en) * 2018-09-28 2021-04-30 松下知识产权经营株式会社 Dimension measuring device and dimension measuring method
CN112750205A (en) * 2019-10-30 2021-05-04 南京深视光点科技有限公司 Plane dynamic detection system and detection method
CN112750205B (en) * 2019-10-30 2023-05-16 南京深视光点科技有限公司 Plane dynamic detection system and detection method
CN111429509A (en) * 2020-03-24 2020-07-17 北京大学深圳研究生院 Centralized measuring and calculating method for height of target object
CN111586299A (en) * 2020-05-09 2020-08-25 北京华捷艾米科技有限公司 Image processing method and related equipment
CN111586299B (en) * 2020-05-09 2021-10-19 北京华捷艾米科技有限公司 Image processing method and related equipment
CN111652136A (en) * 2020-06-03 2020-09-11 苏宁云计算有限公司 Pedestrian detection method and device based on depth image
CN111652136B (en) * 2020-06-03 2022-11-22 苏宁云计算有限公司 Pedestrian detection method and device based on depth image
CN112419390A (en) * 2020-11-26 2021-02-26 北京华捷艾米科技有限公司 Method and system for measuring height of human body
CN114708318A (en) * 2022-04-12 2022-07-05 西安交通大学 Depth camera-based unknown surface curvature measuring method

Similar Documents

Publication Publication Date Title
CN103389042A (en) Ground automatic detecting and scene height calculating method based on depth image
US8588516B2 (en) Interpolation image generation apparatus, reconstructed image generation apparatus, method of generating interpolation image, and computer-readable recording medium storing program
WO2019223382A1 (en) Method for estimating monocular depth, apparatus and device therefor, and storage medium
CN105956539B (en) A kind of Human Height measurement method of application background modeling and Binocular Vision Principle
US8831337B2 (en) Method, system and computer program product for identifying locations of detected objects
CN106650701B (en) Binocular vision-based obstacle detection method and device in indoor shadow environment
CN105627932A (en) Distance measurement method and device based on binocular vision
CN103735269B (en) A kind of height measurement method followed the tracks of based on video multi-target
KR101941801B1 (en) Image processing method and device for led display screen
CN108364292B (en) Illumination estimation method based on multiple visual angle images
CN112560684B (en) Lane line detection method, lane line detection device, electronic equipment, storage medium and vehicle
US20200410688A1 (en) Image Segmentation Method, Image Segmentation Apparatus, Image Segmentation Device
CN105282530B (en) Automatic white balance implementation method and device based on background modeling
JP2014112055A (en) Estimation method for camera attitude and estimation system for camera attitude
CN109163775B (en) Quality measurement method and device based on belt conveyor
CN104200453B (en) Parallax image correcting method based on image segmentation and credibility
CN112991459B (en) Camera calibration method, device, equipment and storage medium
CN104299220A (en) Method for filling cavity in Kinect depth image in real time
CN104079800A (en) Shaking preventing method for video image in video surveillance
CN105957107A (en) Pedestrian detecting and tracking method and device
CN107991665A (en) It is a kind of based on fixed-focus camera to target three-dimensional coordinate method for continuous measuring
CN110617772A (en) Non-contact type line diameter measuring device and method
CN110188640B (en) Face recognition method, face recognition device, server and computer readable medium
CN112184837A (en) Image detection method and device, electronic equipment and storage medium
CN104537627A (en) Depth image post-processing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20131113