CN106981081A - Wall surface flatness detection method based on depth information extraction - Google Patents

Wall surface flatness detection method based on depth information extraction

Info

Publication number
CN106981081A
CN106981081A (application CN201710127442.6A)
Authority
CN
China
Prior art keywords
depth information
corner point
detected
norm
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710127442.6A
Other languages
Chinese (zh)
Inventor
陈思 (Chen Si)
于鸿洋 (Yu Hongyang)
陈宏洋 (Chen Hongyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201710127442.6A priority Critical patent/CN106981081A/en
Publication of CN106981081A publication Critical patent/CN106981081A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a wall surface flatness detection method based on depth information extraction, belonging to the field of distance measurement with binocular vision. The method uses a binocular camera to extract the depth of the wall surface relative to the camera: two image pairs of the region to be detected are captured from different angles, their depth information is recovered, and two depth maps are obtained. During depth-map computation, the existing Hartley rectification is improved to raise the speed and accuracy of the rectification step. One of the two depth maps is then rotated and translated so that the imaging-space coordinates of the two depth maps coincide, producing an enhanced depth map. The mode of the enhanced depth map's depth values is taken as the distance of the region to be detected, and the flatness of the region is judged from the relation between this distance and a preset threshold. The invention can be used for intelligent plastering work; it is simple to operate, intelligent, and detects flatness quickly and accurately.

Description

Wall surface flatness detection method based on depth information extraction
Technical field
The invention belongs to the field of binocular vision, and in particular relates to depth information extraction for wall surfaces.
Background technology
Binocular stereo vision is an important form of machine vision. Based on the parallax principle, it uses imaging devices to acquire two images of the measured object from different positions and obtains three-dimensional geometric information of the object by computing the positional deviation between corresponding points in the two images.
Obtaining the depth of an object by binocular stereo vision mainly comprises the following steps: camera system calibration (obtaining the intrinsic and extrinsic camera parameters), image acquisition, image rectification (correcting the acquired images with the intrinsic and extrinsic parameters so that corresponding pixels lie on the same horizontal line), binocular stereo matching and disparity computation, and recovery of depth from disparity. For the rectification step, a common choice is the Hartley rectification algorithm (see Hartley R, Zisserman A. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003), which computes the fundamental matrix F of the epipolar geometry from matched feature points and then rectifies the images. In the Hartley algorithm, the feature points used for matching are usually obtained with the SIFT (Scale-Invariant Feature Transform) extractor, which comprises four main steps: detecting extrema in scale space, screening the extrema (rejecting poor ones), choosing the principal direction of each feature point, and generating the feature-point descriptors. Although SIFT features are robust, they have the following shortcomings for binocular vision with high real-time requirements: (1) feature-point computation takes a long time; (2) too many feature points are generated, which makes matching computationally complex and prone to mismatches; (3) SIFT features are not intuitive image features and reflect the image structure poorly.
With the development of building intellectualization, intelligent trowelling machines will gradually replace the existing manual and semi-automatic plastering work, so intelligent detection of wall-surface depth is imperative. Currently, depth information is mainly obtained with active infrared devices, but these are easily disturbed by ambient light, and the equipment is expensive, cumbersome to operate, and algorithmically complex.
The content of the invention
The technical problem to be solved by the invention is to provide a method that, based on binocular stereo vision, obtains the depth from each point of the wall surface in the region to be detected to the camera plane, judges from this depth information whether the wall is flat, locates the uneven regions, and thereby makes plastering work intelligent.
The wall surface flatness detection method based on depth information extraction of the present invention comprises the following steps:
Step 1: obtain the intrinsic and extrinsic parameters of the binocular camera used for wall detection by checkerboard calibration;
Step 2: use the binocular camera to capture two image pairs of the region to be detected from different angles, and denoise the images;
Step 3: rectify each of the two image pairs:
301: extract the feature points of each image of the pair, match the feature points, and compute the fundamental matrix F;
wherein the feature-point extraction is:
apply Harris corner detection to the image (see Harris C, Stephens M. A combined corner and edge detector. Alvey Vision Conference, 1988) to obtain the corners;
determine an 8*8 first rectangular area centred on each corner of the image, and compute, per first rectangular area, the principal direction of the corner P1 at its centre:
compute the gradient magnitude and direction (magnitudes and directions correspond one to one) of each point in the first rectangular area, and weight each gradient magnitude: the nearer a point is to the centre of the first rectangular area, the larger the weight of its gradient magnitude;
accumulate the weighted gradient magnitudes by their directions: using 8 angular intervals (360° divided into 8 equal sectors), sum the weighted magnitudes belonging to the same interval, and take the interval with the largest of the 8 sums as the principal direction of corner P1;
determine a 16*16 second rectangular area centred on each corner of the image, divide it evenly into 16 sub-blocks of 4*4, and compute, per second rectangular area, the 128-dimensional descriptor of the corner P2 at its centre:
compute the gradient magnitude and direction of each point in each sub-block and, with the 8 angular intervals (360° divided into 8 equal sectors), sum the magnitudes belonging to the same interval per sub-block to obtain an 8-dimensional block feature vector; the block feature vectors of the 16 sub-blocks form the 128-dimensional descriptor of the current corner;
take each corner as a feature point; the 128-dimensional descriptor and the principal direction of each corner give the feature vector of the feature point;
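As an illustrative sketch (not the patent's exact implementation), the principal-direction step above can be written as follows. The Gaussian form of the distance weighting and the function name `principal_direction` are assumptions; the patent only requires that points nearer the window centre receive larger weights.

```python
import numpy as np

def principal_direction(mags, thetas, sigma=4.0):
    """Principal direction of the corner at the centre of an 8x8 window.

    mags / thetas: 8x8 arrays of gradient magnitudes and directions
    (radians in [0, 2*pi)).  Each magnitude is weighted so that points
    nearer the window centre contribute more (Gaussian weight here, an
    assumption), the weighted magnitudes are accumulated into 8 bins of
    45 degrees, and the centre angle of the strongest bin is returned.
    """
    ys, xs = np.mgrid[0:8, 0:8]
    d2 = (ys - 3.5) ** 2 + (xs - 3.5) ** 2       # squared distance to window centre
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # nearer -> larger weight
    bins = (np.asarray(thetas) / (2 * np.pi) * 8).astype(int) % 8
    hist = np.zeros(8)
    for b in range(8):
        hist[b] = np.sum((np.asarray(mags) * w)[bins == b])
    return (np.argmax(hist) + 0.5) * (2 * np.pi / 8)
```

If every point of the window has the same direction, that direction's bin wins regardless of the weighting, which is a quick sanity check on the binning.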
302: with the obtained fundamental matrix F, rectify the image pair with the Hartley rectification algorithm;
Step 4: perform binocular stereo matching and disparity computation for each image pair;
Step 5: from each pair's disparity compute the depth of every pixel of the region to be detected as Z = b·f/c, obtaining two depth maps, where b is the camera spacing of the binocular camera, f the focal length, and c the disparity of the pixel;
Step 6: based on the imaging-space coordinates of the two image pairs, adjust the imaging-space coordinates of one of the two depth maps so that the imaging-space coordinates of both depth maps coincide, and generate the enhanced depth map;
take the mode of the enhanced depth map's depth values as the distance of the region to be detected, and judge the flatness of the region from the relation between this distance and a preset threshold: if the distance exceeds the preset threshold, the flatness of the region to be detected is judged concave; if it is below the threshold, the flatness of the region is judged convex.
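A minimal sketch of steps 5 and 6's decision rule: depth Z = b·f/c per pixel, then the mode of the depth values compared with the preset threshold. The handling of invalid (non-positive) disparities as infinite depth and the quantisation step `quant` used to make the mode well defined on real-valued depths are assumptions, not stated in the patent.

```python
import numpy as np
from collections import Counter

def depth_from_disparity(disparity, b, f):
    """Depth of each pixel: Z = b * f / c (baseline times focal length over
    disparity); pixels with non-positive disparity get depth inf (a hole)."""
    d = np.asarray(disparity, dtype=float)
    z = np.full(d.shape, np.inf)
    valid = d > 0
    z[valid] = b * f / d[valid]
    return z

def judge_flatness(depths, threshold, quant=1.0):
    """Take the mode of the (quantised) finite depth values as the distance
    of the region to be detected and compare it with the preset threshold."""
    z = np.asarray(depths, dtype=float)
    z = z[np.isfinite(z)]
    vals = (np.round(z / quant) * quant).ravel().tolist()
    mode = Counter(vals).most_common(1)[0][0]
    if mode > threshold:
        return "concave"   # farther than the threshold -> recessed
    if mode < threshold:
        return "convex"    # nearer than the threshold -> bulging
    return "flat"
```

Using the mode rather than the mean makes the judged distance robust against the residual holes and outliers of a single depth map.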
In the rectification step the present invention improves the existing Hartley rectification algorithm: Harris corner detection is combined with SIFT-style descriptors, which effectively improves the processing speed of the Hartley rectification and the efficiency of feature-point matching. Meanwhile, during depth acquisition from a single scene, occluded or weakly textured regions caused by the camera angle and the ambient illumination can produce holes in the depth information (regions of large depth change whose disparity easily comes out as infinity), leaving empty spots in the depth information (three-dimensional point cloud) of the region to be detected. The present invention therefore shoots two image pairs from different angles, recovers a depth map from each, and then, from the spatial relation between the two shots, translates and rotates one map so that the two shots coincide in space and their depth information overlaps. Partially missing regions are thus reinforced and a denser, more accurate depth map is recovered, which guarantees the accuracy of the flatness detection.
Further, to improve the efficiency of obtaining the intrinsic and extrinsic parameters of the binocular camera, step 1 of the present invention obtains them as follows:
101: the binocular camera captures several checkerboard images;
102: detect all candidate corners in each checkerboard image with the Harris corner detection algorithm;
103: binarise the checkerboard images, then screen the candidate corners:
in the binarised checkerboard image, determine a rectangle centred on the candidate corner and check whether the two pixel-value classes occur in equal numbers among the pixels on the rectangle's border; if so, keep the current corner, otherwise delete it;
104: determine the coordinates of the candidate corners:
centred on each candidate corner kept after the screening of step 103, determine a square detection window, divide the detection window into four regions along its axes, and define two non-adjacent regions as the first subregion and the second subregion;
move the detection window by a preset step within a preset search range, and at each position compute δ, the squared difference of the pixel grey values of the first and second subregions;
take the centre of the position with the smallest δ in the search range as the coordinate of the current candidate corner;
105: from the corner coordinates obtained in step 104, compute the intrinsic and extrinsic parameters of the binocular camera by checkerboard calibration.
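The screening of step 103 can be sketched as follows, assuming black pixels are binarised to 1 and white to 0. The border half-size of 4 is an illustrative preset value; the patent only fixes the rectangle size as a preset.

```python
import numpy as np

def is_ideal_corner(binary_img, y, x, half=4):
    """Screen a Harris candidate on a binarised checkerboard image: collect
    the border pixels of a square of side 2*half+1 centred on (y, x) and
    keep the corner only if the two pixel classes (1 = black, 0 = white
    here) occur in equal numbers on that border."""
    border = np.concatenate([
        binary_img[y - half, x - half:x + half + 1],   # top edge
        binary_img[y + half, x - half:x + half + 1],   # bottom edge
        binary_img[y - half + 1:y + half, x - half],   # left edge (corners not counted twice)
        binary_img[y - half + 1:y + half, x + half],   # right edge
    ])
    return int(np.sum(border == 1)) == int(np.sum(border == 0))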
With the above technical solution, the beneficial effects of the invention are: the method is simple to operate and intelligent, and the flatness detection is fast and accurate.
Brief description of the drawings
Fig. 1 is a schematic diagram of the ideal corners in a checkerboard image;
Fig. 2 is a schematic diagram of the improved checkerboard calibration of the invention;
Fig. 3 is a schematic diagram of the feature-point extraction involved in the rectification step;
Fig. 4 is the test image used to compare the performance of the feature-point extraction of the invention with that of the existing approach in the rectification step.
Embodiment
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the embodiments and the accompanying drawings.
The wall surface flatness detection method based on depth information extraction of the present invention specifically comprises the following steps:
Step 1: obtain the intrinsic and extrinsic parameters of the binocular camera used for wall detection by checkerboard calibration:
101: make a checkerboard: square size 20 mm * 20 mm, 20 rows by 26 columns, asymmetric.
102: capture images: take 20 pictures of the checkerboard with the left and right cameras; every picture should contain the whole checkerboard, with the checkerboard in a different position each time, preferably at some tilt.
103: convert to greyscale: since the pictures captured by the cameras are usually 24-bit true-colour images, they must first be converted to greyscale before the checkerboard calibration; this embodiment uses an improved Zhang checkerboard calibration.
104: checkerboard calibration: the usual Zhang calibration detects the feature points on the captured images with the Harris corner detection method. Although Harris corner detection is fast, in the calibration the detected corners are often not the required "ideal" corners (an ideal corner lies at a junction of four black and white squares on the checkerboard, as shown in Fig. 1). Therefore, to make the corner detection on the checkerboard more accurate during calibration, the present invention improves the Zhang checkerboard calibration. Exploiting the symmetry and the black-and-white alternation of checkerboard corners, the corners detected by the Harris method can be screened. Referring to Fig. 2, for a rectangular frame centred on a corner, when one walks one full round along the frame, the total black and the total white border lengths passed must be equal, i.e. lb1+lb2+lb3 = lw1+lw2, where lb1, lb2, lb3 are the three black frame segments and lw1, lw2 the two white frame segments passed during the walk. Since the black and white of the image do not necessarily correspond exactly to binary values, the checkerboard image is first binarised, e.g. black represented by 1 and white by 0; it then only remains to check whether the two pixel-value classes occur in equal numbers along the rectangular frame centred on the corner (the frame size is a preset value), i.e. whether along the frame the number of pixels with value 1 equals the number with value 0: if so, it is an ideal corner and is kept; otherwise it is rejected.
The improved Zhang checkerboard calibration of the invention is specifically:
detect all candidate corners in each checkerboard image with the Harris corner detection algorithm;
binarise the checkerboard images, then screen the candidate corners:
in the binarised checkerboard image, determine a rectangle centred on the candidate corner and check whether the two pixel-value classes occur in equal numbers among the pixels on the rectangle's border; if so, keep the current corner, otherwise delete it;
determine the coordinates of the candidate corners: centred on each candidate corner kept after the screening, determine a square detection window, divide the detection window into four regions along its axes, and define two non-adjacent regions as the first subregion and the second subregion; move the detection window by a preset step within a preset search range, and at each position compute δ, the squared difference of the pixel grey values of the first and second subregions; take the centre of the position with the smallest δ in the search range as the coordinate of the current candidate corner;
from the obtained corner coordinates, compute the intrinsic and extrinsic parameters of the binocular camera by Zhang checkerboard calibration.
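The coordinate-refinement step above can be sketched as follows. The choice of the top-left and bottom-right quadrants as the non-adjacent pair, the per-pixel squared difference for δ, and the window/search sizes are all illustrative assumptions; the patent only fixes that δ is the squared grey-value difference of two non-adjacent quadrants and that the position minimising δ is kept.

```python
import numpy as np

def refine_corner(gray, y0, x0, half=4, search=3):
    """Slide a square detection window over a small search range around the
    candidate (y0, x0); at each position split the window into four
    quadrants along its axes, take two non-adjacent (diagonal) quadrants
    as subregion 1 and subregion 2, and keep the centre of the position
    whose squared grey-value difference delta between the two subregions
    is smallest.  At a true corner the diagonal quadrants share a colour,
    so delta is minimal there."""
    best_delta, best_pos = None, (y0, x0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            win = gray[y - half:y + half, x - half:x + half].astype(float)
            q1 = win[:half, :half]    # top-left quadrant     (subregion 1)
            q2 = win[half:, half:]    # bottom-right quadrant (subregion 2)
            delta = float(np.sum((q1 - q2) ** 2))
            if best_delta is None or delta < best_delta:
                best_delta, best_pos = delta, (y, x)
    return best_pos
```

On a synthetic quadrant pattern the refinement pulls a slightly offset candidate back onto the true junction.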
Step 2: use the binocular camera to capture two image pairs of the region to be detected from different angles, and denoise the images. When capturing the two image pairs, the relative imaging-space coordinates at the time of each shot must be recorded; they can be obtained with the help of an auxiliary camera, for use when generating the enhanced depth map.
Step 3: rectify each of the two image pairs:
301: extract the feature points of each image of the pair, match the feature points, and compute the fundamental matrix F;
wherein the feature-point extraction is:
first, apply Harris corner detection to each image to obtain the corners;
then determine an 8*8 first rectangular area centred on each corner of each image, and compute, per first rectangular area, the principal direction of the corner P1 at its centre:
compute the gradient magnitude and direction of each point in the first rectangular area, and weight each gradient magnitude (the nearer a point is to the centre of the first rectangular area, the larger the weight of its gradient magnitude); then accumulate the weighted gradient magnitudes by their directions: using 8 angular intervals (360° divided into 8 equal sectors), sum the weighted magnitudes belonging to the same interval, and take the interval with the largest of the 8 sums as the principal direction of corner P1;
then determine a 16*16 second rectangular area centred on each corner of the image, divide it into 16 sub-blocks of 4*4, and compute, per second rectangular area, the 128-dimensional descriptor of the corner P2 at its centre: compute the gradient magnitude and direction of each point in each sub-block and, with the 8 angular intervals (360° divided into 8 equal sectors), sum the magnitudes belonging to the same interval per sub-block to obtain an 8-dimensional block feature vector; the block feature vectors of the 16 sub-blocks form the 128-dimensional descriptor of the current corner;
finally, take each corner as a feature point; the 128-dimensional descriptor and the principal direction of each corner give the feature vector of the feature point.
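The descriptor construction can be sketched as follows (an illustrative version: the function name and the unweighted per-sub-block accumulation are assumptions):

```python
import numpy as np

def descriptor_128(mags, thetas):
    """128-dimensional descriptor of the corner at the centre of a 16x16
    window: the window is split into 16 sub-blocks of 4x4; each sub-block's
    gradient magnitudes are accumulated into 8 direction bins of 45
    degrees, and the 16 8-bin block vectors are concatenated."""
    bins = (np.asarray(thetas) / (2 * np.pi) * 8).astype(int) % 8
    mags = np.asarray(mags, dtype=float)
    desc = []
    for by in range(4):
        for bx in range(4):
            m = mags[4 * by:4 * by + 4, 4 * bx:4 * bx + 4]
            b = bins[4 * by:4 * by + 4, 4 * bx:4 * bx + 4]
            # weighted count of directions: 8-dimensional block feature vector
            desc.append(np.bincount(b.ravel(), weights=m.ravel(), minlength=8))
    return np.concatenate(desc)
```

16 sub-blocks times 8 bins gives the 128 dimensions named in the text.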
In the present embodiment, the gradient magnitude and direction of each point can be computed with the following formulas:
m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))^2 + (L(x, y+1) − L(x, y−1))^2 )
θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )
where (x, y) is the image coordinate of the point, θ(x, y) the direction and m(x, y) the gradient magnitude at (x, y), and L(·) the grey value of the current image at the given coordinate.
When accumulating the gradient magnitudes, a histogram statistic is preferred, as shown in Fig. 3.
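The central-difference gradient used by SIFT-style descriptors can be computed per pixel as follows (the quadrant-correct `arctan2` instead of a plain arctan is an implementation choice):

```python
import numpy as np

def gradient_mod_dir(L, y, x):
    """Gradient magnitude m and direction theta at pixel (x, y) of grey
    image L by central differences (L is indexed as L[row, col] = L[y, x])."""
    gx = float(L[y, x + 1]) - float(L[y, x - 1])
    gy = float(L[y + 1, x]) - float(L[y - 1, x])
    m = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx) % (2 * np.pi)   # direction in [0, 2*pi)
    return m, theta
```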
302: with the obtained fundamental matrix F, rectify the image pair with the Hartley rectification algorithm;
Step 4: perform binocular stereo matching and disparity computation for each image pair;
Step 5: from each pair's disparity compute the depth of every pixel of the region to be detected as Z = b·f/c, obtaining two depth maps, where b is the camera spacing of the binocular camera, f the focal length, and c the disparity of the pixel;
Step 6: based on the imaging-space coordinates of the two image pairs, adjust the imaging-space coordinates of one of the two depth maps so that the imaging-space coordinates of both depth maps coincide, and generate the enhanced depth map; then take the mode of the enhanced depth map's depth values as the distance of the region to be detected, and judge the flatness of the region from the relation between this distance and the preset threshold: if the distance exceeds the preset threshold, the flatness of the region to be detected is judged concave; if it is below, convex.
When generating the enhanced depth map, the depth map of the second shot is usually rotated and translated into the imaging-space coordinates corresponding to the first shot. Let Pa and Pa' denote the pixel-coordinate matrices of the depth maps of the first and the second shot; they are related by Pa' = [R T]·Pa, so only the spatial relation [R T] between the two shots is needed, where R is the rotation matrix and T the translation matrix, and the second-shot depth map is rotated into the imaging-space coordinates of the first shot via Pa' = [R T]·Pa to generate the enhanced depth map. A calibration reference is mounted on the binocular-camera rig, preferably a checkerboard calibration board (e.g. the black rectangle on the binocular hand-held device), so that the spatial relation between the coordinate system of the calibration board and the camera coordinate system of the left camera is fixed. The poses R1, T1 and R2, T2 of the checkerboard calibration board in the auxiliary camera are obtained for the first and the second shot respectively, where R1, R2 are the rotation matrices and T1, T2 the translation matrices of the two shots; R and T are then computed as R = R2·R1^-1 and T = T2 − R2·R1^-1·T1. Following Pa' = [R T]·Pa, the pixels of the second shot's depth map are rotated and translated back into the imaging-space coordinates of the first shot, so that the depth map is enhanced: shooting from a different angle fills the holes in the point cloud at the first position (positions where the depth could not be computed, or exceeds a preset value).
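A minimal sketch of the pose composition R = R2·R1^-1, T = T2 − R2·R1^-1·T1 and its application. Note one convention detail: with board poses defined by X_cam_i = R_i·X_board + T_i, the resulting [R T] maps points expressed in the first shot's camera frame into the second shot's frame; inverting it gives the direction that brings the second depth map onto the first. The function names are illustrative.

```python
import numpy as np

def compose_pose(R1, T1, R2, T2):
    """Combine the calibration-board poses of the two shots into a relative
    pose between the shots: R = R2 @ inv(R1), T = T2 - R2 @ inv(R1) @ T1."""
    R = R2 @ np.linalg.inv(R1)
    T = T2 - R @ T1
    return R, T

def transform_points(P, R, T):
    """Apply [R T] to an (N, 3) array of points: Pa' = R @ Pa + T."""
    return P @ R.T + T
```

With X_cam_i = R_i·X_board + T_i, one can verify algebraically that R·X_cam1 + T = X_cam2 for any board point, which the test below checks numerically.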
To describe more concretely the performance of the improved Hartley rectification algorithm of the invention against the existing Hartley rectification algorithm, Fig. 4 was used as the test image; the comparison of the two in feature-point extraction and matching performance is given in Table 1. Table 1 shows that the processing speed of the existing Hartley rectification algorithm is effectively improved and the feature-point matching efficiency is raised.
Table 1: matching results of the two kinds of feature points at a threshold of 0.95

                                    SIFT matching    Proposed matching
Left-image feature points           1250             787
Right-image feature points          1200             598
Algorithm time (ms)                 295.536          58.750
Matched pairs                       914              640
Correctly matched pairs             720              600
Correct matching rate               78.77%           93.75%
The above is only a specific embodiment of the invention. Any feature disclosed in this specification may, unless specifically stated otherwise, be replaced by an alternative feature that is equivalent or serves a similar purpose; and all of the disclosed features, or all of the steps of a method or process, may be combined in any way except for mutually exclusive features and/or steps.

Claims (2)

1. A wall surface flatness detection method based on depth information extraction, characterised by comprising the following steps:
Step 1: obtain the intrinsic and extrinsic parameters of the binocular camera used for wall detection by checkerboard calibration;
Step 2: use the binocular camera to capture two image pairs of the region to be detected from different angles, and denoise the images;
Step 3: rectify each of the two image pairs:
301: extract the feature points of each image of the pair, match the feature points, and compute the fundamental matrix F;
wherein the feature-point extraction is:
apply Harris corner detection to the image to obtain the corners;
determine an 8*8 first rectangular area centred on each corner of the image, and compute, per first rectangular area, the principal direction of the corner P1 at its centre:
compute the gradient magnitude and direction of each point in the first rectangular area, and weight each gradient magnitude: the nearer a point is to the centre of the first rectangular area, the larger the weight of its gradient magnitude;
accumulate the weighted gradient magnitudes by their directions: using the 8 angular intervals, sum the weighted magnitudes belonging to the same interval, and take the interval with the largest of the 8 sums as the principal direction of corner P1;
determine a 16*16 second rectangular area centred on each corner of the image, divide it into 16 sub-blocks of 4*4, and compute, per second rectangular area, the 128-dimensional descriptor of the corner P2 at its centre:
compute the gradient magnitude and direction of each point in each sub-block and, with the 8 angular intervals, sum the magnitudes belonging to the same interval per sub-block to obtain an 8-dimensional block feature vector; the block feature vectors of the 16 sub-blocks form the 128-dimensional descriptor of the current corner;
take each corner as a feature point; the 128-dimensional descriptor and the principal direction of each corner give the feature vector of the feature point;
wherein the 8 angular intervals are obtained by dividing 360° into 8 equal sectors;
302: with the obtained fundamental matrix F, rectify the image pair with the Hartley rectification algorithm;
Step 4: perform binocular stereo matching and disparity computation for each image pair;
Step 5: from each pair's disparity compute the depth of every pixel of the region to be detected as Z = b·f/c, obtaining two depth maps, where b is the camera spacing of the binocular camera, f the focal length, and c the disparity of the pixel;
Step 6: based on the imaging-space coordinates of the two image pairs, adjust the imaging-space coordinates of one of the two depth maps so that the imaging-space coordinates of both depth maps coincide, and generate the enhanced depth map;
take the mode of the enhanced depth map's depth values as the distance of the region to be detected, and judge the flatness of the region from the relation between this distance and a preset threshold.
2. The method according to claim 1, characterised in that in step 1 the intrinsic and extrinsic parameters of the binocular camera used for wall-surface detection are obtained as follows:
101: the binocular camera captures multiple checkerboard images;
102: detect all candidate corner points in each checkerboard image with the Harris corner-detection algorithm;
103: after binarizing the checkerboard images, screen the candidate corner points:
in the binarized checkerboard image, define a rectangle centred on each candidate corner point and judge whether the two classes of pixel values occupy equal numbers of pixels along the rectangle's edges; if so, retain the current corner point, otherwise delete it;
104: determine the coordinates of the candidate corner points:
centred in turn on each candidate corner point retained by the screening of step 103, define a square detection window and divide it into four regions by its axes, defining the two non-adjacent regions as the first sub-region and the second sub-region respectively;
move the detection window within a preset search range by a preset step size, and at the current position compute δ, the squared difference between the pixel grey values of the first and second sub-regions of the detection window;
take the centre of the position with the smallest δ within the search range as the coordinate value of the current candidate corner point;
105: based on the coordinates of each candidate corner point obtained in step 104, compute the intrinsic and extrinsic parameters of the binocular camera using the checkerboard calibration method.
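The corner-coordinate refinement of step 104 can be sketched as follows (NumPy only). This is a toy illustration: the claim does not fix the exact form of δ, so it is taken here as the sum of squared differences between corresponding grey values of the two non-adjacent (diagonal) quadrants, and the window size and search range are arbitrary choices:

```python
import numpy as np

def refine_corner(gray, corner, half=4, search=2):
    """Step 104 sketch: slide a square window (side 2*half) around the
    candidate corner in one-pixel steps inside a +/-search range; split the
    window into four quadrants along its axes and compare the two diagonal
    (non-adjacent) quadrants. The position minimising delta is taken as the
    refined corner coordinate."""
    cy, cx = corner
    best, best_pos = None, (cy, cx)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            win = gray[y - half:y + half, x - half:x + half].astype(float)
            if win.shape != (2 * half, 2 * half):
                continue  # window left the image
            q1 = win[:half, :half]   # first sub-region: top-left quadrant
            q2 = win[half:, half:]   # second sub-region: bottom-right quadrant
            delta = ((q1 - q2) ** 2).sum()
            if best is None or delta < best:
                best, best_pos = delta, (y, x)
    return best_pos
```

On an ideal checkerboard corner the two diagonal quadrants carry the same intensity pattern, so δ vanishes exactly at the true corner; minimising δ therefore snaps a nearby candidate onto it.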
CN201710127442.6A 2017-03-06 2017-03-06 A kind of degree of plainness for wall surface detection method based on extraction of depth information Pending CN106981081A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710127442.6A CN106981081A (en) 2017-03-06 2017-03-06 A kind of degree of plainness for wall surface detection method based on extraction of depth information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710127442.6A CN106981081A (en) 2017-03-06 2017-03-06 A kind of degree of plainness for wall surface detection method based on extraction of depth information

Publications (1)

Publication Number Publication Date
CN106981081A true CN106981081A (en) 2017-07-25

Family

ID=59338622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710127442.6A Pending CN106981081A (en) 2017-03-06 2017-03-06 A kind of degree of plainness for wall surface detection method based on extraction of depth information

Country Status (1)

Country Link
CN (1) CN106981081A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108362235A (en) * 2018-02-28 2018-08-03 四川斐讯信息技术有限公司 A kind of test system and method for the ground flat degree based on image procossing
CN108537846A (en) * 2018-02-08 2018-09-14 北京航空航天大学青岛研究院 Camera calibration method and apparatus
CN109463969A (en) * 2018-09-10 2019-03-15 深圳蓝胖子机器人有限公司 Recognition methods, robot and the computer storage medium of express delivery cabinet and its quick despatch
CN109827526A (en) * 2019-03-13 2019-05-31 中国十七冶集团有限公司 One kind being based on photogrammetric planar smoothness detection method and its flow chart of data processing
CN110111374A (en) * 2019-04-29 2019-08-09 上海电机学院 Laser point cloud matching process based on grouping staged threshold decision
CN110706182A (en) * 2019-10-10 2020-01-17 普联技术有限公司 Method and device for detecting flatness of shielding case, terminal equipment and storage medium
CN111161263A (en) * 2020-04-02 2020-05-15 北京协同创新研究院 Package flatness detection method and system, electronic equipment and storage medium
CN111340869A (en) * 2020-03-27 2020-06-26 上海东普信息科技有限公司 Express package surface flatness identification method, device, equipment and storage medium
CN112258482A (en) * 2020-10-23 2021-01-22 广东博智林机器人有限公司 Building exterior wall mortar flow drop detection method and device
CN113013468A (en) * 2019-12-20 2021-06-22 奥迪股份公司 Method for producing a power cell for a motor vehicle and corresponding production device
CN113284137A (en) * 2021-06-24 2021-08-20 中国平安人寿保险股份有限公司 Paper wrinkle detection method, device, equipment and storage medium
CN114370859A (en) * 2022-01-13 2022-04-19 安徽中擎建设发展有限公司 Laser marking method for plastering inner wall of building
CN115205562A (en) * 2022-07-22 2022-10-18 四川云数赋智教育科技有限公司 Random test paper registration method based on feature points
CN116257000A (en) * 2023-01-09 2023-06-13 中建八局第二建设有限公司 Remote control system of intelligent plastering machine based on Internet

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101968347A (en) * 2010-08-31 2011-02-09 苏州大学 Method for extracting surface flatness of flat granular objects
CN103814306A (en) * 2011-06-24 2014-05-21 索弗特凯耐提克软件公司 Depth measurement quality enhancement
CN103874615A (en) * 2011-10-06 2014-06-18 Lg伊诺特有限公司 Apparatus and method for measuring road flatness
CN104517276A (en) * 2013-09-28 2015-04-15 沈阳新松机器人自动化股份有限公司 Checker corner detection method
CN105374037A (en) * 2015-11-04 2016-03-02 西安邮电大学 Checkerboard angular point automatic screening method of corner detection

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101968347A (en) * 2010-08-31 2011-02-09 苏州大学 Method for extracting surface flatness of flat granular objects
CN103814306A (en) * 2011-06-24 2014-05-21 索弗特凯耐提克软件公司 Depth measurement quality enhancement
US20140253679A1 (en) * 2011-06-24 2014-09-11 Laurent Guigues Depth measurement quality enhancement
CN103874615A (en) * 2011-10-06 2014-06-18 Lg伊诺特有限公司 Apparatus and method for measuring road flatness
CN104517276A (en) * 2013-09-28 2015-04-15 沈阳新松机器人自动化股份有限公司 Checker corner detection method
CN105374037A (en) * 2015-11-04 2016-03-02 西安邮电大学 Checkerboard angular point automatic screening method of corner detection
CN105374037B (en) * 2015-11-04 2017-11-03 西安邮电大学 A kind of X-comers auto-screening method of corner detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈宏洋 (CHEN Hongyang): "Research on Depth Information Extraction in Binocular Vision and Key Algorithms", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108537846A (en) * 2018-02-08 2018-09-14 北京航空航天大学青岛研究院 Camera calibration method and apparatus
CN108537846B (en) * 2018-02-08 2022-05-27 北京航空航天大学青岛研究院 Camera calibration method and device
CN108362235A (en) * 2018-02-28 2018-08-03 四川斐讯信息技术有限公司 A kind of test system and method for the ground flat degree based on image procossing
CN109463969A (en) * 2018-09-10 2019-03-15 深圳蓝胖子机器人有限公司 Recognition methods, robot and the computer storage medium of express delivery cabinet and its quick despatch
CN109827526A (en) * 2019-03-13 2019-05-31 中国十七冶集团有限公司 One kind being based on photogrammetric planar smoothness detection method and its flow chart of data processing
CN110111374A (en) * 2019-04-29 2019-08-09 上海电机学院 Laser point cloud matching process based on grouping staged threshold decision
CN110706182A (en) * 2019-10-10 2020-01-17 普联技术有限公司 Method and device for detecting flatness of shielding case, terminal equipment and storage medium
CN113013468A (en) * 2019-12-20 2021-06-22 奥迪股份公司 Method for producing a power cell for a motor vehicle and corresponding production device
CN111340869B (en) * 2020-03-27 2023-04-11 上海东普信息科技有限公司 Express package surface flatness identification method, device, equipment and storage medium
CN111340869A (en) * 2020-03-27 2020-06-26 上海东普信息科技有限公司 Express package surface flatness identification method, device, equipment and storage medium
CN111161263B (en) * 2020-04-02 2020-08-21 北京协同创新研究院 Package flatness detection method and system, electronic equipment and storage medium
CN111161263A (en) * 2020-04-02 2020-05-15 北京协同创新研究院 Package flatness detection method and system, electronic equipment and storage medium
CN112258482A (en) * 2020-10-23 2021-01-22 广东博智林机器人有限公司 Building exterior wall mortar flow drop detection method and device
CN113284137A (en) * 2021-06-24 2021-08-20 中国平安人寿保险股份有限公司 Paper wrinkle detection method, device, equipment and storage medium
CN114370859A (en) * 2022-01-13 2022-04-19 安徽中擎建设发展有限公司 Laser marking method for plastering inner wall of building
CN115205562A (en) * 2022-07-22 2022-10-18 四川云数赋智教育科技有限公司 Random test paper registration method based on feature points
CN115205562B (en) * 2022-07-22 2023-03-14 四川云数赋智教育科技有限公司 Random test paper registration method based on feature points
CN116257000A (en) * 2023-01-09 2023-06-13 中建八局第二建设有限公司 Remote control system of intelligent plastering machine based on Internet

Similar Documents

Publication Publication Date Title
CN106981081A (en) A kind of degree of plainness for wall surface detection method based on extraction of depth information
CN105225482B (en) Vehicle detecting system and method based on binocular stereo vision
CN105346706B (en) Flight instruments, flight control system and method
JP6295645B2 (en) Object detection method and object detection apparatus
CN105447853B (en) Flight instruments, flight control system and method
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
CN105279372B (en) A kind of method and apparatus of determining depth of building
CN108416791A (en) A kind of monitoring of parallel institution moving platform pose and tracking based on binocular vision
CN104574393B (en) A kind of three-dimensional pavement crack pattern picture generates system and method
CN108335350A (en) The three-dimensional rebuilding method of binocular stereo vision
CN102982334B (en) The sparse disparities acquisition methods of based target edge feature and grey similarity
CN104517095B (en) A kind of number of people dividing method based on depth image
CN104408746B (en) A kind of passenger flow statistical system based on depth information
CN110738618B (en) Irregular stacking volume measuring method based on binocular camera
CN107615334A (en) Object detector and object identification system
CN110232389A (en) A kind of stereoscopic vision air navigation aid based on green crop feature extraction invariance
Zou et al. A method of stereo vision matching based on OpenCV
CN107592922A (en) Method for implementing operation to ground
CN106596063A (en) Method for measuring lens distortion and system thereof
CN108510540A (en) Stereoscopic vision video camera and its height acquisition methods
CN102831601A (en) Three-dimensional matching method based on union similarity measure and self-adaptive support weighting
CN106033614B (en) A kind of mobile camera motion object detection method under strong parallax
CN109461206A (en) A kind of the face three-dimensional reconstruction apparatus and method of multi-view stereo vision
CN113112588A (en) Underground pipe well three-dimensional visualization method based on RGB-D depth camera reconstruction
CN110245600A (en) Adaptively originate quick stroke width unmanned plane Approach for road detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170725
