CN107392954A - A gross error point elimination method based on sequence images - Google Patents

A gross error point elimination method based on sequence images

Info

Publication number
CN107392954A
CN107392954A
Authority
CN
China
Prior art keywords
point
cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710535121.XA
Other languages
Chinese (zh)
Other versions
CN107392954B (en)
Inventor
刘巍
赵海洋
张致远
叶帆
兰志广
张洋
马建伟
贾振元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology
Priority to CN201710535121.XA
Publication of CN107392954A
Application granted
Publication of CN107392954B
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component

Abstract

The present invention, a gross error point elimination method based on sequence images, belongs to the field of reverse engineering. The method first combines a laser with binocular vision: the left and right cameras capture the auxiliary laser stripe that the laser transmitter projects onto the measured object, yielding point cloud data that represent the surface information of the measured object. Selected points of each point cloud line are fitted to a curve by least squares, and adjacent curves serve as the boundaries of point cloud regions, completing the division of the point cloud into regions. For every point inside each region, the shortest distances to the two boundary curves are then obtained, and gross error points are judged from the ratio of these distances. The method is simple to operate: it needs no prior construction of a topological structure and no computation of quantities such as point density in order to delete superfluous points, which improves the efficiency of gross error point removal. It overcomes the limitation of processing the point cloud of a single laser stripe image and preserves the accuracy of the local information of the point cloud data.

Description

A gross error point elimination method based on sequence images
Technical field
The invention belongs to the field of reverse engineering and relates to a gross error point elimination method based on sequence images.
Background technology
With the continuous development of aviation digital technology, industrial competition is becoming ever fiercer and the quality requirements for aircraft products ever higher, so developing reverse modeling technology for aircraft parts to replace conventional manufacturing approaches is extremely urgent. The processing of point cloud data is a key step of reverse engineering, and its accuracy directly determines the reconstruction accuracy of the model. Within point cloud processing, the rejection of gross error points is the first step. Many scholars at home and abroad have studied methods for rejecting gross error points from point clouds, such as the kd-tree method, the spatial cell (grid) method and the octree method. At present, however, the kd-tree method needs a very long time to build the kd-tree, so finding the neighborhood of each point also takes a significant amount of time; the spatial cell method places high demands on the choice of grid and is complex to operate; and the octree structure requires a large amount of memory to store pointers. During data acquisition with a laser measurement system, errors caused by the measured object itself and by the measuring environment produce impulsive noise points, the so-called gross errors. They are irregularly distributed and deviate considerably from the true data; although relatively few in number, they degrade the reconstruction accuracy. To avoid these problems, it is necessary to reject gross error points from the point cloud data.
Regarding the rejection of gross error points from point cloud data, Gao Jianmin et al. of Xi'an Jiaotong University, in the paper "Data preprocessing technology research based on reverse engineering" (Modern Scientist Engineering, Issue 7), proposed a method that rejects gross errors with an isolated-point statistical rejection rule. After computing the mean μ and the variance σ² within a single point cloud scan line, the method establishes the normal distribution N(μ, σ²) of the distances between adjacent points. Then, taking the straight-line distance between adjacent points on the scan line as the statistical object, the 3σ rule of N(μ, σ²) decides whether each point is kept or removed, which rejects the impulsive noise on a single scan line fairly well. However, this method can only process the point cloud obtained from a single laser stripe image; it cannot reject gross error points from a point cloud surface composed of multiple point cloud lines, which is a significant limitation.
Liu Guanzhou et al. of the Beijing General Research Institute of Mining and Metallurgy, in patent No. 201210496277.9, "A denoising and compression method and system for three-dimensional laser point cloud data", proposed a point cloud processing method and system. After building a topological structure, the method mainly computes the distance from the current point to each point in its neighborhood, the mean of these distances, and the standard deviation that represents the dispersion of the data in the point cloud, and then decides whether to delete the current point. This point cloud processing method improves the accuracy of noise filtering and of the point cloud data, and significantly reduces the redundancy of the point cloud data. However, it needs to build a topological structure first and deletes superfluous points by computing quantities such as the curvature and density of the point cloud, so the computation is heavy, the efficiency is low, and the local information of the raw scan data cannot be preserved.
Summary of the invention
To overcome the limitations of point cloud data processing for large aerospace flat parts under a large field of view, the present invention provides a gross error point elimination method based on sequence images. Existing gross error rejection procedures need to build a topological structure first and delete superfluous points by computing quantities such as curvature and density, so they are computationally heavy and inefficient, cannot preserve the local information of the raw scan data, and cannot handle a point cloud surface composed of multiple point cloud lines. The invention searches the acquired point cloud data along the scan-line direction, fits the point cloud data, divides the point cloud into regions, and judges each point by the ratio of its shortest distances to the two boundary curves, achieving fast, high-accuracy removal of gross error points from the point cloud data. It overcomes the need to build a topological structure in existing point cloud processing, the inability to preserve the local information of the raw scan data, and the inability to process a point cloud surface composed of multiple point cloud lines, and it has broad application prospects.
The technical solution adopted by the present invention is a gross error point elimination method based on sequence images, characterized in that the method first combines a laser with binocular vision: the left and right cameras a, b capture the auxiliary laser stripe f that the laser transmitter c projects onto the measured object e, yielding point cloud data that represent the surface information of the measured object e. Next, the boundary points at the two ends of each point cloud line are connected to obtain a straight line g; among the points between the two boundary points that lie near g, one point is taken every n points, and the selected points to be fitted J are fitted to a curve by least squares; adjacent curves serve as point cloud region boundaries, completing the division of the point cloud into regions. Finally, for every point inside each region the shortest distances h1, h2 to the two boundary curves are obtained, and gross error points are judged from the ratio of h1 to h2. The method comprises the following steps:
Step 1: acquire the point cloud data
Install the measuring equipment, turn on the auxiliary laser transmitter c so that it illuminates the measured object e, start the acquisition, and turn on the turntable d so that it rotates the laser transmitter c and the laser scans the measured object e. Then translate the left and right cameras a, b as a whole and shoot several times to guarantee the completeness of the surface information of the measured object e. After the information acquisition system has collected the images of the auxiliary laser stripe f, the center line of the laser stripe f must be extracted; the present invention uses the gray centroid method on the stripe image, formula (1):

$$(u_i, v_i) = \frac{\sum_{j=p}^{q} j \cdot I_{ij}}{\sum_{j=p}^{q} I_{ij}} \qquad (1)$$
where (u_i, v_i) is the gray centroid coordinate of the stripe in row i, I_{ij} is the gray value at row i, column j, and the sum runs over the stripe pixels j = p, ..., q of that row. With this method the two-dimensional coordinates of the feature points of the auxiliary laser stripe f are obtained; combined with the calibration results and the reconstruction formula, the three-dimensional coordinates of the boundary points and of the stripe center points in the world coordinate system are obtained. The reconstruction formula is:

$$\begin{cases} x_i = \dfrac{z_i X_i'}{f_1} \\[6pt] y_i = \dfrac{z_i Y_i'}{f_1} \\[6pt] z_i = \dfrac{f_1\,(f_2 t_y - Y_i'' t_z)}{Y_i''\,(r_7 X_i' + r_8 Y_i' + r_9 f_1) - f_2\,(r_4 X_i' + r_5 Y_i' + r_6 f_1)} \end{cases} \qquad (2)$$
where x_i' = (X_i', Y_i'), with X_i', Y_i' the abscissa and ordinate, in the image coordinate system, of a laser point or stripe center point x_i' collected by the left camera a; x_i'' = (X_i'', Y_i''), with X_i'', Y_i'' the abscissa and ordinate, in the image coordinate system, of the corresponding stripe center point x_i'' collected by the right camera b; f_1, f_2 are the calibrated focal lengths of the left and right cameras a, b; r_1, ..., r_9 are the elements of the rotation matrix of the right camera b relative to the left camera a, and [t_x t_y t_z] is the translation vector of the right camera b relative to the left camera a, both obtained from the calibration experiment; (x_i, y_i, z_i) is then the three-dimensional coordinate of the reconstructed point. In this way the three-dimensional point cloud data of the entire surface of the measured object e are obtained.
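The two computations of this step can be summarised in a short sketch. The Python code below is a minimal illustration under stated assumptions, not the patent's implementation: it assumes the stripe image is available as a NumPy array, and the gray threshold `min_gray` used to pick the stripe pixels p..q in each row, as well as the function names, are illustrative choices.

```python
import numpy as np

def stripe_centroids(img, min_gray=30):
    """Gray-centroid extraction of the laser stripe center line, formula (1)."""
    centers = []
    for i, row in enumerate(img.astype(np.float64)):
        cols = np.where(row >= min_gray)[0]        # candidate stripe pixels in row i
        if cols.size == 0:
            continue                               # no stripe in this row
        j = np.arange(cols[0], cols[-1] + 1)       # columns p .. q
        I = row[j]
        v = np.sum(j * I) / np.sum(I)              # centroid column, formula (1)
        centers.append((i, v))                     # (u_i, v_i)
    return np.asarray(centers)

def reconstruct_point(xy_left, xy_right, f1, f2, R, t):
    """Triangulate one matched stripe point with formula (2).

    xy_left  = (X', Y')   image coordinates in the left camera a
    xy_right = (X'', Y'') image coordinates in the right camera b
    R (3x3), t = (tx, ty, tz): pose of camera b relative to camera a
    """
    X1, Y1 = xy_left
    _, Y2 = xy_right
    r4, r5, r6 = R[1]
    r7, r8, r9 = R[2]
    tx, ty, tz = t                                 # only ty, tz enter this variant of (2)
    z = f1 * (f2 * ty - Y2 * tz) / (
        Y2 * (r7 * X1 + r8 * Y1 + r9 * f1) - f2 * (r4 * X1 + r5 * Y1 + r6 * f1))
    return np.array([z * X1 / f1, z * Y1 / f1, z])
```

Each matched pair of left and right stripe centers then yields one three-dimensional point of the measured surface.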
Step 2: divide the point cloud into regions
For the acquired point cloud, number the first and last points of each point cloud line, i.e., number the boundary points of the point cloud image 1, ..., 2n. Connect the two boundary points of each point cloud line to obtain a straight line g. Among the points between the two boundary points that lie near g, take one point every n points and then, using formulas (3) and (4), fit the selected points to be fitted J to a curve by least squares;
$$y_i = a_0 + a_1 x_i + \cdots + a_k x_i^k \qquad (3)$$
Adjacent curves are taken as the left and right boundaries l_1, l_2 of a single point cloud region, which completes the division of the point cloud into regions;
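As an illustration of this step, the sketch below fits one point cloud line to a boundary curve with NumPy's least-squares polynomial fit, which plays the role of formulas (3) and (4). The criterion used here to decide which points lie "near g", the polynomial degree `deg`, and the function name are assumptions made for the example only.

```python
import numpy as np

def fit_boundary_curve(line_pts, n=5, deg=3):
    """Fit one point cloud line to a boundary curve y = a0 + a1*x + ... + ak*x^k."""
    line_pts = np.asarray(line_pts, dtype=float)
    p0, p1 = line_pts[0], line_pts[-1]             # the two boundary points
    chord = p1 - p0
    chord /= np.linalg.norm(chord)                 # unit direction of the line g
    rel = line_pts[1:-1] - p0
    # perpendicular distance of each interior point to the chord g
    d = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0])
    near_g = line_pts[1:-1][d < np.median(d)]      # "points near g" (assumed criterion)
    sampled = near_g[::n]                          # keep one point every n points
    # least-squares fit; numpy returns coefficients from a_k down to a_0
    return np.polyfit(sampled[:, 0], sampled[:, 1], deg)
```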
Step 3: remove the gross error points
After the point cloud has been divided into regions, search the points in each region in turn and obtain, for every point inside each region, the shortest distances h_1, h_2 to the two boundary curves; then judge the gross error points I from the ratio ρ of h_1 to h_2:

$$\rho = \frac{h_1}{h_2} \qquad (5)$$
where h_1 is the shortest distance from the point to the left boundary and h_2 the shortest distance to the right boundary. When a point in the region is close to the left boundary l_1, ρ tends to 0; when it is close to the right boundary l_2, ρ tends to ∞. Therefore thresholds α_1, α_2 are set: when α_1 ≤ ρ ≤ α_2, the point is judged to be a gross error point I and is rejected; when ρ ≤ α_1 or ρ ≥ α_2, the point is judged to be a normal point and is kept. This completes the rejection of the gross error points I from the point cloud data.
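A compact sketch of the ratio test follows. It assumes the two boundary curves are available as polynomial coefficient arrays (e.g. from the fitting sketch above) and approximates the shortest point-to-curve distance by densely sampling the curve, which is a numerical shortcut of this example rather than a requirement of the method; the default thresholds are the values used in the embodiment below.

```python
import numpy as np

def shortest_distance_to_curve(pt, coeffs, x_range, samples=2000):
    """Approximate the shortest distance from pt = (x, y) to y = polyval(coeffs, x)
    by sampling the curve densely over x_range (assumed numerical shortcut)."""
    xs = np.linspace(x_range[0], x_range[1], samples)
    ys = np.polyval(coeffs, xs)
    return np.min(np.hypot(xs - pt[0], ys - pt[1]))

def reject_gross_errors(region_pts, left_coeffs, right_coeffs,
                        x_range, alpha1=0.1, alpha2=10.0):
    """Ratio test of formula (5): rho = h1 / h2.

    Points lying between the two boundary curves (alpha1 <= rho <= alpha2) are
    gross error points I and are removed; points hugging either boundary are
    normal stripe points and are kept.
    """
    kept, rejected = [], []
    for pt in np.asarray(region_pts, dtype=float):
        h1 = shortest_distance_to_curve(pt, left_coeffs, x_range)
        h2 = shortest_distance_to_curve(pt, right_coeffs, x_range)
        rho = h1 / h2 if h2 > 0 else np.inf
        (rejected if alpha1 <= rho <= alpha2 else kept).append(pt)
    return np.asarray(kept), np.asarray(rejected)
```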
The invention has the advantage that scan point cloud data are obtained by combining a laser with binocular vision; the acquired point cloud data are searched along the scan-line direction, fitted and divided into regions, and each point is judged by the ratio of its shortest distances to the two boundary curves, achieving fast, high-accuracy removal of gross error points from the point cloud data. The method overcomes the need to build a topological structure in existing point cloud processing and the inability to preserve the local information of the raw scan data; it also overcomes the limitation of processing the point cloud of a single laser stripe image and the inability to handle a point cloud surface composed of multiple point cloud lines. It improves the efficiency of gross error point removal, preserves the accuracy of the local information of the point cloud data, and has broad application prospects.
Brief description of the drawings
Fig. 1 is a schematic diagram of the acquisition of the point cloud data, where a is the left camera, b the right camera, c the laser transmitter, d the turntable, e the measured object, and f the laser stripe.
Fig. 2 is a schematic diagram of the point cloud region division, where 1, 3, ..., 2n-1 number the upper boundary points, 2, 4, ..., 2n number the lower boundary points, g is the line connecting the boundary points at the two ends, I is a gross error point, J is a selected point to be fitted, l_1 is the left boundary of a single point cloud region, l_2 the right boundary, h_1 the shortest distance to the left boundary, and h_2 the shortest distance to the right boundary.
Fig. 3 is the flow chart of gross error point removal.
Embodiment
The embodiment of the present invention is described in detail below in combination with the technical method and the accompanying drawings.
The method first combines a laser with binocular vision: the left and right cameras a, b capture the auxiliary laser stripe f that the laser transmitter c projects onto the measured object e, yielding point cloud data that represent the surface information of the measured object e. Next, the boundary points at the two ends of each point cloud line are connected to obtain a straight line g; among the points between the two boundary points that lie near g, one point is taken every n points. The selected points to be fitted J are then fitted to a curve by least squares, and adjacent curves serve as point cloud region boundaries, completing the division of the point cloud into regions. Finally, for every point inside each region the shortest distances h_1, h_2 to the two boundary curves are obtained, and gross error points are judged from the ratio of h_1 to h_2. The method comprises the following steps:
Step 1: acquire the point cloud data
In this measurement, VC-12MC-M/C 65 industrial cameras produced by Vieworks (South Korea) are chosen; they are progressive-scan area-scan industrial cameras. The laser transmitter selected here is a Lasiris PowerLine laser produced by Coherent, and the measured object e is an aviation flat part. After the experimental equipment has been installed, turn on the laser transmitter c so that it illuminates the measured object e; after starting the acquisition, turn on the turntable d to rotate the laser transmitter c so that the laser scans the measured object e. Then change the position of the left and right cameras a, b and shoot several times to guarantee the completeness of the surface information of the measured object e. After the information acquisition system has collected the images of the auxiliary laser stripe f, the center line of the laser stripe f is extracted with formula (1) to obtain the two-dimensional coordinates of the feature points of the laser stripe f. Combined with the calibration results and reconstruction formula (2), the stripe information captured by the left and right cameras a, b is matched, and the two-dimensional information is thereby converted into three-dimensional point information. Finally, according to the calibration results, the three-dimensional point cloud data of the entire surface of the measured object e are obtained.
Step 2: divide the point cloud into regions
For the acquired point cloud, number the first and last points of each point cloud line, i.e., number the boundary points of the point cloud image 1, ..., 2n. Connect the two boundary points of each point cloud line to obtain a straight line g. Among the points between the two boundary points that lie near g, take one point every 5 points and, using formulas (3) and (4), fit the selected points to be fitted J to a curve by least squares; adjacent curves are taken as the left and right boundaries l_1, l_2 of a single point cloud region, which completes the division of the point cloud into regions.
Step 3: remove the gross error points
After the point cloud has been divided into regions, search the points in each region in turn and obtain, for every point inside each region, the shortest distances h_1, h_2 to the two boundary curves; then judge the gross error points from the ratio of h_1 to h_2. According to formula (5), when a point in the region is close to the left boundary, ρ tends to 0, and when it is close to the right boundary, ρ tends to ∞; therefore the thresholds are set to α_1 = 0.1 and α_2 = 10. When 0.1 ≤ ρ ≤ 10, the point is judged to be a gross error point I and is rejected; when ρ ≤ 0.1 or ρ ≥ 10, the point is judged to be a normal point and is kept. This completes the rejection of the gross error points I from the point cloud data.
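Putting the embodiment's parameters together, the hypothetical driver below strings the earlier sketches into one pass over the data, with one point kept every 5, α_1 = 0.1 and α_2 = 10. It assumes the point cloud has already been split into ordered point cloud lines, that the points falling between adjacent lines have been grouped per region, and that the data are handled in a two-dimensional projection, since the fit of formula (3) is written in x and y; all function names are the illustrative ones introduced above.

```python
import numpy as np

def remove_gross_errors(cloud_lines, points_by_region):
    """Hypothetical driver reusing the illustrative helpers fit_boundary_curve
    and reject_gross_errors sketched above.

    cloud_lines      : list of (N_i, 2) arrays, one ordered point cloud line each
    points_by_region : list of point arrays, the k-th holding the points lying
                       between cloud line k and cloud line k+1
    """
    # Step 2: fit every point cloud line to a boundary curve (one point every 5)
    curves = [fit_boundary_curve(line, n=5, deg=3) for line in cloud_lines]
    cleaned = []
    # Step 3: adjacent curves l1, l2 bound one region; apply the ratio test there
    for k, region_pts in enumerate(points_by_region):
        left, right = curves[k], curves[k + 1]
        xs = np.concatenate([cloud_lines[k][:, 0], cloud_lines[k + 1][:, 0]])
        kept, _ = reject_gross_errors(region_pts, left, right,
                                      (xs.min(), xs.max()),
                                      alpha1=0.1, alpha2=10.0)
        cleaned.append(kept)
    return cleaned
```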
On the basis of the least squares method and using a measuring method that combines a laser with binocular vision, the present invention improves on the limitations of existing methods for removing gross error points from point cloud data in the reverse reconstruction process, and achieves fast, high-accuracy removal of gross error points.

Claims (1)

1. A gross error point elimination method based on sequence images, characterized in that the method first combines a laser with binocular vision: the left and right cameras (a, b) capture the auxiliary laser stripe (f) that the laser transmitter (c) projects onto the measured object (e), yielding point cloud data that represent the surface information of the measured object (e); next, the boundary points at the two ends of each point cloud line are connected to obtain a straight line g, and among the points between the two boundary points that lie near g, one point is taken every n points; the selected points to be fitted J are then fitted to a curve by least squares, and adjacent curves serve as point cloud region boundaries, completing the division of the point cloud into regions; finally, for every point inside each region the shortest distances h1, h2 to the two boundary curves are obtained, and gross error points are judged from the ratio of h1 to h2; the method comprises the following steps:
Step 1: acquire the point cloud data
Install the measuring equipment, turn on the auxiliary laser transmitter (c) so that it illuminates the measured object (e), start the acquisition, and turn on the turntable (d) to rotate the laser transmitter (c) so that the laser scans the measured object (e); then translate the left and right cameras (a, b) as a whole and shoot several times to guarantee the completeness of the surface information of the measured object (e); after the information acquisition system has collected the images of the auxiliary laser stripe (f), the center line of the laser stripe (f) must be extracted using the gray centroid method on the stripe image, formula (1):
$$(u_i, v_i) = \frac{\sum_{j=p}^{q} j \cdot I_{ij}}{\sum_{j=p}^{q} I_{ij}} \qquad (1)$$
where (u_i, v_i) is the gray centroid coordinate of the stripe in row i and I_{ij} is the gray value at row i, column j; with this method the two-dimensional coordinates of the feature points of the auxiliary laser stripe (f) are obtained and, combined with the calibration results and the reconstruction formula, the three-dimensional coordinates of the boundary points and of the stripe center points in the world coordinate system are obtained; reconstruction formula (2) is:
$$\begin{cases} x_i = \dfrac{z_i X_i'}{f_1} \\[6pt] y_i = \dfrac{z_i Y_i'}{f_1} \\[6pt] z_i = \dfrac{f_1\,(f_2 t_y - Y_i'' t_z)}{Y_i''\,(r_7 X_i' + r_8 Y_i' + r_9 f_1) - f_2\,(r_4 X_i' + r_5 Y_i' + r_6 f_1)} \end{cases} \qquad (2)$$
where x_i' = (X_i', Y_i'), with X_i', Y_i' the abscissa and ordinate, in the image coordinate system, of a laser point or stripe center point x_i' collected by the left camera (a); x_i'' = (X_i'', Y_i''), with X_i'', Y_i'' the abscissa and ordinate, in the image coordinate system, of the corresponding stripe center point x_i'' collected by the right camera (b); f_1, f_2 are the calibrated focal lengths of the left and right cameras (a, b); r_1, ..., r_9 are the elements of the rotation matrix of the right camera (b) relative to the left camera (a), and [t_x t_y t_z] is the translation vector of the right camera (b) relative to the left camera (a), obtained from the calibration experiment; (x_i, y_i, z_i) is then the three-dimensional coordinate of the reconstructed point; in this way the three-dimensional point cloud data of the entire surface of the measured object (e) are obtained;
Step 2: divide the point cloud into regions
For the acquired point cloud, number the first and last points of each point cloud line, i.e., number the boundary points of the point cloud image 1, ..., 2n; connect the two boundary points of each point cloud line to obtain a straight line g; among the points between the two boundary points that lie near g, take one point every n points and, using formulas (3) and (4), fit the selected points to be fitted J to a curve by least squares;
$$y_i = a_0 + a_1 x_i + \cdots + a_k x_i^k \qquad (3)$$
Adjacent curves are taken as the left and right boundaries l_1, l_2 of a single point cloud region, which completes the division of the point cloud into regions;
Step 3: remove the gross error points
After the point cloud has been divided into regions, search the points in each region in turn and obtain, for every point inside each region, the shortest distances h_1, h_2 to the two boundary curves; then judge the gross error points I from the ratio ρ of h_1 to h_2;
$$\rho = \frac{h_1}{h_2} \qquad (5)$$
where h_1 is the shortest distance from the point to the left boundary and h_2 the shortest distance to the right boundary; when a point in the region is close to the left boundary l_1, ρ tends to 0, and when it is close to the right boundary l_2, ρ tends to ∞; therefore thresholds α_1, α_2 are set: when α_1 ≤ ρ ≤ α_2, the point is judged to be a gross error point I and is rejected; when ρ ≤ α_1 or ρ ≥ α_2, the point is judged to be a normal point and is kept; this completes the rejection of the gross error points I from the point cloud data.
CN201710535121.XA 2017-07-04 2017-07-04 A gross error point elimination method based on sequence images Expired - Fee Related CN107392954B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710535121.XA CN107392954B (en) 2017-07-04 2017-07-04 A gross error point elimination method based on sequence images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710535121.XA CN107392954B (en) 2017-07-04 2017-07-04 A gross error point elimination method based on sequence images

Publications (2)

Publication Number Publication Date
CN107392954A 2017-11-24
CN107392954B CN107392954B (en) 2019-11-19

Family

ID=60334231

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710535121.XA Expired - Fee Related CN107392954B (en) 2017-07-04 2017-07-04 A gross error point elimination method based on sequence images

Country Status (1)

Country Link
CN (1) CN107392954B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030113020A1 (en) * 2001-12-19 2003-06-19 General Electric Company Method for the extraction of image features caused by structure light using template information
US20030112449A1 (en) * 2001-12-19 2003-06-19 General Electric Company Method for the extraction of image features caused by structure light using image reconstruction
CN101718528A (en) * 2009-12-10 2010-06-02 北京科技大学 Digital image based rapid solving method of circle parameters
CN102486371A (en) * 2010-12-03 2012-06-06 沈阳黎明航空发动机(集团)有限责任公司 Measuring and calculating method of profile line part without datum
CN103940369A (en) * 2014-04-09 2014-07-23 大连理工大学 Quick morphology vision measuring method in multi-laser synergic scanning mode
CN105716539A (en) * 2016-01-26 2016-06-29 大连理工大学 Rapid high-precision 3D shape measuring method

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229342A (en) * 2017-12-18 2018-06-29 西南技术物理研究所 A kind of surface vessel target automatic testing method
CN108229342B (en) * 2017-12-18 2021-10-26 西南技术物理研究所 Automatic sea surface ship target detection method
CN108445505A (en) * 2018-03-29 2018-08-24 南京航空航天大学 Feature significance detection method based on laser radar under thread environment
CN109272524A (en) * 2018-08-27 2019-01-25 大连理工大学 A kind of small scale point cloud noise denoising method based on Threshold segmentation
CN109741386A (en) * 2018-12-26 2019-05-10 豪威科技(武汉)有限公司 The Enhancement Method and stereo visual system of stereo visual system
CN112991202A (en) * 2021-03-01 2021-06-18 歌尔科技有限公司 Calibration method of optical center position, terminal device and computer readable storage medium
WO2023040392A1 (en) * 2021-09-17 2023-03-23 Beijing Xiaomi Mobile Software Co., Ltd. Method and apparatus of encoding/decoding point cloud geometry data sensed by at least one sensor
CN114459383A (en) * 2022-02-28 2022-05-10 嘉兴市像景智能装备有限公司 Calibration method based on sine stripe phase shift profilometry and implementation device
CN114459383B (en) * 2022-02-28 2023-12-15 嘉兴市像景智能装备有限公司 Calibration method based on sine stripe phase shift profilometry
CN114609591A (en) * 2022-03-18 2022-06-10 湖南星晟智控科技有限公司 Data processing method based on laser point cloud data

Also Published As

Publication number Publication date
CN107392954B (en) 2019-11-19

Similar Documents

Publication Publication Date Title
CN107392954A (en) A kind of gross error point elimination method based on sequence image
CN111047554B (en) Composite insulator overheating defect detection method based on instance segmentation
CN104657587B (en) A kind of center line extraction method of laser stripe
CN111832655B (en) Multi-scale three-dimensional target detection method based on characteristic pyramid network
CN107301648A (en) Redundant points cloud minimizing technology based on overlapping region boundary angles
CN103913131B (en) Free curve method vector measurement method based on binocular vision
CN109118574A (en) A kind of fast reverse modeling method extracted based on three-dimensional feature
CN109272524A (en) A kind of small scale point cloud noise denoising method based on Threshold segmentation
CN106228528B (en) A kind of multi-focus image fusing method based on decision diagram and rarefaction representation
CN105844602A (en) Airborne LIDAR point cloud 3D filtering method based on volume elements
CN110322453A (en) 3D point cloud semantic segmentation method based on position attention and auxiliary network
CN110544233B (en) Depth image quality evaluation method based on face recognition application
CN104881855B (en) A kind of multi-focus image fusing method of utilization morphology and free boundary condition movable contour model
CN109580630A (en) A kind of visible detection method of component of machine defect
CN109559324A (en) A kind of objective contour detection method in linear array images
CN111985552B (en) Method for detecting diseases of thin strip-shaped structure of airport pavement under complex background
CN110009671B (en) Grid curved surface reconstruction system for scene understanding
CN110726998B (en) Method for measuring mining subsidence basin in mining area through laser radar scanning
CN112347987A (en) Multimode data fusion three-dimensional target detection method
CN104091327A (en) Method and system for generating dendritic shrinkage porosity defect simulation image of casting
CN114119902A (en) Building extraction method based on unmanned aerial vehicle inclined three-dimensional model
CN113256543A (en) Point cloud completion method based on graph convolution neural network model
CN107563991A (en) The extraction of piece surface fracture laser striation and matching process
CN116385659A (en) Point cloud building modeling method, system, storage medium and electronic equipment
Ma et al. Rapid estimation of apple phenotypic parameters based on 3D reconstruction.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20191119

Termination date: 20210704