CN105740818B - Artificial marker detection method applied to augmented reality - Google Patents

Artificial marker detection method applied to augmented reality

Info

Publication number
CN105740818B
CN105740818B CN201610065210.8A
Authority
CN
China
Prior art keywords
line segment
edge
edge line
pixel
quadrangle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610065210.8A
Other languages
Chinese (zh)
Other versions
CN105740818A (en)
Inventor
赵子健
马帅依凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University
Priority to CN201610065210.8A
Publication of CN105740818A
Application granted
Publication of CN105740818B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes

Abstract

The present invention relates to an artificial marker detection method applied to augmented reality, comprising the following steps. S1: acquire a frame image, coarsely sample it, and detect its edge pixels using an oblique grid scan. S2: based on the RANSAC algorithm, detect the edge line segments in the frame image. S3: fuse the edge line segments. S4: extend and screen the edge line segments. S5: detect quadrilateral corner points and construct quadrilaterals from them. Before the main computation the frame image is preprocessed with coarse grid sampling, and edge detection runs per grid region, which greatly reduces the sequential computation time, increases detection speed, gives good real-time performance, and meets real-time detection requirements. The method is edge-based: line segments are detected and tested first, and the quadrilateral marker borders are then reconstructed from the tested segments, so the method is robust to illumination changes and occlusion.

Description

Artificial marker detection method applied to augmented reality
Technical field
The present invention relates to an artificial marker detection method applied to augmented reality and belongs to the field of augmented reality technology.
Background art
Augmented reality is a technology that seamlessly integrates real-world and virtual-world information. Entity information that is difficult to experience within a certain region of time and space in the real world (visual information, sound, taste, touch, and so on) is simulated by computers and related technologies and then superimposed onto the real world, where it is perceived by the human senses, producing a sensory experience beyond reality. The real environment and virtual objects are superimposed in real time onto the same image or into the same space and exist there simultaneously.
Fiducial marker systems are widely used in augmented reality, robot navigation, position tracking, image modeling, and similar engineering tasks. Known 2D artificial markers placed in the environment are detected and identified by image processing, and the marker information is extracted in order to compute the relative pose between the camera and the object. The most important parameters of a fiducial marker system are its false-positive rate, inter-marker confusion rate, minimum detectable pixel size, and immunity to lighting interference. The key technologies of a fiducial marker system are the design of the artificial marker and the corresponding recognition and localization method. Existing artificial marker designs are simplistic, so recognition is vulnerable to interference from lighting and from complex objects in the image, and the markers store no internal information, so the data of virtual objects must be stored in the recognition device and cannot be changed flexibly.
ARTag is a binary planar marker system: each marker carries its own ID, encoded as a binary number inside the marker. Compared with the earlier ARToolKit markers, ARTag reduces the error rate, false-positive rate, and inter-marker confusion rate. The performance of a fiducial marker system depends on the detection performance for the 2D markers; besides detection speed and error rate, the marker detection algorithm also needs good robustness to lighting conditions and to situations such as occlusion.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides an artificial marker detection method applied to augmented reality.
The present invention improves the recognition speed, interference resistance, and accuracy of artificial marker detection.
Explanation of terms
RANSAC, short for RANdom SAmple Consensus, is an algorithm that estimates the parameters of a mathematical model from a sample data set containing outliers and thereby obtains the valid sample data.
The technical scheme of the present invention is as follows:
An artificial marker detection method applied to augmented reality comprises the following steps:
S1: acquire a frame image, coarsely sample it, and detect its edge pixels using an oblique grid scan;
S2: based on the RANSAC algorithm and the edge pixels obtained in step S1, detect the edge line segments in the frame image;
S3: fuse the edge line segments detected in step S2;
S4: extend and screen the edge line segments fused in step S3;
S5: detect quadrilateral corner points from the extended and screened edge line segments of step S4, and construct quadrilaterals from the corner points.
Preferably, step S1 specifically comprises:
S11: divide the frame image into several regions, each containing m × m pixels, m ∈ (20, 60); further preferably m = 40; dividing the frame image into regions accelerates detection and improves real-time performance.
S12: scan each region obtained in step S11 with a grid of oblique scan lines at x° and (180 − x)°, x ∈ (22.5, 67.5); the spacing between opposite sides of each grid cell is y pixels, y ∈ (3, 9); further preferably x = 45, y = 5.
S13: convolve every scan line of step S12 with the first derivative of a Gaussian to obtain each pixel's gradient intensity component along the scan-line direction.
S14: from the gradient intensity components of step S13, compute each pixel's gradient intensity value along the scan line; pixels at local extrema of the gradient intensity are edge pixels. Extract these edge pixels, and compute each edge pixel's direction from its gradient intensity components.
Preferably, step S3 specifically comprises:
S31: choose one edge line segment obtained in step S2 and denote it segment a; arbitrarily choose another segment from the remaining edge line segments and denote it segment b; test whether a and b should be fused, i.e. whether |θ_a − θ_b| ≤ Δ_θ, |θ_a − θ_ab| ≤ Δ_θ, and L_ab ≤ Δ_l. If so, fuse a and b into a new segment; otherwise keep selecting segments from the remaining edge line segments and testing them against segment a, until all edge line segments other than a have been tested for fusion. Here θ_a and θ_b are the directions of segments a and b, Δ_θ is the threshold on the direction error of segments to be fused, θ_ab and L_ab are the direction and length of the line ab joining a and b, and Δ_l is the threshold on the allowed length of ab.
S32: repeat step S31 until all edge line segments obtained in step S2 have been fused.
Preferably, step S4 specifically comprises:
S41: from the edge line segments obtained in step S3, arbitrarily choose a segment and one of its endpoints; check whether the direction of the pixel adjacent to that endpoint is consistent with the direction of the segment; if so, add the pixel to the segment and continue checking the next adjacent pixel, until some pixel's direction disagrees with the segment's direction; that pixel becomes a new endpoint of the segment.
S42: apply step S41 to both endpoints of every edge line segment obtained in step S3, yielding the extended segments and their new endpoints.
S43: preliminarily screen the extended segments of step S42 by deleting every segment shorter than n pixels, n ∈ (15, 25); further preferably n = 20. A segment that short cannot be a marker border; at best it indicates that the marker border is too distorted or the scene scale is too large to be worth detecting, so it is deleted.
S44: apply the line segment test to the segments surviving step S43: along the segment's direction choose a pixel 2-4 pixels from an endpoint and read its grey value; if the grey value lies in the range 128-255, the endpoint qualifies and the segment matches the marker border characteristics; otherwise the endpoint fails and the segment does not match the marker border characteristics. When both endpoints of a segment fail, the segment is deleted.
S45: repeat step S44 until every edge line segment has undergone the line segment test, leaving all segments that match the marker border characteristics.
Preferably, in step S5, detecting the quadrilateral corner points from the extended and screened edge line segments of step S4 specifically comprises:
S51: from all segments matching the marker border characteristics obtained in step S45, arbitrarily choose one, denote it segment cd, and take it as one side of a quadrilateral; from the remaining matching segments choose a segment ef intersecting cd such that θ_cd is not approximately equal to θ_ef, min{ce, cf, de, df} ≤ Δ, and cd and ef satisfy the quadrilateral adjacent-side condition; the intersection of cd and ef is a corner point of the quadrilateral.
S52: using the method of step S51, obtain the corner point sequence of the quadrilateral.
S53: traverse all segments matching the marker border characteristics obtained in step S45 to obtain the corner point sequences of all quadrilaterals.
Preferably, constructing the quadrilaterals from the corner points specifically comprises:
S54: construct a quadrilateral according to the number of corner points in each corner point sequence obtained in step S53: if a sequence has 4 corner points, connect them and construct the quadrilateral directly.
If a sequence has 3 corner points, extend the 2 edge line segments that do not yet form the 4th corner point, take their intersection as the 4th corner point, and connect the 4 corner points to construct the quadrilateral.
If a sequence has 2 corner points, extend the 2 edge line segments that have only 1 corner point; if they intersect a 3rd edge line segment, the intersections are corner points; connect the 4 corner points to construct the quadrilateral; otherwise no quadrilateral can be constructed.
If a sequence has 1 corner point, no quadrilateral can be constructed.
S55: traverse all corner point sequences to obtain all quadrilaterals, i.e. to detect all artificial markers.
The beneficial effects of the present invention are:
1. Before the main computation, the frame image is preprocessed with coarse grid sampling and edge detection runs per grid region, which greatly reduces the sequential computation time, increases detection speed, gives good real-time performance, and meets real-time detection requirements.
2. The method is edge-based: line segments are detected and tested first, and the quadrilateral marker borders are then reconstructed from the tested segments, so the method is robust to illumination changes and occlusion.
Description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is a schematic diagram of the RANSAC algorithm;
RANSAC is a common line fitting method. In Fig. 2, the black and white points are the detected edge pixels; c1 and d1 are arbitrarily chosen edge pixels taken as the endpoints of a hypothesized edge segment, their directions being consistent with the direction of the line c1d1. Every point that is close enough to the hypothesized segment c1d1 and whose direction agrees with that of c1d1 is counted as a supporting point of c1d1. In Fig. 2, segment c1d1 has 12 supporting points and segment e1f1 has 3. Repeating these steps, the segment with the most supporting points is identified as an existing segment.
Fig. 3 is a schematic diagram of extending and screening the fused edge line segments.
Specific embodiments
The present invention is further described below with reference to the accompanying drawings and an embodiment, but is not limited thereto.
Embodiment
An artificial marker detection method applied to augmented reality comprises the following steps, as shown in Fig. 1:
S1: acquire a frame image, coarsely sample it, and detect its edge pixels using an oblique grid scan;
S2: based on the RANSAC algorithm and the edge pixels obtained in step S1, detect the edge line segments in the frame image;
S3: fuse the edge line segments detected in step S2;
S4: extend and screen the edge line segments fused in step S3;
S5: detect quadrilateral corner points from the extended and screened edge line segments of step S4, and construct quadrilaterals from the corner points.
Step S1 specifically comprises:
S11: divide the frame image into several regions of 40 × 40 pixels each; dividing the frame image into regions accelerates detection and improves real-time performance.
S12: scan each region obtained in step S11 with a grid of oblique scan lines at 45° and 135°; the spacing between opposite sides of each grid cell is 5 pixels.
The main reason for using oblique scan lines for the grid scan is that, under normal circumstances, markers are rarely axis-aligned in the picture; they are generally tilted, so oblique scan lines are used.
S13: convolve every scan line of step S12 with the first derivative of a Gaussian to obtain each pixel's gradient intensity component along the scan-line direction.
S14: from the gradient intensity components of step S13, compute each pixel's gradient intensity value along the scan line; pixels at local extrema of the gradient intensity are edge pixels. Extract these edge pixels, using an extraction threshold of 30/256 of the full pixel value, and compute each edge pixel's direction as θ = tan⁻¹(g_y/g_x), where g_y is the gradient intensity of the Y component and g_x is the gradient intensity of the X component.
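The per-scan-line edge detection of steps S13 and S14 can be sketched in a few lines of Python. This is an illustrative reconstruction, not the patented implementation: the kernel radius, the value of σ, and an absolute threshold of 30 grey levels (standing in for the "30/256 of the pixel value" threshold above) are assumptions of this sketch.

```python
import numpy as np

def gaussian_first_derivative(sigma=1.0, radius=3):
    """Sampled first derivative of a Gaussian: g'(x) = -x/sigma^2 * exp(-x^2/(2*sigma^2))."""
    x = np.arange(-radius, radius + 1, dtype=float)
    return -x / sigma**2 * np.exp(-x**2 / (2 * sigma**2))

def edge_pixels_on_scanline(intensities, threshold=30.0):
    """Convolve one scan line with the Gaussian first derivative (step S13) and
    return indices whose gradient magnitude is a local extremum above the
    threshold (step S14), together with the signed gradient profile."""
    grad = np.convolve(np.asarray(intensities, dtype=float),
                       gaussian_first_derivative(), mode="same")
    mag = np.abs(grad)
    edges = [i for i in range(1, len(mag) - 1)
             if mag[i] >= threshold and mag[i] >= mag[i - 1] and mag[i] >= mag[i + 1]]
    return edges, grad
```

For a pixel lying on two scan lines at 45° and 135°, the two responses play the role of the components g_x and g_y from which the edge direction θ = tan⁻¹(g_y/g_x) of step S14 is computed.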
S2: based on the RANSAC algorithm and the edge pixels obtained in step S1, detect the edge line segments in the frame image.
Step S2 specifically comprises:
S21: within each of the 40 × 40 pixel regions divided in step S11, randomly select two edge pixels; if the directions of the two edge pixels are consistent with the direction of the line joining them, hypothesize that an edge line segment lies on that line.
S22: count the edge pixels supporting the hypothesized segment of step S21. An edge pixel supports the hypothesis if its distance γ from the hypothesized segment satisfies γ ∈ (0.1, 0.25) and its direction θ1 is consistent with the segment direction; each supporting pixel increments Count by 1. Here γ is the distance from the edge pixel to the hypothesized segment, θ1 is the direction of the edge pixel, and Count is the number of edge pixels supporting the hypothesized segment.
S23: hypothesized segments whose Count reaches 12 are considered existing segments, and their supporting edge pixels are removed from the image.
S24: repeat steps S21 to S23 until most of the edge pixels in the image have been removed and all edge line segments have been found.
In Fig. 2, the black and white points are the detected edge pixels; c1 and d1 are arbitrarily chosen edge pixels taken as the endpoints of a hypothesized edge segment, their directions being consistent with the direction of the line c1d1. Points close enough to the hypothesized segment c1d1 whose direction agrees with that of c1d1 are counted as its supporting points. In Fig. 2, hypothesized segment c1d1 has 12 supporting points and hypothesized segment e1f1 has 3, so segment c1d1 is considered existing and hypothesized segment e1f1 is rejected.
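Steps S21-S23 amount to a RANSAC loop over the edge pixels. The sketch below is a simplified illustration: the distance tolerance of 1 pixel, the omission of the per-pixel direction check, and the fixed iteration budget are assumptions of this sketch, while the support threshold of 12 follows step S23.

```python
import math
import random

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * (px - ax) - (bx - ax) * (py - ay))
    return num / math.hypot(bx - ax, by - ay)

def ransac_segments(pixels, dist_tol=1.0, min_support=12, iters=200, seed=0):
    """Repeatedly hypothesise a segment from two random edge pixels (step S21),
    count its supporters (step S22), and on acceptance remove the supporting
    pixels from the pool (step S23)."""
    rng = random.Random(seed)
    pixels = list(pixels)
    segments = []
    for _ in range(iters):
        if len(pixels) < min_support:
            break  # step S24: most edge pixels have been consumed
        a, b = rng.sample(pixels, 2)
        if a == b:
            continue  # degenerate hypothesis
        support = [p for p in pixels if point_line_distance(p, a, b) <= dist_tol]
        if len(support) >= min_support:
            segments.append((a, b))
            pixels = [p for p in pixels if p not in support]
    return segments
```

With 20 collinear points and 3 outliers, only a hypothesis drawn from the collinear points can gather 12 supporters, so exactly one segment survives.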
Step S3 specifically comprises:
S31: choose one edge line segment obtained in step S2 and denote it segment a; arbitrarily choose another segment from the remaining edge line segments and denote it segment b; test whether a and b should be fused, i.e. whether |θ_a − θ_b| ≤ Δ_θ, |θ_a − θ_ab| ≤ Δ_θ, and L_ab ≤ Δ_l. If so, fuse a and b into a new segment; otherwise keep selecting segments from the remaining edge line segments and testing them against segment a, until all edge line segments other than a have been tested for fusion. Here θ_a and θ_b are the directions of segments a and b, Δ_θ is the threshold on the direction error of segments to be fused, θ_ab and L_ab are the direction and length of the line ab joining a and b, and Δ_l is the threshold on the allowed length of ab.
S32: repeat step S31 until all edge line segments obtained in step S2 have been fused.
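The fusion test of step S31 can be written out as follows. The angular threshold Δ_θ and gap threshold Δ_l are assumed example values, and requiring the joining line ab to share the segments' direction is one plausible reading of the partially garbled condition in the original text.

```python
import math

def seg_angle(p, q):
    """Direction of segment pq, folded into [0, pi) since segments are unoriented."""
    return math.atan2(q[1] - p[1], q[0] - p[0]) % math.pi

def angle_diff(t1, t2):
    """Smallest angular difference between two undirected directions."""
    d = abs(t1 - t2) % math.pi
    return min(d, math.pi - d)

def try_fuse(seg_a, seg_b, d_theta=math.radians(5), d_len=10.0):
    """Step S31: fuse two segments when their directions agree, the joining line
    ab is short enough (L_ab <= d_len), and ab shares their direction.
    Returns the fused segment, or None when the test fails."""
    (a1, a2), (b1, b2) = seg_a, seg_b
    th_a, th_b = seg_angle(a1, a2), seg_angle(b1, b2)
    if angle_diff(th_a, th_b) > d_theta:
        return None
    # joining line ab between the closest pair of endpoints
    pairs = [(p, q) for p in (a1, a2) for q in (b1, b2)]
    p, q = min(pairs, key=lambda pq: math.dist(*pq))
    gap = math.dist(p, q)
    if gap > d_len:
        return None
    if gap > 1e-9 and angle_diff(seg_angle(p, q), th_a) > d_theta:
        return None
    # the fused segment spans the two farthest endpoints
    pts = [a1, a2, b1, b2]
    return max(((u, v) for u in pts for v in pts), key=lambda uv: math.dist(*uv))
```

Two collinear segments with a small gap fuse into one; parallel but offset segments do not, because the joining line points across rather than along them.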
Step S4 specifically comprises:
S41: from the edge line segments obtained in step S3, arbitrarily choose a segment and one of its endpoints; check whether the direction of the pixel adjacent to that endpoint is consistent with the direction of the segment; if so, add the pixel to the segment and continue checking the next adjacent pixel, until some pixel's direction disagrees with the segment's direction; that pixel becomes a new endpoint of the segment.
In Fig. 3, pixel 1 is a pixel of the chosen edge line segment, pixel 2 is a pixel added to the segment, and pixel 3 is the new endpoint.
S42: apply step S41 to both endpoints of every edge line segment obtained in step S3, yielding the extended segments and their new endpoints.
S43: preliminarily screen the extended segments of step S42 by deleting every segment shorter than 20 pixels. A segment that short cannot be a marker border; at best it indicates that the marker border is too distorted or the scene scale is too large to be worth detecting, so it is deleted.
S44: apply the line segment test to the segments surviving step S43: along the segment's direction choose a pixel 3 pixels from an endpoint and read its grey value; if the grey value lies in the range 128-255, the endpoint qualifies and the segment matches the marker border characteristics; otherwise the endpoint fails and the segment does not match the marker border characteristics. When both endpoints of a segment fail, the segment is deleted.
In Fig. 3, pixel 4 is the test pixel: if the detected segment matches the marker border characteristics, pixel 4 should be a relatively bright point.
S45: repeat step S44 until every edge line segment has undergone the line segment test, leaving all segments that match the marker border characteristics.
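Steps S43 and S44 can be sketched as a single screening pass. Assumptions of this sketch: the test pixel is taken 3 pixels beyond each endpoint on the line through the segment (one reading of "along the segment's direction"), the background just outside a marker border is bright (grey value in 128-255), and `img` is a plain row-major list of grey rows.

```python
import math

def probe_beyond(seg, endpoint_sign, dist=3):
    """Pixel `dist` steps past one endpoint, along the segment direction (step S44).
    endpoint_sign = +1 probes past the second endpoint, -1 past the first."""
    (x1, y1), (x2, y2) = seg
    length = math.hypot(x2 - x1, y2 - y1)
    ux, uy = (x2 - x1) / length, (y2 - y1) / length
    ex, ey = (x2, y2) if endpoint_sign > 0 else (x1, y1)
    return (int(round(ex + endpoint_sign * dist * ux)),
            int(round(ey + endpoint_sign * dist * uy)))

def screen_segments(img, segments, min_len=20, lo=128, hi=255):
    """Drop segments shorter than min_len pixels (step S43), then keep a segment
    only if at least one probed endpoint pixel is bright (step S44)."""
    kept = []
    h, w = len(img), len(img[0])
    for seg in segments:
        if math.dist(*seg) < min_len:
            continue  # too short to be a marker border
        hits = 0
        for sign in (+1, -1):
            px, py = probe_beyond(seg, sign)
            if 0 <= px < w and 0 <= py < h and lo <= img[py][px] <= hi:
                hits += 1
        if hits >= 1:
            kept.append(seg)
    return kept
```

On a bright background a long segment survives while a short one is removed by the length screen; on a dark background both probes fail and the segment is removed.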
In step S5, detecting the quadrilateral corner points from the extended and screened edge line segments of step S4 specifically comprises:
S51: from all segments matching the marker border characteristics obtained in step S45, arbitrarily choose one, denote it segment cd, and take it as one side of a quadrilateral; from the remaining matching segments choose a segment ef intersecting cd such that θ_cd is not approximately equal to θ_ef, min{ce, cf, de, df} ≤ Δ, and cd and ef satisfy the quadrilateral adjacent-side condition; the intersection of cd and ef is a corner point of the quadrilateral.
S52: using the method of step S51, obtain the corner point sequence of the quadrilateral.
S53: traverse all segments matching the marker border characteristics obtained in step S45 to obtain the corner point sequences of all quadrilaterals.
Constructing the quadrilaterals from the corner points specifically comprises:
S54: construct a quadrilateral according to the number of corner points in each corner point sequence obtained in step S53: if a sequence has 4 corner points, connect them and construct the quadrilateral directly.
If a sequence has 3 corner points, extend the 2 edge line segments that do not yet form the 4th corner point, take their intersection as the 4th corner point, and connect the 4 corner points to construct the quadrilateral.
If a sequence has 2 corner points, extend the 2 edge line segments that have only 1 corner point; if they intersect a 3rd edge line segment, the intersections are corner points; connect the 4 corner points to construct the quadrilateral; otherwise no quadrilateral can be constructed.
If a sequence has 1 corner point, no quadrilateral can be constructed.
S55: traverse all corner point sequences to obtain all quadrilaterals, i.e. to detect all artificial markers.
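The corner construction of steps S51-S55 reduces to intersecting border lines. The helper below implements the standard two-line intersection formula and collapses the 4-corner and 3-corner cases of step S54 into one loop; treating the side list as already ordered around the quadrilateral is a simplifying assumption of this sketch.

```python
def line_intersection(s1, s2):
    """Intersection of the infinite lines through segments s1 and s2, or None if parallel."""
    (x1, y1), (x2, y2) = s1
    (x3, y3), (x4, y4) = s2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-12:
        return None  # parallel or coincident lines
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def quad_from_sides(sides):
    """Step S54: intersect consecutive sides of an ordered 4-side list to recover
    the corner points. Because the lines (not just the drawn segments) are
    intersected, sides that stop short of a corner are extended implicitly,
    which covers the 3-corner case as well. Returns None on any parallel pair."""
    corners = []
    for i in range(4):
        c = line_intersection(sides[i], sides[(i + 1) % 4])
        if c is None:
            return None  # adjacent sides must not be parallel
        corners.append(c)
    return corners
```

Four shortened sides of a 10 × 10 square, none of which reaches an actual corner, still yield the four true corners, because each corner is recovered by extending the adjacent lines to their intersection.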

Claims (7)

1. An artificial marker detection method applied to augmented reality, characterized by comprising the following steps:
S1: acquire a frame image, coarsely sample it, and detect its edge pixels using an oblique grid scan; specifically comprising:
S11: divide the frame image into several regions, each containing m × m pixels, m ∈ (20, 60);
S12: scan each region obtained in step S11 with a grid of oblique scan lines at x° and (180 − x)°, x ∈ (22.5, 67.5); the spacing between opposite sides of each grid cell is y pixels, y ∈ (3, 9);
S13: convolve every scan line of step S12 with the first derivative of a Gaussian to obtain each pixel's gradient intensity component along the scan-line direction;
S14: from the gradient intensity components of step S13, compute each pixel's gradient intensity value along the scan line; pixels at local extrema of the gradient intensity are edge pixels; extract these edge pixels, and compute each edge pixel's direction from its gradient intensity components;
S2: based on the RANSAC algorithm and the edge pixels obtained in step S1, detect the edge line segments in the frame image;
S3: fuse the edge line segments detected in step S2;
S4: extend and screen the edge line segments fused in step S3;
S5: detect quadrilateral corner points from the extended and screened edge line segments of step S4, and construct quadrilaterals from the corner points.
2. The artificial marker detection method applied to augmented reality according to claim 1, characterized in that m = 40, x = 45, and y = 5.
3. The artificial marker detection method applied to augmented reality according to claim 1, characterized in that step S3 specifically comprises:
S31: choose one edge line segment obtained in step S2 and denote it segment a; arbitrarily choose another segment from the remaining edge line segments and denote it segment b; test whether a and b should be fused, i.e. whether |θ_a − θ_b| ≤ Δ_θ, |θ_a − θ_ab| ≤ Δ_θ, and L_ab ≤ Δ_l; if so, fuse a and b into a new segment; otherwise keep selecting segments from the remaining edge line segments and testing them against segment a, until all edge line segments other than a have been tested for fusion; θ_a and θ_b are the directions of segments a and b, Δ_θ is the threshold on the direction error of segments to be fused, θ_ab and L_ab are the direction and length of the line ab joining a and b, and Δ_l is the threshold on the allowed length of ab;
S32: repeat step S31 until all edge line segments obtained in step S2 have been fused.
4. The artificial marker detection method applied to augmented reality according to claim 1, characterized in that step S4 specifically comprises:
S41: from the edge line segments obtained in step S3, arbitrarily choose a segment and one of its endpoints; check whether the direction of the pixel adjacent to that endpoint is consistent with the direction of the segment; if so, add the pixel to the segment and continue checking the next adjacent pixel, until some pixel's direction disagrees with the segment's direction; that pixel becomes a new endpoint of the segment;
S42: apply step S41 to both endpoints of every edge line segment obtained in step S3, yielding the extended segments and their new endpoints;
S43: preliminarily screen the extended segments of step S42 by deleting every segment shorter than n pixels, n ∈ (15, 25);
S44: apply the line segment test to the segments surviving step S43: along the segment's direction choose a pixel 2-4 pixels from an endpoint and read its grey value; if the grey value lies in the range 128-255, the endpoint qualifies and the segment matches the marker border characteristics; otherwise the endpoint fails and the segment does not match the marker border characteristics; when both endpoints of a segment fail, the segment is deleted;
S45: repeat step S44 until every edge line segment has undergone the line segment test, leaving all segments that match the marker border characteristics.
5. The artificial marker detection method applied to augmented reality according to claim 4, characterized in that n = 20.
6. The artificial marker detection method applied to augmented reality according to claim 4, characterized in that in step S5, detecting the quadrilateral corner points from the extended and screened edge line segments of step S4 specifically comprises:
S51: from all segments matching the marker border characteristics obtained in step S45, arbitrarily choose one, denote it segment cd, and take it as one side of a quadrilateral; from the remaining matching segments choose a segment ef intersecting cd such that θ_cd is not approximately equal to θ_ef, min{ce, cf, de, df} ≤ Δ, and cd and ef satisfy the quadrilateral adjacent-side condition; the intersection of cd and ef is a corner point of the quadrilateral;
S52: using the method of step S51, obtain the corner point sequence of the quadrilateral;
S53: traverse all segments matching the marker border characteristics obtained in step S45 to obtain the corner point sequences of all quadrilaterals.
7. The artificial marker detection method applied to augmented reality according to claim 6, characterized in that constructing the quadrilaterals from the corner points specifically comprises:
S54: construct a quadrilateral according to the number of corner points in each corner point sequence obtained in step S53: if a sequence has 4 corner points, connect them and construct the quadrilateral directly;
if a sequence has 3 corner points, extend the 2 edge line segments that do not yet form the 4th corner point, take their intersection as the 4th corner point, and connect the 4 corner points to construct the quadrilateral;
if a sequence has 2 corner points, extend the 2 edge line segments that have only 1 corner point; if they intersect a 3rd edge line segment, the intersections are corner points; connect the 4 corner points to construct the quadrilateral; otherwise no quadrilateral can be constructed;
if a sequence has 1 corner point, no quadrilateral can be constructed;
S55: traverse all corner point sequences to obtain all quadrilaterals, i.e. to detect all artificial markers.
CN201610065210.8A 2016-01-29 2016-01-29 Artificial marker detection method applied to augmented reality Active CN105740818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610065210.8A CN105740818B (en) 2016-01-29 2016-01-29 Artificial marker detection method applied to augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610065210.8A CN105740818B (en) 2016-01-29 2016-01-29 Artificial marker detection method applied to augmented reality

Publications (2)

Publication Number Publication Date
CN105740818A CN105740818A (en) 2016-07-06
CN105740818B true CN105740818B (en) 2018-11-06

Family

ID=56247015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610065210.8A Active CN105740818B (en) 2016-01-29 2016-01-29 A kind of artificial target's detection method applied to augmented reality

Country Status (1)

Country Link
CN (1) CN105740818B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106682652B (en) * 2017-02-27 2020-06-23 上海大学 Structure surface disease inspection and analysis method based on augmented reality
CN108428250B (en) * 2018-01-26 2021-09-21 山东大学 X-corner detection method applied to visual positioning and calibration
CN110136159B (en) * 2019-04-29 2023-03-31 辽宁工程技术大学 Line segment extraction method for high-resolution remote sensing image
CN110310279A (en) * 2019-07-09 2019-10-08 苏州梦想人软件科技有限公司 Rectangle and curl rectangle corner image-recognizing method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Business Card Region Segmentation by Block-based Line Fitting and Largest Quadrilateral Search with Constraints; Yong-Joong Kim et al.; International Symposium on Image and Signal Processing and Analysis; 2015-09-09; Section II *
High-Precision Positioning Theory and Methods for an Infrared Surgical Navigator; Zheng Yinqiang; China Master's Theses Full-Text Database; 2009-12-31; full text *
Design and Implementation of Banknote Denomination and Serial Number Recognition Algorithms; Guo Junfang; China Master's Theses Full-Text Database, Information Science and Technology; 2011-09-15; Section 2.2, Figures 2-4 and 2-17 *
Research and Application of High-Precision Surgical Navigation; Shen Tie; China Master's Theses Full-Text Database; 2012-12-31; full text *

Also Published As

Publication number Publication date
CN105740818A (en) 2016-07-06

Similar Documents

Publication Publication Date Title
Ye et al. Robust registration of multimodal remote sensing images based on structural similarity
Lei et al. New crack detection method for bridge inspection using UAV incorporating image processing
Liu et al. Concrete crack assessment using digital image processing and 3D scene reconstruction
Prasanna et al. Automated crack detection on concrete bridges
CN105740818B (en) A kind of artificial target's detection method applied to augmented reality
Rottensteiner et al. Automated delineation of roof planes from lidar data
Gamba et al. Improving urban road extraction in high-resolution images exploiting directional filtering, perceptual grouping, and simple topological concepts
JP7102569B2 (en) Damage information processing device, damage information processing method, and damage information processing program
CN110378900A (en) The detection method of product defects, apparatus and system
CN103198476B (en) Image detection method of thick line type cross ring mark
CN104048969A (en) Tunnel defect recognition method
Brilakis et al. Visual pattern recognition models for remote sensing of civil infrastructure
Seers et al. Extraction of three-dimensional fracture trace maps from calibrated image sequences
CN104700355A (en) Generation method, device and system for indoor two-dimension plan
Yao et al. Automatic scan registration using 3D linear and planar features
Alsadik et al. Efficient use of video for 3D modelling of cultural heritage objects
Rahmdel et al. A review of hough transform and line segment detection approaches
CN110728269B (en) High-speed rail contact net support pole number plate identification method based on C2 detection data
JPH063145A (en) Tunnel face image recording processing system
Hu et al. Generalized image recognition algorithm for sign inventory
Kumar et al. Comparative analysis for edge detection techniques
Adu-Gyamfi et al. Functional evaluation of pavement condition using a complete vision system
Mojidra et al. Vision-based fatigue crack detection using global motion compensation and video feature tracking
CN105930813B (en) A method of detection composes a piece of writing this under any natural scene
JP3017122B2 (en) Depth information extraction device and depth information extraction method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant