CN105740818A - Artificial mark detection method applied to augmented reality - Google Patents

Artificial mark detection method applied to augmented reality

Info

Publication number
CN105740818A
CN105740818A (application CN201610065210.8A); granted as CN105740818B
Authority
CN
China
Prior art keywords
line segment
edge
pixel
corner points
edge line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610065210.8A
Other languages
Chinese (zh)
Other versions
CN105740818B (en)
Inventor
Zhao Zijian (赵子健)
Ma Shuaiyifan (马帅依凡)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201610065210.8A priority Critical patent/CN105740818B/en
Publication of CN105740818A publication Critical patent/CN105740818A/en
Application granted granted Critical
Publication of CN105740818B publication Critical patent/CN105740818B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/20: Scenes; Scene-specific elements in augmented reality scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an artificial marker detection method applied to augmented reality. The method comprises the specific steps of: S1, acquiring a frame image, coarsely sampling it, performing oblique grid scanning, and detecting the edge pixels of the frame image; S2, detecting the edge line segments in the frame image based on the RANSAC algorithm; S3, fusing the edge line segments; S4, extending and screening the edge line segments; and S5, detecting the corner points of quadrilaterals and constructing the quadrilaterals from those corner points. Because the frame image is preprocessed before computation, with coarse grid sampling followed by edge detection in each grid region, the running time is greatly shortened, the detection speed is increased, and the method offers the real-time performance that real-time detection requires. Because detection is edge-based, line segments are detected first and the quadrilateral frame of the marker is then reconstructed from the detected segments, so the method is highly robust to illumination changes and occlusion.

Description

An artificial marker detection method applied to augmented reality
Technical field
The present invention relates to an artificial marker detection method applied to augmented reality, and belongs to the technical field of augmented reality applications.
Background technology
Augmented reality is a technology that "seamlessly" integrates real-world information with virtual-world information. Entity information that is otherwise difficult to experience within a certain time and space of the real world (visual information, sound, taste, touch, and so on) is simulated with computers and related technologies and then superimposed, so that virtual information is applied to the real world and perceived by the human senses, achieving a sensory experience beyond reality. The real environment and virtual objects are superimposed onto the same picture, or coexist in the same space, in real time.
Fiducial marker systems are widely used in general engineering fields such as augmented reality, robot navigation, position tracking, image modeling, and identification. Image processing techniques detect known 2D artificial markers placed in the environment, extract the marker information, and compute the relative pose between the camera and the object. The most important parameters of a fiducial marker system are the false-positive rate, the inter-marker confusion rate, the minimum detectable pixel size, and the immunity to illumination interference. The key technologies of a fiducial marker system are the design of the artificial marker and the corresponding recognition and positioning method. Existing artificial markers have simple designs, so image recognition is vulnerable to interference from lighting and complex objects; and because no information is stored inside the marker, the data of the virtual object must be stored on the recognition device and cannot be changed flexibly.
The ARTag marker is a binary planar marker; each marker carries its own ID number, binary-encoded inside the marker. Compared with the earlier ARToolKit markers, ARTag reduces the high error rate, false-positive rate, and inter-marker confusion rate. The performance of a fiducial marker system depends on the detection performance for the 2D marker: besides detection speed and error rate, the marker detection algorithm must also be robust to illumination conditions and to occlusion.
Summary of the invention
In view of the deficiencies of the prior art, the invention provides an artificial marker detection method applied to augmented reality.
The present invention improves the recognition speed, interference resistance, and accuracy of artificial marker detection.
Terminological interpretation
RANSAC algorithm: the abbreviation of RANdom SAmple Consensus, an algorithm that estimates the parameters of a mathematical model from a set of sample data containing outliers, thereby obtaining the valid sample data.
The technical scheme of the invention is as follows:
An artificial marker detection method applied to augmented reality, the concrete steps of which include:
S1: acquiring a frame image, coarsely sampling the frame image with oblique grid scanning, and detecting the edge pixels of the frame image;
S2: based on the RANSAC algorithm and the edge pixels obtained in step S1, detecting the edge line segments in the frame image;
S3: fusing the edge line segments detected in step S2;
S4: extending and screening the fused edge line segments of step S3;
S5: detecting quadrilateral corner points from the extended and screened edge line segments of step S4, and constructing quadrilaterals from these corner points.
According to a preferred embodiment of the invention, step S1 specifically includes:
S11: dividing the frame image into several regions, each comprising m × m pixels, m ∈ (20, 60); further preferably, m = 40. Dividing the frame image into regions speeds up detection and improves real-time performance.
S12: for each region obtained in step S11, performing grid scanning with scan lines inclined at x° and (180 - x)°, x ∈ (22.5, 67.5), the distance between opposite sides of each grid cell being y pixels, y ∈ (3, 9); further preferably, x = 45 and y = 5.
S13: convolving each scan line of step S12 with the first derivative of a Gaussian, to obtain the gradient intensity component of each pixel along the scan-line direction.
S14: from the gradient intensity components obtained in step S13, computing the gradient intensity value of each pixel; a pixel corresponding to a local extremum of gradient intensity is an edge pixel. Extract this edge pixel and, from its gradient intensity components, compute its direction.
According to a preferred embodiment of the invention, step S3 specifically includes:
S31: choosing one edge line segment obtained in step S2, denoted segment a, arbitrarily choosing another segment from the remaining edge line segments, denoted segment b, and performing the fusion test on a and b: if segments a and b satisfy |θa - θb| ≤ Δθ, |θa - θab| ≤ Δθ, and Lab ≤ Δl, then fuse a and b into a new segment; otherwise, continue choosing segments from the remaining edge line segments and testing them against segment a, until segment a has been tested for fusion against every other edge line segment. Here θa and θb are the directions of segments a and b, Δθ is the direction-error threshold for fusion, θab and Lab are the direction and length of the line ab connecting the two segments, and Δl is the allowed length of the connecting line ab.
S32: repeating step S31 until all edge line segments obtained in step S2 have been fused.
According to a preferred embodiment of the invention, step S4 specifically includes:
S41: from the edge line segments obtained in step S3, arbitrarily choosing a segment and one of its endpoints, and checking whether the direction of the pixel adjacent to this endpoint along the segment direction is consistent with the segment; if so, adding the pixel to the segment and continuing with the next adjacent pixel, until some pixel's direction is inconsistent with the segment direction; that pixel is taken as the new endpoint of the segment.
S42: performing step S41 on both endpoints of every edge line segment obtained in step S3, yielding the extended segments and their new endpoints.
S43: preliminarily screening the extended segments of step S42 by deleting every segment shorter than n pixels, n ∈ (15, 25); further preferably, n = 20. A segment that is too short cannot be a marker border; even if it were, the marker would be too distorted or the scene scale too large to be worth detecting, so it is deleted.
S44: performing the line segment test on the segments that pass the screening of step S43: along the direction of the segment, choose a pixel 2 to 4 pixels beyond its endpoint and read its gray value; if the gray value lies in the range 128 to 255, the endpoint is qualified and the segment matches the marker border features; otherwise the endpoint is unqualified and the segment does not match the marker border features. When both endpoints of a segment are judged unqualified, the segment is deleted.
S45: repeating step S44 until all edge line segments have undergone the line segment test, yielding all segments that match the marker border features.
According to a preferred embodiment of the invention, in step S5, detecting quadrilateral corner points from the extended and screened edge line segments of step S4 specifically includes:
S51: from all segments matching the marker border features obtained in step S45, arbitrarily choosing one, denoted segment cd, as the first side of a quadrilateral; from the remaining matching segments, choosing a segment ef that intersects cd, where cd and ef satisfy: θcd is not approximately equal to θef, min{ce, cf, de, df} ≤ Δ, and segments cd and ef satisfy the adjacent-side features of a quadrilateral; the intersection point of cd and ef is a quadrilateral corner point.
S52: using the method of step S51 to obtain the corner point sequence of the quadrilateral.
S53: traversing all segments matching the marker border features obtained in step S45 to obtain the corner point sequences of all quadrilaterals.
According to a preferred embodiment of the invention, constructing quadrilaterals from the quadrilateral corner points specifically includes:
S54: constructing quadrilaterals according to the number of corner points in each corner point sequence obtained in step S53. If a sequence contains 4 corner points, connect the 4 corner points to construct the quadrilateral directly.
If a sequence contains 3 corner points, extend the 2 edge line segments that do not yet form the fourth corner; their intersection yields the fourth corner point; connect the 4 corner points to construct the quadrilateral.
If a sequence contains 2 corner points, extend the 2 segments that carry only 1 corner point each; if they intersect the third segment, the intersections are corner points and the 4 corner points are connected to construct the quadrilateral; otherwise no quadrilateral can be constructed.
If a sequence contains only 1 corner point, no quadrilateral can be constructed.
S55: traversing all corner point sequences to obtain all quadrilaterals, i.e. to detect all artificial markers.
The invention has the following beneficial effects:
1. The frame image is preprocessed before computation: coarse grid sampling is performed and edge detection is carried out in each grid region, which greatly shortens the running time, increases the detection speed, and provides good real-time performance, meeting the requirements of real-time detection.
2. The invention adopts an edge-based detection method: line segments are detected first, and the quadrilateral frame of the marker is then reconstructed from the detected segments, so the method is highly robust to illumination changes and occlusion.
Brief description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 is a schematic diagram of the RANSAC algorithm;
RANSAC is a commonly used line segment fitting method. In Fig. 2, the black-and-white points are the detected edge pixels. Points c1 and d1 are arbitrarily chosen edge pixels taken as the endpoints of a hypothesized edge line segment, where the directions of c1 and d1 are consistent with the direction of the line c1d1. Points that are close enough to the hypothesized segment c1d1 and whose direction is consistent with the direction of segment c1d1 are counted as its support points. In Fig. 2 segment c1d1 has 12 support points and segment e1f1 has 3. Repeating these steps yields the segment with the most support, which is accepted as an existing segment.
Fig. 3 is a schematic diagram of extending and screening the fused edge line segments.
Detailed description of the invention
The invention is further described below with reference to the accompanying drawings and an embodiment, without being limited thereto.
Embodiment
An artificial marker detection method applied to augmented reality, the concrete steps of which include:
S1: acquiring a frame image, coarsely sampling the frame image with oblique grid scanning, and detecting the edge pixels of the frame image;
S2: based on the RANSAC algorithm and the edge pixels obtained in step S1, detecting the edge line segments in the frame image;
S3: fusing the edge line segments detected in step S2;
S4: extending and screening the fused edge line segments of step S3;
S5: detecting quadrilateral corner points from the extended and screened edge line segments of step S4, and constructing quadrilaterals from these corner points, as shown in Fig. 1.
Step S1 specifically includes:
S11: dividing the frame image into several regions, each comprising 40 × 40 pixels. Dividing the frame image into regions speeds up detection and improves real-time performance.
S12: for each region obtained in step S11, performing grid scanning with scan lines inclined at 45° and 135°, the distance between opposite sides of each grid cell being 5 pixels.
Oblique scan lines are used for grid scanning because, in general, a marker rarely appears perfectly upright in the picture and is usually tilted, so oblique scan lines cross the marker edges more reliably.
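The oblique grid scan of steps S11 and S12 can be sketched as follows. This is a minimal illustration, assuming 45° lines of the form y - x = k inside one 40 × 40 region, with the offset step chosen so that neighbouring lines are roughly 5 pixels apart (5·√2 ≈ 7); the function name and the exact spacing rule are illustrative, not prescribed by the patent.

```python
def diagonal_scanlines(m=40, step=7):
    """Pixels on the 45-degree scan lines (y - x = k) of an m-by-m block.
    A step of 7 in k gives about 5 pixels of perpendicular spacing,
    since neighbouring lines y - x = k and y - x = k + s lie s/sqrt(2)
    apart; the 135-degree family (y + x = k) is built symmetrically."""
    lines = []
    for k in range(-(m - 1), m, step):
        line = [(x, x + k) for x in range(m) if 0 <= x + k < m]
        if line:
            lines.append(line)
    return lines
```

Each inner list is one scan line; the edge detector of steps S13 and S14 would then be run along the intensity profile of each line.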
S13: convolving each scan line of step S12 with the first derivative of a Gaussian, to obtain the gradient intensity component of each pixel along the scan-line direction.
S14: from the gradient intensity components obtained in step S13, computing the gradient intensity value of each pixel; a pixel corresponding to a local extremum of gradient intensity is an edge pixel. The threshold for extracting edge pixels is a gray value of 30 (out of 256). Extract this edge pixel and compute its direction θ = tan⁻¹(gy/gx), where gy is the gradient intensity of the Y component and gx is the gradient intensity of the X component.
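Steps S13 and S14, the Gaussian first-derivative convolution along a scan line followed by selection of local extrema of gradient magnitude above the threshold of 30 gray levels, might look like the following one-dimensional sketch. The sigma, the kernel radius, and the function names are assumptions; only the scan-line profile itself is handled here.

```python
import numpy as np

def gaussian_first_derivative(sigma=1.0, radius=3):
    """Sampled first derivative of a Gaussian, used as a 1-D edge filter."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return -x / sigma ** 2 * g / g.sum()

def edge_pixels_on_scanline(intensities, threshold=30.0):
    """Convolve one scan-line intensity profile with the Gaussian
    derivative and keep local extrema of the gradient magnitude that
    exceed the threshold (30 gray levels, as in step S14)."""
    grad = np.convolve(intensities, gaussian_first_derivative(), mode="same")
    mag = np.abs(grad)
    edges = []
    for i in range(1, len(mag) - 1):
        if mag[i] >= threshold and mag[i] >= mag[i - 1] and mag[i] >= mag[i + 1]:
            edges.append((i, grad[i]))  # position, signed gradient component
    return edges
```

In the full method this runs along every 45° and 135° scan line, and the two directional components gx and gy then give the edge direction θ = tan⁻¹(gy/gx).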
Step S2, detecting the edge line segments in the frame image from the edge pixels of step S1 based on the RANSAC algorithm, specifically includes:
S21: in each of the 40 × 40 pixel regions divided in step S11, randomly selecting two edge pixels; if the directions of the two edge pixels are consistent with the direction of the line connecting them, an edge line segment is hypothesized to exist along that line.
S22: counting the edge pixels that support the hypothesized segment of step S21: if an edge pixel satisfies γ ∈ (0.1, 0.25) and its direction θ1 is consistent with the direction of the hypothesized segment, the pixel is considered to support the segment and Count is incremented by 1; where γ is the distance from the edge pixel to the hypothesized segment, θ1 is the direction of the edge pixel, and Count is the number of edge pixels supporting the hypothesized segment.
S23: a hypothesized edge line segment whose Count reaches 12 is considered to exist; the edge pixels supporting it are removed from the image.
S24: repeating steps S21 to S23 until most edge pixels in the image have been removed and all edge line segments have been found.
In Fig. 2, the black-and-white points are the detected edge pixels. Points c1 and d1 are arbitrarily chosen edge pixels taken as the endpoints of a hypothesized edge line segment, satisfying that the directions of c1 and d1 are consistent with the direction of the line c1d1; points close enough to the hypothesized segment c1d1 whose direction is consistent with the direction of the line c1d1 are counted as its support points. In Fig. 2, the hypothesized segment c1d1 has 12 support points while the hypothesized segment e1f1 has 3, so segment c1d1 is considered to exist and segment e1f1 is excluded.
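The hypothesize-and-count loop of steps S21 to S23 and Fig. 2 can be sketched as below. Each edge pixel is a tuple (x, y, direction); the distance and angle tolerances are illustrative stand-ins for the patent's γ and direction-consistency tests, while the support threshold of 12 follows step S23.

```python
import math
import random

def support_count(p, q, points, dist_tol=2.0, ang_tol=0.3):
    """Count edge pixels that lie near the hypothesized segment pq and
    whose direction agrees with it (the 'support points' of Fig. 2)."""
    px, py = p[0], p[1]
    qx, qy = q[0], q[1]
    seg_ang = math.atan2(qy - py, qx - px)
    length = math.hypot(qx - px, qy - py)
    n = 0
    for x, y, ang in points:
        # perpendicular distance from (x, y) to the line through p and q
        d = abs((qy - py) * x - (qx - px) * y + qx * py - qy * px) / length
        diff = abs(math.atan2(math.sin(ang - seg_ang), math.cos(ang - seg_ang)))
        if d <= dist_tol and min(diff, math.pi - diff) <= ang_tol:
            n += 1
    return n

def ransac_segment(points, min_support=12, iters=200, seed=0):
    """Repeatedly hypothesize a segment from two random edge pixels and
    keep the hypothesis with the largest support (steps S21 to S23)."""
    rng = random.Random(seed)
    best, best_n = None, 0
    for _ in range(iters):
        p, q = rng.sample(points, 2)
        if (p[0], p[1]) == (q[0], q[1]):
            continue
        n = support_count(p, q, points)
        if n > best_n:
            best, best_n = (p, q), n
    return (best, best_n) if best_n >= min_support else (None, best_n)
```

In the full method, pixels supporting an accepted segment would then be removed from the region and the loop repeated, as in step S24.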
Step S3 specifically includes:
S31: choosing one edge line segment obtained in step S2, denoted segment a, arbitrarily choosing another segment from the remaining edge line segments, denoted segment b, and performing the fusion test on a and b: if segments a and b satisfy |θa - θb| ≤ Δθ, |θa - θab| ≤ Δθ, and Lab ≤ Δl, then fuse a and b into a new segment; otherwise, continue choosing segments from the remaining edge line segments and testing them against segment a, until segment a has been tested for fusion against every other edge line segment. Here θa and θb are the directions of segments a and b, Δθ is the direction-error threshold for fusion, θab and Lab are the direction and length of the line ab connecting the two segments, and Δl is the allowed length of the connecting line ab.
S32: repeating step S31 until all edge line segments obtained in step S2 have been fused.
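A sketch of the fusion test of step S31, under the assumption that the connecting line ab is taken between the nearest endpoints of the two segments; the tolerance values ang_tol and gap_tol stand in for the patent's Δθ and Δl, whose numeric values the text does not give.

```python
import math

def merge_candidates(seg_a, seg_b, ang_tol=0.1, gap_tol=8.0):
    """Decide whether two segments should be fused: similar direction,
    connecting line roughly collinear with them, and a short gap.
    A segment is ((x1, y1), (x2, y2)); tolerances are illustrative."""
    def direction(p, q):
        return math.atan2(q[1] - p[1], q[0] - p[0]) % math.pi  # undirected

    def ang_diff(a, b):
        d = abs(a - b) % math.pi
        return min(d, math.pi - d)

    th_a = direction(*seg_a)
    th_b = direction(*seg_b)
    # the nearest pair of endpoints defines the connecting line ab
    pairs = [(pa, pb) for pa in seg_a for pb in seg_b]
    pa, pb = min(pairs, key=lambda pr: math.dist(pr[0], pr[1]))
    gap = math.dist(pa, pb)
    if gap == 0:
        return ang_diff(th_a, th_b) <= ang_tol
    th_ab = direction(pa, pb)
    return (ang_diff(th_a, th_b) <= ang_tol
            and ang_diff(th_a, th_ab) <= ang_tol
            and gap <= gap_tol)
```

Two nearly collinear segments with a small gap pass the test; perpendicular or widely separated segments do not.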
Step S4 specifically includes:
S41: from the edge line segments obtained in step S3, arbitrarily choosing a segment and one of its endpoints, and checking whether the direction of the pixel adjacent to this endpoint along the segment direction is consistent with the segment; if so, adding the pixel to the segment and continuing with the next adjacent pixel, until some pixel's direction is inconsistent with the segment direction; that pixel is taken as the new endpoint of the segment.
In Fig. 3, pixel 1 is a pixel of the chosen edge line segment, pixel 2 is a pixel added to the segment, and pixel 3 is the new endpoint.
S42: performing step S41 on both endpoints of every edge line segment obtained in step S3, yielding the extended segments and their new endpoints.
S43: preliminarily screening the extended segments of step S42 by deleting every segment shorter than 20 pixels. A segment that is too short cannot be a marker border; even if it were, the marker would be too distorted or the scene scale too large to be worth detecting, so it is deleted.
S44: performing the line segment test on the segments that pass the screening of step S43: along the direction of the segment, choose a pixel 3 pixels beyond its endpoint and read its gray value; if the gray value lies in the range 128 to 255, the endpoint is qualified and the segment matches the marker border features; otherwise the endpoint is unqualified and the segment does not match the marker border features. When both endpoints of a segment are judged unqualified, the segment is deleted.
In Fig. 3, pixel 4 is the test pixel; if the detected segment matches the marker border features, pixel 4 should be a relatively bright point.
S45: repeating step S44 until all edge line segments have undergone the line segment test, yielding all segments that match the marker border features.
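The endpoint probe of step S44 might be implemented as below. Image indexing is assumed row-major (img[y, x]), the probe distance of 3 pixels and the 128 to 255 bright range follow the embodiment, and the function names are invented for illustration.

```python
import numpy as np

def passes_border_test(img, endpoint, direction, probe=3, bright=(128, 255)):
    """Probe a pixel a few steps beyond a segment endpoint along the
    segment direction; outside a dark marker border the probed pixel
    should be bright (gray value within the given range)."""
    x = int(round(endpoint[0] + probe * np.cos(direction)))
    y = int(round(endpoint[1] + probe * np.sin(direction)))
    h, w = img.shape
    if not (0 <= y < h and 0 <= x < w):
        return False
    return bright[0] <= img[y, x] <= bright[1]

def keep_segment(img, seg, direction):
    """Step S44: a segment is deleted only when BOTH endpoint probes fail."""
    a, b = seg
    return (passes_border_test(img, b, direction)
            or passes_border_test(img, a, direction + np.pi))
```

On a dark horizontal segment surrounded by a bright background the probes succeed; on an all-dark image they fail and the segment would be deleted.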
In step S5, detecting quadrilateral corner points from the extended and screened edge line segments of step S4 specifically includes:
S51: from all segments matching the marker border features obtained in step S45, arbitrarily choosing one, denoted segment cd, as the first side of a quadrilateral; from the remaining matching segments, choosing a segment ef that intersects cd, where cd and ef satisfy: θcd is not approximately equal to θef, min{ce, cf, de, df} ≤ Δ, and segments cd and ef satisfy the adjacent-side features of a quadrilateral; the intersection point of cd and ef is a quadrilateral corner point.
S52: using the method of step S51 to obtain the corner point sequence of the quadrilateral.
S53: traversing all segments matching the marker border features obtained in step S45 to obtain the corner point sequences of all quadrilaterals.
Constructing quadrilaterals from the quadrilateral corner points specifically includes:
S54: constructing quadrilaterals according to the number of corner points in each corner point sequence obtained in step S53. If a sequence contains 4 corner points, connect the 4 corner points to construct the quadrilateral directly.
If a sequence contains 3 corner points, extend the 2 edge line segments that do not yet form the fourth corner; their intersection yields the fourth corner point; connect the 4 corner points to construct the quadrilateral.
If a sequence contains 2 corner points, extend the 2 segments that carry only 1 corner point each; if they intersect the third segment, the intersections are corner points and the 4 corner points are connected to construct the quadrilateral; otherwise no quadrilateral can be constructed.
If a sequence contains only 1 corner point, no quadrilateral can be constructed.
S55: traversing all corner point sequences to obtain all quadrilaterals, i.e. to detect all artificial markers.
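Recovering a missing corner in step S54 reduces to intersecting two extended edge lines. A minimal sketch using homogeneous line coordinates follows; the patent does not prescribe a formula, so this is one standard way to do it.

```python
def line_through(p, q):
    """Homogeneous line (a, b, c) with a*x + b*y + c = 0 through p and q."""
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def intersect(l1, l2, eps=1e-9):
    """Intersection point of two lines, or None if they are (nearly)
    parallel; this is the operation that recovers a missing fourth
    corner point in step S54."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < eps:
        return None
    return ((b1 * c2 - b2 * c1) / det, (a2 * c1 - a1 * c2) / det)
```

For a corner sequence with 3 corner points, each of the two segments lacking the fourth corner is turned into a line with line_through, and the fourth corner is their intersect result.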

Claims (8)

1. An artificial marker detection method applied to augmented reality, characterised in that the concrete steps include:
S1: acquiring a frame image, coarsely sampling the frame image with oblique grid scanning, and detecting the edge pixels of the frame image;
S2: based on the RANSAC algorithm and the edge pixels obtained in step S1, detecting the edge line segments in the frame image;
S3: fusing the edge line segments detected in step S2;
S4: extending and screening the fused edge line segments of step S3;
S5: detecting quadrilateral corner points from the extended and screened edge line segments of step S4, and constructing quadrilaterals from these corner points.
2. The artificial marker detection method applied to augmented reality according to claim 1, characterised in that step S1 specifically includes:
S11: dividing the frame image into several regions, each comprising m × m pixels, m ∈ (20, 60);
S12: for each region obtained in step S11, performing grid scanning with scan lines inclined at x° and (180 - x)°, x ∈ (22.5, 67.5), the distance between opposite sides of each grid cell being y pixels, y ∈ (3, 9);
S13: convolving each scan line of step S12 with the first derivative of a Gaussian, to obtain the gradient intensity component of each pixel along the scan-line direction;
S14: from the gradient intensity components obtained in step S13, computing the gradient intensity value of each pixel, a pixel corresponding to a local extremum of gradient intensity being an edge pixel; extracting this edge pixel and computing its direction from the gradient intensity components.
3. The artificial marker detection method applied to augmented reality according to claim 2, characterised in that m = 40, x = 45, and y = 5.
4. The artificial marker detection method applied to augmented reality according to claim 1, characterised in that step S3 specifically includes:
S31: choosing one edge line segment obtained in step S2, denoted segment a, arbitrarily choosing another segment from the remaining edge line segments, denoted segment b, and performing the fusion test on a and b: if segments a and b satisfy |θa - θb| ≤ Δθ, |θa - θab| ≤ Δθ, and Lab ≤ Δl, then fusing a and b into a new segment; otherwise continuing to choose segments from the remaining edge line segments and testing them against segment a, until segment a has been tested for fusion against every other edge line segment; where θa and θb are the directions of segments a and b, Δθ is the direction-error threshold for fusion, θab and Lab are the direction and length of the line ab connecting the two segments, and Δl is the allowed length of the connecting line ab;
S32: repeating step S31 until all edge line segments obtained in step S2 have been fused.
5. The artificial marker detection method applied to augmented reality according to claim 1, characterised in that step S4 specifically includes:
S41: from the edge line segments obtained in step S3, arbitrarily choosing a segment and one of its endpoints, and checking whether the direction of the pixel adjacent to this endpoint along the segment direction is consistent with the segment; if so, adding the pixel to the segment and continuing with the next adjacent pixel, until some pixel's direction is inconsistent with the segment direction, that pixel being taken as the new endpoint of the segment;
S42: performing step S41 on both endpoints of every edge line segment obtained in step S3, yielding the extended segments and their new endpoints;
S43: preliminarily screening the extended segments of step S42 by deleting every segment shorter than n pixels, n ∈ (15, 25);
S44: performing the line segment test on the segments that pass the screening of step S43: along the direction of the segment, choosing a pixel 2 to 4 pixels beyond its endpoint and reading its gray value; if the gray value lies in the range 128 to 255, the endpoint is qualified and the segment matches the marker border features; otherwise the endpoint is unqualified and the segment does not match the marker border features; when both endpoints of a segment are judged unqualified, deleting the segment;
S45: repeating step S44 until all edge line segments have undergone the line segment test, yielding all segments that match the marker border features.
6. The artificial marker detection method applied to augmented reality according to claim 5, characterised in that n = 20.
7. The artificial marker detection method applied to augmented reality according to claim 5, characterised in that, in step S5, detecting quadrilateral corner points from the extended and screened edge line segments of step S4 specifically includes:
S51: from all segments matching the marker border features obtained in step S45, arbitrarily choosing one, denoted segment cd, as the first side of a quadrilateral; from the remaining matching segments, choosing a segment ef that intersects cd, where cd and ef satisfy: θcd is not approximately equal to θef, min{ce, cf, de, df} ≤ Δ, and segments cd and ef satisfy the adjacent-side features of a quadrilateral, the intersection point of cd and ef being a quadrilateral corner point;
S52: using the method of step S51 to obtain the corner point sequence of the quadrilateral;
S53: traversing all segments matching the marker border features obtained in step S45 to obtain the corner point sequences of all quadrilaterals.
8. The artificial marker detection method applied to augmented reality according to claim 7, characterised in that constructing quadrilaterals from the quadrilateral corner points specifically includes:
S54: constructing quadrilaterals according to the number of corner points in each corner point sequence obtained in step S53: if a sequence contains 4 corner points, connecting the 4 corner points to construct the quadrilateral directly;
if a sequence contains 3 corner points, extending the 2 edge line segments that do not yet form the fourth corner, their intersection yielding the fourth corner point, and connecting the 4 corner points to construct the quadrilateral;
if a sequence contains 2 corner points, extending the 2 segments that carry only 1 corner point each; if they intersect the third segment, the intersections are corner points and the 4 corner points are connected to construct the quadrilateral; otherwise no quadrilateral can be constructed;
if a sequence contains only 1 corner point, no quadrilateral can be constructed;
S55: traversing all corner point sequences to obtain all quadrilaterals, i.e. to detect all artificial markers.
CN201610065210.8A 2016-01-29 2016-01-29 A kind of artificial target's detection method applied to augmented reality Active CN105740818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610065210.8A CN105740818B (en) 2016-01-29 2016-01-29 Artificial mark detection method applied to augmented reality

Publications (2)

Publication Number Publication Date
CN105740818A true CN105740818A (en) 2016-07-06
CN105740818B CN105740818B (en) 2018-11-06

Family

ID=56247015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610065210.8A Active CN105740818B (en) Artificial mark detection method applied to augmented reality

Country Status (1)

Country Link
CN (1) CN105740818B (en)

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YONG-JOONG KIM et al.: "Business Card Region Segmentation by Block-based Line Fitting and Largest Quadrilateral Search with Constraints", International Symposium on Image and Signal Processing and Analysis *
SHEN Tie: "Research and Application of High-Precision Surgical Navigation", China Master's Theses Full-text Database *
ZHENG Yinqiang: "High-Precision Positioning Theory and Methods for an Infrared Surgical Navigation System", China Master's Theses Full-text Database *
GUO Junfang: "Design and Implementation of Banknote Denomination and Serial Number Recognition Algorithms", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106682652A (en) * 2017-02-27 2017-05-17 上海大学 Structure surface disease inspection and analysis method based on augmented reality
CN106682652B (en) * 2017-02-27 2020-06-23 上海大学 Structure surface disease inspection and analysis method based on augmented reality
CN108428250A (en) * 2018-01-26 2018-08-21 山东大学 A kind of X angular-point detection methods applied to vision positioning and calibration
CN108428250B (en) * 2018-01-26 2021-09-21 山东大学 X-corner detection method applied to visual positioning and calibration
CN110136159A (en) * 2019-04-29 2019-08-16 辽宁工程技术大学 Line segments extraction method towards high-resolution remote sensing image
CN110136159B (en) * 2019-04-29 2023-03-31 辽宁工程技术大学 Line segment extraction method for high-resolution remote sensing image
CN110310279A (en) * 2019-07-09 2019-10-08 苏州梦想人软件科技有限公司 Rectangle and curl rectangle corner image-recognizing method

Also Published As

Publication number Publication date
CN105740818B (en) 2018-11-06

Similar Documents

Publication Publication Date Title
Ye et al. Robust registration of multimodal remote sensing images based on structural similarity
Liu et al. Concrete crack assessment using digital image processing and 3D scene reconstruction
CN112232391B (en) Dam crack detection method based on U-net network and SC-SAM attention mechanism
CN107154040B (en) Tunnel lining surface image crack detection method
Gamba et al. Improving urban road extraction in high-resolution images exploiting directional filtering, perceptual grouping, and simple topological concepts
CN105654507B (en) A kind of vehicle overall dimension measurement method based on the tracking of image behavioral characteristics
US10643332B2 (en) Method of vehicle image comparison and system thereof
CN103279765B (en) Steel wire rope surface damage detection method based on images match
CN104156965B (en) A kind of automatic quick joining method of Mine Monitoring image
CN107657644B (en) Sparse scene flows detection method and device under a kind of mobile environment
CN103198476B (en) Image detection method of thick line type cross ring mark
KR101130963B1 (en) Apparatus and method for tracking non-rigid object based on shape and feature information
KR20130056309A (en) Text-based 3d augmented reality
CN105740818A (en) Artificial mark detection method applied to augmented reality
US8724851B2 (en) Aerial survey video processing
CN106951898B (en) Vehicle candidate area recommendation method and system and electronic equipment
CN115546113B (en) Method and system for predicting fracture image and front three-dimensional structure parameters of tunnel face
Rahmdel et al. A Review of Hough Transform and Line Segment Detection Approaches.
Kumar et al. Comparative analysis for edge detection techniques
Adu-Gyamfi et al. Functional evaluation of pavement condition using a complete vision system
Mojidra et al. Vision-based fatigue crack detection using global motion compensation and video feature tracking
Haider et al. A hybrid method for edge continuity based on Pixel Neighbors Pattern Analysis (PNPA) for remote sensing satellite images
Hu et al. Generalized image recognition algorithm for sign inventory
CN110428264A (en) Fake method, device, equipment and medium are tested in identification based on dot matrix screen antifalsification label
Wu et al. Deep learning‐based super‐resolution with feature coordinators preservation for vision‐based measurement

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant