CN103035003A - Method and device of achieving augmented reality - Google Patents


Info

Publication number
CN103035003A
Authority
CN
China
Prior art keywords
connected region
camera position
edge connected
local feature
feature point
Prior art date
Legal status
Granted
Application number
CN2012105320473A
Other languages
Chinese (zh)
Other versions
CN103035003B (en)
Inventor
柳海波
史舒娟
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201210532047.3A
Publication of CN103035003A
Application granted
Publication of CN103035003B
Status: Expired - Fee Related
Anticipated expiration


Abstract

The invention discloses a method and a device for realizing augmented reality. The method comprises: obtaining one or more edge-connected regions of a captured real image; if the coverage of an edge-connected region meets a predetermined requirement but its image quality does not, downsampling the image contained in that region; determining camera position information from the downsampled region image and a pre-saved sampling template; and performing augmented-reality processing according to the camera position information. The method and device effectively reduce the amount of computation in augmented-reality processing while guaranteeing the quality of the result.

Description

Method and device for realizing augmented reality
Technical field
The present invention relates to the field of augmented-reality processing, and in particular to a method and a device for realizing augmented reality.
Background technology
Augmented reality (Augmented Reality, AR) is a technology developed on the basis of virtual reality and an important branch of virtual-reality research. In short, augmented reality uses computer graphics and visualization techniques to generate virtual objects that do not exist in the real environment, accurately "embeds" those virtual objects in the real environment, and combines the virtual objects with the real environment through a display device. Virtual information is thereby applied to the real world, presenting the user with a new environment whose sensory effect is realistic, so as to achieve an enhancement of reality.
An augmented-reality system must analyse a large amount of positioning data and scene information to ensure that the virtual objects generated by the computer can be accurately placed in the real scene. An augmented-reality system therefore usually comprises the following basic processing steps:
(1) obtain real-scene information;
(2) analyse the real-scene information and the obtained camera position information;
(3) generate a virtual object;
(4) draw the virtual object on the view plane according to the camera position information, and display the virtual object together with the real-scene information.
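The four basic steps above can be sketched as a minimal pipeline skeleton. Every function below is a hypothetical stand-in stubbed with toy values, not part of the patent's actual implementation.

```python
# Hypothetical skeleton of the four basic AR processing steps; all
# function bodies are illustrative stubs, not the patent's method.

def capture_scene():
    # Step (1): obtain real-scene information (stubbed as a 2x2 "frame").
    return [[10, 20], [30, 40]]

def estimate_camera_pose(frame):
    # Step (2): analyse the scene and derive camera position information
    # (stubbed as a fixed translation vector).
    return (0.0, 0.0, 1.0)

def generate_virtual_object():
    # Step (3): generate the virtual object (stubbed as a label).
    return "virtual_cube"

def render(frame, pose, obj):
    # Step (4): draw the virtual object on the view plane according to
    # the camera pose and composite it with the real scene.
    return {"frame": frame, "pose": pose, "object": obj}

def ar_pipeline():
    frame = capture_scene()
    pose = estimate_camera_pose(frame)
    obj = generate_virtual_object()
    return render(frame, pose, obj)
```

In a real system the pose estimation in step (2) is the hard part; the two prior-art techniques below differ precisely in how they solve it.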
At present, the main augmented-reality techniques in use are marker-based augmented reality (Marker AR) and marker-less augmented reality (Marker-less AR). These two prior-art techniques are described below in turn.
(1) Marker-based augmented reality
Marker-based augmented reality mainly uses square black-and-white markers; the markers are identified and tracked to estimate the camera position, and the virtual is thereby superimposed on the real.
A concrete implementation of marker-based augmented reality can comprise:
(1) a camera captures an image of the real world and passes it to a computer;
(2) software on the computer searches every video frame for all possible rectangular objects; for example, a fixed threshold can be used to segment the rectangular marker;
(3) if a rectangle is found, the software calculates the position of the camera relative to the rectangular marker by a suitable mathematical method, thus estimating the camera position information;
(4) once the camera position information is determined, the virtual object can be added to the video at the position specified by that information;
(5) after the above processing, the video with the superimposed virtual object can be seen on the display device.
In this Marker AR technique, when the marker image is incomplete (for example, occluded or partly moved out of the camera's field of view), the content of the marker is difficult to identify, so the virtual object cannot be added to the video according to the marker's content.
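Step (2) of the marker pipeline, segmenting marker candidates with a fixed threshold, can be sketched as follows. The helper names and the toy "image" (a list of pixel rows) are illustrative assumptions, not the patent's code.

```python
def fixed_threshold(gray, thresh=128):
    # Binarize a grayscale image (list of rows) with a fixed threshold,
    # as in step (2): pixels darker than the threshold become candidate
    # marker (foreground) pixels.
    return [[1 if px < thresh else 0 for px in row] for row in gray]

def bounding_box(binary):
    # Bounding box (min_row, min_col, max_row, max_col) of foreground
    # pixels -- a crude stand-in for locating a rectangular marker
    # candidate before pose estimation.
    coords = [(r, c) for r, row in enumerate(binary)
              for c, v in enumerate(row) if v]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))
```

The fixed threshold is exactly what makes this approach brittle under occlusion: a partially covered marker yields an incomplete foreground blob.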
(2) Marker-less augmented reality
A concrete implementation of marker-less augmented reality can comprise:
(1) a camera captures an image of the real world and passes it to a computer;
(2) software on the computer detects local features of the image in a video frame and matches them against the local features of candidate target objects;
(3) according to the geometric constraints on the positions of the local features, only reasonable matches are kept, and the target object is identified from those reasonable matches;
(4) once the system has determined that an identified target object exists, it tracks that object and uses a camera pose estimation algorithm to calculate the position of the camera relative to each target object;
(5) having determined the camera's position relative to each target object, the virtual objects can be added to the video at the positions indicated by each target, according to the camera's position relative to that target;
(6) after the above processing, the video with the superimposed virtual objects can be seen on the display device.
Because Marker-less AR identifies objects by matching local image features, the corresponding processing involves a large amount of computation. Moreover, this technique places requirements on the size of the target image captured by the camera: if the target is far from the camera, the captured target image is too small, too few local features are detected to reach the required number of proper matches, the target object cannot be detected, and the virtual object cannot be added to the video.
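The local-feature matching in steps (2) and (3) of the marker-less pipeline can be illustrated with a minimal nearest-neighbour matcher with a ratio test. Descriptors are plain tuples here, and the function name and ratio value are assumptions rather than the patent's method.

```python
def ratio_match(desc_a, desc_b, ratio=0.8):
    # Nearest-neighbour matching of local-feature descriptors with a
    # Lowe-style ratio test: keep a match only when the best distance
    # is clearly better than the second best. Returns (index_a, index_b)
    # pairs of accepted matches.
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        if len(ranked) >= 2:
            best, second = ranked[0], ranked[1]
            if dist(d, desc_b[best]) < ratio * dist(d, desc_b[second]):
                matches.append((i, best))
    return matches
```

This exhaustive pairwise comparison also illustrates why the computational cost grows quickly with the number of detected features, the drawback the paragraph above points out.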
Summary of the invention
The purpose of the invention is to provide a method and a device for realizing augmented reality that reduce the amount of computation in the prior-art augmented-reality process while guaranteeing the quality of the augmented-reality result.
The purpose of the invention is achieved through the following technical solutions:
In a first aspect, a method for realizing augmented reality comprises:
obtaining one or more edge-connected regions of a captured real image;
if the coverage of an edge-connected region meets a predetermined requirement but its image quality does not, downsampling the image contained in that edge-connected region;
determining camera position information from the image contained in the downsampled edge-connected region and a pre-saved sampling template;
performing augmented-reality processing according to the camera position information.
In a first possible implementation of the first aspect, before the obtaining of one or more edge-connected regions of the captured real image, the method further comprises:
binarizing the edges of the captured real image, and partitioning the captured real image into one or more edge-connected regions according to the eight-connectivity rule.
In a second possible implementation of the first aspect, the coverage meeting the predetermined requirement means that the edge-connected region contains at least two complete straight edges from which a complete quadrilateral can be obtained by extension and completion; the image quality failing the predetermined requirement means that the number of local feature points in the edge-connected region does not reach a predetermined number.
With reference to the first aspect or its first or second possible implementation, in a third possible implementation of the first aspect, the method further comprises:
if both the coverage and the image quality of the edge-connected region meet the predetermined requirements, determining the camera position information by exact matching.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation the step of determining the camera position information by exact matching comprises:
statistically matching the local feature points in the edge-connected region against the feature points in a predetermined training set to obtain the local feature points whose matching probability meets the requirement;
according to the geometric constraints on the positions of the local features of the edge-connected region, selecting, among those points, the reasonable local feature points whose matching probability meets the requirement;
determining the camera position information from the selected reasonable local feature points.
With reference to the first aspect or its first or second possible implementation, in a fifth possible implementation the method further comprises:
if the coverage of the edge-connected region does not meet the predetermined requirement but the image quality does, judging whether the edge-connected region is the estimated region of a target object;
if it is, determining the camera position information by feature-point tracking; otherwise, determining the camera position information by matching based on invariant feature operators.
With reference to the fifth possible implementation, in a sixth possible implementation the step of determining the camera position information by matching based on invariant feature operators comprises:
extracting the local feature points contained in the edge-connected region;
matching the extracted local feature points against pre-saved invariant feature operators, and determining the successfully matched local feature points from the matching result;
according to the geometric constraints on the positions of the local features of the edge-connected region, selecting the reasonable successfully matched local feature points among the successfully matched local feature points;
determining the camera position information from the reasonable successfully matched local feature points.
With reference to the first aspect or its first or second possible implementation, in a seventh possible implementation the method further comprises:
if neither the coverage nor the image quality of the edge-connected region meets the predetermined requirement, the camera position information cannot be determined.
With reference to the first aspect or its first or second possible implementation, in an eighth possible implementation the method further comprises a main thread and an image-rendering thread, or a main thread, an image-rendering thread and one or more sub-region processing threads, wherein:
the main thread selects unprocessed edge-connected regions and performs the operation of determining the camera position information;
the image-rendering thread performs the augmented-reality processing on edge-connected regions for which the camera position information has been determined;
the sub-region processing threads select unprocessed edge-connected regions and perform the operation of determining the camera position information.
In a second aspect, a device for realizing augmented reality comprises:
a connected-region acquisition module, configured to obtain one or more edge-connected regions of a captured real image;
a downsampling module, configured to downsample the image contained in an edge-connected region obtained by the connected-region acquisition module when the region's coverage meets a predetermined requirement but its image quality does not;
a first camera-position determination module, configured to determine camera position information from the image contained in the edge-connected region downsampled by the downsampling module and a pre-saved sampling template;
an augmented-reality processing module, configured to perform augmented-reality processing according to the camera position information determined by the first camera-position determination module.
In a first possible implementation of the second aspect, the device further comprises:
a connected-region partitioning module, configured to binarize the edges of the captured real image and to partition the captured real image into one or more edge-connected regions according to the eight-connectivity rule, before the connected-region acquisition module obtains the edge-connected regions.
In a second possible implementation of the second aspect, the coverage meeting the predetermined requirement means that the edge-connected region contains at least two complete straight edges from which a complete quadrilateral can be obtained by extension and completion; the image quality failing the predetermined requirement means that the number of local feature points in the edge-connected region does not reach a predetermined number.
With reference to the second aspect or its first or second possible implementation, in a third possible implementation the device further comprises:
an exact-matching camera-position determination module, configured to determine the camera position information by exact matching when both the coverage and the image quality of the edge-connected region meet the predetermined requirements.
With reference to the third possible implementation of the second aspect, in a fourth possible implementation the exact-matching camera-position determination module comprises:
a statistical matching module, configured to statistically match the local feature points in the edge-connected region against the feature points in a predetermined training set to obtain the local feature points whose matching probability meets the requirement;
a proper-match determination module, configured to select, according to the geometric constraints on the positions of the local features of the edge-connected region, the reasonable local feature points among those obtained by the statistical matching module;
a second camera-position determination module, configured to determine the camera position information from the reasonable local feature points selected by the proper-match determination module.
With reference to the second aspect or its first or second possible implementation, in a fifth possible implementation the device further comprises:
a judging module, configured to judge whether the edge-connected region is the estimated region of a target object when the region's coverage does not meet the predetermined requirement but its image quality does;
a third camera-position determination module, configured to determine the camera position information by feature-point tracking when the judging module determines that the edge-connected region is the estimated region of a target object;
a fourth camera-position determination module, configured to determine the camera position information by matching based on invariant feature operators when the judging module determines that the edge-connected region is not the estimated region of a target object.
With reference to the fifth possible implementation of the second aspect, in a sixth possible implementation the fourth camera-position determination module comprises:
an invariant-feature-operator extraction module, configured to extract the invariant feature operators of the local feature points contained in the edge-connected region;
an invariant-feature-operator matching module, configured to match the invariant feature operators extracted by the extraction module against pre-saved invariant feature operators and to determine the successfully matched local feature points from the matching result;
a proper-match determination module, configured to select, according to the geometric constraints on the positions of the local features of the edge-connected region, the reasonable successfully matched local feature points among the successfully matched local feature points;
a camera-position determination submodule, configured to determine the camera position information from the reasonable successfully matched local feature points selected by the proper-match determination module.
With reference to the second aspect or its first or second possible implementation, in a seventh possible implementation the device further comprises a main-thread module and an image-rendering-thread module, or a main-thread module, an image-rendering-thread module and one or more sub-region-thread modules, wherein:
the main-thread module selects unprocessed edge-connected regions and performs the operation of determining the camera position information;
the image-rendering-thread module performs the augmented-reality processing on edge-connected regions for which the camera position information has been determined;
the sub-region-thread modules select unprocessed edge-connected regions and perform the operation of determining the camera position information.
With reference to the second aspect and its possible implementations, the device may be a digital camera, a mobile phone with a camera, or a computer.
As can be seen from the technical solutions above, the solutions provided by the embodiments of the invention are highly adaptable to augmented-reality processing of occluded or incomplete target images. They also greatly relax the requirement on the size of the target image captured by the camera, coping well with small, distant images, i.e. remaining robust when the target is far from the camera. In addition, the embodiments can choose the best-suited processing scheme for each image block, greatly reducing the corresponding amount of computation, and the multithreaded processing scheme further improves the efficiency of the augmented-reality processing.
Description of drawings
To illustrate the technical solutions of the embodiments of the invention more clearly, the drawings required for the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of the method provided by an embodiment of the invention;
Fig. 2 is a schematic diagram of the image preprocessing process provided by an embodiment of the invention;
Fig. 3 is a schematic diagram of the extension-and-completion processing provided by an embodiment of the invention;
Fig. 4 is a schematic diagram of a first process of determining the camera position provided by an embodiment of the invention;
Fig. 5 is a schematic diagram of a second process of determining the camera position provided by an embodiment of the invention;
Fig. 6 is a schematic diagram of the adaptive sampling grid provided by an embodiment of the invention;
Fig. 7 is a schematic diagram of the sampling grid provided by an embodiment of the invention;
Fig. 8 is a schematic diagram of the multithreaded processing scheme provided by an embodiment of the invention;
Fig. 9 is a first structural diagram of the device provided by an embodiment of the invention;
Fig. 10 is a structural diagram of a device carrying the augmented-reality scheme provided by an embodiment of the invention;
Fig. 11 is a second structural diagram of the device provided by an embodiment of the invention.
Embodiment
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the invention without creative effort fall within the protection scope of the invention.
An embodiment of the invention provides a method for realizing augmented reality whose concrete implementation, shown in Fig. 1, can comprise the following steps:
Step 11: obtain one or more edge-connected regions of the captured real image.
Before the edge-connected regions are obtained, the captured real image is partitioned into one or more edge-connected regions. This can specifically comprise: binarizing the edges of the captured real image, and partitioning the captured real image into one or more edge-connected regions according to the eight-connectivity rule.
Note that a connected region is a set of pixels in an image that are linked together. For example, in a two-dimensional image, suppose a target pixel has Y neighbouring pixels (Y ≤ 8); if the grey value of the target pixel equals the grey value of some pixel X among those Y pixels, the target pixel and pixel X are said to be connected.
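Under the stated assumption that the edge image is a binary grid (lists of 0/1 rows), the eight-connectivity partition can be sketched as a simple flood-fill labelling. This is an illustrative implementation, not the patent's.

```python
def label_regions_8(binary):
    # Partition a binary edge image into connected regions using the
    # eight-connectivity rule: two foreground pixels belong to the same
    # region when one lies among the other's eight neighbours.
    # Returns (label grid, number of regions); label 0 = background.
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not labels[r][c]:
                next_label += 1
                labels[r][c] = next_label
                stack = [(r, c)]
                while stack:  # iterative flood fill
                    y, x = stack.pop()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny][nx]
                                    and not labels[ny][nx]):
                                labels[ny][nx] = next_label
                                stack.append((ny, nx))
    return labels, next_label
```

Note that under eight-connectivity, diagonally adjacent edge pixels merge into one region, which is what keeps thin diagonal edges from fragmenting.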
Step 12: if the coverage of an edge-connected region meets the predetermined requirement but its image quality does not, downsample the image contained in that edge-connected region.
Here, the coverage meeting the predetermined requirement can specifically mean that the edge-connected region contains at least two complete straight edges from which a complete quadrilateral can be obtained by extension and completion; the image quality failing the predetermined requirement can mean that the number of local feature points in the edge-connected region does not reach a predetermined number.
Step 13: determine the camera position information from the image contained in the downsampled edge-connected region and a pre-saved sampling template.
Specifically, each sampling template records the information of a target image whose corresponding camera position is known. The image contained in the downsampled edge-connected region and the target images in the sampling templates are normalized against each other to determine which target object the downsampled region image matches, and the camera position information can then be determined from the matched target object.
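A hedged sketch of step 13, assuming images are plain lists of pixel rows: the region image is downsampled by 2x2 averaging and scored against each stored template with zero-mean normalized correlation. The function names and the correlation score are illustrative stand-ins for the patent's normalization and matching.

```python
def downsample2(img):
    # 2x downsampling by averaging non-overlapping 2x2 blocks -- a
    # minimal stand-in for the patent's downsampling step.
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*r][2*c] + img[2*r][2*c+1] +
              img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4.0
             for c in range(w)] for r in range(h)]

def normalized_correlation(a, b):
    # Zero-mean normalized correlation between two equal-size images,
    # used here to score a region against a stored sampling template.
    flat_a = [v for row in a for v in row]
    flat_b = [v for row in b for v in row]
    ma = sum(flat_a) / len(flat_a)
    mb = sum(flat_b) / len(flat_b)
    num = sum((x - ma) * (y - mb) for x, y in zip(flat_a, flat_b))
    da = sum((x - ma) ** 2 for x in flat_a) ** 0.5
    db = sum((y - mb) ** 2 for y in flat_b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_template(region, templates):
    # Pick the stored template that best matches the downsampled region;
    # the template's known camera pose would then be reused.
    small = downsample2(region)
    return max(templates,
               key=lambda name: normalized_correlation(small, templates[name]))
```

Because the template's camera pose is known in advance, finding the best-scoring template immediately yields the camera position information without any feature extraction on the low-quality region.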
Step 14: once the camera position information has been determined, the augmented-reality processing can readily be performed according to it.
In the embodiment of the invention, besides the augmented-reality processing of edge-connected regions whose coverage meets the predetermined requirement but whose image quality does not, corresponding augmented-reality processing can also be performed for edge-connected regions in the other situations, specifically:
(1) Edge-connected regions whose coverage and image quality both meet the predetermined requirements
If both the coverage and the image quality of the edge-connected region meet the predetermined requirements, the camera position information is determined by exact matching.
Specifically, determining the camera position information by exact matching can comprise:
statistically matching the local feature points in the edge-connected region against the feature points in a predetermined training set to obtain the local feature points whose matching probability meets the requirement;
according to the geometric constraints on the positions of the local features of the edge-connected region, selecting the reasonable local feature points among those whose matching probability meets the requirement;
determining the camera position information from the selected reasonable local feature points.
(2) Edge-connected regions whose coverage does not meet the predetermined requirement but whose image quality does
If the coverage of the edge-connected region does not meet the predetermined requirement but the image quality does, judge whether the edge-connected region is the estimated region of a target object.
If it is, determine the camera position information by feature-point tracking; otherwise, determine it by matching based on invariant feature operators.
The matching based on invariant feature operators can comprise, without being limited to, the following: first, extract the invariant feature operators of the local feature points contained in the edge-connected region; then, match the extracted invariant feature operators against pre-saved invariant feature operators and determine the successfully matched local feature points from the matching result; next, according to the geometric constraints on the positions of the local features of the edge-connected region, select the reasonable successfully matched local feature points; finally, determine the camera position information from the reasonable successfully matched local feature points.
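The "reasonable match" selection by geometric constraint can be illustrated by a deliberately crude filter: estimate a dominant translation from the median offset of the matched point pairs and discard pairs inconsistent with it. This is a simplified stand-in under stated assumptions; a real system would typically fit a full homography with RANSAC.

```python
def filter_matches_by_translation(pairs, tol=2.0):
    # Geometric-constraint filter over matched point pairs, each pair
    # being ((x, y) in the region, (x, y) in the model): keep only the
    # pairs whose offset agrees with the median offset within `tol`.
    def median(vals):
        s = sorted(vals)
        return s[len(s) // 2]

    dx = median([bx - ax for (ax, ay), (bx, by) in pairs])
    dy = median([by - ay for (ax, ay), (bx, by) in pairs])
    return [p for p in pairs
            if abs((p[1][0] - p[0][0]) - dx) <= tol
            and abs((p[1][1] - p[0][1]) - dy) <= tol]
```

The surviving pairs play the role of the "reasonable successfully matched local feature points" from which the camera position is then computed.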
(3) Edge-connected regions whose coverage and image quality both fail the predetermined requirements
If neither the coverage nor the image quality of the edge-connected region meets the predetermined requirement, the camera position information cannot be determined.
In the embodiment of the invention, to improve the efficiency of the augmented-reality processing, a main thread and an image-rendering thread, or a main thread, an image-rendering thread and one or more sub-region processing threads, can also be used, wherein:
the main thread selects unprocessed edge-connected regions and performs the operation of determining the camera position information;
the image-rendering thread performs the augmented-reality processing on edge-connected regions for which the camera position information has been determined;
the sub-region processing threads select unprocessed edge-connected regions and perform the operation of determining the camera position information, so that when the captured real image comprises several edge-connected regions, the camera position information of the individual regions can be determined in parallel.
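A minimal sketch of this multithreaded split, using only the standard library: worker threads drain a queue of unprocessed region identifiers and compute a stubbed camera pose; the per-region computation and the pose values are placeholders, not the patent's implementation.

```python
import threading
import queue

def pose_for_region(region_id):
    # Placeholder for the per-region camera-pose computation.
    return (region_id, (0.0, 0.0, float(region_id)))

def process_regions(region_ids, workers=3):
    # Main thread fills the queue with unprocessed edge-connected
    # regions; sub-region worker threads process them in parallel.
    todo = queue.Queue()
    for rid in region_ids:
        todo.put(rid)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                rid = todo.get_nowait()
            except queue.Empty:
                return  # no regions left to process
            pose = pose_for_region(rid)
            with lock:
                results.append(pose)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # The pose map would be handed to the image-rendering thread.
    return dict(results)
```

In the full scheme a separate rendering thread would consume these poses as they arrive rather than waiting for all regions, which is what allows per-frame latency to stay low.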
The technical scheme that the embodiment of the invention provides can be blocked to target and have very strong adaptability with incompleteness.And the target size that can arrive camera collection requires to reduce, and namely when target during away from camera, also has very strong robustness.Simultaneously, the technical scheme that provides of the embodiment of the invention is with respect to the computation amount of Marker-less AR scheme of the prior art.
To facilitate understanding, specific embodiments of the invention are further described below with reference to the accompanying drawings.
In a specific implementation, the invention may comprise an image preprocessing procedure applied to the captured image, a first and a second procedure for determining camera position information from the preprocessed image, and, finally, the augmented-reality processing performed according to the determined camera position information.
The implementation of each procedure is described below with reference to the drawings.
(1) Image preprocessing procedure
To reduce the large computational load of augmented-reality processing, the embodiment of the invention preprocesses the captured video image.
As shown in Fig. 2, the image preprocessing may comprise the following steps:
Step 21: perform adaptive edge detection on the image; the edge detection may threshold the difference between the current image and its Gaussian-blurred version, or use any similar edge detection method;
Step 22: binarize the detected edges to obtain a binary edge image; if necessary, morphological operations such as dilation and erosion may additionally be applied to the binary image to strengthen the connectivity of the edges;
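The binarization clean-up of step 22 can be sketched as a morphological closing (dilation followed by erosion). This is a minimal hand-rolled illustration, not code from the embodiment; a real implementation would use an image-processing library, and the wrap-around of `np.roll` at the image border is ignored for simplicity.

```python
import numpy as np

def dilate3x3(img):
    """3x3 binary dilation: a pixel becomes 1 if it or any 8-neighbour is 1."""
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

def erode3x3(img):
    """3x3 binary erosion: a pixel stays 1 only if it and all 8-neighbours are 1."""
    out = np.ones_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

def close_edges(binary_edge_map):
    """Morphological closing (dilate, then erode) to bridge small gaps in edges."""
    return erode3x3(dilate3x3(binary_edge_map))
```

Closing bridges one-pixel gaps in an edge, which is exactly the "strengthen the connectivity" effect the step describes.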
Step 23: obtain, according to the eight-connectivity principle, the one or more edge connected regions of the captured image, and keep only the qualified regions, i.e. filter out edge connected regions whose area is too small;
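The eight-connectivity labelling with small-region filtering of step 23 can be sketched as a breadth-first search; the `min_area` threshold here is an illustrative parameter, not a value taken from the embodiment.

```python
from collections import deque

def connected_regions(binary, min_area=20):
    """Label 8-connected regions of a binary image and keep only those whose
    pixel count (area) is at least min_area. Returns a list of pixel lists."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not seen[r][c]:
                queue, region = deque([(r, c)]), []
                seen[r][c] = True
                while queue:  # BFS over the 8-neighbourhood
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w \
                                    and binary[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                queue.append((ny, nx))
                if len(region) >= min_area:
                    regions.append(region)
    return regions
```

Diagonally touching pixels fall into one region under eight-connectivity, which is why the step specifies the eight-connectivity principle rather than four-connectivity.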
Step 24: apply an expansion process to some of the qualified edge connected regions. If the expansion yields a complete quadrilateral, the quadrilateral area (i.e. the edge connected region) is a strong-possibility region for the presence of the target: its edges are clear, and the camera position information can probably be determined by tracking. Otherwise, if no complete quadrilateral can be obtained by expansion, the edge connected region is called a non-strong-possibility region;
The expansion may proceed as follows: an edge connected region that has at least two complete straight edges can be expanded by extending and completing the remaining edges. For example, as shown in Fig. 3, the other two edges of the region are extended until they close, to determine whether a complete quadrilateral is obtained;
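The extend-and-complete expansion relies on intersecting the extensions of straight edges: a missing quadrilateral corner lies where two extended edge lines meet. A minimal sketch of that line-intersection step is shown below; the point-plus-direction representation of an edge is an assumption for illustration.

```python
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersect two infinite lines given as point + direction.
    Returns the intersection point, or None for (near-)parallel lines."""
    # Solve p1 + t*d1 = p2 + s*d2  =>  [d1 | -d2] [t, s]^T = p2 - p1
    A = np.column_stack([np.asarray(d1, float), -np.asarray(d2, float)])
    if abs(np.linalg.det(A)) < 1e-9:
        return None  # parallel edges never close into a corner
    t, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t * np.asarray(d1, float)
```

Applied to the two incomplete edges of a partly occluded quadrilateral, this recovers the candidate fourth corner that decides whether the region is a strong-possibility region.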
Step 25: determine the type of each candidate quadrilateral edge connected region, so that the subsequent processing can determine the camera position information in a way suited to the type.
Specifically, after the captured image has been processed as above, a different procedure is used to determine the camera position information for each type of edge connected region:
for an edge connected region of the non-strong-possibility type, the first procedure is used, which determines the camera position information by invariant-feature identification and tracking; for an edge connected region of the strong-possibility type, the second procedure is used, which determines the camera position information by strongly adaptive identification and tracking controlled by a homography matrix.
(2) First procedure for determining the camera position from the preprocessed image
For edge connected regions of the non-strong-possibility type, this first procedure determines the camera position information by invariant-feature identification and tracking.
Specifically, as shown in Fig. 4, it may comprise the following steps:
Step 41: detect the local feature points in the edge connected region.
The local feature points here are the corner points inside the edge connected region. Corner points outside the region are unlikely to belong to the target object from which the camera position is determined, so only corner points inside the edge connected region are detected, which reduces the computational load of the camera-position determination.
In other words, in augmented-reality processing, the extraction and matching of invariant feature descriptors of local feature points is computationally expensive. Typically, FAST (Features from Accelerated Segment Test) is used to detect corner points whose response exceeds a preset value; a feature descriptor (e.g. a reduced-cost SIFT or SURF descriptor) is computed for each corner point; the descriptors detected online are matched against those in a database; and, because some matches are wrong, the PROSAC algorithm is then applied to obtain the most consistent match set. This process is cumbersome and costly. Therefore, to reduce the load, the embodiment of the invention detects corner points only inside the edge connected region, which avoids unnecessary descriptor computations, reduces the number of matching operations, raises the probability of correct matches, and reduces the number of PROSAC trials.
Step 42: after the corner points in the edge connected region have been detected, judge whether the region contains enough local feature points, i.e. more than a set quantity threshold. If so, go to step 43; otherwise the number of local feature points in this non-strong-possibility region is insufficient to guarantee the accuracy of the subsequent computation, i.e. the camera position information cannot be determined accurately, and the processing of this edge connected region ends.
Step 43: if the edge connected region contains enough local feature points, i.e. more than the set quantity threshold, check whether the region covers most of the estimated area of an already detected object (the target object), i.e. whether the region is the estimated area of the target object, to decide whether it can reliably serve as the basis for determining the camera position information. If so, go to step 44; otherwise go to step 45;
Step 44: perform the tracking computation, i.e. track the local feature points in the edge connected region, and go to step 48;
Step 45: extract the invariant feature descriptors of the local feature points in the edge connected region;
Step 46: match the extracted descriptors against the descriptors in the existing invariant-feature database to obtain the matching result, i.e. the successfully matched local feature points;
Step 47: apply the PROSAC algorithm to the matching result to remove inconsistent matches and obtain the consistent matches, i.e. the reasonably matched local feature points, and go to step 48;
Step 48: judge whether the number of local feature points tracked in step 44 exceeds a predetermined threshold; if so, go to step 49, otherwise the camera position information cannot be determined and the processing ends. Likewise, judge whether the number of local feature points in the consistent matching result of step 47 exceeds a preset threshold; if so, go to step 49, otherwise the camera position information cannot be determined and the processing ends;
Step 49: estimate the camera position information from the tracked local feature points, or from the consistently matched local feature points.
This procedure greatly reduces the computational load while still determining the camera position information reliably, and thus improves the processing efficiency.
(3) Second procedure for determining the camera position from the preprocessed image
For edge connected regions of the strong-possibility type, this second procedure determines the camera position information by strongly adaptive identification and tracking controlled by a homography matrix.
Specifically, as shown in Fig. 5, it may comprise the following steps:
Step 50: compute the homography matrix of the edge connected region and detect the local feature points within it, so that the subsequent processing can branch according to whether the number of detected local feature points exceeds a threshold (i.e. whether there are enough local feature points);
In a strong-possibility region, the four vertices of the quadrilateral corresponding to the edge connected region have already been detected, and there are four possible vertex correspondences, hence four candidate homography matrices, between these vertices and the target object. The homography matrix represents the correspondence between the image of the edge connected region and a predetermined image, the predetermined image being the one from which the training set used in the subsequent steps is built;
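A homography between the four detected vertices and the reference quadrilateral can be estimated with the standard direct linear transform (DLT); the sketch below is a generic DLT, not code from the embodiment, and each of the four candidate vertex orderings would be tried in turn.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src[i] -> dst[i] (4+ point pairs)
    by the direct linear transform; H is defined only up to scale."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)       # null-space vector = homography entries
    return H / H[2, 2]

def apply_h(H, pt):
    """Map a 2-D point through homography H (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

With exactly four correspondences the 8x9 system has a one-dimensional null space, so the last right-singular vector gives the unique (up-to-scale) homography.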
Step 51: judge whether the number of detected local feature points is sufficient. If it is, perform the classification-based matching of local feature points by statistical classification, i.e. go to step 52; otherwise go to step 58;
Step 52: when enough local feature points have been detected, select the training set suited to the camera position of the current edge connected region; the training sets may be built by offline training;
Step 53: perform the statistical matching of the local feature points against the selected training set;
The processing of steps 52 and 53 may be implemented with random ferns, i.e. random ferns may be used to identify matching local feature points quickly. Random-fern identification requires a database built as a training set from the predetermined image. The ferns are usually divided into M sub-ferns of size S, so the database size is proportional to M·2^S; by this formula the database for a single image can reach 32 MB. To reduce the memory footprint on devices such as mobile terminals, and to let the database support multiple objects (i.e. multiple training sets built from different images, each corresponding to a camera position), the database footprint must be reduced by decreasing S. A smaller S, however, lowers the recognition rate. To restore it, during the offline construction of the training sets the training samples (which come from the predetermined image, a reference image obtained in advance that embodies the camera position information) can be clustered and partitioned according to the parameter vector (t0, t1, t3, t4) of the affine transformation T = [t0 t1 t2; t3 t4 t5] used to generate them, and a training set is accumulated over the training space of each partition. Each training set thus records the parameter vectors of the images corresponding to a certain camera position, i.e. different training sets store the parameter vectors of the images of different camera positions.
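The memory argument above can be made concrete with a small calculation. The parameter values below (number of ferns M, fern size S, class count, bytes per entry) are purely illustrative assumptions, chosen only to show the exponential effect of S:

```python
def fern_db_bytes(M, S, n_classes, bytes_per_entry=4):
    """Memory footprint of a random-fern database: M sub-ferns, S binary
    tests each (2**S leaves per fern), one entry per class per leaf."""
    return M * (2 ** S) * n_classes * bytes_per_entry

# Reducing S cuts memory exponentially (sizes in MiB, illustrative parameters):
big = fern_db_bytes(M=30, S=11, n_classes=200) / 2**20    # ~46.9 MiB
small = fern_db_bytes(M=30, S=8, n_classes=200) / 2**20   # ~5.9 MiB
```

Dropping S by 3 shrinks the database by a factor of 2^3 = 8, which is why the embodiment compensates for the lost discriminative power by clustering training samples on the affine parameter vector.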
Once these training sets have been built, steps 52 and 53 can perform the matching of local feature points. During online matching against the training sets, the parameter vector corresponding to the homography matrix computed in step 50 is looked up in the training sets; then, by the nearest-Euclidean-distance principle, the nearest training cluster (i.e. the nearest parameter vector in a training set) is found, and the statistical identification of the local feature points is carried out, i.e. the local feature points are statistically matched against the parameter vectors of that training set.
Step 54: during the matching, judge whether the number of local feature points whose matching probability is satisfactory (e.g. greater than a predetermined probability threshold) exceeds a predetermined threshold. If so, go to step 55; otherwise return, i.e. the processing of this edge connected region ends;
Step 55: apply the PROSAC algorithm to the matching result to remove inconsistent matches and obtain the consistent matches, i.e. the reasonably matched local feature points, and go to step 56;
Step 56: judge whether the number of consistently matched local feature points exceeds a predetermined threshold; if so, go to step 57, otherwise the processing of this edge connected region ends;
Step 57: determine the camera position information from the consistently matched local feature points.
Step 58: down-sample the image within the edge connected region.
When there are not enough local feature points, the corresponding object may be far from the camera or severely motion-blurred; to adapt to this situation, the processing of steps 58 to 511 can be used.
First, the down-sampling is explained. As shown in Fig. 6, an ideal grid without camera perspective distortion (the sampling grid on the left of Fig. 6) is defined in advance; this grid covers the image at the centre of the mark. Through the homography matrix, the ideal grid can be mapped to the sampling grid on the right of Fig. 6. When the pose (i.e. position) of the camera changes, the mapped sampling grid adapts accordingly; this is an adaptive grid down-sampling process.
The grid sampling process is illustrated below with a K × K quadrilateral sampling grid as an example. The grid has (K+1) × (K+1) vertices, arranged as shown in Fig. 7, where the four vertices of each sampling cell G_ij are:
V_{G_ij} = { v_{(i-1)×(K+1)+j}, v_{(i-1)×(K+1)+j+1}, v_{i×(K+1)+j+1}, v_{i×(K+1)+j} },
i, j ∈ {1, ..., K}.
The pixels inside cell G_ij are classified as class (i, j), and the pixels of class (i, j) are averaged to obtain the sampled pixel; in this way, the K × K quadrilateral sampling grid down-samples the region to a small K × K image block.
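The adaptive grid down-sampling can be sketched as follows. For brevity this version samples the pixel nearest to each mapped cell centre instead of averaging all pixels classified to the cell, and the homography convention (unit square mapped to image coordinates) is an assumption for illustration.

```python
import numpy as np

def grid_downsample(image, H, K):
    """Adaptive K x K grid down-sampling: the centre of each cell of an ideal
    K x K grid over the unit square is mapped through homography H into the
    camera image and sampled, yielding a small K x K patch."""
    patch = np.zeros((K, K), dtype=float)
    for i in range(K):
        for j in range(K):
            gx, gy = (j + 0.5) / K, (i + 0.5) / K   # cell centre in ideal grid
            x, y, w = H @ np.array([gx, gy, 1.0])   # map through homography
            c, r = int(round(x / w)), int(round(y / w))
            patch[i, j] = image[r, c]
    return patch
```

Because the homography H changes with the camera pose, the sampling positions in the camera image adapt automatically, which is the "adaptive" property described above.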
Step 59: normalize the sampled image against the sampling templates in the database and determine the normalized cross-correlation coefficients;
Step 510: identify the orientation of the sampled image from the normalized cross-correlation coefficients obtained by the normalization;
Specifically, steps 59 and 510 may comprise: first, compute the normalized cross-correlation coefficient (NCC, Normalized Cross Correlation) between the sampled small image block and each small target image block (sampling template) in the database; then take the target block whose NCC exceeds a predetermined threshold and is the largest as the target image block matching the sampled block; finally, determine the orientation of the sampled image from this target image block;
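The NCC computation and template selection of steps 59 and 510 can be sketched as below; the acceptance threshold of 0.8 is an illustrative value, not one from the embodiment.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches: 1.0 for
    identical patches (up to affine brightness change), near 0 for unrelated."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_template(patch, templates, threshold=0.8):
    """Return the index of the stored sampling template with the highest NCC
    against the sampled patch, or None if no score exceeds the threshold."""
    scores = [ncc(patch, t) for t in templates]
    best = int(np.argmax(scores))
    return best if scores[best] > threshold else None
```

Because NCC subtracts the mean and divides by the norms, the match is insensitive to global brightness and contrast changes, which is what makes it usable on heavily down-sampled, distant targets.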
Step 511: once the orientation of the sampled image has been determined, the camera position information is finally determined from that orientation.
To take full advantage of the per-region processing described above, the embodiment of the invention also proposes a scalable multithreaded implementation. Specifically, as shown in Fig. 8, this implementation may comprise three classes of threads:
The TM thread is the main thread; it preprocesses the captured image, selects unprocessed edge connected regions, and performs identification, tracking, and camera pose estimation (i.e. the camera position estimation).
The TR thread is the image rendering thread; for edge connected regions whose camera position information has been computed, it overlays and renders the virtual objects according to the computation result, realizing the augmented-reality processing.
The TOi threads are the optional sub-region processing threads (TO1 to TOn in Fig. 8); each selects an unprocessed edge connected region and performs identification, tracking, and camera pose estimation.
The multithreaded scheme can run on a single-core or multi-core processor, and the number of TOi threads can be configured according to the number of processor cores to achieve the best efficiency.
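Under the assumption that pose estimation on different edge connected regions is independent, the TM-to-TOi dispatch can be sketched with a thread pool; the function names and worker count are illustrative, not from the embodiment.

```python
from concurrent.futures import ThreadPoolExecutor

def estimate_pose(region):
    """Stand-in for the identification/tracking/pose-estimation work that a
    TOi sub-region thread performs on one edge connected region."""
    return {"region": region, "pose": "estimated"}

def process_frame(regions, n_workers=4):
    """TM-style dispatch: unprocessed edge connected regions are handed to a
    pool of TOi worker threads; the results would then be passed to the TR
    rendering thread for virtual-object overlay."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(estimate_pose, regions))
```

Setting `n_workers` to the number of processor cores mirrors the text's advice to configure the number of TOi threads according to the core count.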
Through the above embodiments of the invention, the local-feature-point processing is highly robust to occluded and incomplete targets. At the same time, when local feature points are scarce, the scheme can switch to adaptive grid down-sampling with NCC template matching, which greatly relaxes the required size of the target image captured by the camera and handles small, distant targets well, i.e. it remains robust when the target is far from the camera. Furthermore, the embodiment of the invention divides the image into small region blocks (the edge connected regions), so the best-suited scheme can be chosen for each block, and the computational load is much lower than that of prior-art marker-less AR schemes. Dividing the image into small blocks also makes an efficient, scalable multithreaded implementation easy to realize, further improving the efficiency of the augmented-reality processing.
A person of ordinary skill in the art will understand that all or part of the flows of the above method embodiments can be accomplished by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the flows of the method embodiments above. The storage medium can be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
An embodiment of the invention provides a device for realizing augmented reality; as shown in Fig. 9, its implementation may comprise the following processing modules:
a connected region acquisition module 91, configured to obtain one or more edge connected regions of the captured real image;
a down-sampling processing module 92, configured to down-sample the image contained in an edge connected region obtained by the connected region acquisition module 91 when the coverage of that region meets the predetermined requirement but its image quality does not;
a first camera position determination module 93, configured to determine the camera position information from the image contained in the edge connected region down-sampled by the down-sampling processing module 92 and the pre-saved sampling templates;
an augmented reality processing module 94, configured to perform the augmented-reality processing according to the camera position information determined by the first camera position determination module 93.
Further, the device may also comprise a connected region division module 95, configured to binarize the edges of the captured real image before the connected region acquisition module 91 obtains the one or more edge connected regions, and to divide the captured real image into one or more edge connected regions according to the eight-connectivity principle, so that they can be provided to the connected region acquisition module 91 when needed.
In this device, the coverage meeting the predetermined requirement may mean that the edge connected region has at least two complete straight edges and can be completed into a full quadrilateral by extension; the image quality failing to meet the predetermined requirement may mean that the number of local feature points in the edge connected region does not reach a predetermined quantity.
Specifically, the device may also comprise an exact-matching-based camera position determination module 96, configured to determine the camera position information by exact matching when both the coverage and the image quality of the edge connected region meet the predetermined requirements. The exact-matching-based camera position determination module 96 may comprise:
a statistical matching module 961, configured to statistically match the local feature points in the edge connected region against the feature points in a predetermined training set and obtain the local feature points whose matching probability is satisfactory;
a consistent match determination module 962, configured to determine, according to the geometric constraints on the positions of local features within the edge connected region, the consistent local feature points among the satisfactory points obtained by the statistical matching module 961;
a second camera position determination module 963, configured to determine the camera position information from the consistent local feature points determined by the consistent match determination module 962.
Specifically, the device may also comprise:
a judging module 97, configured to judge, when the coverage of the edge connected region does not meet the predetermined requirement but its image quality does, whether the edge connected region is the estimated area of the target object;
a third camera position determination module 98, configured to determine the camera position information by feature point tracking when the judging module 97 determines that the edge connected region is the estimated area of the target object;
a fourth camera position determination module 99, configured to determine the camera position information by invariant-feature-descriptor matching when the judging module 97 determines that the edge connected region is not the estimated area of the target object.
Further, the fourth camera position determination module 99 may comprise:
an invariant feature descriptor extraction module 991, configured to extract the invariant feature descriptors of the local feature points contained in the edge connected region;
an invariant feature descriptor matching module 992, configured to match the descriptors extracted by the invariant feature descriptor extraction module 991 against the pre-saved descriptors and determine the successfully matched local feature points from the matching result;
a consistent match determination module 993, configured to determine, according to the geometric constraints on the positions of local features within the edge connected region, the consistently matched local feature points among those matched by the invariant feature descriptor matching module 992;
a camera position determination submodule 994, configured to determine the camera position information from the consistently matched local feature points determined by the consistent match determination module 993.
Specifically, the device may also comprise a main thread processing module 910 and an image rendering thread processing module 911, or a main thread processing module 910, an image rendering thread processing module 911, and one or more sub-region thread processing modules 912, where:
the main thread processing module 910 selects an unprocessed edge connected region and performs the camera-position determination;
the image rendering thread processing module 911 performs the augmented-reality processing on edge connected regions whose camera position information has been determined;
each sub-region thread processing module 912 selects an unprocessed edge connected region and performs the camera-position determination.
With the above processing scheme, the augmented-reality processing is highly robust to occluded and incomplete targets. At the same time, the required size of the target image captured by the camera is greatly relaxed, so small, distant targets are handled well, i.e. the scheme remains robust when the target is far from the camera. In addition, the embodiment of the invention can choose the best-suited scheme for each image block, so the computational load is much lower; and the multithreaded scheme can further improve the efficiency of the augmented-reality processing.
It should be noted that the functions realized by the processing units of the above device are described in detail in the foregoing embodiments and are not repeated here.
The device for realizing augmented reality provided by the embodiment of the invention may run on an entity apparatus as shown in Fig. 10. The entity apparatus may be a digital device (such as a digital camera), an intelligent mobile terminal (such as a smartphone or tablet computer), or another similar electronic device, and may comprise:
a camera module, configured to capture images; the captured images may be video or pictures;
a processor, configured to process information, e.g. the images captured by the camera module and the control information input by the user;
a display device, configured to show the user the information to be viewed, such as the video or pictures processed by the processor;
a user interface, configured to provide the interface through which the user interacts with the device;
a storage module, configured to store data such as video or pictures locally;
a communication module, configured to communicate with the network, so that local information can be transmitted over the network and information from the network can be received.
In this apparatus, the augmented-reality system may reside in the processor; the information involved in the processing may be stored in the storage module, displayed on the display device, obtained from the network through the communication module, or sent to the network through the communication module. During the processing, the user may also control the procedure through the user interface.
A device for realizing augmented reality arranged in the above entity apparatus is shown in Fig. 11 and may comprise a mixed-mark augmented-reality preprocessing module, an invariant-feature identification and tracking module operating on local image blocks, and a homography-matrix-controlled strongly adaptive identification and tracking module, where:
the mixed-mark augmented-reality preprocessing module performs the image preprocessing on the captured image; the invariant-feature identification and tracking module on local image blocks performs the first procedure for determining the camera position information from the preprocessed image; and the homography-matrix-controlled strongly adaptive identification and tracking module performs the second procedure for determining the camera position information from the preprocessed image. The preprocessing procedure and the first and second procedures have been described above and are not repeated here.
The those skilled in the art can be well understood to, be the convenience described and succinct, only the division with above-mentioned each functional module is illustrated, in the practical application, can as required the above-mentioned functions distribution be finished by different functional modules, the inner structure that is about to device is divided into different functional modules, to finish all or part of function described above.The specific works process of the device of foregoing description and module can with reference to the corresponding process among the preceding method embodiment, not repeat them here.
In several embodiment that the application provides, should be understood that disclosed apparatus and method can realize by another way.For example, device embodiment described above only is schematic, for example, the division of described module, only be that a kind of logic function is divided, during actual the realization other dividing mode can be arranged, for example a plurality of unit or assembly can in conjunction with or can be integrated into another system, or some features can ignore, or do not carry out.Another point, the shown or coupling each other discussed or direct-coupling or communication connection can be by some interfaces, indirect coupling or the communication connection of device or unit can be electrically, machinery or other form.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
The above are merely preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement that would readily occur to those skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (18)

1. A method for implementing augmented reality, characterized in that it comprises:
obtaining one or more edge connected regions of a collected real image;
if the coverage of an edge connected region meets a predetermined requirement but the image quality does not meet the predetermined requirement, performing downsampling processing on the image comprised in the edge connected region;
determining camera position information according to the image comprised in the downsampled edge connected region and a pre-saved sampling template; and
performing an augmented reality processing operation according to the camera position information.
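The case analysis that claims 1, 4, 6 and 8 spread across the claim set can be summarized as a small dispatch over the two tests (coverage, image quality). A minimal Python sketch — the function and strategy names are mine, not the patent's:

```python
def choose_strategy(coverage_ok: bool, quality_ok: bool,
                    is_estimated_target_region: bool = False) -> str:
    """Select a camera-pose strategy for one edge connected region.

    Mirrors the case split of claims 1, 4, 6 and 8:
      coverage ok,  quality ok  -> exact matching (claim 4)
      coverage ok,  quality bad -> downsample, then match against the
                                   pre-saved sampling template (claim 1)
      coverage bad, quality ok  -> feature-point tracking if the region is an
                                   estimated region of the target object,
                                   otherwise invariant-feature matching (claim 6)
      coverage bad, quality bad -> pose cannot be determined (claim 8)
    """
    if coverage_ok and quality_ok:
        return "exact_matching"
    if coverage_ok:
        return "downsample_then_template_matching"
    if quality_ok:
        return ("feature_point_tracking" if is_estimated_target_region
                else "invariant_feature_matching")
    return "undetermined"
```

The downsample-then-template branch is the subject of claim 1; the other branches are elaborated in claims 4 to 8.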
2. The method according to claim 1, characterized in that, before the obtaining one or more edge connected regions of the collected real image, the method further comprises:
performing binarization processing on the edges of the collected real image, and dividing the collected real image into one or more edge connected regions according to the eight-connectivity principle.
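The binarize-and-partition step of claim 2 is standard connected-component labeling under eight-connectivity. A pure-Python flood-fill sketch; a production implementation would more likely use an optimized routine such as OpenCV's `connectedComponents` with `connectivity=8`:

```python
def label_edge_regions(edges):
    """Partition a binary edge map (list of lists of 0/1) into
    eight-connected edge regions.  Returns a dict mapping region id
    to the list of (row, col) pixels in that region."""
    rows, cols = len(edges), len(edges[0])
    labels = [[0] * cols for _ in range(rows)]
    regions, next_id = {}, 1
    # Eight-connectivity: the four axial plus the four diagonal neighbours.
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    for r in range(rows):
        for c in range(cols):
            if edges[r][c] and not labels[r][c]:
                stack, regions[next_id] = [(r, c)], []
                labels[r][c] = next_id
                while stack:
                    y, x = stack.pop()
                    regions[next_id].append((y, x))
                    for dy, dx in nbrs:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and edges[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_id
                            stack.append((ny, nx))
                next_id += 1
    return regions
```

Under four-connectivity, diagonally touching edge pixels would fall into separate regions; the eight-connectivity principle named in claim 2 keeps diagonal edge runs together.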
3. The method according to claim 1, characterized in that the coverage meeting the predetermined requirement means that the edge connected region has at least two complete straight-line edges, from which a complete quadrilateral can be obtained by extension and completion; and the image quality not meeting the predetermined requirement means that the number of local feature points in the edge connected region does not reach a predetermined number.
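Claim 1 leaves the downsampling scheme open. One common choice, shown here purely as an assumption (the patent fixes neither the factor nor the pooling rule), is 2×2 average pooling, which halves the resolution of the region's image before it is matched against the sampling template:

```python
def downsample_2x(img):
    """Halve a grayscale image (list of lists with even dimensions)
    by averaging each non-overlapping 2x2 block of pixels."""
    return [
        [(img[2 * r][2 * c] + img[2 * r][2 * c + 1]
          + img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4.0
         for c in range(len(img[0]) // 2)]
        for r in range(len(img) // 2)
    ]
```

Averaging (rather than simply dropping pixels) also acts as a mild low-pass filter, which is why it is a reasonable choice when the region's image quality is already below the predetermined requirement.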
4. The method according to claim 1, 2 or 3, characterized in that the method further comprises:
if both the coverage and the image quality of the edge connected region meet the predetermined requirements, determining the camera position information by exact matching.
5. The method according to claim 4, characterized in that the determining the camera position information by exact matching comprises:
matching, in a statistical manner, the local feature points in the edge connected region against the feature points in a predetermined training set, to obtain the local feature points whose matching probability meets the requirement;
determining, among the local feature points whose matching probability meets the requirement, the reasonable local feature points according to the geometric constraints on the positions of the local features in the edge connected region; and
determining the camera position information according to the reasonable local feature points.
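Claim 5's two-stage selection — keep candidates whose statistical matching probability clears a threshold, then discard those inconsistent with the region's geometry — can be sketched as follows. The threshold values and the per-candidate `residual` field (reprojection error under the region's geometric constraint, in pixels) are illustrative assumptions; the patent does not fix them:

```python
def filter_matches(candidates, prob_threshold=0.8, max_residual=3.0):
    """Two-stage filter over candidate feature matches.

    candidates: list of dicts with keys
      'prob'     - statistical matching probability against the training set
      'residual' - geometric-constraint residual of the point, in pixels
    Stage 1 keeps points whose probability meets the requirement;
    stage 2 keeps only those geometrically consistent with the region.
    """
    probable = [c for c in candidates if c["prob"] >= prob_threshold]
    return [c for c in probable if c["residual"] <= max_residual]
```

Only the points surviving both stages (the "reasonable" points of claim 5) would then be handed to the pose solver.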
6. The method according to claim 1, 2 or 3, characterized in that the method further comprises:
if the coverage of the edge connected region does not meet the predetermined requirement but the image quality meets the predetermined requirement, judging whether the edge connected region is an estimated region of a target object; and
if the edge connected region is an estimated region of the target object, determining the camera position information by feature point tracking; otherwise, determining the camera position information by matching based on invariant feature operators.
7. The method according to claim 6, characterized in that the determining the camera position information by matching based on invariant feature operators comprises:
extracting the local feature points comprised in the edge connected region;
matching the extracted local feature points against the pre-saved invariant feature operators, and determining the successfully matched local feature points according to the matching result;
determining, among the successfully matched local feature points, the reasonably matched local feature points according to the geometric constraints on the positions of the local features in the edge connected region; and
determining the camera position information according to the reasonably matched local feature points.
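Matching extracted local features against pre-saved invariant feature operators, as in claim 7, is in essence nearest-neighbour descriptor matching. A sketch using Euclidean distance and Lowe's ratio test — the descriptor representation, the 0.7 ratio and the rejection rule are my assumptions, since the patent does not name a specific operator such as SIFT or SURF:

```python
import math

def match_descriptors(extracted, saved, ratio=0.7):
    """For each extracted descriptor (tuple of floats), find its nearest
    and second-nearest neighbours among the pre-saved descriptors and
    accept the match only if nearest/second-nearest < ratio (Lowe's
    ratio test).  Returns (extracted_index, saved_index) pairs."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    matches = []
    for i, d in enumerate(extracted):
        ranked = sorted(range(len(saved)), key=lambda j: dist(d, saved[j]))
        if len(ranked) >= 2:
            best = dist(d, saved[ranked[0]])
            second = dist(d, saved[ranked[1]])
            # Ambiguous matches (best nearly as far as second-best) are dropped.
            if second > 0 and best / second < ratio:
                matches.append((i, ranked[0]))
    return matches
```

The surviving pairs would then be pruned by the geometric constraint of claim 7 before the camera position is solved for.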
8. The method according to claim 1, 2 or 3, characterized in that the method further comprises:
if neither the coverage nor the image quality of the edge connected region meets the predetermined requirement, determining that the camera position information cannot be obtained.
9. The method according to claim 1, 2 or 3, characterized in that the method is performed by a main thread and an image rendering thread, or by a main thread, an image rendering thread and one or more sub-region processing threads, wherein:
the main thread is configured to select an unprocessed edge connected region and perform the operation of determining the camera position information;
the image rendering thread is configured to perform the augmented reality processing operation on the edge connected regions for which the camera position information has been determined; and
each sub-region processing thread is configured to select an unprocessed edge connected region and perform the operation of determining the camera position information.
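The thread division of claim 9 — pose-determining threads (the main thread plus optional sub-region processing threads) pulling unprocessed edge connected regions, and one rendering thread consuming the determined poses — maps naturally onto a producer/consumer queue. A sketch with Python's standard `threading` and `queue` modules; the queue protocol and callback signatures are my assumptions:

```python
import queue
import threading

def run_pipeline(regions, determine_pose, render, n_workers=2):
    """Pose workers (claim 9's main/sub-region threads) drain `todo`;
    one rendering thread (claim 9's image rendering thread) consumes
    (region, pose) results and performs the AR overlay via `render`."""
    todo, done = queue.Queue(), queue.Queue()
    for region in regions:
        todo.put(region)

    def pose_worker():
        while True:
            try:
                region = todo.get_nowait()   # claim an unprocessed region
            except queue.Empty:
                return
            done.put((region, determine_pose(region)))

    rendered = []

    def render_worker(expected):
        for _ in range(expected):
            region, pose = done.get()        # block until a pose is ready
            rendered.append(render(region, pose))

    workers = [threading.Thread(target=pose_worker) for _ in range(n_workers)]
    renderer = threading.Thread(target=render_worker, args=(len(regions),))
    for t in workers:
        t.start()
    renderer.start()
    for t in workers:
        t.join()
    renderer.join()
    return rendered
```

Because each region is independent, adding sub-region worker threads scales pose estimation without blocking rendering of regions whose pose is already known.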
10. An apparatus for implementing augmented reality, characterized in that it comprises:
a connected region acquisition module, configured to obtain one or more edge connected regions of a collected real image;
a downsampling processing module, configured to perform downsampling processing on the image comprised in an edge connected region when the coverage of the edge connected region obtained by the connected region acquisition module meets a predetermined requirement but the image quality does not meet the predetermined requirement;
a first camera position determination module, configured to determine camera position information according to the image comprised in the edge connected region downsampled by the downsampling processing module and a pre-saved sampling template; and
an augmented reality processing module, configured to perform an augmented reality processing operation according to the camera position information determined by the first camera position determination module.
11. The apparatus according to claim 10, characterized in that the apparatus further comprises:
a connected region division module, configured to, before the connected region acquisition module obtains the one or more edge connected regions of the collected real image, perform binarization processing on the edges of the collected real image and divide the collected real image into one or more edge connected regions according to the eight-connectivity principle.
12. The apparatus according to claim 10, characterized in that the coverage meeting the predetermined requirement means that the edge connected region has at least two complete straight-line edges, from which a complete quadrilateral can be obtained by extension and completion; and the image quality not meeting the predetermined requirement means that the number of local feature points in the edge connected region does not reach a predetermined number.
13. The apparatus according to claim 10, 11 or 12, characterized in that the apparatus further comprises:
an exact-matching-based camera position determination module, configured to determine the camera position information by exact matching when both the coverage and the image quality of the edge connected region meet the predetermined requirements.
14. The apparatus according to claim 13, characterized in that the exact-matching-based camera position determination module comprises:
a statistical matching module, configured to match, in a statistical manner, the local feature points in the edge connected region against the feature points in a predetermined training set, to obtain the local feature points whose matching probability meets the requirement;
a reasonable match determination module, configured to determine, among the local feature points obtained by the statistical matching module, the reasonable local feature points according to the geometric constraints on the positions of the local features in the edge connected region; and
a second camera position determination module, configured to determine the camera position information according to the reasonable local feature points determined by the reasonable match determination module.
15. The apparatus according to claim 10, 11 or 12, characterized in that the apparatus further comprises:
a judging module, configured to judge whether the edge connected region is an estimated region of a target object when the coverage of the edge connected region does not meet the predetermined requirement but the image quality meets the predetermined requirement;
a third camera position determination module, configured to determine the camera position information by feature point tracking when the judging module determines that the edge connected region is an estimated region of the target object; and
a fourth camera position determination module, configured to determine the camera position information by matching based on invariant feature operators when the judging module determines that the edge connected region is not an estimated region of the target object.
16. The apparatus according to claim 15, characterized in that the fourth camera position determination module comprises:
an invariant feature operator extraction module, configured to extract the invariant feature operators of the local feature points comprised in the edge connected region;
an invariant feature operator matching module, configured to match the invariant feature operators extracted by the invariant feature operator extraction module against the pre-saved invariant feature operators, and determine the successfully matched local feature points according to the matching result;
a reasonable match determination module, configured to determine, among the successfully matched local feature points, the reasonably matched local feature points according to the geometric constraints on the positions of the local features in the edge connected region; and
a camera position determination submodule, configured to determine the camera position information according to the reasonably matched local feature points determined by the reasonable match determination module.
17. The apparatus according to claim 10, 11 or 12, characterized in that the apparatus further comprises: a main thread processing module and an image rendering thread processing module, or a main thread processing module, an image rendering thread processing module and one or more sub-region processing thread modules, wherein:
the main thread processing module is configured to select an unprocessed edge connected region and perform the operation of determining the camera position information;
the image rendering thread processing module is configured to perform the augmented reality processing operation on the edge connected regions for which the camera position information has been determined; and
each sub-region processing thread module is configured to select an unprocessed edge connected region and perform the operation of determining the camera position information.
18. The apparatus according to any one of claims 10 to 17, characterized in that the apparatus comprises: a digital camera, a camera-equipped mobile phone, or a computer.
CN201210532047.3A 2012-12-11 2012-12-11 Method and device for implementing augmented reality Expired - Fee Related CN103035003B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210532047.3A CN103035003B (en) 2012-12-11 2012-12-11 Method and device for implementing augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210532047.3A CN103035003B (en) 2012-12-11 2012-12-11 Method and device for implementing augmented reality

Publications (2)

Publication Number Publication Date
CN103035003A true CN103035003A (en) 2013-04-10
CN103035003B CN103035003B (en) 2015-09-09

Family

ID=48021870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210532047.3A Expired - Fee Related CN103035003B (en) Method and device for implementing augmented reality

Country Status (1)

Country Link
CN (1) CN103035003B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106980371A * 2017-03-24 2017-07-25 电子科技大学 Mobile augmented reality interaction method based on a neighboring heterogeneous distributed architecture
CN107248169A * 2016-03-29 2017-10-13 中兴通讯股份有限公司 Image positioning method and device
CN107798703A * 2016-08-30 2018-03-13 成都理想境界科技有限公司 Real-time image overlay method and device for augmented reality
CN108305316A * 2018-03-08 2018-07-20 网易(杭州)网络有限公司 AR scene-based rendering method, device, medium and computing device
CN108537889A * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 Augmented reality model adjustment method and device, storage medium and electronic device
CN108734059A * 2017-04-18 2018-11-02 深圳市丰巨泰科电子有限公司 Indoor mobile robot object recognition method
CN109427099A * 2017-08-29 2019-03-05 深圳市掌网科技股份有限公司 Surface-based augmented information display method and system
CN109828791A * 2018-12-28 2019-05-31 北京奇艺世纪科技有限公司 Animation playing method, terminal and computer-readable storage medium
CN110620924A * 2019-09-23 2019-12-27 广州虎牙科技有限公司 Method and device for processing encoded data, computer device and storage medium
WO2023245488A1 (en) * 2022-06-22 2023-12-28 Snap Inc. Double camera streams

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050163344A1 (en) * 2003-11-25 2005-07-28 Seiko Epson Corporation System, program, and method for generating visual-guidance information
CN101625762A * 2009-06-19 2010-01-13 深圳市中瀛鑫科技发展有限公司 Target segmentation method and target segmentation device
CN101763632A * 2008-12-26 2010-06-30 华为技术有限公司 Camera calibration method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050163344A1 (en) * 2003-11-25 2005-07-28 Seiko Epson Corporation System, program, and method for generating visual-guidance information
CN101763632A * 2008-12-26 2010-06-30 华为技术有限公司 Camera calibration method and device
CN101625762A * 2009-06-19 2010-01-13 深圳市中瀛鑫科技发展有限公司 Target segmentation method and target segmentation device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ANDREW I. COMPORT et al.: "A real-time tracker for markerless augmented reality", Proceedings of the Second IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 03), 10 October 2003 (2003-10-10), pages 36-45 *
G. SIMON et al.: "A two-stage robust statistical method for temporal registration from features of various type", Sixth International Conference on Computer Vision, 1998, 7 January 1998 (1998-01-07), pages 261-266 *
Y. GENC et al.: "Marker-less Tracking for AR: A Learning-Based Approach", Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR 02), 31 December 2002 (2002-12-31), pages 295-304 *
ZHAO Xincan et al.: "Augmented reality registration algorithm based on natural features", Journal of South China University of Technology (Natural Science Edition), vol. 35, no. 5, 31 May 2007 (2007-05-31), pages 41-45 *
GU Yaolin et al.: "Augmented reality registration method based on projection technology", Computer Engineering and Applications, vol. 44, no. 10, 31 December 2008 (2008-12-31), pages 59-61 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107248169A * 2016-03-29 2017-10-13 中兴通讯股份有限公司 Image positioning method and device
CN107248169B * 2016-03-29 2021-01-22 中兴通讯股份有限公司 Image positioning method and device
CN107798703A * 2016-08-30 2018-03-13 成都理想境界科技有限公司 Real-time image overlay method and device for augmented reality
CN106980371B * 2017-03-24 2019-11-05 电子科技大学 Mobile augmented reality interaction method based on a neighboring heterogeneous distributed architecture
CN106980371A * 2017-03-24 2017-07-25 电子科技大学 Mobile augmented reality interaction method based on a neighboring heterogeneous distributed architecture
CN108734059B * 2017-04-18 2022-02-11 深圳市丰巨泰科电子有限公司 Object recognition method for indoor mobile robot
CN108734059A * 2017-04-18 2018-11-02 深圳市丰巨泰科电子有限公司 Indoor mobile robot object recognition method
CN109427099A * 2017-08-29 2019-03-05 深圳市掌网科技股份有限公司 Surface-based augmented information display method and system
CN108305316A * 2018-03-08 2018-07-20 网易(杭州)网络有限公司 AR scene-based rendering method, device, medium and computing device
CN108537889A * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 Augmented reality model adjustment method and device, storage medium and electronic device
CN109828791A * 2018-12-28 2019-05-31 北京奇艺世纪科技有限公司 Animation playing method, terminal and computer-readable storage medium
CN109828791B * 2018-12-28 2022-03-22 北京奇艺世纪科技有限公司 Animation playing method, terminal and computer-readable storage medium
CN110620924A * 2019-09-23 2019-12-27 广州虎牙科技有限公司 Method and device for processing encoded data, computer device and storage medium
CN110620924B * 2019-09-23 2022-05-20 广州虎牙科技有限公司 Method and device for processing encoded data, computer device and storage medium
WO2023245488A1 (en) * 2022-06-22 2023-12-28 Snap Inc. Double camera streams

Also Published As

Publication number Publication date
CN103035003B (en) 2015-09-09

Similar Documents

Publication Publication Date Title
CN103035003B (en) Method and device for implementing augmented reality
CN107977639B (en) Face definition judgment method
Peng et al. Drone-based vacant parking space detection
US20090190798A1 (en) System and method for real-time object recognition and pose estimation using in-situ monitoring
CN101470809B (en) Moving object detection method based on expansion mixed gauss model
CN105868708A (en) Image object identifying method and apparatus
CN112950667B (en) Video labeling method, device, equipment and computer readable storage medium
CN103765880A (en) Networked capture and 3D display of localized, segmented images
CN101169827A (en) Method and device for tracking feature points in an image
CN108564579A (en) Concrete defect detection method and detection device based on spatio-temporal correlation
CN109409250A (en) Deep learning-based pedestrian re-identification method across cameras with non-overlapping fields of view
CN113850136A (en) Yolov5 and BCNN-based vehicle orientation identification method and system
CN106529441B (en) Depth motion map human action recognition method based on fuzzy boundary fragments
CN103353941A (en) Natural marker registration method based on viewpoint classification
CN107948586B (en) Cross-region moving object detection method and device based on video stitching
CN113160075A (en) Processing method and system for Apriltag visual positioning, wall-climbing robot and storage medium
CN101572770A (en) Motion detection method and device for real-time monitoring
Giang et al. TopicFM: Robust and interpretable topic-assisted feature matching
CN116311201A (en) Substation equipment state identification method and system based on image identification technology
CN107274382B (en) State identification method and device of hard pressing plate and electronic equipment
CN104978558B (en) Target recognition method and device
CN111161219B (en) Robust monocular vision SLAM method suitable for shadow environment
CN108805838A (en) Image processing method, mobile terminal and computer-readable storage medium
CN104867129A (en) Light field image segmentation method
Colombari et al. Background initialization in cluttered sequences

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150909

Termination date: 20181211