CN105513083A - PTAM camera tracking method and device - Google Patents

PTAM camera tracking method and device

Info

Publication number
CN105513083A
CN105513083A
Authority
CN
China
Prior art keywords
camera
feature point
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201511023879.2A
Other languages
Chinese (zh)
Other versions
CN105513083B (en)
Inventor
刘洁 (Liu Jie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sina Technology China Co Ltd
Original Assignee
Sina Technology China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sina Technology China Co Ltd filed Critical Sina Technology China Co Ltd
Priority to CN201511023879.2A priority Critical patent/CN105513083B/en
Publication of CN105513083A publication Critical patent/CN105513083A/en
Application granted granted Critical
Publication of CN105513083B publication Critical patent/CN105513083B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10021 Stereoscopic video; Stereoscopic image sequence

Landscapes

  • Studio Devices (AREA)

Abstract

Embodiments of the invention provide a PTAM camera tracking method and device. The method comprises: obtaining, from image information input to a camera, an estimated pose of the camera for the current image frame; projecting three-dimensional feature points with known positions in the real scene onto the image plane of the camera according to the estimated pose, matching feature points with the improved Binary Robust Independent Elementary Features (I-BRIEF) algorithm, and computing the actual pose of the camera for the current image frame; matching the estimated pose for the current frame against the actual pose to obtain a matching result; and adjusting the pose of the camera according to the matching result so as to perform image tracking of the camera. The method and device improve the tracking ability of a camera system under fast camera motion and varying illumination and have high robustness.

Description

PTAM camera tracking method and device
Technical field
The present invention relates to the field of camera tracking and mapping, and in particular to a PTAM (Parallel Tracking and Mapping) camera tracking method and device.
Background art
SLAM (Simultaneous Localization and Mapping) was first proposed by Smith, Self and Cheeseman in 1988. Because of its theoretical importance and practical value, many researchers regard it as the key to realizing truly autonomous mobile robots. The SLAM problem can be described as follows: a robot moves from an unknown position in an unknown environment, localizes itself during the movement from pose estimates and the map, and simultaneously builds an incremental map on the basis of that self-localization, thereby achieving autonomous localization and navigation.
A SLAM framework has two core procedures: one solves for the camera pose from the structural information of the scene, and the other reconstructs the three-dimensional structure of the scene from the recovered camera poses. We call the first step tracking and the second step mapping. In a SLAM framework the tracking and mapping tasks alternate: tracking depends on the scene structure obtained by mapping, and mapping in turn depends on the camera poses solved by tracking.
Although SLAM methods had been studied in robotics for many years, it was not until 2003 that Andrew Davison introduced them into computer vision and built the first real-time vision-based SLAM system. Since then, SLAM methods have been widely studied and adopted for camera pose estimation. Parallel Tracking and Mapping (PTAM) splits camera tracking into two independent tasks, tracking and mapping, each running in its own thread. Without affecting the real-time performance of tracking, this allows the slow but highly accurate Structure from Motion (SfM) techniques to be brought into the mapping task, and the recovered scene structure to be refined by bundle adjustment, which substantially improves the robustness and accuracy of the camera tracking algorithm while reducing the per-frame processing time at run time. In the PTAM tracking thread, feature point matching and search is a key step of camera pose estimation. The PTAM implementation matches feature points using a Sum of Squared Differences (SSD) description method, but we find that this method becomes unreliable under fast camera motion or illumination changes.
Summary of the invention
Embodiments of the present invention provide a PTAM camera tracking method and device to improve the tracking ability of a camera system under fast camera motion and illumination changes.
In one aspect, an embodiment of the invention provides a Parallel Tracking and Mapping (PTAM) camera tracking method, the method comprising:
obtaining, from image information input to a camera, an estimated pose of the camera for the current image frame;
projecting three-dimensional feature points with known positions in the real scene onto the image plane of the camera according to the estimated pose of the camera for the current image frame, matching feature points with the improved Binary Robust Independent Elementary Features (I-BRIEF) algorithm, and computing the actual pose of the camera for the current image frame;
matching the estimated pose of the camera for the current image frame against the actual pose to obtain a matching result;
adjusting the pose of the camera according to the matching result so as to perform image tracking of the camera.
In another aspect, an embodiment of the invention provides a Parallel Tracking and Mapping (PTAM) camera tracking device, the device comprising:
an estimated-pose acquisition unit, configured to obtain, from image information input to a camera, an estimated pose of the camera for the current image frame;
an I-BRIEF matching unit, configured to project three-dimensional feature points with known positions in the real scene onto the image plane of the camera according to the estimated pose of the camera for the current image frame, match feature points with the improved I-BRIEF algorithm, compute the actual pose of the camera for the current image frame, and match the estimated pose against the actual pose to obtain a matching result;
a pose adjustment unit, configured to adjust the pose of the camera according to the matching result so as to perform image tracking of the camera.
The above technical solution has the following beneficial effect: the embodiments of the invention use the I-BRIEF algorithm for feature point matching, which greatly improves the tracking stability of the camera system under the two difficult conditions of fast camera motion and illumination change, yielding higher robustness.
Brief description of the drawings
To explain the embodiments of the invention or the prior-art solutions more clearly, the drawings needed for their description are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; those of ordinary skill in the art can derive further drawings from them without creative effort.
Fig. 1 is a flowchart of a Parallel Tracking and Mapping (PTAM) camera tracking method according to an embodiment of the invention;
Fig. 2 is a schematic structural diagram of a PTAM camera tracking device according to an embodiment of the invention;
Fig. 3a is a schematic diagram of the composition of the I-BRIEF matching unit of an embodiment of the invention;
Fig. 3b is a schematic diagram of the composition of the pose adjustment unit of an embodiment of the invention;
Fig. 4 is a flow diagram of the tracking part of an application example of the invention;
Fig. 5 is a schematic diagram of feature point search and matching in PTAM for an application example of the invention;
Fig. 6 is a schematic diagram of the I-BRIEF computation process in an application example of the invention;
Fig. 7 shows the two groups of pictures used to test feature description methods under image blur (the bikes sequence) and dim illumination (the light sequence) in an application example of the invention;
Fig. 8 shows experimental results of the I-BRIEF algorithm on the standard test sequences under image blur (the bikes sequence) and dim illumination (the light sequence).
Detailed description of the embodiments
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only a part of the embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the invention.
As shown in Fig. 1, a Parallel Tracking and Mapping (PTAM) camera tracking method according to an embodiment of the invention comprises:
101. obtaining, from image information input to a camera, an estimated pose of the camera for the current image frame;
102. projecting three-dimensional feature points with known positions in the real scene onto the image plane of the camera according to the estimated pose, matching feature points with the improved Binary Robust Independent Elementary Features (I-BRIEF) algorithm, and computing the actual pose of the camera for the current image frame;
103. matching the estimated pose for the current frame against the actual pose to obtain a matching result;
104. adjusting the pose of the camera according to the matching result so as to perform image tracking of the camera.
Preferably, matching feature points with the improved Binary Robust Independent Elementary Features (I-BRIEF) algorithm comprises: inputting an image patch centered on a feature point; selecting, within an image patch of preset size centered on the feature point, a series of test pixel pairs each consisting of two pixel locations, these pairs forming a set of test pairs; selecting a pair from the set, first applying local smoothing filtering at each of the pair's points and then comparing the gray values of its two pixels to obtain a comparison result; repeating this operation for every pair in the set and concatenating the comparison results to form the descriptor of the feature point; and outputting the descriptor of the feature point, so that feature points are matched according to the comparison results in the descriptor.
Preferably, the comparison result is one of dark, bright and similar, where the binary number 01 represents dark, 10 represents bright, and 00 represents similar.
Preferably, the pose of the camera comprises viewing-angle information of the camera.
Preferably, adjusting the pose of the camera according to the matching result so as to perform image tracking of the camera specifically comprises: estimating the tracking quality of the camera from the accuracy of the feature point matches of the image-plane projections of the three-dimensional feature points in each frame of the matching result, and adjusting the pose of the camera according to the tracking quality as follows. If the accuracy of correct matches is greater than or equal to a preset first threshold, the tracking quality is confirmed as good, no pose adjustment is made, and tracking continues as usual. If the accuracy of correct matches is below the preset first threshold but not below a preset second threshold, the tracking quality is confirmed as fair, no pose adjustment is made, and tracking continues as usual, but no new keyframes may be added to the tracked images; here the first threshold is higher than the second threshold. If the accuracy of correct matches is below the preset second threshold, the tracking quality is confirmed as poor, and the pose of the camera is adjusted from the correctly matched feature points. If the tracking quality of a preset number of consecutive image frames is poor, image tracking is confirmed as lost; the camera system then searches for the keyframe most similar to the current image, and the pose of the camera is recomputed from the most similar keyframe found so as to adjust the pose.
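The threshold logic above reads naturally as a small decision routine. Below is a minimal Python sketch of it; the concrete values of t_good, t_poor and max_bad_frames are assumptions made for illustration, since the embodiment only requires that the first threshold be higher than the second.

```python
def assess_tracking(correct_match_ratio, bad_streak,
                    t_good=0.7, t_poor=0.3, max_bad_frames=5):
    """Tracking-quality decision sketched from the embodiment above.

    correct_match_ratio: fraction of projected feature points matched correctly
    bad_streak: number of consecutive frames already rated poor
    t_good, t_poor, max_bad_frames: assumed illustrative values
    """
    if correct_match_ratio >= t_good:
        return "good"   # no pose adjustment, tracking continues as usual
    if correct_match_ratio >= t_poor:
        return "fair"   # tracking continues, but no new keyframes are added
    if bad_streak + 1 >= max_bad_frames:
        return "lost"   # find the most similar keyframe and recompute the pose
    return "poor"       # adjust the pose from the correctly matched points
```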
As shown in Fig. 2, a Parallel Tracking and Mapping (PTAM) camera tracking device according to an embodiment of the invention comprises:
an estimated-pose acquisition unit 21, configured to obtain, from image information input to a camera, an estimated pose of the camera for the current image frame;
an I-BRIEF matching unit 22, configured to project three-dimensional feature points with known positions in the real scene onto the image plane of the camera according to the estimated pose of the camera for the current image frame, match feature points with the improved I-BRIEF algorithm, compute the actual pose of the camera for the current image frame, and match the estimated pose against the actual pose to obtain a matching result;
a pose adjustment unit 23, configured to adjust the pose of the camera according to the matching result so as to perform image tracking of the camera.
Preferably, as shown in Fig. 3a, the I-BRIEF matching unit 22 comprises:
an input module 221 for inputting an image patch centered on a feature point;
a comparison module 222 for selecting, within an image patch of preset size centered on the feature point, a series of test pixel pairs each consisting of two pixel locations, these pairs forming a set of test pairs; selecting a pair from the set, first applying local smoothing filtering at each of the pair's points and then comparing the gray values of its two pixels to obtain a comparison result; and repeating this operation for every pair in the set and concatenating the comparison results to form the descriptor of the feature point;
an output module 223 for outputting the descriptor of the feature point, so that feature points are matched according to the comparison results in the descriptor.
Preferably, the comparison result is one of dark, bright and similar, represented by the binary numbers 01 (dark), 10 (bright) and 00 (similar).
Preferably, the pose of the camera comprises viewing-angle information of the camera.
Preferably, the pose adjustment unit is specifically configured to estimate the tracking quality of the camera from the accuracy of the feature point matches of the image-plane projections of the three-dimensional feature points in each frame of the matching result, and to adjust the pose of the camera according to the tracking quality.
As shown in Fig. 3b, the pose adjustment unit 23 specifically further comprises:
a first pose adjustment module 231 which, if the accuracy of correct matches is greater than or equal to a preset first threshold, confirms that the tracking quality is good, makes no pose adjustment, and lets tracking continue as usual;
a second pose adjustment module 232 which, if the accuracy of correct matches is below the preset first threshold but not below a preset second threshold, confirms that the tracking quality is fair, makes no pose adjustment, and lets tracking continue as usual while no new keyframes may be added to the tracked images, the first threshold being higher than the second threshold;
a third pose adjustment module 233 which, if the accuracy of correct matches is below the preset second threshold, confirms that the tracking quality is poor and adjusts the camera pose from the correctly matched feature points, and which, if the tracking quality of a preset number of consecutive image frames is poor, confirms that image tracking is lost, searches the camera system for the keyframe most similar to the current image, and recomputes the camera pose from the most similar keyframe found so as to adjust the pose.
The embodiments of the invention use the I-BRIEF algorithm for feature point matching, which greatly improves the tracking stability of the camera system under the two difficult conditions of fast camera motion and illumination change, yielding higher robustness.
The technical scheme of the embodiments is described in detail below through an application example:
In the prior-art PTAM implementation, the system matches feature points with a Sum of Squared Differences (SSD) description method, which becomes unreliable under fast camera motion or illumination changes. To improve tracking under these conditions, the application example of the invention uses the I-BRIEF feature description method for feature point matching, which greatly improves the tracking stability of the system in the two difficult situations above.
The PTAM system and the improvement of the application example of the invention are described below:
1. PTAM technology
Parallel Tracking and Mapping (PTAM) splits camera tracking into two independent tasks, tracking and mapping, each running in its own thread. Without affecting the real-time performance of tracking, this allows the slow but highly accurate Structure from Motion (SfM) techniques to be brought into the mapping task and the recovered scene structure to be refined by bundle adjustment, which substantially improves the robustness and accuracy of the camera tracking algorithm while reducing the per-frame processing time at run time.
As shown in Fig. 4, the flow of the tracking part of the application example comprises:
401. for each input frame, first estimating the likely pose of the camera at the current frame from the camera's previous information;
402. inputting the camera image and obtaining the image plane;
403. projecting the three-dimensional feature points of known position in the scene onto the image plane according to the estimated pose (a minimal projection sketch follows this list);
404. performing a coarse-level feature point search;
405. updating the camera pose from the coarse-level matches;
406. re-projecting the scene's three-dimensional feature points and searching for feature points;
407. computing the actual pose of the camera.
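As a concrete illustration of step 403, the following Python sketch projects scene points of known position into the image with a pinhole camera model. The intrinsic matrix K and the pose (R, t) below are illustrative values, not taken from the patent.

```python
import numpy as np

def project_points(points_3d, R, t, K):
    """Project 3D scene points into the image under an estimated camera pose.

    points_3d: (N, 3) array of scene points with known positions
    R, t: rotation (3x3) and translation (3,) of the estimated pose
    K: 3x3 camera intrinsic matrix
    Returns an (N, 2) array of pixel coordinates.
    """
    cam = points_3d @ R.T + t        # world coordinates -> camera coordinates
    uvw = cam @ K.T                  # apply the intrinsics
    return uvw[:, :2] / uvw[:, 2:]   # perspective division

# illustrative values
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
points = np.array([[0.1, -0.2, 2.0], [0.0, 0.3, 4.0]])
print(project_points(points, R, t, K))  # predicted pixel positions
```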
In the PTAM tracking thread, feature point matching and search is a key step of camera pose estimation. To find the match of a map's three-dimensional feature point p in the current frame, a limited-range search is performed around p's predicted position in the image. For this search, the image patch stored for p in its keyframe is first warped according to the viewpoint change between the position of the first observation and the current camera position; p is then projected into the current camera image plane and a match is searched for. The search thus becomes a narrow-baseline matching problem. First, an image search template for p is generated from the source keyframe; feature matching is then performed within a limited radius around the predicted position at the corresponding level of the current frame's pyramid, and the best match is chosen; if its matching difference is below a preset threshold, it is taken as the match for p. Fig. 5 illustrates feature point search and matching in PTAM: the left part shows a three-dimensional scene point being projected into the current camera plane, and the right part shows the search for a matching feature point near the projected point.
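The limited-range search itself can be sketched as follows. This is a simplified, single-pyramid-level Python version: the patch warping and coarse-to-fine search of PTAM are omitted, and the score function and threshold are stand-ins (the zero-mean SSD defined in the next section is one possible score).

```python
import numpy as np

def search_match(image, template, center, radius, score_fn, max_diff):
    """Search a fixed radius around a predicted position for the best match.

    image: current frame as a 2D grayscale array
    template: search template generated from the source keyframe
    center: (row, col) predicted by projecting the 3D point p
    score_fn: patch difference measure, e.g. a zero-mean SSD
    max_diff: preset threshold below which the best match is accepted
    """
    h, w = template.shape
    best_score, best_pos = np.inf, None
    r0, c0 = center
    for r in range(r0 - radius, r0 + radius + 1):
        for c in range(c0 - radius, c0 + radius + 1):
            if r < 0 or c < 0:
                continue  # window would start outside the image
            candidate = image[r:r + h, c:c + w]
            if candidate.shape != template.shape:
                continue  # window runs past the image border
            score = score_fn(candidate, template)
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos if best_score < max_diff else None
```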
In the PTAM implementation, the system matches feature points with the SSD description method, but we find that this method becomes unreliable under fast camera motion or illumination changes.
To improve the tracking ability of the system under fast camera motion and illumination changes, the application example of the invention exploits the high reliability of the recently proposed I-BRIEF feature description algorithm under image blur and illumination change, and uses the I-BRIEF feature description method for feature point matching in place of the original SSD method. Extensive comparative tests show that the new feature point matching and search method greatly improves the tracking stability of the system in the two difficult situations above, and thus greatly enhances the tracking performance of PTAM in complex environments.
2. Matching based on the sum of squared pixel differences (SSD)
The method is similar to template-based image registration: a template containing the image's main information is taken from a reference image as the reference feature block, and the most similar block is then searched for in the image to be registered; similarity is measured by the sum of squared differences (SSD) of the pixels in the overlapping region of the two images. The SSD feature matching method applies this template concept to the local neighborhood window of a feature point, using the gray values of the neighborhood window as the feature point's descriptor and comparing them directly to match feature points. It is a simple and practical matching method, but because it uses the image's gray values directly, its biggest drawback is high sensitivity to illumination changes: once the two images to be registered are exposed differently in the overlap region, the method is no longer accurate. Moreover, the neighborhood window used in matching is rectangular, so when the two images differ by a large rotation or a large change of scale, the appearance of the feature point's neighborhood window changes considerably; the method is therefore also sensitive to image rotation and scaling, and matching likewise becomes difficult when the image is blurred. PTAM uses the SSD method for matching between feature points. To tolerate illumination changes to some extent during matching, each pixel in the image window centered on the feature point first has the window mean subtracted before SSD matching, i.e. a zero-mean sum of squared pixel differences. This reduces the influence of illumination change on feature point matching to a certain degree, but the effect is limited.
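A minimal sketch of the zero-mean SSD comparison described above; subtracting each window's mean before squaring is the illumination compensation PTAM applies.

```python
import numpy as np

def zero_mean_ssd(patch_a, patch_b):
    """Zero-mean sum of squared differences of two equal-size gray patches."""
    a = patch_a.astype(np.float64) - patch_a.mean()
    b = patch_b.astype(np.float64) - patch_b.mean()
    return float(np.sum((a - b) ** 2))
```

Passed as the score function of the search sketch above, the candidate window with the smallest zero-mean SSD inside the search radius is accepted as the match when its score falls below the preset threshold.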
3. Feature point matching based on the improved Binary Robust Independent Elementary Features (I-BRIEF) description method
(1) I-BRIEF algorithm:
I-BRIEF, the improved Binary Robust Independent Elementary Features algorithm, is described as follows:
Input: an image patch centered on a feature point.
1. Within the image patch of specified size centered on the feature point, select by some rule a series of test pixel pairs, each consisting of two pixel locations; these pairs form the set of test pairs.
2. Select a pair from the set of test pairs; first apply local smoothing filtering at each of the pair's points, then compare the gray values of the pair's two pixels; the comparison result is dark, bright or similar, represented by 01, 10 and 00 respectively.
3. Repeat step 2 for every pair in the set, then concatenate the comparison results to form the descriptor of the feature point.
Output: a description of the feature point in bit-string form.
Fig. 6 is a schematic diagram of the I-BRIEF computation process in this application example.
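Below is a minimal Python sketch of steps 1 to 3. The sampling rule, patch size, number of test pairs, smoothing radius and the similarity threshold tau are all assumptions made for illustration; the algorithm above leaves them open ("by some rule", "specified size").

```python
import numpy as np

rng = np.random.default_rng(0)
PATCH = 32      # assumed patch size
N_PAIRS = 128   # assumed number of test pairs
# step 1: choose test pixel pairs inside the patch; random sampling is one
# possible rule (coordinates stay 2 px inside the border for the smoothing)
TEST_PAIRS = rng.integers(2, PATCH - 2, size=(N_PAIRS, 2, 2))

def smoothed(patch, y, x, r=2):
    """First half of step 2: local smoothing as the mean of an r-neighborhood."""
    return patch[y - r:y + r + 1, x - r:x + r + 1].mean()

def i_brief(patch, tau=5.0):
    """Steps 2 and 3: compare each smoothed pair, concatenate the 2-bit codes."""
    bits = []
    for (y1, x1), (y2, x2) in TEST_PAIRS:
        d = smoothed(patch, y1, x1) - smoothed(patch, y2, x2)
        if d < -tau:
            bits += [0, 1]   # dark: first point darker than the second
        elif d > tau:
            bits += [1, 0]   # bright: first point brighter than the second
        else:
            bits += [0, 0]   # similar: difference within tau
    return np.array(bits, dtype=np.uint8)

patch = rng.integers(0, 256, size=(PATCH, PATCH)).astype(np.uint8)
descriptor = i_brief(patch)  # bit string of length 2 * N_PAIRS
```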
(2) Recognition performance of the I-BRIEF algorithm under image blur and illumination change
Fig. 7 shows the two groups of pictures used in the application example to test feature description methods under image blur (the bikes sequence) and dim illumination (the light sequence); the test pictures were obtained from the website of the Active Vision Laboratory at the University of Oxford. Each sequence has six pictures: in the bikes sequence the blur increases with the picture number, and in the light sequence the pictures grow darker with the picture number. In the experiment the first picture of each sequence is taken as the reference, and the feature points of the subsequent pictures of the sequence are matched against it with the different feature description methods; the proportion of correct match pairs in the matching result is taken as the recognition rate. Fig. 8 shows the experimental results on these standard test sequences (in Fig. 8 the number after I-BRIEF is the length of the feature description string). The experimental results show that the I-BRIEF method achieves a very high recognition rate when images are blurred or illumination is dim, and matching is also very fast. Its resistance to image blur and illumination change can therefore be exploited in feature point matching and search, so that the camera can still be tracked fairly stably when fast motion blurs the image or the ambient illumination changes.
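Because the descriptor is a bit string, two feature points can be compared by counting the test pairs whose 2-bit results disagree; in an optimized implementation this reduces to XOR and population-count operations. Below is a sketch continuing the code above; the exact distance used by I-BRIEF is not specified here, so plain pair-wise disagreement is assumed.

```python
import numpy as np

def descriptor_distance(d1, d2):
    """Count the test pairs whose 2-bit comparison results differ."""
    return int(np.any(d1.reshape(-1, 2) != d2.reshape(-1, 2), axis=1).sum())

# the candidate feature point with the smallest distance is taken as the match
```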
4. Comparative analysis of experimental results
The PTAM tracking thread estimates the quality of camera tracking from the accuracy of the feature point matches of the image-plane projections of the three-dimensional feature points in each frame. If the proportion of correct matches is high, tracking quality is considered good; if the proportion falls below a certain threshold, tracking quality is considered fair, tracking continues as usual, but no new keyframes may be added to the map; if the proportion falls below a lower threshold, tracking quality is considered poor; and if tracking quality is poor for several consecutive frames, tracking is considered lost, and the system starts attempting to recover the camera pose (attempting recovery). In the experiments of the application example, the improved method is compared with the original PTAM method. The camera is first allowed to track the scene for some time to build a scene structure map, which is then locked; then, while the camera moves fast in the scene or the scene illumination is dim, tracking is run with the original PTAM method and with the improved method of this paper respectively, and the tracking quality of the two methods is collected and analyzed.
(1) Tracking quality under fast camera motion
We ran camera tracking on the experimental video sequence; the results are shown in Table 1. The statistics readily show that the improved method of this paper produces far more frames whose tracking quality is rated good than the original method, and far fewer frames rated fair or poor, which proves that the method indeed makes PTAM tracking more robust under fast camera motion.
Scheme | Good | Fair | Poor | Pose recovery
Original PTAM scheme | 1736 | 750 | 123 | 183
This application example | 2238 | 521 | 13 | 20
Table 1
(2) Tracking quality under dim illumination
Table 2 compares the tracking quality of the original SSD-based matching method and the I-BRIEF-descriptor-based method of this paper when tracking under dim illumination. The statistics readily show that matching with I-BRIEF markedly improves tracking when illumination is poor, which again demonstrates the effectiveness of I-BRIEF-based feature point matching.
Scheme | Good | Fair | Poor | Pose recovery
Original PTAM scheme | 2218 | 533 | 58 | 34
This application example | 2802 | 23 | 15 | 3
Table 2
(3) Matching time efficiency
Table 3 gives the feature point matching search times when the PTAM map contains more than 3000 feature points. The time efficiency comparison shows that matching with I-BRIEF is to some extent faster than the SSD method used in PTAM.
Metric | Original PTAM scheme | This application example
Feature point matching search time (ms) | 9.8 | 4.5
Table 3
Compared with the closest prior art, this scheme first introduces the basic framework of SLAM-based camera tracking systems, then describes the principle of a representative camera tracking technique, Parallel Tracking and Mapping (PTAM), and exploits the high correct-match rate of the I-BRIEF description method under image blur and large illumination change by applying it in the PTAM system. The experimental results show that this gives PTAM higher robustness when fast camera motion blurs the image and when the ambient illumination is dim.
It should be understood that the particular order or hierarchy of steps in the disclosed processes is an example of exemplary approaches. Based on design preferences, the particular order or hierarchy of steps in the processes may be rearranged without departing from the protection scope of the present disclosure. The accompanying method claims present the elements of the various steps in a sample order and are not meant to be limited to the specific order or hierarchy presented.
In the above detailed description, various features are grouped together in a single embodiment to simplify the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the appended claims reflect, the application examples of the invention lie in less than all features of a single disclosed embodiment. Therefore, the appended claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
The above description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments without departing from the spirit and protection scope of the disclosure. Thus, the disclosure is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed in this application.
The foregoing description includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methods for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art will recognize that many further combinations and permutations of the embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the protection scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the description or the claims, it is intended to be inclusive in a manner similar to the term "comprising", as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or claims is intended to mean a "non-exclusive or".
Those skilled in the art will further appreciate that the various illustrative logical blocks, units and steps listed in the embodiments of the invention can be implemented by electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components, units and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the design requirements of the particular application and the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementations should not be interpreted as exceeding the protection scope of the embodiments of the invention.
The various illustrative logical blocks or units described in the embodiments of the invention may be implemented or operated with a general-purpose processor, a digital signal processor, an application-specific integrated circuit (ASIC), a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the described functions. A general-purpose processor may be a microprocessor; in the alternative, the processor may be any conventional processor, controller, microcontroller or state machine. A processor may also be implemented as a combination of computing devices, for example a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors combined with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in the embodiments of the invention may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor so that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integrated into the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary designs, the functions described in the embodiments of the invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on, or transmitted as one or more instructions or code on, a computer-readable medium. Computer-readable media include computer storage media and communication media that facilitate the transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. For example, such computer-readable media can include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store program code in the form of instructions or data structures readable by a general-purpose or special-purpose computer or processor. In addition, any connection may properly be termed a computer-readable medium; for example, software transmitted from a website, server or other remote source through a coaxial cable, a fiber-optic cable, a twisted pair, a digital subscriber line (DSL), or wirelessly, such as by infrared, radio or microwave, is also included in the definition of computer-readable medium. Disk and disc, as used here, include compact disc, laser disc, optical disc, DVD, floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above may also be included in computer-readable media.
The above specific embodiments further describe the objectives, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit its protection scope; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A Parallel Tracking and Mapping (PTAM) camera tracking method, characterized in that the method comprises:
obtaining, from image information input to a camera, an estimated pose of the camera for the current image frame;
projecting three-dimensional feature points with known positions in the real scene onto the image plane of the camera according to the estimated pose of the camera for the current image frame, matching feature points with the improved Binary Robust Independent Elementary Features (I-BRIEF) algorithm, and computing the actual pose of the camera for the current image frame;
matching the estimated pose of the camera for the current image frame against the actual pose to obtain a matching result;
adjusting the pose of the camera according to the matching result so as to perform image tracking of the camera.
2. The PTAM camera tracking method of claim 1, characterized in that matching feature points with the improved Binary Robust Independent Elementary Features (I-BRIEF) algorithm comprises:
inputting an image patch centered on a feature point;
selecting, within an image patch of preset size centered on the feature point, a series of test pixel pairs each consisting of two pixel locations, these pairs forming a set of test pairs;
selecting a pair from the set of test pairs, first applying local smoothing filtering at each of the pair's points and then comparing the gray values of its two pixels to obtain a comparison result; repeating this operation for every pair in the set and concatenating the comparison results to form the descriptor of the feature point;
outputting the descriptor of the feature point, so that feature points are matched according to the comparison results in the descriptor.
3. The PTAM camera tracking method of claim 2, characterized in that the comparison result is one of dark, bright and similar, where the binary number 01 represents dark, 10 represents bright, and 00 represents similar.
4. The PTAM camera tracking method of claim 1, characterized in that the pose of the camera comprises viewing-angle information of the camera.
5. The PTAM camera tracking method of claim 1, characterized in that adjusting the pose of the camera according to the matching result so as to perform image tracking of the camera specifically comprises:
estimating the tracking quality of the camera from the accuracy of the feature point matches of the image-plane projections of the three-dimensional feature points in each frame of the matching result, and adjusting the pose of the camera according to the tracking quality:
if the accuracy of correct matches is greater than or equal to a preset first threshold, confirming that the tracking quality is good, making no pose adjustment, and continuing tracking as usual;
if the accuracy of correct matches is below the preset first threshold but not below a preset second threshold, confirming that the tracking quality is fair, making no pose adjustment, and continuing tracking as usual while no longer allowing new keyframes to be added to the tracked images, the first threshold being higher than the second threshold;
if the accuracy of correct matches is below the preset second threshold, confirming that the tracking quality is poor, and adjusting the pose of the camera from the correctly matched feature points; and, if the tracking quality of a preset number of consecutive image frames is poor, confirming that image tracking is lost, searching the camera system for the keyframe most similar to the current image, and recomputing the pose of the camera from the most similar keyframe found so as to adjust the pose.
6. A Parallel Tracking and Mapping (PTAM) camera tracking device, characterized in that the device comprises:
an estimated-pose acquisition unit, configured to obtain, from image information input to a camera, an estimated pose of the camera for the current image frame;
an I-BRIEF matching unit, configured to project three-dimensional feature points with known positions in the real scene onto the image plane of the camera according to the estimated pose of the camera for the current image frame, match feature points with the improved Binary Robust Independent Elementary Features (I-BRIEF) algorithm, compute the actual pose of the camera for the current image frame, and match the estimated pose against the actual pose to obtain a matching result;
a pose adjustment unit, configured to adjust the pose of the camera according to the matching result so as to perform image tracking of the camera.
7. The PTAM camera tracking device of claim 6, characterized in that the I-BRIEF matching unit comprises:
an input module for inputting an image patch centered on a feature point;
a comparison module for selecting, within an image patch of preset size centered on the feature point, a series of test pixel pairs each consisting of two pixel locations, these pairs forming a set of test pairs; selecting a pair from the set, first applying local smoothing filtering at each of the pair's points and then comparing the gray values of its two pixels to obtain a comparison result; and repeating this operation for every pair in the set and concatenating the comparison results to form the descriptor of the feature point;
an output module for outputting the descriptor of the feature point, so that feature points are matched according to the comparison results in the descriptor.
8. The PTAM camera tracking device of claim 7, characterized in that the comparison result is one of dark, bright and similar, where the binary number 01 represents dark, 10 represents bright, and 00 represents similar.
9. The PTAM camera tracking device of claim 6, characterized in that the pose of the camera comprises viewing-angle information of the camera.
10. The PTAM camera tracking device of claim 6, characterized in that the pose adjustment unit is specifically configured to estimate the tracking quality of the camera from the accuracy of the feature point matches of the image-plane projections of the three-dimensional feature points in each frame of the matching result, and to adjust the pose of the camera according to the tracking quality;
the pose adjustment unit further comprises:
a first pose adjustment module which, if the accuracy of correct matches is greater than or equal to a preset first threshold, confirms that the tracking quality is good, makes no pose adjustment, and lets tracking continue as usual;
a second pose adjustment module which, if the accuracy of correct matches is below the preset first threshold but not below a preset second threshold, confirms that the tracking quality is fair, makes no pose adjustment, and lets tracking continue as usual while no new keyframes may be added to the tracked images, the first threshold being higher than the second threshold;
a third pose adjustment module which, if the accuracy of correct matches is below the preset second threshold, confirms that the tracking quality is poor and adjusts the camera pose from the correctly matched feature points, and which, if the tracking quality of a preset number of consecutive image frames is poor, confirms that image tracking is lost, searches the camera system for the keyframe most similar to the current image, and recomputes the camera pose from the most similar keyframe found so as to adjust the pose.
CN201511023879.2A 2015-12-31 2015-12-31 PTAM camera tracking method and device Active CN105513083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511023879.2A CN105513083B (en) 2015-12-31 2015-12-31 PTAM camera tracking method and device


Publications (2)

Publication Number Publication Date
CN105513083A (en) 2016-04-20
CN105513083B (en) 2019-02-22

Family

ID=55721040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511023879.2A Active CN105513083B (en) 2015-12-31 2015-12-31 A kind of PTAM video camera tracking method and device

Country Status (1)

Country Link
CN (1) CN105513083B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120139914A1 (en) * 2010-12-06 2012-06-07 Samsung Electronics Co., Ltd Method and apparatus for controlling virtual monitor
CN102075686A (en) * 2011-02-10 2011-05-25 北京航空航天大学 Robust real-time on-line camera tracking method
CN102496022A (en) * 2011-11-02 2012-06-13 北京航空航天大学 Effective feature point description I-BRIEF method
CN103247075A (en) * 2013-05-13 2013-08-14 北京工业大学 Variational mechanism-based indoor scene three-dimensional reconstruction method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
叶峰 (Ye Feng): "Research and Application of Real-Time Camera Tracking Technology" (实时相机跟踪技术研究与应用), China Master's Theses Full-Text Database, Information Science and Technology Series *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018049581A1 (en) * 2016-09-14 2018-03-22 浙江大学 Method for simultaneous localization and mapping
US11199414B2 (en) 2016-09-14 2021-12-14 Zhejiang University Method for simultaneous localization and mapping
CN106885574B (en) * 2017-02-15 2020-02-07 北京大学深圳研究生院 Monocular vision robot synchronous positioning and map construction method based on re-tracking strategy
CN106885574A (en) * 2017-02-15 2017-06-23 北京大学深圳研究生院 A kind of monocular vision robot synchronous superposition method based on weight tracking strategy
CN108965687A (en) * 2017-05-22 2018-12-07 阿里巴巴集团控股有限公司 Shooting direction recognition methods, server and monitoring method, system and picture pick-up device
US10949995B2 (en) 2017-05-22 2021-03-16 Alibaba Group Holding Limited Image capture direction recognition method and server, surveillance method and system and image capture device
CN108965687B (en) * 2017-05-22 2021-01-29 阿里巴巴集团控股有限公司 Shooting direction identification method, server, monitoring method, monitoring system and camera equipment
CN111133473B (en) * 2017-09-28 2024-04-05 三星电子株式会社 Camera pose determination and tracking
CN111133473A (en) * 2017-09-28 2020-05-08 三星电子株式会社 Camera pose determination and tracking
WO2019068222A1 (en) * 2017-10-06 2019-04-11 Qualcomm Incorporated Concurrent relocation and reinitialization of vslam
US11340615B2 (en) 2017-10-06 2022-05-24 Qualcomm Incorporated Concurrent relocation and reinitialization of VSLAM
CN110520694A (en) * 2017-10-31 2019-11-29 深圳市大疆创新科技有限公司 A kind of visual odometry and its implementation
WO2019084804A1 (en) * 2017-10-31 2019-05-09 深圳市大疆创新科技有限公司 Visual odometry and implementation method therefor
CN110119649A (en) * 2018-02-05 2019-08-13 浙江商汤科技开发有限公司 State of electronic equipment tracking, device, electronic equipment and control system
CN109297496A (en) * 2018-09-29 2019-02-01 上海新世纪机器人有限公司 Robot localization method and device based on SLAM

Also Published As

Publication number Publication date
CN105513083B (en) 2019-02-22

Similar Documents

Publication Publication Date Title
CN105513083A (en) PTAM camera tracking method and device
CN114782691B (en) Robot target identification and motion detection method based on deep learning, storage medium and equipment
US10368062B2 (en) Panoramic camera systems
US9025863B2 (en) Depth camera system with machine learning for recognition of patches within a structured light pattern
Zhang et al. Robust metric reconstruction from challenging video sequences
CN111445526A (en) Estimation method and estimation device for pose between image frames and storage medium
CN110544268B (en) Multi-target tracking method based on structured light and SiamMask network
KR20200063368A (en) Unsupervised stereo matching apparatus and method using confidential correspondence consistency
CN109525786A (en) Method for processing video frequency, device, terminal device and storage medium
CN110310305A (en) A kind of method for tracking target and device based on BSSD detection and Kalman filtering
Ni et al. An improved adaptive ORB-SLAM method for monocular vision robot under dynamic environments
You et al. MISD‐SLAM: multimodal semantic SLAM for dynamic environments
Yan et al. Learning complementary correlations for depth super-resolution with incomplete data in real world
KR101916573B1 (en) Method for tracking multi object
CN113516697B (en) Image registration method, device, electronic equipment and computer readable storage medium
CN112270748B (en) Three-dimensional reconstruction method and device based on image
Luginov et al. Swiftdepth: An efficient hybrid cnn-transformer model for self-supervised monocular depth estimation on mobile devices
CN110390336B (en) Method for improving feature point matching precision
Lin et al. Matching cost filtering for dense stereo correspondence
Luo et al. Improved ORB‐SLAM2 Algorithm Based on Information Entropy and Image Sharpening Adjustment
CN113052311B (en) Feature extraction network with layer jump structure and method for generating features and descriptors
CN113674340A (en) Binocular vision navigation method and device based on landmark points
Li et al. NeRF-MS: Neural Radiance Fields with Multi-Sequence
Wu et al. Multi-view 3D reconstruction based on deep learning: A survey and comparison of methods
CN110059651B (en) Real-time tracking and registering method for camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230424

Address after: Room 501-502, 5/F, Sina Headquarters Scientific Research Building, Block N-1 and N-2, Zhongguancun Software Park, Dongbei Wangxi Road, Haidian District, Beijing, 100193

Patentee after: Sina Technology (China) Co.,Ltd.

Address before: 100080, International Building, No. 58 West Fourth Ring Road, Haidian District, Beijing, 20 floor

Patentee before: Sina.com Technology (China) Co.,Ltd.

TR01 Transfer of patent right