CN109948624A - Feature extraction method, apparatus, electronic device and computer storage medium - Google Patents

Feature extraction method, apparatus, electronic device and computer storage medium

Info

Publication number
CN109948624A
CN109948624A (application number CN201910124316.4A)
Authority
CN
China
Prior art keywords
image
feature
processed
point
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910124316.4A
Other languages
Chinese (zh)
Inventor
史桀绮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201910124316.4A priority Critical patent/CN109948624A/en
Publication of CN109948624A publication Critical patent/CN109948624A/en


Abstract

The present invention provides a feature extraction method, apparatus, electronic device and computer storage medium. The method comprises: obtaining an image to be processed; performing feature extraction on the image to be processed to obtain a feature extraction map of the image and feature descriptors for the pixels in the feature extraction map; and determining image features of the image to be processed from the feature extraction map and the feature descriptors, so as to determine the pose information of a visual-inertial odometer from the image features. The image features comprise the locations of contour feature points and/or the feature descriptors of the contour feature points. With the present invention, a large number of uniformly distributed contour feature points can be obtained from the image to be processed, avoiding the problem in traditional extraction pipelines where the extracted feature points cluster in a small region and part of the scale information is lost. Robust image features are obtained even under drastic environmental change, and stability is good, alleviating the technical problem that the image features extracted by existing feature extraction algorithms have poor stability and poor accuracy.

Description

Feature extraction method, apparatus, electronic device and computer storage medium
Technical field
The present invention relates to the technical field of image processing, and more particularly to a feature extraction method, apparatus, electronic device and computer storage medium.
Background technique
In current visual simultaneous localization and mapping (SLAM) systems, researchers generally use visual odometry, estimating camera motion from the matching between consecutive image frames and optimizing the camera pose using the re-projection error.
During this process, the feature points of consecutive frames must first be extracted by a feature extraction algorithm, and camera motion is then estimated by matching the feature points of the consecutive frames. In current visual SLAM systems, the feature extraction algorithm is invariably the traditional ORB or SIFT algorithm. Since the matching between consecutive frames depends entirely on the correspondence of feature points, if the feature points respond strongly in different regions of the two images (for example, the feature points extracted in the previous frame lie in the region of a phone, while those extracted in the current frame lie in the region of a mouse), the consecutive frames cannot be registered through the feature points even if the extraction itself is accurate. Both algorithms use "largest response" as the criterion for judging feature point locations, so the stability of the extracted image features is poor. Moreover, in actual processing, the illumination of outdoor environments in particular varies over a wide range, which strongly affects the feature extraction of consecutive frames and the matching between them. Especially under dim light (for example, at night or in snow scenes), the ORB or SIFT algorithm obtains similar responses over most regions of the consecutive frames (the response characterizes the probability that its corresponding point is a feature point), so the "largest response" criterion causes the number of extracted feature points to drop sharply; that is, the accuracy of the extracted image features is poor, robust feature matching becomes impossible, and the estimated camera motion is consequently inaccurate. Furthermore, since camera motion is determined by the relative pose of consecutive frames, the estimate of the current frame's pose always depends on the pose of the previous frame; therefore, if the estimate at some position produces an error, that error is propagated onward through the visual SLAM system and keeps accumulating, and may finally cause the estimated camera motion to drift off the trajectory.
In summary, the image features extracted by existing feature extraction algorithms have poor stability and poor accuracy.
Summary of the invention
In view of this, the purpose of the present invention is to provide a feature extraction method, apparatus, electronic device and computer storage medium, so as to alleviate the technical problem that the image features extracted by existing feature extraction algorithms have poor stability and poor accuracy.
In a first aspect, an embodiment of the present invention provides a feature extraction method, the method comprising: obtaining an image to be processed, captured of a target area while a visual-inertial odometer moves in the target area; performing feature extraction on the image to be processed to obtain a feature extraction map of the image and feature descriptors for the pixels in the feature extraction map, where the pixel value of a first pixel in the feature extraction map indicates the probability that its corresponding pixel in the image to be processed is a contour feature point of a target object in the image to be processed, and a feature descriptor represents the image patch information of its corresponding first pixel; and determining image features of the image to be processed from the feature extraction map and the feature descriptors of its pixels, so as to determine the pose information of the visual-inertial odometer from the image features, the image features comprising the locations of the contour feature points and/or the feature descriptors of the contour feature points.
Further, the image features comprise the locations of the contour feature points, and determining the image features of the image to be processed from the feature extraction map and the feature descriptors of its pixels comprises: determining first target pixels in the feature extraction map, a first target pixel being a first pixel whose pixel value is greater than a preset pixel threshold; sorting the first target pixels by pixel value to obtain a pixel ordering; performing a non-maximum suppression operation on the pixels in the ordering to obtain target pixels; and taking the locations of the target pixels in the feature extraction map as the locations of the contour feature points.
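The contour point selection described above — keep the pixels above a threshold, sort them by pixel value, then apply non-maximum suppression — can be sketched as follows. This is a minimal illustration under assumed parameters; the pixel threshold and suppression radius used here are hypothetical, since the patent does not fix their values:

```python
import numpy as np

def select_contour_points(heatmap, pixel_threshold=0.5, nms_radius=4):
    """Greedy non-maximum suppression over a probability heatmap.

    heatmap: (H, W) array; each value is the probability that the
    corresponding pixel of the image is a contour feature point.
    Returns an (N, 2) array of (row, col) keypoint locations.
    """
    ys, xs = np.nonzero(heatmap > pixel_threshold)   # first target pixels
    scores = heatmap[ys, xs]
    order = np.argsort(-scores)                      # highest score first
    ys, xs = ys[order], xs[order]

    suppressed = np.zeros(heatmap.shape, dtype=bool)
    keypoints = []
    for y, x in zip(ys, xs):
        if suppressed[y, x]:
            continue                                 # a stronger neighbour won
        keypoints.append((y, x))
        y0, y1 = max(0, y - nms_radius), y + nms_radius + 1
        x0, x1 = max(0, x - nms_radius), x + nms_radius + 1
        suppressed[y0:y1, x0:x1] = True              # suppress the window
    return np.array(keypoints)
```

The greedy pass over the score-sorted list guarantees that within any suppression window only the strongest response survives, which is what spreads the retained contour points across the image instead of letting them cluster.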
Further, the image features also comprise the feature descriptors of the contour feature points, and after the locations of the contour feature points are obtained the method further comprises: among the feature descriptors of the pixels in the feature extraction map, taking the feature descriptor of the pixel at the location of each contour feature point as the feature descriptor of that contour feature point.
Further, the image to be processed comprises a first image to be processed and a second image to be processed, the first being the image frame preceding the second, and determining the pose information of the visual-inertial odometer from the image features comprises: performing feature matching between the image features of the first image to be processed and those of the second image to be processed to obtain feature matching point pairs, each pair containing a contour feature point of the first image matched with one of the second image; determining, in the target area, the three-dimensional feature points corresponding to the first feature points, a first feature point being the contour feature point of a matching pair that belongs to the first image to be processed; and computing, with a camera pose estimation algorithm, on the locations of the second feature points and the three-dimensional feature points, and determining the pose information of the visual-inertial odometer from the result, a second feature point being the contour feature point of a matching pair that belongs to the second image to be processed.
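The pose step above feeds the 2D locations of the second feature points and the matched 3D map points to a camera pose estimation algorithm, which the patent does not name (a PnP-style 2D-3D solver is the usual choice). As a simplified, numpy-only illustration of the final operation — recovering a rotation and translation from matched point sets — the sketch below solves the easier 3D-3D case with a Kabsch alignment; it is a stand-in under that stated simplification, not the patent's solver:

```python
import numpy as np

def estimate_pose_3d3d(src, dst):
    """Kabsch alignment: find R, t such that dst ~ R @ src + t.

    src, dst: (N, 3) arrays of matched 3D points. A simplified
    stand-in for the 2D-3D camera pose estimation step.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

The SVD-based solution is exact for noise-free correspondences and least-squares optimal otherwise, which is why the same building block appears inside many pose pipelines.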
Further, performing feature matching between the image features of the first image to be processed and those of the second image to be processed to obtain the feature matching point pairs comprises: computing inner products between the second feature descriptors and the third feature descriptors to obtain an inner product matrix, where the second feature descriptors are the feature descriptors in the image features of the first image to be processed and the third feature descriptors are the feature descriptors in the image features of the second image to be processed; and determining the feature matching point pairs from the inner product matrix.
Further, determining the feature matching point pairs from the inner product matrix comprises: determining the minimum of row i and the minimum of column j of the inner product matrix to obtain a first minimum and a second minimum, where i takes the values 1 to I, I being the total number of rows of the inner product matrix, and j takes the values 1 to J, J being the total number of columns; judging whether the element of the inner product matrix corresponding to the first minimum and the element corresponding to the second minimum are the same element; if so, judging whether the first minimum does not exceed a preset threshold, or, equivalently, whether the second minimum does not exceed the preset threshold; and if so, determining, among the second feature descriptors, the feature descriptor whose inner product produced the first minimum (or second minimum), giving a first target feature descriptor, determining, among the third feature descriptors, the feature descriptor whose inner product produced that minimum, giving a second target feature descriptor, and taking the contour feature point corresponding to the first target feature descriptor together with the contour feature point corresponding to the second target feature descriptor as one feature matching point pair.
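The mutual-minimum check described above — an entry must be the smallest of its row and of its column, and must also not exceed a threshold — can be sketched as follows. For concreteness the matrix here is built from Euclidean distances between descriptors rather than the patent's inner products; the matching logic (row/column minimum agreement plus a threshold) is the same, with smaller entries treated as better matches:

```python
import numpy as np

def mutual_min_matches(desc1, desc2, threshold=0.7):
    """Match descriptors by mutual minimum of a cost matrix.

    desc1: (M, D) descriptors from the first frame.
    desc2: (N, D) descriptors from the second frame.
    Returns a list of (i, j) index pairs.
    """
    # cost[i, j] = distance between desc1[i] and desc2[j]
    cost = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    matches = []
    for i in range(cost.shape[0]):
        j = int(np.argmin(cost[i]))          # minimum of row i
        if int(np.argmin(cost[:, j])) != i:  # must also be minimum of column j
            continue
        if cost[i, j] <= threshold:          # reject weak matches
            matches.append((i, j))
    return matches
```

Requiring agreement in both directions discards exactly the asymmetric cases the claims above reject, and the threshold discards pairs whose best score is still poor.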
Further, the method also comprises: if the element of the inner product matrix corresponding to the first minimum and the element corresponding to the second minimum are not the same element, determining that the contour feature point corresponding to the first target feature descriptor and the contour feature point corresponding to the second target feature descriptor are not a feature matching point pair.
Further, the method also comprises: if the first minimum exceeds the preset threshold, or the second minimum exceeds the preset threshold, determining that the contour feature point corresponding to the first target feature descriptor and the contour feature point corresponding to the second target feature descriptor are not a feature matching point pair.
Further, the feature descriptors consist of floating-point numbers.
Further, performing feature extraction on the image to be processed to obtain the feature extraction map of the image and the feature descriptors of the pixels in the feature extraction map comprises: performing feature extraction on the image to be processed using a feature extraction network to obtain the feature extraction map of the image to be processed and the feature descriptors of the pixels in the feature extraction map.
Further, the method also comprises: obtaining training sample images; annotating the training sample images with a feature point detection network obtained by prior training, to obtain the locations of the contour feature points of the target objects in the training sample images; transforming the training sample images by a preset transformation matrix to obtain transformed training sample images; and training an original feature extraction network with the training sample images, the transformed training sample images, the locations of the contour feature points and the preset transformation matrix, to obtain the feature extraction network.
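One piece of this training setup is purely geometric: the contour point labels of a training sample image must be carried through the preset transformation matrix so that they remain valid in the transformed image. Assuming the preset matrix is a 3x3 homography — a common choice for this kind of self-supervision, though the patent does not specify its form — the label warping can be sketched as:

```python
import numpy as np

def warp_points(points, H):
    """Apply a 3x3 transformation (homography) to 2D point labels.

    points: (N, 2) array of (x, y) contour point locations in the
    original training sample image. H: the preset 3x3 matrix.
    Returns the corresponding (N, 2) locations in the warped image.
    """
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]                   # de-homogenize
```

With labels warped this way, the original image, the transformed image, and both sets of point locations give the network two consistent views of the same contour points to learn from.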
Further, the method also comprises: obtaining synthetic training sample images, the synthetic training sample images containing target objects; annotating the contour feature points of the target objects in the synthetic training sample images to obtain contour feature point locations; and training an original feature point detection network with the synthetic training sample images and the contour feature point locations, to obtain the feature point detection network.
In a second aspect, an embodiment of the present invention further provides a feature extraction apparatus, the apparatus comprising: an obtaining unit for obtaining an image to be processed, captured of a target area while a visual-inertial odometer moves in the target area; a feature extraction unit for performing feature extraction on the image to be processed to obtain the feature extraction map of the image and the feature descriptors of the pixels in the feature extraction map, where the pixel value of a first pixel in the feature extraction map indicates the probability that its corresponding pixel in the image to be processed is a contour feature point of a target object in the image, and a feature descriptor represents the image patch information of its corresponding first pixel; and a determination unit for determining the image features of the image to be processed from the feature extraction map and the feature descriptors of its pixels, so as to determine the pose information of the visual-inertial odometer from the image features, the image features comprising the locations of the contour feature points and/or the feature descriptors of the contour feature points.
In a third aspect, an embodiment of the present invention provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, where the processor, when executing the computer program, implements the steps of the method of any item of the first aspect above.
In a fourth aspect, an embodiment of the present invention provides a computer-readable medium of non-volatile program code executable by a processor, the program code causing the processor to execute the steps of the method of any item of the first aspect above.
In embodiments of the present invention, first, an image to be processed, captured of a target area while a visual-inertial odometer moves in the target area, is obtained; then, feature extraction is performed on the image to be processed to obtain its feature extraction map and the feature descriptors of the pixels in the map; finally, the image features of the image to be processed are determined from the feature extraction map and the feature descriptors, so that the pose information of the visual-inertial odometer can be determined from the image features. The image features comprise the locations of the contour feature points of the target objects and/or the feature descriptors of those contour feature points. As can be seen from the above, in embodiments of the present invention the image features finally extracted from the image to be processed are the contour feature points of the target objects in it, so a large number of uniformly distributed contour feature points can be obtained from the image to be processed, avoiding the problem in traditional extraction pipelines where the extracted feature points cluster in a small region and part of the scale information is lost. At the same time, robust image features are obtained even under drastic environmental change, stability is good, and the technical problem that the image features extracted by existing feature extraction algorithms have poor stability and poor accuracy is alleviated.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from the description or be understood by practicing the invention. The objectives and other advantages of the invention are realized and obtained by the structure particularly pointed out in the description, the claims and the accompanying drawings.
To make the above objectives, features and advantages of the present invention clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Detailed description of the invention
To illustrate the specific embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the specific embodiments or the prior art are briefly introduced below. Apparently, the drawings described below show some embodiments of the present invention, and for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of an electronic device provided by an embodiment of the present invention;
Fig. 2 is a flow chart of a feature extraction method provided by an embodiment of the present invention;
Fig. 3 is a flow chart of the method, provided by an embodiment of the present invention, of determining the image features of the image to be processed from the feature extraction map and the feature descriptors of its pixels;
Fig. 4 is a flow chart of the method, provided by an embodiment of the present invention, of determining the pose information of the visual-inertial odometer from the image features;
Fig. 5 is a schematic diagram of a feature extraction apparatus provided by an embodiment of the present invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention are described below clearly and completely with reference to the drawings. Apparently, the described embodiments are some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
Embodiment 1:
First, referring to Fig. 1, an electronic device 100 for implementing an embodiment of the present invention is described; the electronic device can be used to run the feature extraction method of the embodiments of the present invention.
As shown in Fig. 1, the electronic device 100 comprises one or more processors 102, one or more memories 104, an input device 106, an output device 108 and a camera 110, these components being interconnected through a bus system 112 and/or other forms of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in Fig. 1 are only exemplary, not limiting; the electronic device may have other components and structures as needed.
The processor 102 may be implemented in hardware in at least one of the following forms: a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic array (PLA) and an application-specific integrated circuit (ASIC). The processor 102 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control the other components in the electronic device 100 to perform desired functions.
The memory 104 may comprise one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and so on. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may run the program instructions to realize the client functions (implemented by the processor) of the embodiments of the present invention described below and/or other desired functions. Various application programs and various data, such as the data used and/or produced by the application programs, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by the user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 can export various information (for example, image or sound) to external (for example, user), and It and may include one or more of display, loudspeaker etc..
The camera 110 is used for acquiring the image to be processed; the image acquired by the camera is processed by the feature extraction method to obtain the image features. For example, the camera may shoot the image desired by the user (such as a photo or a video), that image is then processed by the feature extraction method to obtain the image features, and the camera may also store the captured image in the memory 104 for use by other components.
Illustratively, the electronic device for implementing the feature extraction method according to an embodiment of the present invention may be implemented as an intelligent mobile terminal such as a smartphone or a tablet computer.
Embodiment 2:
According to an embodiment of the present invention, an embodiment of a feature extraction method is provided. It should be noted that the steps illustrated in the flow charts of the drawings may be executed in a computer system such as a set of computer-executable instructions, and that, although a logical order is shown in the flow charts, in some cases the steps shown or described may be executed in an order different from the one here.
Fig. 2 is a kind of flow chart of the method for feature extraction according to an embodiment of the present invention, as shown in Fig. 2, this method packet Include following steps:
Step S202: obtaining, while a visual-inertial odometer moves in a target area, an image to be processed captured of the target area;
In embodiments of the present invention, the feature extraction method can be applied in a visual SLAM system, and can of course also be applied in other systems used to determine the pose information of a visual-inertial odometer; the embodiment of the present invention places no specific restriction on the executing subject of the feature extraction method.
As an example, the application of the feature extraction method of the present invention in a visual SLAM system is used for illustration. While the visual-inertial odometer moves in the target area, the target area is shot to obtain the image to be processed. Specifically, the visual-inertial odometer moves with the motion, in the target area, of the moving device it belongs to (for example, a robot or an autonomous vehicle).
In a visual SLAM system, the algorithm flow for determining the pose information of the visual-inertial odometer is as follows. At any moment while the device moves, a frame of the image to be processed is obtained from the visual-inertial odometer, while the visual SLAM system retains the key point data of the global map built up so far (i.e., the partial three-dimensional map of the target area that had already been built before this frame of the image to be processed). To estimate the current pose information of the visual-inertial odometer, the feature points of consecutive frames are first matched, and the relative motion between the two frames is then determined by a camera pose estimation algorithm, giving the current pose information of the visual-inertial odometer (including rotation and translation).
The pose of the visual-inertial odometer: p_i
The map points of the global map: l_j
The estimated camera motion: p_{i+1} = f(p_i, u_i) + w_i
The observations: z_{i,j} = h(p_i, l_j) + v_{i,j}
where u_i denotes the motion-related input of the visual-inertial odometer, and w_i and v_{i,j} denote noise. In practical problems the observations z_{i,j} and the inputs u_i are known, and the optimization goal is to estimate p and l optimally so that w_i and v_{i,j} are minimized.
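Assuming the noise terms w_i and v_{i,j} are to be minimized in the least-squares sense — a standard formulation of this estimation problem, not spelled out here — the optimization target can be written as:

```latex
\min_{\{p_i\},\,\{l_j\}} \; \sum_i \bigl\| p_{i+1} - f(p_i, u_i) \bigr\|^2 \;+\; \sum_{i,j} \bigl\| z_{i,j} - h(p_i, l_j) \bigr\|^2
```

Minimizing the first sum minimizes the motion noise w_i, and minimizing the second minimizes the observation noise v_{i,j}.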
Obviously, the above algorithm flow depends heavily on the accuracy of feature point matching. For the extracted feature points, any mismatch may leave a long-range error in the visual SLAM system and directly affect the subsequent estimation of the camera pose (i.e., the pose of the visual-inertial odometer); the commonly mentioned accumulated error keeps growing, causing the trajectory to drift or even be lost.
To solve this problem, the prior art uses graph optimization to spread the error evenly over each position, so that the overall trajectory error decreases and more robust map points are obtained. However, graph-optimization-based methods can only disperse the error; they cannot eliminate it. Therefore, the accumulated error produced by feature matching is still the largest source of error in SLAM systems. For this problem, the feature extraction method in the present invention can reduce the error at its source and improve the precision of the visual SLAM system: the image features it extracts are more precise, and it is paired with a fast and efficient feature matching metric, so that the front end of the SLAM system is good enough, thereby reducing the residual error in the SLAM system.
Step S204: performing feature extraction on the image to be processed to obtain the feature extraction map of the image to be processed and the feature descriptors of the pixels in the feature extraction map, where the pixel value of a first pixel in the feature extraction map indicates the probability that its corresponding pixel in the image to be processed is a contour feature point of a target object in the image to be processed, and a feature descriptor represents the image patch information of its corresponding first pixel;
The above describes the acquisition of the image to be processed and the algorithm flow for determining the pose information of the visual-inertial odometer in the visual SLAM system. The processing of the image to be processed is described below.
After the image to be processed is obtained, feature extraction is performed on it to obtain its feature extraction map and the feature descriptors of the pixels in the map. Specifically, the obtained feature extraction map has the same size as the image to be processed, and the pixels in the feature extraction map correspond one-to-one with the pixels in the image to be processed. The pixel value of a first pixel in the feature extraction map (i.e., any pixel in the feature extraction map) indicates the probability that its corresponding pixel in the image to be processed is a contour feature point of a target object in the image. A feature descriptor represents the image patch information of its corresponding first pixel, specifically the information of the image patch centered on that first pixel, and each first pixel has a corresponding feature descriptor.
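Because the feature extraction map and the image correspond pixel for pixel, and every first pixel carries a descriptor, reading out the descriptors of the chosen contour points is a plain indexing step. A sketch, assuming the descriptors come as an (H, W, D) array; the L2 normalization here is an added assumption, common before inner-product matching but not stated in this description:

```python
import numpy as np

def descriptors_at(desc_map, keypoints, normalize=True):
    """Read per-pixel feature descriptors at contour point locations.

    desc_map: (H, W, D) array, one D-dimensional descriptor per pixel,
    matching the one-to-one pixel correspondence of the feature
    extraction map. keypoints: (N, 2) integer array of (row, col)
    contour feature points. Returns an (N, D) array of descriptors.
    """
    rows, cols = keypoints[:, 0], keypoints[:, 1]
    d = desc_map[rows, cols].astype(float)       # fancy indexing: one row per point
    if normalize:
        # unit-length descriptors (an assumption), convenient when
        # matches are scored by inner products
        d = d / np.linalg.norm(d, axis=1, keepdims=True)
    return d
```

Keeping descriptor lookup separate from keypoint selection means the same dense descriptor map serves however many contour points survive the thresholding and non-maximum suppression.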
In addition, in embodiments of the present invention, the target object may specifically be a geometric object, where a geometric object refers to a physical object with a regular shape; the regular shape may be a cube, a triangle, a line segment, or another common geometric figure.
The feature extraction process is described in detail below and is not elaborated here.
Step S206: determining the image features of the image to be processed from the feature extraction map and the feature descriptors of the pixels in the feature extraction map, so as to determine the pose information of the visual-inertial odometer from the image features, the image features comprising the locations of the contour feature points and/or the feature descriptors of the contour feature points.
In obtaining feature extraction map and feature extraction map after the feature descriptor of pixel, further according to spy The feature descriptor that sign extracts pixel in map and feature extraction map determines the characteristics of image of image to be processed.The image is special Sign includes: the feature descriptor of the location information of the contour feature point of target object and/or the contour feature point of target object.Under Wen Zhongzai describes to the process in detail, and details are not described herein.
In the embodiments of the present invention, first, the image to be processed, obtained by photographing the target area while the visual-inertial odometer moves within it, is acquired; then, feature extraction is performed on the image to be processed to obtain its feature extraction map and the feature descriptors of the pixels in the feature extraction map; finally, the image features of the image to be processed are determined from the feature extraction map and the feature descriptors, so that the pose information of the visual-inertial odometer can be determined from the image features. The image features include the location information of the contour feature points of the target object and/or the feature descriptors of those points. As can be seen from the above, in the embodiments of the present invention the image features ultimately extracted from the image to be processed are the contour feature points of the target object, so a large number of uniformly distributed contour feature points can be obtained. This avoids the problem of traditional extraction methods, in which the extracted feature points concentrate in a small region and partial dimensions are lost. At the same time, robust image features can still be obtained when the environment changes drastically, and stability is good, which alleviates the technical problem that the image features extracted by existing feature extraction algorithms have poor stability and poor accuracy.
In an optional embodiment of the present invention, step S204 (performing feature extraction on the image to be processed to obtain the feature extraction map of the image to be processed and the feature descriptors of the pixels in the feature extraction map) includes: performing feature extraction on the image to be processed by using a feature extraction network, to obtain the feature extraction map of the image to be processed and the feature descriptors of the pixels in the feature extraction map.

Specifically, feature extraction is performed on the image to be processed by a feature extraction network, which may be a SuperPoint convolutional neural network (a network for feature point detection and descriptor extraction trained by self-supervision). The training process of this feature extraction network is introduced below.
In an optional embodiment of the present invention, the image features include the location information of the contour feature points. With reference to Fig. 3, step S206 (determining the image features of the image to be processed according to the feature extraction map and the feature descriptors of the pixels in the feature extraction map) includes the following steps:
Step S2061: determining first target pixels in the feature extraction map; the first target pixels are those first pixels whose pixel value is greater than a preset pixel threshold.

Step S2062: sorting the first target pixels by their pixel values to obtain a pixel sorting sequence.

Optionally, during sorting the first target pixels are arranged in descending order of pixel value to obtain the pixel sorting sequence.
Step S2063: performing a non-maximum suppression operation on the pixels in the pixel sorting sequence to obtain target pixels.

Step S2064: using the location information of the target pixels in the feature extraction map as the location information of the contour feature points.

Specifically, the non-maximum suppression operation is performed to reduce the complexity of the subsequent pixel matching. The location information of the finally obtained target pixels in the feature extraction map is then the location information of the contour feature points, which forms part of the image features in the present invention.
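Steps S2061 to S2064 can be sketched as follows. This is a minimal NumPy illustration, not the embodiment's implementation; the probability threshold, the suppression radius, and the function name are illustrative assumptions.

```python
import numpy as np

def extract_contour_points(heatmap, pixel_threshold=0.5, nms_radius=2):
    """Select contour feature points from a per-pixel probability map
    (the feature extraction map): threshold, sort, then suppress."""
    # Step S2061: keep only pixels whose probability exceeds the threshold.
    ys, xs = np.nonzero(heatmap > pixel_threshold)
    scores = heatmap[ys, xs]

    # Step S2062: sort the candidates by pixel value, highest first.
    order = np.argsort(-scores)
    ys, xs = ys[order], xs[order]

    # Step S2063: greedy non-maximum suppression -- a candidate is kept
    # only if no stronger, already-kept point lies within nms_radius.
    kept = []
    suppressed = np.zeros(heatmap.shape, dtype=bool)
    for y, x in zip(ys, xs):
        if suppressed[y, x]:
            continue
        kept.append((int(y), int(x)))  # Step S2064: location information
        y0, y1 = max(0, y - nms_radius), min(heatmap.shape[0], y + nms_radius + 1)
        x0, x1 = max(0, x - nms_radius), min(heatmap.shape[1], x + nms_radius + 1)
        suppressed[y0:y1, x0:x1] = True
    return kept
```

For example, two nearby peaks of probability 0.9 and 0.8 collapse to the stronger one, while a distant peak survives, so the returned locations are spread over the map rather than clustered.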
In an optional embodiment of the present invention, the image features further include the feature descriptors of the contour feature points. After the location information of the contour feature points is obtained, the method further includes:

Step S2065: among the feature descriptors of the pixels in the feature extraction map, using the feature descriptor of the pixel corresponding to the location information of a contour feature point as the feature descriptor of that contour feature point.
The above specifically describes the process of determining the image features of the image to be processed; the process of determining the pose information of the visual-inertial odometer is introduced below.
In an optional embodiment of the present invention, the image to be processed includes a first image to be processed and a second image to be processed, the first image being the previous image frame of the second image. With reference to Fig. 4, step S206 (determining the pose information of the visual-inertial odometer according to the image features) includes the following steps:

Step S2066: performing feature matching according to the image features of the first image to be processed and the image features of the second image to be processed, to obtain feature matching point pairs; each feature matching point pair contains contour feature points that match between the first image to be processed and the second image to be processed.
This specifically includes the following steps:

Step S20661: performing inner-product calculation on the second feature descriptors and the third feature descriptors to obtain an inner-product matrix, where the second feature descriptors are the feature descriptors in the image features of the first image to be processed, and the third feature descriptors are the feature descriptors in the image features of the second image to be processed.

Step S20662: determining the feature matching point pairs according to the inner-product matrix.

The specific determination process includes the following steps (1)-(7):
(1) determining, in the inner-product matrix, the minimum value of the i-th row and the minimum value of the j-th column, to obtain a first minimum value and a second minimum value respectively, where i runs from 1 to I, I being the total number of rows of the inner-product matrix, and j runs from 1 to J, J being the total number of columns of the inner-product matrix;

(2) judging whether the element corresponding to the first minimum value in the inner-product matrix and the element corresponding to the second minimum value in the inner-product matrix are the same element;

(3) if they are the same element, judging whether the first minimum value is not greater than a preset threshold, or judging whether the second minimum value is not greater than the preset threshold;

(4) if the first minimum value is not greater than the preset threshold, or the second minimum value is not greater than the preset threshold, determining, among the second feature descriptors, the feature descriptor from which the first minimum value or the second minimum value was computed, to obtain a first target feature descriptor; and determining, among the third feature descriptors, the feature descriptor from which the first minimum value or the second minimum value was computed, to obtain a second target feature descriptor;

(5) determining the contour feature point corresponding to the first target feature descriptor and the contour feature point corresponding to the second target feature descriptor as one group of matching points among the feature matching point pairs;

(6) if the element corresponding to the first minimum value in the inner-product matrix and the element corresponding to the second minimum value in the inner-product matrix are not the same element, determining that the contour feature point corresponding to the first target feature descriptor and the contour feature point corresponding to the second target feature descriptor are not a feature matching point pair;

(7) if the first minimum value is greater than the preset threshold, or the second minimum value is greater than the preset threshold, determining that the contour feature point corresponding to the first target feature descriptor and the contour feature point corresponding to the second target feature descriptor are not a feature matching point pair.
The above process is illustrated below with a specific example.

Assume the contour feature points of the first image to be processed are A, B, and C, and those of the second image to be processed are A', B', and C'. When feature matching is performed according to the image features of the two images, inner products are computed between the second feature descriptors (the feature descriptors in the image features of the first image to be processed) and the third feature descriptors (the feature descriptors in the image features of the second image to be processed). If the feature descriptors of A, B, C, A', B', and C' (assumed to be 1x2-dimensional) are respectively (a1, a2), (b1, b2), (c1, c2), (a1', a2'), (b1', b2'), and (c1', c2'), the inner-product calculation proceeds as follows.

Case one:

Suppose the minimum value of the first row is a1b1'+a2b2', the minimum value of the second row is b1a1'+b2a2', and the minimum value of the third row is c1c1'+c2c2';

and suppose the minimum value of the first column is b1a1'+b2a2', the minimum value of the second column is a1b1'+a2b2', and the minimum value of the third column is c1c1'+c2c2'.

Then the minimum value of the first row (the first minimum value) and the minimum value of the second column (the second minimum value) correspond to the same element of the inner-product matrix, namely the element a1b1'+a2b2' in the first row and second column. It is then further judged whether the first minimum value a1b1'+a2b2' is not greater than the preset threshold (assumed to be 0.1; the threshold is a value tending to 0 — only when the minimum value does not exceed it are the two feature descriptors nearly orthogonal and the corresponding contour feature points matched; the embodiments of the present invention place no specific restriction on its size), or whether the second minimum value (which in this case equals the first minimum value) is not greater than the preset threshold. If it is not greater than the preset threshold, then among the second feature descriptors the descriptor from which the first minimum value (or second minimum value) a1b1'+a2b2' was computed is determined, giving the first target feature descriptor (a1, a2); and among the third feature descriptors the descriptor from which the first minimum value (or second minimum value) a1b1'+a2b2' was computed is determined, giving the second target feature descriptor (b1', b2'). The contour feature point A corresponding to the first target feature descriptor (a1, a2) and the contour feature point B' corresponding to the second target feature descriptor (b1', b2') are then determined as one group of matching points among the feature matching point pairs.

Similarly, contour feature point B and contour feature point A' form one group of matching points among the feature matching point pairs, and contour feature point C and contour feature point C' form another.
Case two:

This illustrates the situation in step (6). If the minimum value of the first row (the first minimum value) is a1a1'+a2a2', while the minimum value of the first column (the second minimum value) is b1a1'+b2a2', then the element corresponding to the first minimum value in the inner-product matrix and the element corresponding to the second minimum value in the inner-product matrix are not the same element. It is therefore determined that the contour feature point A corresponding to the first target feature descriptor (a1, a2) and the contour feature point A' corresponding to the second target feature descriptor (a1', a2') are not a feature matching point pair, and likewise that the contour feature point B corresponding to the first target feature descriptor (b1, b2) and the contour feature point A' corresponding to the second target feature descriptor (a1', a2') are not a feature matching point pair.
Case three:

This illustrates the situation in step (7). If the first minimum value (or second minimum value) a1b1'+a2b2' in case one is greater than the preset threshold, it is determined that the contour feature point A corresponding to the first target feature descriptor (a1, a2) and the contour feature point B' corresponding to the second target feature descriptor (b1', b2') are not a feature matching point pair.
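The matching rule of steps (1)-(7) — a pair is accepted only when the row minimum and the column minimum fall on the same element of the inner-product matrix and that element does not exceed the threshold — can be sketched as follows. Note that in this scheme, as in the example above, a small (near-zero) inner product indicates a match. This is a simplified illustration; the function name and the threshold value 0.1 follow the example rather than any fixed specification.

```python
import numpy as np

def match_by_inner_product(desc1, desc2, threshold=0.1):
    """Mutual-minimum matching on the inner-product matrix: element (i, j)
    is a match when it is both the minimum of row i and the minimum of
    column j, and does not exceed the threshold."""
    m = desc1 @ desc2.T                # inner-product matrix, shape (I, J)
    row_min = np.argmin(m, axis=1)     # column index of each row's minimum
    col_min = np.argmin(m, axis=0)     # row index of each column's minimum
    matches = []
    for i, j in enumerate(row_min):
        # Steps (2)-(3): the row minimum and column minimum must be the
        # same element, and it must not exceed the preset threshold.
        if col_min[j] == i and m[i, j] <= threshold:
            matches.append((i, int(j)))
    return matches
```

With descriptors (1, 0) and (0, 1) in the first image and the same pair in the second image, the near-zero inner products pair index 0 with index 1 and vice versa, mirroring the A-to-B' and B-to-A' matches of case one.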
Step S2067: determining the three-dimensional feature points in the target area that correspond to the first feature points; the first feature points are the contour feature points of the feature matching point pairs that belong to the first image to be processed.

As an example, the three-dimensional feature points in the target area corresponding to the first feature points (which are in fact two-dimensional points) are determined from the previously built global map. After the three-dimensional feature points are obtained — since the feature matching point pairs (i.e., the correspondence between the first feature points and the second feature points, where the second feature points are the contour feature points of the matching pairs that belong to the second image to be processed) have already been determined, and the correspondence between the three-dimensional feature points and the first feature points is known — the correspondence between the second feature points (also two-dimensional points) and the three-dimensional feature points can be determined.
Step S2068: calculating, by using a camera pose estimation algorithm, on the location information of the second feature points and the three-dimensional feature points, and determining the pose information of the visual-inertial odometer according to the calculation result; the second feature points are the contour feature points of the feature matching point pairs that belong to the second image to be processed.

As an example, after the correspondence between the second feature points and the three-dimensional feature points is obtained, the camera pose estimation algorithm can be applied to the location information of the second feature points and their corresponding three-dimensional feature points, to compute the transformation matrix from the three-dimensional feature points to the second feature points, which yields the pose information of the visual-inertial odometer. The camera pose estimation algorithm may be a PnP (Perspective-n-Point) algorithm.
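As a rough illustration of what the camera pose estimation step computes, the following sketch recovers a 3x4 projection matrix from 2D-3D correspondences by the Direct Linear Transform. This is a simplified stand-in, not the PnP solver of the embodiment (a production system would typically call a dedicated implementation such as OpenCV's solvePnP); the function name and the point coordinates in the usage are invented for illustration.

```python
import numpy as np

def solve_projection_dlt(points_3d, points_2d):
    """Direct Linear Transform: recover the 3x4 projection matrix P with
    x ~ P X (up to scale) from >= 6 non-degenerate 3D-2D correspondences.
    Each correspondence contributes two linear equations in the 12
    entries of P; the solution is the SVD null-space vector."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)

# Synthetic check: project 8 non-coplanar points with a known pose
# P = [I | t], then verify the recovered matrix reprojects them exactly.
P_true = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [2.0]])])
pts3 = np.array([[0, 0, 1], [1, 0, 2], [0, 1, 3], [1, 1, 1],
                 [2, 0, 2], [0, 2, 1], [1, 2, 3], [2, 2, 2]], float)
homog = np.hstack([pts3, np.ones((len(pts3), 1))])
proj = homog @ P_true.T
pts2 = proj[:, :2] / proj[:, 2:3]

P_hat = solve_projection_dlt(pts3, pts2)
reproj = homog @ P_hat.T
assert np.allclose(reproj[:, :2] / reproj[:, 2:3], pts2, atol=1e-6)
```

The recovered matrix is determined only up to scale; dividing by the third homogeneous coordinate during reprojection cancels that scale, which is why the assertion holds without normalizing P_hat.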
In an optional embodiment of the present invention, the feature descriptors are floating-point numbers. In terms of accuracy this is a large improvement over the ORB operator (a binary descriptor produced by the ORB algorithm) commonly used in conventional visual simultaneous localization and mapping systems.
The training process of the feature extraction network is described in detail below.

In an optional embodiment of the present invention, the method further includes:

Step S301: obtaining training sample images;

Step S302: labeling the training sample images by using a feature point detection network obtained by prior training, to obtain the location information of the contour feature points of the target objects in the training sample images.
The training process of the feature point detection network includes the following steps:

Step S3021: obtaining synthetic training sample images, where the synthetic training sample images contain target objects.

As an example, the synthetic training sample images may be obtained by rendering common geometric figures such as cubes, triangles, and line segments, and the data volume of the synthetic training sample images may be expanded by scaling and applying homographic transformations to the rendered images. All of the synthetic training sample images contain obvious geometric features.

Step S3022: labeling the contour feature points of the target objects in the synthetic training sample images, to obtain the location information of the contour feature points.

Step S3023: training an initial feature point detection network with the synthetic training sample images and the location information of the contour feature points, to obtain the feature point detection network.

The resulting feature point detection network is sensitive to geometric features, i.e., to contour feature points: whenever an image is input, the network outputs the location information of the contour feature points of the target object in that image.
Step S303: transforming the training sample images by a preset transformation matrix, to obtain transformed training sample images.

Specifically, a homographic transformation may be applied to the training sample images by the preset transformation matrix, to obtain the transformed training sample images.

Step S304: training an initial feature extraction network with the training sample images, the transformed training sample images, the location information of the contour feature points, and the preset transformation matrix, to obtain the feature extraction network. Specifically, the location information of the contour feature points together with the preset transformation matrix may be referred to as the ground-truth pair.
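The role of the preset transformation matrix in the ground-truth pair can be sketched as follows: the labeled contour-point locations are mapped through the same homography that produced the transformed training sample image, giving ground-truth point locations for the transformed image. This is a minimal sketch; the function name and the example matrix (a pure translation) are invented for illustration.

```python
import numpy as np

def warp_points(points, H):
    """Map 2-D point locations through a 3x3 homography H:
    lift to homogeneous coordinates, apply H, then dehomogenize."""
    pts = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    warped = pts @ H.T
    return warped[:, :2] / warped[:, 2:3]
```

For instance, under the translation homography H = [[1, 0, 3], [0, 1, -2], [0, 0, 1]], a labeled contour point at (1, 1) in the original training sample image has ground-truth location (4, -1) in the transformed image.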
The feature extraction method of the present invention has the following advantages:

1. During the training of the feature extraction network, in order to reduce the burden of manual labeling, an annotation tool — the feature point detection network — is first trained with simple synthetic training sample images. Labeling the contour feature points of the target objects in the training sample images with this tool is simple, convenient, and accurate.

2. The feature extraction network is sensitive to geometric boundaries, i.e., to the contour points of the target object, so a large number of uniformly distributed contour feature points can be obtained. This avoids the problem of traditional extraction methods, in which the extracted feature points concentrate in a small region, partial dimensions are lost, and a robust transformation matrix cannot be obtained. At the same time, because the points are uniformly distributed, tracking loss caused by abrupt position changes during camera motion tracking is avoided as far as possible; only a small overlapping region is needed to track the camera motion successfully.

3. The feature descriptors are floating-point numbers, which in terms of accuracy is a great improvement over the ORB operator commonly used in conventional visual simultaneous localization and mapping systems.

4. When matching feature points, the inner product of the feature descriptors replaces the Euclidean distance used in conventional visual simultaneous localization and mapping systems, which reduces the floating-point calculation steps and simplifies the calculation process.

5. The image features are more robust and usable in more scenes, such as under abrupt illumination changes, seasonal changes, and day-and-night changes. Since the image features lean toward geometric contours, the visual simultaneous localization and mapping system can still obtain robust results when the ambient illumination changes drastically.
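Advantage 4 can be checked numerically: for unit-norm floating-point descriptors, ranking by inner product carries the same information as ranking by Euclidean distance, since ||a - b||^2 = 2 - 2(a . b), while skipping the subtraction and square-root steps. This is a small sketch under the assumption that descriptors are L2-normalized, which the embodiment does not state explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=256)
a /= np.linalg.norm(a)  # L2-normalize so ||a|| = 1
b = rng.normal(size=256)
b /= np.linalg.norm(b)  # L2-normalize so ||b|| = 1

# Identity for unit-norm descriptors: ||a - b||^2 = 2 - 2 * (a . b),
# so a larger inner product corresponds to a smaller Euclidean distance.
lhs = np.sum((a - b) ** 2)
rhs = 2.0 - 2.0 * np.dot(a, b)
assert np.isclose(lhs, rhs)
```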
Embodiment 3:

An embodiment of the present invention further provides a feature extraction device, which is mainly used to execute the feature extraction method provided in the foregoing content of the embodiments of the present invention. The feature extraction device provided by the embodiment of the present invention is specifically introduced below.

Fig. 5 is a schematic diagram of a feature extraction device according to an embodiment of the present invention. As shown in Fig. 5, the feature extraction device mainly includes an acquiring unit 10, a feature extraction unit 20, and a determination unit 30, in which:
the acquiring unit is configured to obtain the image to be processed, obtained by photographing the target area while the visual-inertial odometer moves within it;

the feature extraction unit is configured to perform feature extraction on the image to be processed, to obtain the feature extraction map of the image to be processed and the feature descriptors of the pixels in the feature extraction map; the pixel value of a first pixel in the feature extraction map indicates the probability that its corresponding pixel in the image to be processed is a contour feature point of the target object in the image to be processed, and a feature descriptor represents the image-patch information of its corresponding first pixel;

the determination unit is configured to determine the image features of the image to be processed according to the feature extraction map and the feature descriptors of the pixels in the feature extraction map, so as to determine the pose information of the visual-inertial odometer according to the image features; the image features include the location information of the contour feature points and/or the feature descriptors of the contour feature points.
In the embodiments of the present invention, first, the image to be processed, obtained by photographing the target area while the visual-inertial odometer moves within it, is acquired; then, feature extraction is performed on the image to be processed to obtain its feature extraction map and the feature descriptors of the pixels in the feature extraction map; finally, the image features of the image to be processed are determined from the feature extraction map and the feature descriptors, so that the pose information of the visual-inertial odometer can be determined from the image features. The image features include the location information of the contour feature points of the target object and/or the feature descriptors of those points. As can be seen from the above, in the embodiments of the present invention the image features ultimately extracted from the image to be processed are the contour feature points of the target object, so a large number of uniformly distributed contour feature points can be obtained. This avoids the problem of traditional extraction methods, in which the extracted feature points concentrate in a small region and partial dimensions are lost. At the same time, robust image features can still be obtained when the environment changes drastically, and stability is good, which alleviates the technical problem that the image features extracted by existing feature extraction algorithms have poor stability and poor accuracy.
Optionally, the image features include the location information of the contour feature points, and the determination unit is further configured to: determine first target pixels in the feature extraction map, the first target pixels being those first pixels whose pixel value is greater than a preset pixel threshold; sort the first target pixels by their pixel values to obtain a pixel sorting sequence; perform a non-maximum suppression operation on the pixels in the pixel sorting sequence to obtain target pixels; and use the location information of the target pixels in the feature extraction map as the location information of the contour feature points.

Optionally, the image features further include the feature descriptors of the contour feature points, and the determination unit is further configured to: among the feature descriptors of the pixels in the feature extraction map, use the feature descriptor of the pixel corresponding to the location information of a contour feature point as the feature descriptor of that contour feature point.
Optionally, the image to be processed includes a first image to be processed and a second image to be processed, the first image being the previous image frame of the second image, and the determination unit is further configured to: perform feature matching according to the image features of the first image to be processed and the image features of the second image to be processed, to obtain feature matching point pairs, each feature matching point pair containing contour feature points that match between the first image to be processed and the second image to be processed; determine the three-dimensional feature points in the target area that correspond to the first feature points, the first feature points being the contour feature points of the feature matching point pairs that belong to the first image to be processed; and calculate, by using a camera pose estimation algorithm, on the location information of the second feature points and the three-dimensional feature points, and determine the pose information of the visual-inertial odometer according to the calculation result, the second feature points being the contour feature points of the feature matching point pairs that belong to the second image to be processed.
Optionally, the determination unit is further configured to: perform inner-product calculation on the second feature descriptors and the third feature descriptors to obtain an inner-product matrix, where the second feature descriptors are the feature descriptors in the image features of the first image to be processed, and the third feature descriptors are the feature descriptors in the image features of the second image to be processed; and determine the feature matching point pairs according to the inner-product matrix.

Optionally, the determination unit is further configured to determine the feature matching point pairs according to the inner-product matrix by: determining, in the inner-product matrix, the minimum value of the i-th row and the minimum value of the j-th column, to obtain a first minimum value and a second minimum value respectively, where i runs from 1 to I, I being the total number of rows of the inner-product matrix, and j runs from 1 to J, J being the total number of columns of the inner-product matrix; judging whether the element corresponding to the first minimum value in the inner-product matrix and the element corresponding to the second minimum value in the inner-product matrix are the same element; if so, judging whether the first minimum value is not greater than a preset threshold, or whether the second minimum value is not greater than the preset threshold; if so, determining, among the second feature descriptors, the feature descriptor from which the first minimum value or the second minimum value was computed, to obtain a first target feature descriptor, and determining, among the third feature descriptors, the feature descriptor from which the first minimum value or the second minimum value was computed, to obtain a second target feature descriptor; and determining the contour feature point corresponding to the first target feature descriptor and the contour feature point corresponding to the second target feature descriptor as one group of matching points among the feature matching point pairs.

Optionally, the determination unit is further configured to: if the element corresponding to the first minimum value in the inner-product matrix and the element corresponding to the second minimum value in the inner-product matrix are not the same element, determine that the contour feature point corresponding to the first target feature descriptor and the contour feature point corresponding to the second target feature descriptor are not a feature matching point pair.

Optionally, the determination unit is further configured to: if the first minimum value is greater than the preset threshold, or the second minimum value is greater than the preset threshold, determine that the contour feature point corresponding to the first target feature descriptor and the contour feature point corresponding to the second target feature descriptor are not a feature matching point pair.
Optionally, the feature descriptors are floating-point numbers.

Optionally, the feature extraction unit is further configured to: perform feature extraction on the image to be processed by using a feature extraction network, to obtain the feature extraction map of the image to be processed and the feature descriptors of the pixels in the feature extraction map.

Optionally, the device is further configured to: obtain training sample images; label the training sample images by using a feature point detection network obtained by prior training, to obtain the location information of the contour feature points of the target objects in the training sample images; transform the training sample images by a preset transformation matrix, to obtain transformed training sample images; and train an initial feature extraction network with the training sample images, the transformed training sample images, the location information of the contour feature points, and the preset transformation matrix, to obtain the feature extraction network.

Optionally, the device is further configured to: obtain synthetic training sample images, where the synthetic training sample images contain target objects; label the contour feature points of the target objects in the synthetic training sample images, to obtain the location information of the contour feature points; and train an initial feature point detection network with the synthetic training sample images and the location information of the contour feature points, to obtain the feature point detection network.
The implementation principle and technical effects of the feature extraction device provided by the embodiment of the present invention are the same as those of the foregoing method embodiment in Embodiment 2. For brevity, for anything not mentioned in this device embodiment, reference may be made to the corresponding content of the foregoing method embodiment.

In another embodiment, a computer-readable medium having non-volatile program code executable by a processor is further provided, the program code causing the processor to execute the steps of the method described in any embodiment of Embodiment 2 above.
In addition, in the description of the embodiment of the present invention unless specifically defined or limited otherwise, term " installation ", " phase Even ", " connection " shall be understood in a broad sense, for example, it may be being fixedly connected, may be a detachable connection, or be integrally connected;It can To be mechanical connection, it is also possible to be electrically connected;It can be directly connected, can also can be indirectly connected through an intermediary Connection inside two elements.For the ordinary skill in the art, above-mentioned term can be understood at this with concrete condition Concrete meaning in invention.
In the description of the present invention, it should be noted that terms indicating orientation or positional relationships, such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", and "outer", are based on the orientations or positional relationships shown in the drawings, and are used merely to facilitate and simplify the description of the present invention, rather than to indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the present invention. In addition, the terms "first", "second", and "third" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance.
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and other divisions are possible in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the embodiments described above are merely specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with this technical field may, within the technical scope disclosed by the present invention, still modify the technical solutions recorded in the foregoing embodiments, readily conceive of variations, or replace some of the technical features with equivalents; such modifications, variations, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (15)

1. A method of feature extraction, characterized in that the method comprises:
obtaining an image to be processed, captured of a target area by a visual-inertial odometer while the visual-inertial odometer moves in the target area;
performing feature extraction on the image to be processed to obtain a feature extraction map of the image to be processed and feature descriptors of pixels in the feature extraction map, wherein the pixel value of a first pixel in the feature extraction map indicates the probability that the corresponding pixel in the image to be processed is a contour feature point of a target object in the image to be processed, and a feature descriptor indicates image patch information of its corresponding first pixel;
determining image features of the image to be processed according to the feature extraction map and the feature descriptors of pixels in the feature extraction map, so as to determine pose information of the visual-inertial odometer according to the image features; the image features comprise the location information of the contour feature points and/or the feature descriptors of the contour feature points.
2. The method according to claim 1, characterized in that the image features comprise the location information of the contour feature points;
and determining the image features of the image to be processed according to the feature extraction map and the feature descriptors of pixels in the feature extraction map comprises:
determining first target pixels in the feature extraction map, the first target pixels being those first pixels whose pixel values are greater than a preset pixel threshold;
sorting the first target pixels by their pixel values to obtain a pixel sorting sequence;
performing a non-maximum suppression operation on the pixels in the pixel sorting sequence to obtain target pixels;
using the locations of the target pixels in the feature extraction map as the location information of the contour feature points.
3. The method according to claim 2, characterized in that the image features further comprise the feature descriptors of the contour feature points;
and after the location information of the contour feature points is obtained, the method further comprises:
using, among the feature descriptors of pixels in the feature extraction map, the feature descriptor of the pixel corresponding to the location information of a contour feature point as the feature descriptor of that contour feature point.
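Read together, claims 2 and 3 describe a conventional keypoint-selection pipeline: threshold the probability map, sort the survivors by score, suppress non-maxima, then read each surviving point's descriptor out of the descriptor map. A minimal NumPy sketch of these steps, not part of the patent disclosure, assuming an (H, W) probability map and an (H, W, D) descriptor map; the threshold value, the NMS radius, and the greedy Chebyshev-distance suppression are illustrative assumptions, since the claims fix none of them:

```python
import numpy as np

def select_contour_points(prob_map, desc_map, pixel_thresh=0.5, nms_radius=2):
    """Sketch of claims 2-3: pick contour feature points and their descriptors.

    prob_map: (H, W) probability that each pixel is a contour feature point.
    desc_map: (H, W, D) per-pixel feature descriptors.
    Returns (points, descriptors).
    """
    # Claim 2, step 1: keep the "first target pixels" above the preset threshold.
    ys, xs = np.nonzero(prob_map > pixel_thresh)
    scores = prob_map[ys, xs]

    # Claim 2, step 2: sort candidates by pixel value, strongest first.
    order = np.argsort(-scores)
    ys, xs = ys[order], xs[order]

    # Claim 2, step 3: greedy non-maximum suppression - a candidate is dropped
    # if a stronger, already-accepted point lies within nms_radius of it.
    kept = []
    for y, x in zip(ys, xs):
        if all(max(abs(y - ky), abs(x - kx)) > nms_radius for ky, kx in kept):
            kept.append((int(y), int(x)))

    # Claim 3: the descriptor of a contour feature point is the descriptor of
    # the pixel at its location.
    descs = np.array([desc_map[y, x] for y, x in kept])
    return kept, descs
```

For example, two candidates one pixel apart collapse to the stronger one under a radius of 2, while a distant candidate survives independently.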
4. The method according to claim 1, characterized in that the image to be processed comprises a first image to be processed and a second image to be processed, the first image to be processed being the previous image frame of the second image to be processed;
and determining the pose information of the visual-inertial odometer according to the image features comprises:
performing feature matching on the image features of the first image to be processed and the image features of the second image to be processed to obtain feature matching point pairs, each feature matching point pair comprising contour feature points that match between the first image to be processed and the second image to be processed;
determining three-dimensional feature points in the target area corresponding to first feature points, the first feature points being the contour feature points of the feature matching point pairs that are contained in the first image to be processed;
performing a calculation on the location information of second feature points and the three-dimensional feature points using a camera pose estimation algorithm, and determining the pose information of the visual-inertial odometer according to the calculation result, the second feature points being the contour feature points of the feature matching point pairs that are contained in the second image to be processed.
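The last step of claim 4, estimating the camera pose from the 2D second feature points and their 3D counterparts in the target area, is the classic Perspective-n-Point (PnP) problem; the claim does not name a particular solver (in practice a RANSAC-wrapped PnP routine is common). The sketch below, not part of the patent disclosure, shows only the pinhole reprojection model that any such solver minimizes over; the intrinsic matrix and the world-to-camera pose parameterization are illustrative assumptions:

```python
import numpy as np

def project(points_3d, R, t, K):
    """Pinhole projection of 3D feature points into the second image.

    points_3d: (N, 3) three-dimensional feature points in the target area.
    R, t: candidate camera pose (world-to-camera rotation and translation).
    K: (3, 3) camera intrinsic matrix.
    Returns (N, 2) predicted pixel coordinates.
    """
    cam = points_3d @ R.T + t          # world frame -> camera frame
    uvw = cam @ K.T                    # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]    # perspective division

def reprojection_error(points_2d, points_3d, R, t, K):
    """Mean pixel distance that a PnP solver drives toward zero (claim 4 sketch)."""
    diff = project(points_3d, R, t, K) - points_2d
    return float(np.mean(np.linalg.norm(diff, axis=1)))
```

At the true pose the reprojection error of the second feature points vanishes; a pose solver searches R and t to minimize exactly this quantity.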
5. The method according to claim 4, characterized in that performing feature matching on the image features of the first image to be processed and the image features of the second image to be processed to obtain the feature matching point pairs comprises:
performing an inner product calculation on second feature descriptors and third feature descriptors to obtain an inner product calculation matrix, wherein the second feature descriptors are the feature descriptors in the image features of the first image to be processed and the third feature descriptors are the feature descriptors in the image features of the second image to be processed;
determining the feature matching point pairs according to the inner product calculation matrix.
6. The method according to claim 5, characterized in that determining the feature matching point pairs according to the inner product calculation matrix comprises:
determining the minimum value of the i-th row and the minimum value of the j-th column in the inner product calculation matrix to obtain a first minimum value and a second minimum value respectively, wherein i successively takes 1 to I, I indicating the total number of rows in the inner product calculation matrix, and j successively takes 1 to J, J indicating the total number of columns in the inner product calculation matrix;
judging whether the element corresponding to the first minimum value in the inner product calculation matrix and the element corresponding to the second minimum value in the inner product calculation matrix are the same element;
if so, judging whether the first minimum value is not greater than a preset threshold, or judging whether the second minimum value is not greater than the preset threshold;
if so, determining, among the second feature descriptors, the feature descriptor from which the first minimum value or the second minimum value is calculated, to obtain a first target feature descriptor, and determining, among the third feature descriptors, the feature descriptor from which the first minimum value or the second minimum value is calculated, to obtain a second target feature descriptor;
determining the contour feature point corresponding to the first target feature descriptor and the contour feature point corresponding to the second target feature descriptor as one group of feature matching points among the feature matching point pairs.
7. The method according to claim 6, characterized in that the method further comprises:
if the element corresponding to the first minimum value in the inner product calculation matrix and the element corresponding to the second minimum value in the inner product calculation matrix are not the same element, determining that the contour feature point corresponding to the first target feature descriptor and the contour feature point corresponding to the second target feature descriptor do not form a feature matching point pair.
8. The method according to claim 6, characterized in that the method further comprises:
if the first minimum value is greater than the preset threshold, or the second minimum value is greater than the preset threshold, determining that the contour feature point corresponding to the first target feature descriptor and the contour feature point corresponding to the second target feature descriptor do not form a feature matching point pair.
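Claims 5 through 8 together describe a mutual-minimum matching rule over the inner product calculation matrix: a pair is kept only when the same element is both its row minimum and its column minimum, and does not exceed the preset threshold (note that the claims treat the inner product as a distance-like score, with smaller meaning more similar). A minimal NumPy sketch under those rules, not part of the patent disclosure; the descriptor shapes and the threshold value are illustrative:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, thresh):
    """Mutual-minimum descriptor matching (sketch of claims 5-8).

    desc_a: (I, D) second feature descriptors (first image to be processed).
    desc_b: (J, D) third feature descriptors (second image to be processed).
    A pair (i, j) is accepted only if M[i, j] is simultaneously the minimum of
    row i and of column j (claims 6-7), and does not exceed thresh (claim 8).
    Returns a list of (i, j) index pairs.
    """
    M = desc_a @ desc_b.T                  # inner product calculation matrix
    matches = []
    for i in range(M.shape[0]):
        j = int(np.argmin(M[i]))           # first minimum: minimum of row i
        if int(np.argmin(M[:, j])) != i:   # mutual check: column minimum must
            continue                       # land on the same element
        if M[i, j] <= thresh:              # preset threshold check
            matches.append((i, j))
    return matches
```

Tightening the threshold discards even mutually-minimal pairs, which is exactly the rejection case claim 8 spells out.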
9. The method according to claim 1, characterized in that the feature descriptors are floating-point numbers.
10. The method according to claim 1, characterized in that performing feature extraction on the image to be processed to obtain the feature extraction map of the image to be processed and the feature descriptors of pixels in the feature extraction map comprises:
performing feature extraction on the image to be processed using a feature extraction network to obtain the feature extraction map of the image to be processed and the feature descriptors of pixels in the feature extraction map.
11. The method according to claim 10, characterized in that the method further comprises:
obtaining a training sample image;
annotating the training sample image using a pre-trained feature point detection network to obtain the location information of the contour feature points of the target object in the training sample image;
transforming the training sample image by a preset transformation matrix to obtain a transformed training sample image;
training an original feature extraction network with the training sample image, the transformed training sample image, the location information of the contour feature points, and the preset transformation matrix to obtain the feature extraction network.
12. The method according to claim 11, characterized in that the method further comprises:
obtaining a synthetic training sample image, wherein the synthetic training sample image contains a target object;
annotating the contour feature points of the target object in the synthetic training sample image to obtain the location information of the contour feature points;
training an original feature point detection network with the synthetic training sample image and the location information of the contour feature points to obtain the feature point detection network.
13. A device for feature extraction, characterized in that the device comprises:
an acquiring unit, configured to obtain an image to be processed, captured of a target area by a visual-inertial odometer while the visual-inertial odometer moves in the target area;
a feature extraction unit, configured to perform feature extraction on the image to be processed to obtain a feature extraction map of the image to be processed and feature descriptors of pixels in the feature extraction map, wherein the pixel value of a first pixel in the feature extraction map indicates the probability that the corresponding pixel in the image to be processed is a contour feature point of a target object in the image to be processed, and a feature descriptor indicates image patch information of its corresponding first pixel;
a determination unit, configured to determine image features of the image to be processed according to the feature extraction map and the feature descriptors of pixels in the feature extraction map, so as to determine pose information of the visual-inertial odometer according to the image features; the image features comprise the location information of the contour feature points and/or the feature descriptors of the contour feature points.
14. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 12.
15. A computer-readable medium having non-volatile program code executable by a processor, characterized in that the program code causes the processor to execute the steps of the method according to any one of claims 1 to 12.
CN201910124316.4A 2019-02-18 2019-02-18 Method, apparatus, electronic equipment and the computer storage medium of feature extraction Pending CN109948624A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910124316.4A CN109948624A (en) 2019-02-18 2019-02-18 Method, apparatus, electronic equipment and the computer storage medium of feature extraction


Publications (1)

Publication Number Publication Date
CN109948624A true CN109948624A (en) 2019-06-28

Family

ID=67008081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910124316.4A Pending CN109948624A (en) 2019-02-18 2019-02-18 Method, apparatus, electronic equipment and the computer storage medium of feature extraction

Country Status (1)

Country Link
CN (1) CN109948624A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862319A (en) * 2017-11-19 2018-03-30 桂林理工大学 A kind of heterologous high score optical image matching error elimination method based on neighborhood ballot
CN109029433A (en) * 2018-06-28 2018-12-18 东南大学 Join outside the calibration of view-based access control model and inertial navigation fusion SLAM on a kind of mobile platform and the method for timing


Non-Patent Citations (3)

Title
DANIEL DETONE ET AL.: "Self-Improving Visual Odometry", arXiv *
DANIEL DETONE ET AL.: "SuperPoint: Self-Supervised Interest Point Detection and Description", arXiv *
AN ZHANFU: "Research on Self-Localization Algorithms for Mobile Robots Fusing Laser and RGB-D Cameras", China Master's Theses Full-Text Database, Information Science and Technology *

Cited By (8)

Publication number Priority date Publication date Assignee Title
CN110473259A (en) * 2019-07-31 2019-11-19 深圳市商汤科技有限公司 Pose determines method and device, electronic equipment and storage medium
WO2021017358A1 (en) * 2019-07-31 2021-02-04 深圳市商汤科技有限公司 Pose determination method and apparatus, electronic device, and storage medium
TWI753348B (en) * 2019-07-31 2022-01-21 大陸商深圳市商湯科技有限公司 Pose determination method, pose determination device, electronic device and computer readable storage medium
CN113033576A (en) * 2019-12-25 2021-06-25 阿里巴巴集团控股有限公司 Image local feature extraction method, image local feature extraction model training method, image local feature extraction equipment and storage medium
CN113033576B (en) * 2019-12-25 2024-04-05 阿里巴巴集团控股有限公司 Image local feature extraction and model training method, device and storage medium
CN111401385A (en) * 2020-03-19 2020-07-10 成都理工大学 Similarity calculation method for image local topological structure feature descriptors
CN111401385B (en) * 2020-03-19 2022-06-17 成都理工大学 Similarity calculation method for image local topological structure feature descriptors
CN111966859A (en) * 2020-08-27 2020-11-20 司马大大(北京)智能系统有限公司 Video data processing method and device and readable storage medium

Similar Documents

Publication Publication Date Title
WO2022002150A1 (en) Method and device for constructing visual point cloud map
CN108960211B (en) Multi-target human body posture detection method and system
CN103839277B (en) A kind of mobile augmented reality register method of outdoor largescale natural scene
CN109948624A (en) Method, apparatus, electronic equipment and the computer storage medium of feature extraction
EP3113114B1 (en) Image processing method and device
CN103854283B (en) A kind of mobile augmented reality Tracing Registration method based on on-line study
CN107990899A (en) A kind of localization method and system based on SLAM
Liu et al. RDMO-SLAM: Real-time visual SLAM for dynamic environments using semantic label prediction with optical flow
CN110986969B (en) Map fusion method and device, equipment and storage medium
CN109816769A (en) Scene based on depth camera ground drawing generating method, device and equipment
CN107329962B (en) Image retrieval database generation method, and method and device for enhancing reality
CN109584302A (en) Camera pose optimization method, device, electronic equipment and computer-readable medium
CN111047626A (en) Target tracking method and device, electronic equipment and storage medium
CN110264495A (en) A kind of method for tracking target and device
WO2021051868A1 (en) Target location method and apparatus, computer device, computer storage medium
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
CN110222572A (en) Tracking, device, electronic equipment and storage medium
Liu et al. A SLAM-based mobile augmented reality tracking registration algorithm
WO2023087758A1 (en) Positioning method, positioning apparatus, computer-readable storage medium, and computer program product
WO2023284358A1 (en) Camera calibration method and apparatus, electronic device, and storage medium
Bu et al. Semi-direct tracking and mapping with RGB-D camera for MAV
CN113516697B (en) Image registration method, device, electronic equipment and computer readable storage medium
Yao et al. Dynamicbev: Leveraging dynamic queries and temporal context for 3d object detection
Huang et al. A low-dimensional binary-based descriptor for unknown satellite relative pose estimation
CN107220588A (en) A kind of real-time gesture method for tracing based on cascade deep neutral net

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190628