CN109271995A - High-precision image matching method and system - Google Patents

High-precision image matching method and system

Info

Publication number
CN109271995A
Authority
CN
China
Prior art keywords
point
match
image
match point
interest label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710584412.8A
Other languages
Chinese (zh)
Inventor
张文星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Careland Technology Co Ltd
Original Assignee
Shenzhen Careland Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Careland Technology Co Ltd
Priority to CN201710584412.8A
Publication of CN109271995A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components, by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a high-precision image matching method and system, relating to the field of data processing and in particular to map data processing and production. The disclosed high-precision image matching method comprises: determining the point-of-interest label regions in a target image and a comparison image; performing feature extraction on the point-of-interest label regions; matching the extracted features to obtain match points; and performing image matching according to the distribution positions of the match points and/or the number of match points. Through optimized algorithm design, the invention improves image matching precision, so that redundant images can be removed and the production efficiency of maps, especially high-precision electronic maps, is improved.

Description

High-precision image matching method and system
Technical field
The present invention relates to the field of data processing, and in particular to map data processing and production.
Background art
An electronic map, i.e. a digital map, is a map that is stored and consulted digitally by means of computer technology. Electronic map data generally comprises roads, points of interest, road networks, and so on. An electronic map is a system combining cartography and its applications: it is a map generated and controlled by a computer, a screen map based on digital cartographic data, and a visual rendering of the real scene. "Visualization on the computer screen" is the basic feature of an electronic map.
Electronic maps have the following six characteristics:
1. They can be retrieved and displayed quickly.
2. They can be animated.
3. Map elements can be displayed in layers.
4. Using virtual reality technology, the map can be made three-dimensional and dynamic, giving the user an immersive sense of being on the scene.
5. An electronic map can be transmitted elsewhere using data transmission technology.
6. Lengths, angles, areas, and so on can be measured automatically on the map.
Electronic map data is generally acquired by field data collectors, for example by photographing or filming the acquisition target, and is then processed by in-office data processing staff. To ensure that the collected target is captured accurately and completely, multiple pictures of the acquisition target are usually taken during field acquisition.
During map data processing, because each acquisition target has several pictures, the in-office data processing staff would have to process every image unless the repeated, redundant images were removed, which would cause a huge waste of in-office resources. Removing repeated, redundant images requires a high-precision image matching method that can identify repeated images containing the same acquisition target. Future intelligent driving will rely heavily on high-precision electronic maps, and the production of high-precision electronic maps likewise needs a high-precision image matching method to remove redundant images and improve the efficiency of high-precision electronic map production.
Summary of the invention
The object of the present invention is to provide a high-precision image matching method and system, so as to improve the precision of image matching, remove redundant images, and improve the production efficiency of maps, especially high-precision electronic maps.
The present invention provides a high-precision image matching method, comprising:
determining the point-of-interest label regions in a target image and a comparison image;
performing feature extraction on the point-of-interest label regions;
matching the extracted features to obtain match points;
performing image matching according to the distribution positions of the match points and/or the number of match points.
The present invention also provides a high-precision image matching system, comprising:
a point-of-interest label region determination unit, configured to determine the point-of-interest label regions in a target image and a comparison image;
a feature extraction unit, configured to perform feature extraction on the point-of-interest label regions;
a feature matching unit, configured to match the extracted features to obtain match points;
an image matching unit, configured to perform image matching according to the distribution positions of the match points and/or the number of match points.
Through optimized algorithm design, the present invention improves image matching precision, so that redundant images can be removed and the production efficiency of maps, especially high-precision electronic maps, is improved.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of the high-precision image matching method provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of the high-precision image matching system provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The embodiments of the present invention are described in further detail below with reference to the drawings.
During map data processing, a freshly acquired image (the target image) is compared and matched against the massive image data in the database (the comparison images); an image that matches successfully is judged to be a repeated image containing the same acquisition target. The comparison may be performed pairwise between the target image and each comparison image, or the target image may be compared against the whole set of comparison images at once.
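By way of illustration only, the pairwise comparison described above could be driven by a routine like the Python sketch below, where match_images is a hypothetical callable that wraps steps 101 to 104 of Embodiment one and returns True when two images match successfully; it is an assumption about the surrounding workflow, not part of the disclosed method itself.
```python
def find_repeated_images(target_image, comparison_images, match_images):
    """Return the comparison images judged to repeat the target image.

    match_images(a, b) is a hypothetical callable wrapping the matching
    steps 101-104 below; it returns True on a successful image match.
    """
    return [img for img in comparison_images
            if match_images(target_image, img)]
```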
Embodiment one
An embodiment of the present invention provides a high-precision image matching method. The specific implementation process is shown in Fig. 1 and comprises the following steps:
101. Determine the point-of-interest label regions in the target image and the comparison image. A point of interest (POI) is a term in geographic information systems and refers to any geographic object that can be abstracted as a point, especially geographic entities closely related to people's daily lives, such as schools, banks, restaurants, gas stations, hospitals, and supermarkets. A point-of-interest label region is a region of the image that carries point-of-interest information; if the point of interest is a restaurant, the point-of-interest label region is the region of the image containing the restaurant's name. Specifically: first, the MSER algorithm (Maximally Stable Extremal Regions) is used to coarsely locate the point-of-interest label regions in the image, and then HOG features (Histogram of Oriented Gradients) and an SVM (Support Vector Machine) are used to precisely locate the detection target.
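As an illustrative, non-limiting sketch of this two-stage localisation, the Python/OpenCV code below generates coarse candidate boxes with MSER and keeps only those accepted by a pre-trained HOG + SVM classifier; the window size, the "label 1 = point-of-interest label region" convention, and the existence of a pre-trained classifier are assumptions, not values taken from this disclosure.
```python
import cv2
import numpy as np

def locate_poi_label_regions(image_bgr, hog, svm, win_size=(96, 32)):
    """Coarse MSER proposals refined by an HOG + SVM classifier.

    hog is a cv2.HOGDescriptor and svm a trained cv2.ml SVM whose training
    window matches win_size; both are assumed to be prepared elsewhere.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    mser = cv2.MSER_create()
    _, bboxes = mser.detectRegions(gray)          # coarse candidate boxes (x, y, w, h)
    regions = []
    for (x, y, w, h) in bboxes:
        patch = cv2.resize(gray[y:y + h, x:x + w], win_size)
        feat = hog.compute(patch).reshape(1, -1).astype(np.float32)
        _, label = svm.predict(feat)              # classify: label region vs. background
        if int(label[0, 0]) == 1:
            regions.append((x, y, w, h))
    return regions
```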
102. Perform feature extraction on the point-of-interest label regions determined in step 101. Specifically, the Root-SIFT algorithm (Root Scale-Invariant Feature Transform) can be used for feature extraction. This comprises: first, constructing a scale space for the images to be compared, i.e. convolving the images with difference-of-Gaussian kernels at different scales; second, comparing each sample point with all points around it to find the extreme points of the difference-of-Gaussian space, and removing pixels whose local curvature in the Gaussian convolution space is highly asymmetric by fitting a three-dimensional quadratic function; third, using the gradient direction distribution of the pixels in the neighbourhood of each keypoint to assign an orientation parameter to the keypoint, which gives the operator rotational invariance, and computing the 128-dimensional direction-parameter vector that constitutes the keypoint's feature descriptor. The descriptor is normalized and the element-wise square root is taken to obtain the final descriptive feature.
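The Root-SIFT extraction can be sketched as follows in Python/OpenCV: standard SIFT keypoints and 128-dimensional descriptors are computed, each descriptor is L1-normalised, and the element-wise square root is taken (the usual RootSIFT recipe). The use of cv2.SIFT_create assumes OpenCV 4.4 or later, and the epsilon value is an illustrative choice.
```python
import cv2
import numpy as np

def root_sift_features(gray_region, eps=1e-7):
    """SIFT keypoints and descriptors converted to RootSIFT descriptors."""
    sift = cv2.SIFT_create()
    keypoints, desc = sift.detectAndCompute(gray_region, None)
    if desc is None:                                  # no keypoints found
        return keypoints, None
    desc /= (desc.sum(axis=1, keepdims=True) + eps)   # L1 normalisation
    desc = np.sqrt(desc)                              # element-wise square root
    return keypoints, desc
```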
103. Match the features extracted in step 102 to obtain match points. The descriptors of the scale images of the feature-extracted images are matched against each other; when the distance between two descriptors is less than a threshold, the pair is judged to be a successful match and a match point is obtained.
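A minimal sketch of this matching step, assuming RootSIFT descriptors from the previous sketch and a brute-force L2 matcher; the distance threshold is an illustrative value, since no concrete threshold is specified here.
```python
import cv2

def match_descriptors(desc_target, desc_compare, max_distance=0.3):
    """Keep descriptor pairs whose L2 distance is below the threshold."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.match(desc_target, desc_compare)
    return [m for m in matches if m.distance < max_distance]
```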
104. Perform image matching according to the distribution positions of the match points and/or the number of match points. This comprises: judging whether the images match according to the distribution positions of the match points and/or the number of match points; if the spatial distributions of the match points are consistent and the number of match points reaches a preset value, the images are confirmed as matching successfully. The number of match points is one of the following: the number of all match points; the number of match points after removing match points within a preset distance of the point-of-interest label region boundary; the number of match points after removing match points whose match lines cross; or the number of match points after removing both match points within a preset distance of the point-of-interest label region boundary and match points whose match lines cross. The number of match points is preferably the last of these. Match points at the edge of the point-of-interest label region are of little significance and can therefore be removed. Match-line crossing means that the lines connecting corresponding match points in the two images intersect; match lines are generally parallel, so if the match line of a certain match point crosses the match lines of other match points, that match point can be removed.
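The border and crossing-line filters and the final decision might be sketched as below; the border margin, the minimum match count, and the crossing test (approximated here as an inconsistent vertical ordering of the point pairs, since consistent match lines drawn between two side-by-side images are roughly parallel) are illustrative assumptions rather than parameters fixed by this disclosure.
```python
def decide_image_match(matches, kps_target, kps_compare,
                       region_target, region_compare,
                       border=10, min_count=20):
    """Filter match points near the label-region border and crossing match
    lines, then judge the image match from the remaining count."""
    def near_border(pt, region):
        x, y, w, h = region                       # label region as (x, y, w, h)
        return (pt[0] - x < border or x + w - pt[0] < border or
                pt[1] - y < border or y + h - pt[1] < border)

    pairs = []
    for m in matches:
        pa = kps_target[m.queryIdx].pt
        pb = kps_compare[m.trainIdx].pt
        if not (near_border(pa, region_target) or near_border(pb, region_compare)):
            pairs.append((pa, pb))

    kept = []
    for i, (pa1, pb1) in enumerate(pairs):
        # A pair is dropped if its match line crosses another pair's line,
        # i.e. the vertical ordering flips between the two images.
        crosses = any((pa1[1] - pa2[1]) * (pb1[1] - pb2[1]) < 0
                      for j, (pa2, pb2) in enumerate(pairs) if j != i)
        if not crosses:
            kept.append((pa1, pb1))

    return len(kept) >= min_count, kept
```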
When the target image and a comparison image match successfully, the images are judged to be repeated, and the repeated images are deleted so that only one image is retained. Specifically, when deciding which image to delete and which to retain, only the image with the largest number of features may be retained, or alternatively the image with the latest shooting time may be retained. More features mean that the image is clearer and the electronic map produced from it is more precise, so the image with the most features is preferably retained.
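The retention rule could be sketched as follows, assuming each repeated image is described by a record with the hypothetical keys "path", "feature_count" and "shot_time"; neither the record layout nor the key names come from this disclosure.
```python
def select_image_to_keep(repeated_images, prefer_features=True):
    """Among repeated images, keep the one with the most features (preferred,
    as a proxy for sharpness) or, alternatively, the newest shot."""
    key = ((lambda r: r["feature_count"]) if prefer_features
           else (lambda r: r["shot_time"]))
    return max(repeated_images, key=key)
```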
The high-precision image matching method provided by this embodiment of the present invention is suitable for the production of electronic maps, especially high-precision electronic maps. Through optimized algorithm design, this embodiment improves image matching precision, so that redundant images can be removed and the production efficiency of maps, especially high-precision electronic maps, is improved.
Embodiment two
An embodiment of the present invention provides a high-precision image matching system, as shown in Fig. 2, comprising:
A point-of-interest label region determination unit 201, configured to determine the point-of-interest label regions in the target image and the comparison image. A point of interest (POI) is a term in geographic information systems and refers to any geographic object that can be abstracted as a point, especially geographic entities closely related to people's daily lives, such as schools, banks, restaurants, gas stations, hospitals, and supermarkets. A point-of-interest label region is a region of the image that carries point-of-interest information; if the point of interest is a restaurant, the point-of-interest label region is the region of the image containing the restaurant's name. Specifically: first, the MSER algorithm (Maximally Stable Extremal Regions) is used to coarsely locate the point-of-interest label regions in the image, and then HOG features (Histogram of Oriented Gradients) and an SVM (Support Vector Machine) are used to precisely locate the detection target.
A feature extraction unit 202, configured to perform feature extraction on the point-of-interest label regions determined by the point-of-interest label region determination unit 201. Specifically, the Root-SIFT algorithm (Root Scale-Invariant Feature Transform) can be used for feature extraction. This comprises: first, constructing a scale space for the images to be compared, i.e. convolving the images with difference-of-Gaussian kernels at different scales; second, comparing each sample point with all points around it to find the extreme points of the difference-of-Gaussian space, and removing pixels whose local curvature in the Gaussian convolution space is highly asymmetric by fitting a three-dimensional quadratic function; third, using the gradient direction distribution of the pixels in the neighbourhood of each keypoint to assign an orientation parameter to the keypoint, which gives the operator rotational invariance, and computing the 128-dimensional direction-parameter vector that constitutes the keypoint's feature descriptor. The descriptor is normalized and the element-wise square root is taken to obtain the final descriptive feature.
A feature matching unit 203, configured to match the features extracted by the feature extraction unit 202 to obtain match points. The descriptors of the scale images of the feature-extracted images are matched against each other; when the distance between two descriptors is less than a threshold, the pair is judged to be a successful match and a match point is obtained.
An image matching unit 204, configured to perform image matching according to the distribution positions of the match points and/or the number of match points. This comprises: judging whether the images match according to the distribution positions of the match points and/or the number of match points; if the spatial distributions of the match points are consistent and the number of match points reaches a preset value, the images are confirmed as matching successfully. The number of match points is one of the following: the number of all match points; the number of match points after removing match points within a preset distance of the point-of-interest label region boundary; the number of match points after removing match points whose match lines cross; or the number of match points after removing both match points within a preset distance of the point-of-interest label region boundary and match points whose match lines cross. The number of match points is preferably the last of these. Match points at the edge of the point-of-interest label region are of little significance and can therefore be removed. Match-line crossing means that the lines connecting corresponding match points in the two images intersect; match lines are generally parallel, so if the match line of a certain match point crosses the match lines of other match points, that match point can be removed.
When the target image and a comparison image match successfully, the images are judged to be repeated, and the repeated images are deleted so that only one image is retained. Specifically, when deciding which image to delete and which to retain, only the image with the largest number of features may be retained, or alternatively the image with the latest shooting time may be retained. More features mean that the image is clearer and the electronic map produced from it is more precise, so the image with the most features is preferably retained.
The high-precision image matching system provided by this embodiment of the present invention is suitable for the production of electronic maps, especially high-precision electronic maps. Through optimized algorithm design, this embodiment improves image matching precision, so that redundant images can be removed and the production efficiency of maps, especially high-precision electronic maps, is improved.
The above are only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that can readily occur to a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A high-precision image matching method, characterized by comprising:
determining the point-of-interest label regions in a target image and a comparison image;
performing feature extraction on the point-of-interest label regions;
matching the extracted features to obtain match points;
performing image matching according to the distribution positions of the match points and/or the number of match points.
2. The method according to claim 1, wherein
the determining of the point-of-interest label regions in the target image and the comparison image comprises:
coarsely locating the point-of-interest label regions in the image using the MSER algorithm, and then precisely locating them using the HOG algorithm and the SVM algorithm.
3. The method according to claim 1, wherein
the performing of feature extraction on the point-of-interest label regions comprises: performing feature extraction on the point-of-interest label regions using the Root-SIFT algorithm.
4. The method according to claim 1, wherein
the matching of the extracted features to obtain match points comprises: matching the descriptors of the scale images of the feature-extracted images against each other, judging a pair to be a successful match when the distance between the descriptors is less than a threshold, and obtaining the match points.
5. The method according to claim 1, wherein
the performing of image matching according to the distribution positions of the match points and/or the number of match points comprises:
judging whether the images match according to the distribution positions of the match points and/or the number of match points, and, if the spatial distributions of the match points are consistent and the number of match points reaches a preset value, judging that the images match successfully.
6. A high-precision image matching system, characterized by comprising:
a point-of-interest label region determination unit, configured to determine the point-of-interest label regions in a target image and a comparison image;
a feature extraction unit, configured to perform feature extraction on the point-of-interest label regions;
a feature matching unit, configured to match the extracted features to obtain match points;
an image matching unit, configured to perform image matching according to the distribution positions of the match points and/or the number of match points.
7. The system according to claim 6, wherein
the determining of the point-of-interest label regions in the target image and the comparison image comprises:
coarsely locating the point-of-interest label regions in the image using the MSER algorithm, and then precisely locating them using the HOG algorithm and the SVM algorithm.
8. The system according to claim 6, wherein
the performing of feature extraction on the point-of-interest label regions comprises: performing feature extraction on the point-of-interest label regions using the Root-SIFT algorithm.
9. The system according to claim 6, wherein
the matching of the extracted features to obtain match points comprises: matching the descriptors of the scale images of the feature-extracted images against each other, judging a pair to be a successful match when the distance between the descriptors is less than a threshold, and obtaining the match points.
10. The system according to claim 6, wherein
the performing of image matching according to the distribution positions of the match points and/or the number of match points comprises:
judging whether the images match according to the distribution positions of the match points and/or the number of match points, and, if the spatial distributions of the match points are consistent and the number of match points reaches a preset value, judging that the images match successfully.
CN201710584412.8A 2017-07-18 2017-07-18 High-precision image matching method and system Pending CN109271995A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710584412.8A CN109271995A (en) 2017-07-18 2017-07-18 High-precision image matching method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710584412.8A CN109271995A (en) 2017-07-18 2017-07-18 High-precision image matching method and system

Publications (1)

Publication Number Publication Date
CN109271995A (en) 2019-01-25

Family

ID=65152424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710584412.8A Pending CN109271995A (en) 2017-07-18 2017-07-18 High-precision image matching method and system

Country Status (1)

Country Link
CN (1) CN109271995A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101275854A (en) * 2007-03-26 2008-10-01 日电(中国)有限公司 Method and equipment for updating map data
CN103218427A (en) * 2013-04-08 2013-07-24 北京大学 Local descriptor extracting method, image searching method and image matching method
CN105069144A (en) * 2015-08-20 2015-11-18 华南理工大学 Similar image search method
CN106203342A (en) * 2016-07-01 2016-12-07 广东技术师范学院 Target identification method based on multi-angle local feature coupling
CN106529591A (en) * 2016-11-07 2017-03-22 湖南源信光电科技有限公司 Improved MSER image matching algorithm
CN106846608A (en) * 2017-01-25 2017-06-13 杭州视氪科技有限公司 A kind of visually impaired people's paper money recognition glasses based on RGB D cameras

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309741A (en) * 2023-05-22 2023-06-23 中南大学 TVDS image registration method, segmentation method, device and medium
CN116309741B (en) * 2023-05-22 2023-08-11 中南大学 TVDS image registration method, segmentation method, device and medium

Similar Documents

Publication Publication Date Title
CN108763287B (en) Construction method of large-scale passable regional driving map and unmanned application method thereof
CN109753885B (en) Target detection method and device and pedestrian detection method and system
CN111462275A (en) Map production method and device based on laser point cloud
Karantzalos et al. Large-scale building reconstruction through information fusion and 3-d priors
CN105701798A (en) Point cloud extraction method and device for columnar object
CN102804231A (en) Piecewise planar reconstruction of three-dimensional scenes
CN111435421B (en) Traffic-target-oriented vehicle re-identification method and device
CN105953773B (en) Ramp slope angle acquisition methods and device
CN108225334A (en) A kind of localization method and device based on three-dimensional live-action data
Xu et al. A new clustering-based framework to the stem estimation and growth fitting of street trees from mobile laser scanning data
Yadav et al. Identification of trees and their trunks from mobile laser scanning data of roadway scenes
CN113255578B (en) Traffic identification recognition method and device, electronic equipment and storage medium
CN110689573A (en) Edge model-based augmented reality label-free tracking registration method and device
CN109146773B (en) Method and device for mapping river channel map to Web map
CN110798805A (en) Data processing method and device based on GPS track and storage medium
Liu et al. Deep-learning and depth-map based approach for detection and 3-D localization of small traffic signs
CN107563366A (en) A kind of localization method and device, electronic equipment
CN113963259A (en) Street view ground object multi-dimensional extraction method and system based on point cloud data
Liu et al. Image-translation-based road marking extraction from mobile laser point clouds
CN115375857A (en) Three-dimensional scene reconstruction method, device, equipment and storage medium
CN103744903B (en) A kind of scene image search method based on sketch
CN111611900A (en) Target point cloud identification method and device, electronic equipment and storage medium
CN111383286A (en) Positioning method, positioning device, electronic equipment and readable storage medium
CN114758086A (en) Method and device for constructing urban road information model
Wu et al. A stepwise minimum spanning tree matching method for registering vehicle-borne and backpack LiDAR point clouds

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190125)