CN111222514A - Local map optimization method based on visual positioning - Google Patents

Local map optimization method based on visual positioning

Info

Publication number
CN111222514A
Authority
CN
China
Prior art keywords
current layer
image
points
characteristic points
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911422992.6A
Other languages
Chinese (zh)
Other versions
CN111222514B (en)
Inventor
吕明
白阳
薛文通
李杰
马盼
刘芳
赵蓓
王莎
王玲玲
赵彦彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xingsi Semiconductor Co ltd
Original Assignee
Xi'an Aerospace Huaxun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Aerospace Huaxun Technology Co ltd filed Critical Xi'an Aerospace Huaxun Technology Co ltd
Priority to CN201911422992.6A priority Critical patent/CN111222514B/en
Publication of CN111222514A publication Critical patent/CN111222514A/en
Application granted granted Critical
Publication of CN111222514B publication Critical patent/CN111222514B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

According to the local map optimization method based on visual positioning provided by the embodiment of the invention, parameters can be adaptively adjusted according to the characteristics of an image, so that feature points of higher quality are extracted, and the feature points can be distributed uniformly over the image area, which facilitates the subsequent matching process. Meanwhile, to address the failures of the random sample consensus (RANSAC) method caused by an excessive number of sample points when traditional mismatch screening is applied directly, a first screening based on Jaccard similarity is performed; after the number of samples has been reduced, the RANSAC algorithm is applied, which improves the matching accuracy. After matching succeeds, the image with updated features replaces the original map image, thereby achieving local map optimization and guaranteeing map accuracy.

Description

Local map optimization method based on visual positioning
Technical Field
The invention relates to the technical field of image processing, in particular to a local map optimization method based on visual positioning.
Background
At present, computer vision is developing continuously and is widely applied in many fields, such as industry, medicine, transportation and the military. Feature extraction and matching is a major research hotspot in the field of computer vision; in the field of visual navigation in particular, high-precision image feature extraction and matching is the key determinant of the performance of the visual navigation front end.
In recent years, the ORB algorithm has been widely applied in various studies with good performance. However, on one hand, the quantity and quality of the extracted key points depend on the setting of the FAST threshold, and a fixed FAST threshold can hardly satisfy the quantity and quality of feature points required by different images; on the other hand, the distribution of the feature points strongly influences subsequent matching, so the feature points should be distributed as uniformly as possible. Meanwhile, traditional feature point matching methods still suffer from substantial mismatching.
Disclosure of Invention
In order to solve the problem that conventional feature point matching methods still suffer from substantial mismatching, the embodiment of the invention provides a local map optimization method based on visual positioning. The specific technical scheme is as follows:
the embodiment of the invention provides a local map optimization method based on visual positioning, which comprises the following steps:
acquiring a first picture and a second picture of a local map, and constructing a first image pyramid of the first picture; constructing a second image pyramid of the second picture;
taking a first layer in the first image pyramid as a first current layer; taking the first layer of the second image pyramid as a second current layer;
Respectively extracting features of the first current layer and the first current layer by adopting an ORB technology and performing descriptor calculation on the features so as to determine the direction of a feature point;
taking each layer in the first image pyramid as a first current layer, taking each layer in the second image pyramid as a second current layer, extracting the features of each layer in the first image pyramid and the second image pyramid, and performing descriptor calculation on the features;
performing feature matching on the first picture and the second picture;
performing motion estimation on the matched first picture and the matched second picture to obtain a motion estimation result;
and optimizing the local map according to the motion estimation result.
Optionally, the performing feature extraction and descriptor calculation on the first current layer and the second current layer by using the ORB technique includes:
respectively carrying out rough extraction on the first current layer and the second current layer to obtain a plurality of feature points of the first current layer and a plurality of feature points of the second current layer;
creating a decision tree model by using the ID3 algorithm, and placing the plurality of feature points of the first current layer in the decision tree model to obtain a plurality of FAST feature points of the first current layer; placing the plurality of feature points of the second current layer in the decision tree model to obtain a plurality of FAST feature points of the second current layer;
removing locally dense feature points from the plurality of FAST feature points of the first current layer by using non-maximum suppression; removing locally dense feature points from the plurality of FAST feature points of the second current layer by using non-maximum suppression;
scaling the feature points in the first image pyramid after the locally dense feature points are removed, forming the first nlevels images; scaling the feature points in the second image pyramid after the locally dense feature points are removed, forming the second nlevels images; taking all feature points in the first nlevels images as the oFAST feature points of the first picture, and all feature points in the second nlevels images as the oFAST feature points of the second picture;
and determining the directions of the oFAST feature points by using a moment method.
Optionally, determining the directions of the oFAST feature points by using the moment method includes:
taking the feature point as the center, the grayscale centroid of the image block within a radius r is calculated, and the vector from the coordinates of the feature point to the centroid is taken as the direction of the feature point. The moments of the image block B are defined as follows:
m_pq = Σ_{(x,y)∈B} x^p y^q I(x,y),  p, q ∈ {0, 1}
where (x, y) is a selected pixel point and I(x, y) is its gray value. The centroid of the image block can be found from these moments; the centroid is:
C = (m10/m00, m01/m00)
Assuming the geometric center of the image block is O, connecting O with the centroid C gives a direction vector, and this vector is the direction of the feature point. The calculation formula is:
θ = atan(m01/m10)
A quadtree algorithm is applied to distribute the feature points uniformly: a quadtree is constructed over the image from which the feature points have been extracted. When the number of feature points in a node equals 1, the node is not divided further; if a node contains more than one feature point, division continues downwards until every node contains only one feature point or the number of divided nodes meets the required number of feature points. In the latter case, the feature point with the largest response value in each node is kept and the redundant feature points are discarded, so that the feature points become uniformly distributed.
The local map optimization method based on visual positioning provided by the embodiment of the invention can adaptively adjust parameters according to the characteristics of an image and extract feature points of higher quality; the feature points can be distributed uniformly over the image area, which facilitates the subsequent matching process. Meanwhile, to address the failures of the random sample consensus (RANSAC) method caused by an excessive number of sample points when traditional mismatch screening is applied directly, a first screening based on Jaccard similarity is performed; after the number of samples has been reduced, the RANSAC algorithm is applied, which improves the matching accuracy. After matching succeeds, the image with updated features replaces the original map image, thereby achieving local map optimization and guaranteeing map accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a flowchart of a local map optimization method based on visual positioning according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of FAST feature points according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the ORB visual odometry module adopted in an embodiment of the present invention;
FIG. 4 is a schematic view of the visual odometry between two adjacent frames according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
In order to solve the problem that conventional feature point matching methods still suffer from substantial mismatching, a local map optimization method based on visual positioning is provided.
It should be noted that visual odometry and its implementation are a key and difficult part of the local map optimization method based on visual positioning; the ORB visual odometry module adopted here operates according to the flow shown in fig. 3.
Visual odometry uses two adjacent frames of images to calculate the motion of the camera: the earlier frame is defined as the reference frame and the later frame as the current frame. The reference frame always serves as the coordinate system, and the current frame is matched against it to estimate the motion relation between the two. The visual odometry between two adjacent frames is illustrated in fig. 4.
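As an illustration of this two-frame step, the relative pose can be recovered with OpenCV's epipolar-geometry routines. This is a minimal sketch, not the patented pipeline: it assumes the camera intrinsic matrix K and two arrays of already-matched pixel coordinates as inputs, and the translation is recovered only up to scale.

```python
import cv2
import numpy as np

def estimate_motion(pts_ref, pts_cur, K):
    """Estimate camera motion from the reference frame to the current frame.

    pts_ref, pts_cur: Nx2 float32 arrays of matched pixel coordinates.
    K: 3x3 camera intrinsic matrix.
    Returns (R, t): rotation and unit-scale translation of the current frame
    expressed in the reference-frame coordinate system.
    """
    # RANSAC-robust essential matrix from the matched points
    E, inliers = cv2.findEssentialMat(pts_ref, pts_cur, K,
                                      method=cv2.RANSAC, prob=0.999,
                                      threshold=1.0)
    # decompose E and pick the physically valid (R, t) by cheirality check
    _, R, t, _ = cv2.recoverPose(E, pts_ref, pts_cur, K, mask=inliers)
    return R, t
```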
Referring to fig. 1, an embodiment of the present invention provides a local map optimization method based on visual positioning, including:
s110, acquiring a first picture and a second picture of a local map, and constructing a first image pyramid of the first picture; constructing a second image pyramid of the second picture;
s120, taking a first layer in the first image pyramid as a first current layer; taking the first layer of the second image pyramid as a second current layer;
s130, respectively extracting features of the first current layer and the first current layer by adopting an ORB technology and performing descriptor calculation on the features to determine the direction of feature points;
Specifically, ORB feature extraction extracts ORB feature points from an input picture for subsequent feature matching. It can be summarized as two processes: oFAST key point extraction and rBRIEF descriptor computation.
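For orientation, these two stages map directly onto OpenCV's stock ORB implementation. A minimal sketch follows; the image file names are placeholders, and the stock detector does not include the adaptive thresholding and quadtree homogenization refinements described in this document:

```python
import cv2

# oFAST keypoints + rBRIEF descriptors via OpenCV's stock ORB; the
# parameter values are the OpenCV defaults quoted later in the text.
orb = cv2.ORB_create(nfeatures=1000, scaleFactor=1.2, nlevels=8)

img1 = cv2.imread("first_picture.png", cv2.IMREAD_GRAYSCALE)   # placeholder names
img2 = cv2.imread("second_picture.png", cv2.IMREAD_GRAYSCALE)

# detect = oFAST key point extraction (with orientation),
# compute = rBRIEF descriptor computation
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
```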
S140, taking each layer in the first image pyramid as a first current layer, taking each layer in the second image pyramid as a second current layer, extracting features of each layer in the first image pyramid and the second image pyramid, and performing descriptor calculation on the features;
s150, performing feature matching on the first picture and the second picture;
s160, carrying out motion estimation on the matched first image and the matched second image to obtain a motion estimation result;
and S170, optimizing the local map according to the motion estimation result.
Further, the step of respectively performing feature extraction on the first current layer and the second current layer and performing descriptor computation on the features by using the ORB technique includes:
respectively carrying out rough extraction on the first current layer and the second current layer to obtain a plurality of feature points of the first current layer and a plurality of feature points of the second current layer;
Specifically, this step can extract a large number of feature points, but a large proportion of them are of low quality. To judge whether a point is a FAST feature point, a point P is selected from the image and a circle with a radius of 3 pixels is drawn with P as the center; if the gray values of n consecutive pixels on the circle are all larger or all smaller than the gray value of P, P is regarded as a feature point. Typically n is set to 12. To accelerate the extraction and quickly reject non-feature points, the gray values at positions 1, 5, 9 and 13 are detected first; if P is a feature point, at least 3 of the pixel values at these four positions must all be larger or all be smaller than the gray value of P. If this condition does not hold, the point is discarded directly.
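The segment test just described can be sketched in a few lines of Python. The fixed gray threshold t below is purely illustrative; it is exactly the parameter that a fixed-threshold FAST struggles with and that this method adjusts adaptively per image.

```python
import numpy as np

# Offsets of the 16 pixels on the radius-3 Bresenham circle, indexed
# 1..16 clockwise starting directly above P (position 1), as in fig. 2.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=12):
    """Segment test for candidate pixel P = (x, y) on a grayscale image.

    t is the gray-difference threshold (fixed here only for illustration).
    """
    p = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    # Quick rejection using positions 1, 5, 9, 13 (indices 0, 4, 8, 12):
    # a true corner must have at least 3 of them all brighter or all darker.
    quick = [ring[i] for i in (0, 4, 8, 12)]
    if sum(v > p + t for v in quick) < 3 and sum(v < p - t for v in quick) < 3:
        return False
    # Full test: n contiguous circle pixels all brighter or all darker than P.
    for sign in (1, -1):
        flags = [(v - p) * sign > t for v in ring + ring]  # doubled for wrap-around
        run = 0
        for f in flags:
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```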
Creating a decision tree model by using the ID3 algorithm, and placing the plurality of feature points of the first current layer in the decision tree model to obtain a plurality of FAST feature points of the first current layer; placing the plurality of feature points of the second current layer in the decision tree model to obtain a plurality of FAST feature points of the second current layer. Please refer to fig. 2, which is a schematic diagram of FAST feature points according to an embodiment of the present invention.
Specifically, a machine learning method screens for the optimal feature points. Briefly, a decision tree is trained with the ID3 algorithm, and the 16 pixels on the circle around each candidate are fed into the decision tree to screen out the optimal FAST feature points.
Removing locally dense feature points from the plurality of FAST feature points of the first current layer by using non-maximum suppression; removing locally dense feature points from the plurality of FAST feature points of the second current layer by using non-maximum suppression;
Specifically, non-maximum suppression removes locally dense feature points, eliminating the problem of multiple feature points at adjacent positions. A response value is calculated for each feature point as the sum of the absolute deviations between the feature point P and the 16 pixels on its surrounding circle. Among adjacent feature points, the one with the larger response value is kept and the rest are deleted.
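A simple greedy form of this suppression is sketched below, reusing the CIRCLE offsets from the FAST sketch above; the suppression radius is an illustrative choice, not a value fixed by the text.

```python
def fast_response(img, x, y):
    """FAST response: sum of absolute gray deviations between P and the
    16 circle pixels (CIRCLE as defined in the FAST sketch above)."""
    p = int(img[y, x])
    return sum(abs(int(img[y + dy, x + dx]) - p) for dx, dy in CIRCLE)

def non_max_suppress(img, corners, radius=3):
    """Greedy suppression: keep a corner only if no stronger corner has
    already been kept within `radius` pixels of it."""
    scored = sorted(((fast_response(img, x, y), x, y) for x, y in corners),
                    reverse=True)                       # strongest first
    kept = []
    for _, x, y in scored:
        if all(abs(x - kx) > radius or abs(y - ky) > radius for kx, ky in kept):
            kept.append((x, y))
    return kept
```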
Scaling the feature points in the first image pyramid after the locally dense feature points are removed, forming the first nlevels images; scaling the feature points in the second image pyramid after the locally dense feature points are removed, forming the second nlevels images; taking all feature points in the first nlevels images as the oFAST feature points of the first picture, and all feature points in the second nlevels images as the oFAST feature points of the second picture;
Specifically, scale invariance of the feature points is achieved by building a pyramid. A scale factor scaleFactor (OpenCV default: 1.2) and a number of pyramid levels nlevels (OpenCV default: 8) are set, and the original image is downscaled into nlevels images according to the scale factor. The scaled images are: I_k = I / scaleFactor^k (k = 1, 2, …, nlevels). The union of the feature points extracted from the nlevels images of different scales is taken as the oFAST feature points of this image.
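The pyramid construction amounts to repeatedly resizing the image by 1/scaleFactor. A minimal sketch with the quoted OpenCV defaults follows; keypoints found at level k would be mapped back to the original image by multiplying their coordinates by scaleFactor^k.

```python
import cv2

def build_pyramid(img, scale_factor=1.2, nlevels=8):
    """Scaled copies I_k = I / scaleFactor^k for k = 0 .. nlevels-1
    (level 0 is the original image)."""
    levels = []
    for k in range(nlevels):
        s = 1.0 / (scale_factor ** k)
        levels.append(cv2.resize(img, None, fx=s, fy=s,
                                 interpolation=cv2.INTER_LINEAR))
    return levels
```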
The directions of the oFAST feature points are then determined by using a moment method.
Further, determining the directions of the oFAST feature points by using the moment method includes:
taking the feature point as the center, the grayscale centroid of the image block within a radius r is calculated, and the vector from the coordinates of the feature point to the centroid is taken as the direction of the feature point. The moments of the image block B are defined as follows:
m_pq = Σ_{(x,y)∈B} x^p y^q I(x,y),  p, q ∈ {0, 1}
where (x, y) is a selected pixel point and I(x, y) is its gray value. The centroid of the image block can be found from these moments; the centroid is:
C = (m10/m00, m01/m00)
Assuming the geometric center of the image block is O, connecting O with the centroid C gives a direction vector, and this vector is the direction of the feature point. The calculation formula is:
θ = atan(m01/m10)
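The intensity-centroid direction can be computed directly from these formulas. A sketch follows; the block radius r is a free choice, and atan2 is used so the quadrant of the direction vector is preserved (plain atan(m01/m10) loses the sign information):

```python
import numpy as np

def orientation(img, x, y, r=3):
    """Intensity-centroid direction of the image block around keypoint (x, y).

    Implements the moment formulas above; (x, y) must lie at least r pixels
    from the image border.
    """
    m01 = m10 = 0.0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dx * dx + dy * dy <= r * r:   # circular block B around the keypoint
                v = float(img[y + dy, x + dx])
                m10 += dx * v                # x-moment in block coordinates
                m01 += dy * v                # y-moment in block coordinates
    # atan2 keeps the quadrant that plain atan(m01/m10) would lose
    return np.arctan2(m01, m10)
```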
A quadtree algorithm is applied to distribute the feature points uniformly: a quadtree is constructed over the image from which the feature points have been extracted. When the number of feature points in a node equals 1, the node is not divided further; if a node contains more than one feature point, division continues downwards until every node contains only one feature point or the number of divided nodes meets the required number of feature points. In the latter case, the feature point with the largest response value in each node is kept and the redundant feature points are discarded, so that the feature points become uniformly distributed.
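A self-contained sketch of this splitting rule follows, under the assumption that each keypoint is an (x, y, response) tuple; the split-densest-first order is an implementation choice not fixed by the text.

```python
def quadtree_distribute(points, bounds, max_points):
    """Homogenize keypoints by quadtree splitting as described above.

    points: list of (x, y, response) tuples; bounds: (x0, y0, x1, y1);
    max_points: the required number of feature points.
    """
    nodes, leaves = [(bounds, points)], []
    while nodes and len(nodes) + len(leaves) < max_points:
        nodes.sort(key=lambda n: len(n[1]), reverse=True)  # split densest first
        (x0, y0, x1, y1), pts = nodes.pop(0)
        if len(pts) == 1 or (x1 - x0) < 2:   # single point or tiny cell: leaf
            leaves.append(((x0, y0, x1, y1), pts))
            continue
        mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        buckets = {}
        for p in pts:                        # assign each point to a child cell
            buckets.setdefault((p[0] >= mx, p[1] >= my), []).append(p)
        for (right, below), cpts in buckets.items():
            child = (mx if right else x0, my if below else y0,
                     x1 if right else mx, y1 if below else my)
            nodes.append((child, cpts))
    # keep the strongest point in every node, discard the redundant ones
    return [max(pts, key=lambda p: p[2]) for _, pts in nodes + leaves]
```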
The local map optimization method based on visual positioning provided by the embodiment of the invention can adaptively adjust parameters according to the characteristics of an image and extract feature points of higher quality; the feature points can be distributed uniformly over the image area, which facilitates the subsequent matching process. Meanwhile, to address the failures of the random sample consensus (RANSAC) method caused by an excessive number of sample points when traditional mismatch screening is applied directly, a first screening based on Jaccard similarity is performed; after the number of samples has been reduced, the RANSAC algorithm is applied, which improves the matching accuracy. After matching succeeds, the image with updated features replaces the original map image, thereby achieving local map optimization and guaranteeing map accuracy.
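The two-stage screening described here (Jaccard similarity first, then random sample consensus on the reduced sample set) can be sketched as follows for binary ORB descriptors. The Jaccard threshold of 0.6 and the homography model used for RANSAC are illustrative assumptions; the text fixes neither.

```python
import cv2
import numpy as np

def jaccard(d1, d2):
    """Jaccard similarity of two binary descriptors (uint8 bit-strings):
    shared set bits divided by set bits in either descriptor."""
    inter = np.unpackbits(d1 & d2).sum()
    union = np.unpackbits(d1 | d2).sum()
    return inter / union if union else 0.0

def match_two_stage(kp1, des1, kp2, des2, jac_thresh=0.6):
    # Hamming-distance brute-force matching for binary ORB descriptors
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = bf.match(des1, des2)
    # First screening: drop pairs with low Jaccard similarity, shrinking
    # the sample set before RANSAC is run.
    matches = [m for m in matches
               if jaccard(des1[m.queryIdx], des2[m.trainIdx]) >= jac_thresh]
    if len(matches) < 4:
        return []
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Second screening: RANSAC on the reduced sample set (homography model)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        return []
    return [m for m, ok in zip(matches, mask.ravel()) if ok]
```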
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (3)

1. A local map optimization method based on visual positioning is characterized by comprising the following steps:
acquiring a first picture and a second picture of a local map, and constructing a first image pyramid of the first picture; constructing a second image pyramid of the second picture;
taking a first layer in the first image pyramid as a first current layer; taking the first layer of the second image pyramid as a second current layer;
Respectively extracting features of the first current layer and the first current layer by adopting an ORB technology and performing descriptor calculation on the features so as to determine the direction of a feature point;
taking each layer in the first image pyramid as a first current layer, taking each layer in the second image pyramid as a second current layer, extracting the features of each layer in the first image pyramid and the second image pyramid, and performing descriptor calculation on the features;
performing feature matching on the first picture and the second picture;
performing motion estimation on the matched first picture and the matched second picture to obtain a motion estimation result;
and optimizing the local map according to the motion estimation result.
2. The local map optimization method based on visual positioning as claimed in claim 1, wherein performing feature extraction and descriptor calculation on the first current layer and the second current layer respectively by using the ORB technique comprises:
respectively carrying out rough extraction on the first current layer and the second current layer to obtain a plurality of feature points of the first current layer and a plurality of feature points of the second current layer;
creating a decision tree model by using the ID3 algorithm, and placing the plurality of feature points of the first current layer in the decision tree model to obtain a plurality of FAST feature points of the first current layer; placing the plurality of feature points of the second current layer in the decision tree model to obtain a plurality of FAST feature points of the second current layer;
removing locally dense feature points from the plurality of FAST feature points of the first current layer by using non-maximum suppression; removing locally dense feature points from the plurality of FAST feature points of the second current layer by using non-maximum suppression;
scaling the feature points in the first image pyramid after the locally dense feature points are removed, forming the first nlevels images; scaling the feature points in the second image pyramid after the locally dense feature points are removed, forming the second nlevels images; taking all feature points in the first nlevels images as the oFAST feature points of the first picture, and all feature points in the second nlevels images as the oFAST feature points of the second picture;
and determining the directions of the oFAST feature points by using a moment method.
3. The local map optimization method based on visual positioning as claimed in claim 2, wherein determining the directions of the oFAST feature points by using the moment method comprises:
taking the feature point as the center, calculating the grayscale centroid of the image block within a radius r, and taking the vector from the coordinates of the feature point to the centroid as the direction of the feature point, wherein the moments of the image block B are defined as follows:
m_pq = Σ_{(x,y)∈B} x^p y^q I(x,y),  p, q ∈ {0, 1}
where (x, y) is a selected pixel point and I(x, y) is its gray value; the centroid of the image block can be found from these moments; the centroid is:
C = (m10/m00, m01/m00)
assuming the geometric center of the image block is O, connecting O with the centroid C gives a direction vector, and this vector is the direction of the feature point; the calculation formula is:
θ = atan(m01/m10)
A quadtree algorithm is applied to distribute the feature points uniformly: a quadtree is constructed over the image from which the feature points have been extracted. When the number of feature points in a node equals 1, the node is not divided further; if a node contains more than one feature point, division continues downwards until every node contains only one feature point or the number of divided nodes meets the required number of feature points. In the latter case, the feature point with the largest response value in each node is kept and the redundant feature points are discarded, so that the feature points become uniformly distributed.
CN201911422992.6A 2019-12-31 2019-12-31 Local map optimization method based on visual positioning Active CN111222514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911422992.6A CN111222514B (en) 2019-12-31 2019-12-31 Local map optimization method based on visual positioning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911422992.6A CN111222514B (en) 2019-12-31 2019-12-31 Local map optimization method based on visual positioning

Publications (2)

Publication Number Publication Date
CN111222514A true CN111222514A (en) 2020-06-02
CN111222514B CN111222514B (en) 2023-06-27

Family

ID=70829310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911422992.6A Active CN111222514B (en) 2019-12-31 2019-12-31 Local map optimization method based on visual positioning

Country Status (1)

Country Link
CN (1) CN111222514B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381841A (en) * 2020-11-27 2021-02-19 广东电网有限责任公司肇庆供电局 Semantic SLAM method based on GMS feature matching in dynamic scene
CN112435262A (en) * 2020-11-27 2021-03-02 广东电网有限责任公司肇庆供电局 Dynamic environment information detection method based on semantic segmentation network and multi-view geometry
CN113012196A (en) * 2021-03-05 2021-06-22 华南理工大学 Positioning method based on information fusion of binocular camera and inertial navigation sensor

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018176929A1 (en) * 2017-03-27 2018-10-04 华为技术有限公司 Image background blurring method and apparatus
WO2019140745A1 (en) * 2018-01-16 2019-07-25 广东省智能制造研究所 Robot positioning method and device
CN110414533A (en) * 2019-06-24 2019-11-05 东南大学 A kind of feature extracting and matching method for improving ORB

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018176929A1 (en) * 2017-03-27 2018-10-04 华为技术有限公司 Image background blurring method and apparatus
WO2019140745A1 (en) * 2018-01-16 2019-07-25 广东省智能制造研究所 Robot positioning method and device
CN110414533A (en) * 2019-06-24 2019-11-05 东南大学 A kind of feature extracting and matching method for improving ORB

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘祥: "RGB-D visual odometry based on bundle adjustment" (in Chinese) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381841A (en) * 2020-11-27 2021-02-19 广东电网有限责任公司肇庆供电局 Semantic SLAM method based on GMS feature matching in dynamic scene
CN112435262A (en) * 2020-11-27 2021-03-02 广东电网有限责任公司肇庆供电局 Dynamic environment information detection method based on semantic segmentation network and multi-view geometry
CN113012196A (en) * 2021-03-05 2021-06-22 华南理工大学 Positioning method based on information fusion of binocular camera and inertial navigation sensor

Also Published As

Publication number Publication date
CN111222514B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
CN108898610B (en) Object contour extraction method based on mask-RCNN
CN107563446B (en) Target detection method for micro-operation system
CN109376681B (en) Multi-person posture estimation method and system
CN109583340B (en) Video target detection method based on deep learning
CN106934795B (en) A kind of automatic testing method and prediction technique of glue into concrete beam cracks
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
CN106599028B (en) Book content searching and matching method based on video image processing
CN111079545A (en) Three-dimensional target detection method and system based on image restoration
CN107038448A (en) Target detection model building method
CN111222514A (en) Local map optimization method based on visual positioning
CN103093198B (en) A kind of crowd density monitoring method and device
CN114693661A (en) Rapid sorting method based on deep learning
CN110310305B (en) Target tracking method and device based on BSSD detection and Kalman filtering
CN106845338B (en) Pedestrian detection method and system in video stream
CN113744315B (en) Semi-direct vision odometer based on binocular vision
CN115147488B (en) Workpiece pose estimation method and grabbing system based on dense prediction
CN114863492B (en) Method and device for repairing low-quality fingerprint image
CN111931686A (en) Video satellite target tracking method based on background knowledge enhancement
CN103268482A (en) Low-complexity gesture extracting and gesture depth acquiring method
CN111160362B (en) FAST feature homogenizing extraction and interframe feature mismatching removal method
CN117541652A (en) Dynamic SLAM method based on depth LK optical flow method and D-PROSAC sampling strategy
CN113421210B (en) Surface point Yun Chong construction method based on binocular stereoscopic vision
CN110490170B (en) Face candidate frame extraction method
CN107437257A (en) Moving object segmentation and dividing method under a kind of mobile background
CN116612385B (en) Remote sensing image multiclass information extraction method and system based on depth high-resolution relation graph convolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230313

Address after: Room 401 and Room 402, Building 5, No. 690, Bibo Road, Pilot Free Trade Zone, Pudong New Area, Shanghai 201203

Applicant after: Shanghai Xingsi Semiconductor Co.,Ltd.

Address before: 710065 10th floor, block C, Xi'an National Digital publishing base, 996 Tiangu 7th Road, Yuhua Street office, high tech Zone, Xi'an City, Shaanxi Province

Applicant before: XI'AN AEROSPACE HUAXUN TECHNOLOGY Co.,Ltd.

GR01 Patent grant