CN112902966A - Fusion positioning system and method - Google Patents

Fusion positioning system and method

Info

Publication number
CN112902966A
Authority
CN
China
Prior art keywords
point cloud
uniform
point
characteristic
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110121774.XA
Other languages
Chinese (zh)
Inventor
黄明飞
姚宏贵
燕兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Open Intelligent Machine Shanghai Co ltd
Original Assignee
Open Intelligent Machine Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Open Intelligent Machine Shanghai Co ltd
Priority to CN202110121774.XA
Publication of CN112902966A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a fusion positioning system and method, belonging to the technical field of object positioning. The system comprises a visual navigation module, an AI object recognition module and a positioning module. The visual navigation module uses VSLAM technology to obtain the feature point coordinates of a plurality of objects in the surrounding environment and forms a first feature point cloud map. The AI object recognition module uses AI object recognition technology to obtain the position coordinates and object types of a plurality of objects in the surrounding environment and forms an object recognition result. The positioning module, connected to both the visual navigation module and the AI object recognition module, fuses the first feature point cloud map and the object recognition result into a second feature point cloud map and makes a decision on the second feature point cloud map so as to obtain the coordinates of the current center point of the mobile device. The beneficial effects of this technical scheme are that it overcomes the error accumulation, and hence the inaccurate positioning results, caused by positioning objects with VSLAM alone. In addition, it avoids inaccurate object positions caused by the small number of object feature point coordinates that VSLAM alone can acquire.

Description

Fusion positioning system and method
Technical Field
The invention relates to the technical field of spatial positioning of objects by mobile devices, and in particular to a fusion positioning system and a fusion positioning method.
Background
VSLAM (Visual Simultaneous Localization and Mapping) is a technique commonly used in the prior art for positioning and navigating a mobile device. A typical implementation extracts feature points from each image frame, performs coarse feature-point matching between adjacent frames, removes unreasonable matching pairs with the RANSAC (random sample consensus) algorithm, and then computes position and attitude information. The position and attitude information obtained in this way is only a local estimate of the motion between adjacent frames, which inevitably causes accumulated drift: every estimate of the motion between two images carries some error, and as these estimates are chained across many adjacent frames the earlier errors gradually accumulate, so the trajectory drift becomes more and more severe.
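The adjacent-frame motion estimation described above can be illustrated with a short sketch. This is not the patent's implementation; it is a minimal example, assuming OpenCV is available, a calibrated camera matrix K, and two grayscale frames as inputs.

```python
# A minimal sketch of adjacent-frame VSLAM motion estimation:
# ORB features, coarse matching, RANSAC outlier removal, pose recovery.
import cv2
import numpy as np

def estimate_relative_pose(prev_gray, curr_gray, K):
    """Estimate rotation R and translation t between two adjacent frames."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    # Coarse feature matching between adjacent frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects unreasonable matching pairs while estimating the essential matrix.
    E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)
    return R, t  # a local estimate only; its errors accumulate over many frames
```

Because each call returns only the motion between two frames, chaining many such estimates is exactly what produces the accumulated drift described above.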
Therefore, VSLAM-based object positioning is currently limited by error accumulation and suffers from drift error. In addition, it is biased towards locating the position of the mobile device itself and lacks the ability to supplement that with position information for objects in the surrounding environment, which affects the accuracy of VSLAM positioning.
Disclosure of Invention
The invention aims to solve the problem of low accuracy when positioning objects with VSLAM technology alone. To this end, a fusion positioning system and a fusion positioning method are provided, which specifically include the following.
A fusion positioning system, adapted to locate the coordinates of the center point of a mobile device as the mobile device moves;
the fusion positioning system comprises:
a visual navigation module, used for processing with VSLAM technology to obtain the feature point coordinates of a plurality of objects in the surrounding environment and to form a first feature point cloud map;
an AI object recognition module, used for processing with AI object recognition technology to obtain the position coordinates and object types of a plurality of objects in the surrounding environment and to form an object recognition result;
and a positioning module, connected to the visual navigation module and the AI object recognition module respectively, used for fusing the first feature point cloud map and the object recognition result into a second feature point cloud map and for making a decision on the second feature point cloud map so as to obtain the coordinates of the current center point of the mobile device.
Preferably, in the fusion positioning system, the positioning module specifically includes:
a fusion unit, configured to fuse the first feature point cloud map and the object recognition result into the second feature point cloud map;
and a decision logic unit, connected to the fusion unit and used for performing decision recognition on the second feature point cloud map so as to determine the coordinates of the current center point of the mobile device.
Preferably, in the fusion positioning system, the decision logic unit specifically includes:
a storage component, used for pre-constructing a historical point cloud map library in which a plurality of historical point cloud maps are stored;
a matching component, connected to the storage component and used for extracting a plurality of matched historical point cloud maps from the historical point cloud map library according to the second feature point cloud map;
and a decision component, connected to the matching component and used for processing the second feature point cloud map and the extracted historical point cloud maps to obtain the coordinates of the current center point of the mobile device.
Preferably, in the fusion positioning system, the matching component matches a plurality of similar historical point cloud maps from the historical point cloud map library according to the types and numbers of objects included in the second feature point cloud map, and outputs them;
the decision component is configured to compare the extracted historical point cloud maps with the second feature point cloud map and, according to the comparison result, calculate the coordinates of the current center point of the mobile device using a camera pose estimation algorithm.
Preferably, in the fusion positioning system, the AI object recognition module specifically includes:
a feature point extraction unit, used for extracting the coordinate values of a plurality of feature points on the surfaces of objects in the surrounding environment;
a uniform point extraction unit, connected to the feature point extraction unit and used for measuring and calculating, from the plurality of feature points of an object, the coordinate values of a plurality of uniform points evenly distributed over the object surface;
and a recognition unit, connected to the feature point extraction unit and the uniform point extraction unit respectively and used for recognizing the object according to the feature points and the uniform points so as to obtain the object recognition result.
Preferably, in the fusion positioning system, the uniform point extraction unit specifically includes:
a uniform point calculation component, used for measuring and calculating, from the plurality of feature points of the object, the coordinate values of a plurality of uniform points evenly distributed over the object surface;
a depth calculation component, connected to the uniform point calculation component and used for judging, for each uniform point, whether its depth coordinate value deviates too much from the other uniform points and exceeds a preset depth threshold, and outputting a judgment result;
and a uniform point culling component, connected to the depth calculation component and used for culling the corresponding uniform point when the judgment result indicates that its depth coordinate value deviates too much from the other uniform points and exceeds the preset depth threshold;
the uniform point extraction unit outputs all the uniform points remaining after culling.
Preferably, in the fusion positioning system, the feature point extraction unit specifically includes:
a feature point extraction component, configured to extract a plurality of feature points of the object surface;
an object type judgment component, connected to the feature point extraction component and used for judging the object type of the object according to the feature points;
a first judgment component, connected to the feature point extraction component and the object type judgment component respectively, used for judging whether there are a plurality of objects having the same object type and similar feature points, and outputting a first judgment result;
a second judgment component, connected to the first judgment component, used for further judging, when the first judgment result indicates that such objects exist, whether the plurality of objects having the same object type and similar feature points can be combined, and outputting a second judgment result;
and a processing component, connected to the second judgment component, used for, according to the second judgment result:
merging the feature points of the plurality of objects when they can be combined; and
deleting the feature points of the plurality of objects when they cannot be combined.
A fusion positioning method, applied to the above fusion positioning system and comprising the following steps:
step S1, processing with VSLAM technology to obtain the feature point coordinates of a plurality of objects in the surrounding environment and form a first feature point cloud map, and processing with AI object recognition technology to obtain the position coordinates and object types of the plurality of objects in the surrounding environment and form an object recognition result;
and step S2, fusing the first feature point cloud map and the object recognition result into a second feature point cloud map, and making a decision on the second feature point cloud map to obtain the coordinates of the current center point of the mobile device.
Preferably, in the fusion positioning method, a historical point cloud map library storing a plurality of historical point cloud maps is constructed in advance;
step S2 specifically includes:
step S21, fusing the first feature point cloud map and the object recognition result to form the second feature point cloud map;
step S22, extracting a plurality of matched historical point cloud maps from the historical point cloud map library according to the second feature point cloud map;
and step S23, processing the second feature point cloud map and the extracted historical point cloud maps to obtain the coordinates of the current center point of the mobile device.
Preferably, in the fusion positioning method, in step S1, the processing with AI object recognition technology to obtain the position coordinates and object types of the plurality of objects in the surrounding environment and form the object recognition result specifically includes the following steps:
step S11, extracting the coordinate values of a plurality of feature points on the surfaces of objects in the surrounding environment;
step S12, measuring and calculating, from the plurality of feature points of an object, the coordinate values of a plurality of uniform points evenly distributed over the object surface;
and step S13, recognizing the object according to the feature points and the uniform points to obtain the object recognition result.
The invention has the beneficial effects that:
the method and the device have the advantages that the AI object identification technology is combined with the VSLAM navigation technology, namely, the object feature point coordinates, the object position coordinates and the object types are combined, the AI object identification technology can effectively correct object drift errors caused by the VSLAM along with the time development, the capacity of positioning the position coordinates of a plurality of objects in the environment can be provided, and therefore the problem that the positioning result is inaccurate due to the fact that the VSLAM is only used for positioning the objects is solved. In addition, the problem that the position of the located object is inaccurate due to the fact that the object feature point coordinates acquired by means of the VSLAM technology are few can be prevented.
Drawings
FIG. 1 is a general system framework diagram of a fusion positioning system of the present invention;
FIG. 2 is a schematic diagram of a specific structure of a positioning module;
FIG. 3 is a schematic diagram of a decision logic unit;
FIG. 4 is a schematic structural diagram of an AI object recognition module;
FIG. 5 is a schematic diagram of a detailed structure of the uniform point extracting unit;
FIG. 6 is a schematic diagram of a specific structure of the feature point extracting unit;
FIG. 7 is a general flow diagram of a fusion positioning method;
FIG. 8 is a detailed flowchart of step S2;
FIG. 9 is a schematic flowchart of the specific process of performing object recognition by using the AI object recognition technique in step S1.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail below with reference to the drawings in the embodiments. It is obvious that the described embodiments are only some of the embodiments of the invention, not all of them. All other embodiments obtained by a person skilled in the art without inventive effort on the basis of the embodiments of the present invention fall within the scope of the present invention.
The invention is further described with reference to the following figures and specific examples.
The invention provides a fusion positioning system, suitable for locating the coordinates of the center point of a mobile device as the mobile device moves;
the fusion positioning system comprises:
a visual navigation module 1, used for processing with VSLAM technology to obtain the feature point coordinates of a plurality of objects in the surrounding environment and to form a first feature point cloud map;
an AI object recognition module 2, used for processing with AI object recognition technology to obtain the position coordinates and object types of a plurality of objects in the surrounding environment and to form an object recognition result;
and a positioning module 3, connected to the visual navigation module and the AI object recognition module respectively, used for fusing the first feature point cloud map and the object recognition result into a second feature point cloud map and for making a decision on the second feature point cloud map so as to obtain the coordinates of the current center point of the mobile device.
Specifically, VSLAM object positioning currently suffers from drift error, and in addition it is biased towards locating the position of the moving device itself and lacks the ability to provide positioning information for surrounding objects. The present application provides an object positioning technique that combines AI (artificial intelligence) object recognition with VSLAM, which can effectively correct the drift errors that VSLAM accumulates over time and can also locate the position coordinates of a plurality of objects in the environment.
The visual navigation module 1 in the present application performs positioning and navigation with VSLAM technology, and may rely on a camera device provided on the mobile device to acquire image data of the surrounding environment, such as a monocular camera, a binocular camera, or a visual odometer provided on the mobile device. The way in which the visual navigation module 1 positions objects from the surrounding environment and obtains a feature point cloud map is prior art and is not described here again.
The AI object recognition module 2 in the present application likewise obtains image data of the surrounding environment to be recognized through an image acquisition device, such as a camera, mounted on the mobile device, and recognizes the image data with a pre-trained neural network model, so as to obtain the position coordinates and object types of the objects in the surrounding environment contained in the image data. The process of training the neural network model and performing recognition is prior art and is not described here again.
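As an illustration only, the following sketch shows how such a pre-trained detector could supply object types, rough position coordinates and confidences. The torchvision Faster R-CNN model, the box-center position convention and the score threshold are assumptions, since the patent does not specify a particular network.

```python
# A hedged sketch of the AI object recognition step using an off-the-shelf detector.
import torch
import torchvision

# Assumption: any pre-trained detector would do; Faster R-CNN is used here for illustration.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def recognize_objects(image_tensor, score_threshold=0.5):
    """image_tensor: float tensor of shape (3, H, W) with values in [0, 1]."""
    with torch.no_grad():
        detections = model([image_tensor])[0]
    results = []
    for box, label, score in zip(detections["boxes"],
                                 detections["labels"],
                                 detections["scores"]):
        if score >= score_threshold:
            # Use the box center as a rough 2D position coordinate for the object.
            cx = float((box[0] + box[2]) / 2)
            cy = float((box[1] + box[3]) / 2)
            results.append({"type": int(label), "position": (cx, cy),
                            "confidence": float(score)})
    return results
```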
In the present application, after the first feature point cloud map output by the visual navigation module 1 and the object recognition result output by the AI object recognition module 2 are obtained, in order to solve the drift error of the VSLAM technology alone in the prior art, the first feature point cloud map and the object recognition result are fused into a second feature point cloud map, and the second feature point cloud map is used to locate the coordinates of the current center point of the mobile device. In this way the surrounding objects recognized by the AI object recognition technology are used to correct the drift error produced by the VSLAM technology, making the final positioning result more accurate.
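A minimal sketch of this fusion step is given below. It assumes the first feature point cloud map is a list of 3D points, the recognition result carries 3D object positions, and a fixed association radius is used; all of these data formats are assumptions, as the patent leaves them open.

```python
# Sketch: attach AI recognition labels to VSLAM feature points to form the second cloud map.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class LabelledPoint:
    xyz: Tuple[float, float, float]      # feature point coordinate from VSLAM
    object_type: Optional[int] = None    # filled in from the AI recognition result
    confidence: float = 0.0

def fuse(first_cloud: List[Tuple[float, float, float]], recognition_result, radius=0.5):
    """Label VSLAM feature points that lie near recognized objects."""
    second_cloud = [LabelledPoint(xyz=p) for p in first_cloud]
    for obj in recognition_result:       # each obj: {"type", "position", "confidence"}
        ox, oy, oz = obj["position"]
        for lp in second_cloud:
            px, py, pz = lp.xyz
            if (px - ox) ** 2 + (py - oy) ** 2 + (pz - oz) ** 2 <= radius ** 2:
                lp.object_type = obj["type"]
                lp.confidence = obj["confidence"]
    return second_cloud
```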
In a preferred embodiment of the present invention, as shown in fig. 2, the positioning module 3 specifically includes:
a fusion unit 31, configured to fuse the first feature point cloud map and the object recognition result into the second feature point cloud map;
and a decision logic unit 32, connected to the fusion unit 31 and configured to perform decision recognition on the second feature point cloud map so as to determine the coordinates of the current center point of the mobile device.
Further, as shown in fig. 3, the decision logic unit 32 specifically includes:
a storage component 321, configured to pre-construct a historical point cloud map library in which a plurality of historical point cloud maps are stored;
a matching component 322, connected to the storage component 321 and configured to extract a plurality of matched historical point cloud maps from the historical point cloud map library according to the second feature point cloud map;
and a decision component 323, connected to the matching component 322 and configured to process the second feature point cloud map and the extracted historical point cloud maps to obtain the coordinates of the current center point of the mobile device.
Further, the matching component 322 matches a plurality of similar historical point cloud maps from the historical point cloud map library according to the types and numbers of the objects included in the second feature point cloud map, and outputs them;
the decision component 323 is configured to compare the extracted historical point cloud maps with the second feature point cloud map and, according to the comparison result, calculate the coordinates of the current center point of the mobile device using a camera pose estimation algorithm.
Specifically, in this embodiment, point cloud maps of the objects contained in the surrounding environment are first obtained from a number of images of that environment acquired in advance, and these point cloud maps are stored in the historical point cloud map library as historical point cloud maps. Each historical point cloud map relates to the current environment of the mobile device and may include different objects placed in the environment as well as different positions and states of the same object; in other words, multiple historical point cloud maps are used to represent changes of the current environment. Naturally, the more historical point cloud maps there are in the library, the more completely the changing states of the current environment can be described, and the more accurate the final positioning coordinates will be.
After the historical point cloud maps have been set up, the matching component 322 performs matching against the historical point cloud map library according to the information contained in the second feature point cloud map, such as object position coordinates, object types and object numbers, so as to find and output the historical point cloud maps whose object types and object numbers are similar. "Similar" here denotes a fuzzy matching process, which can be implemented by weighting and sorting matching scores; this is not described in detail here.
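The following sketch illustrates one way such fuzzy matching could be scored and sorted. The summary format (a mapping from object type to count), the weights and the cut-off k are illustrative assumptions, not the patent's prescribed scheme.

```python
# Sketch: fuzzy matching of historical point cloud maps by object types and counts.
def match_history(second_cloud_summary, history_summaries, k=5,
                  w_type=0.6, w_count=0.4):
    """Return the k historical map summaries whose object types and counts are closest."""
    def score(hist):
        shared = set(second_cloud_summary) & set(hist)
        all_types = set(second_cloud_summary) | set(hist)
        # Fraction of object types the two maps have in common.
        type_score = len(shared) / max(len(all_types), 1)
        # How similar the counts of the shared types are.
        count_score = sum(
            min(second_cloud_summary[t], hist[t]) / max(second_cloud_summary[t], hist[t])
            for t in shared) / max(len(shared), 1)
        return w_type * type_score + w_count * count_score

    return sorted(history_summaries, key=score, reverse=True)[:k]
```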
After several similar historical point cloud maps have been obtained, the decision component 323 calculates information such as the coordinate position and angle of the current center point from the distribution of objects and feature points in those historical point cloud maps and the distribution of the corresponding objects and feature points in the current second feature point cloud map. Specifically, the coordinates and angle of the current center point of the mobile device can be calculated with a camera pose estimation algorithm from the different positions at which objects of the same type appear in the historical point cloud maps and in the second feature point cloud map. Of course, calibrating against a single reference object is not sufficient, so if the same objects appear in several historical point cloud maps, one calculation will take several of the same objects as references at the same time to compute candidate center point coordinates; after results that deviate too much are removed, the remaining results are averaged, or combined by weighted averaging, to obtain the coordinates of the current center point of the mobile device. The weighted averaging may be based on the confidence of object recognition and the confidence of feature point extraction: the higher the confidence, the larger the weighting coefficient.
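A sketch of the outlier removal and confidence-weighted averaging described here is shown below. It assumes each reference object has already produced a candidate center-point estimate (for example from a camera pose estimation step such as cv2.solvePnP), and the deviation threshold is an illustrative assumption.

```python
# Sketch: combine per-object center-point estimates into one fused estimate.
import numpy as np

def fuse_center_estimates(estimates, confidences, max_dev=1.0):
    """estimates: (N, 2) candidate (x, y) centers; confidences: (N,) values in [0, 1]."""
    estimates = np.asarray(estimates, dtype=float)
    confidences = np.asarray(confidences, dtype=float)

    # Remove candidate results that deviate too much from the median estimate.
    median = np.median(estimates, axis=0)
    keep = np.linalg.norm(estimates - median, axis=1) <= max_dev
    if not keep.any():          # fall back to the median if everything is rejected
        return median
    estimates, confidences = estimates[keep], confidences[keep]

    # Higher object-recognition / feature-extraction confidence -> larger weight.
    weights = confidences / confidences.sum()
    return (weights[:, None] * estimates).sum(axis=0)
```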
In a preferred embodiment of the present invention, as shown in fig. 4, the AI object recognition module 2 specifically includes:
a feature point extraction unit 21, used for extracting the coordinate values of a plurality of feature points on the surfaces of objects in the surrounding environment;
a uniform point extraction unit 22, connected to the feature point extraction unit 21 and used for measuring and calculating, from the plurality of feature points of an object, the coordinate values of a plurality of uniform points evenly distributed over the object surface;
and a recognition unit 23, connected to the feature point extraction unit 21 and the uniform point extraction unit 22 respectively and used for recognizing the object according to the feature points and the uniform points so as to obtain the object recognition result.
Compared with the process in conventional AI object recognition technology, where objects are recognized only from extracted image feature points, this embodiment further adds uniform points on the object surface on top of the object feature points, thereby supplementing the object feature points and solving the problem that an object cannot be recognized because it has too few feature points.
Specifically, in this embodiment, the feature point extraction unit 21 is used to extract the coordinates of a plurality of feature points on the surfaces of objects in the surrounding environment. The feature points may be obtained by differential point selection, that is, by judging whether a pixel differs from its surrounding pixels and taking pixels with obvious differences as feature points, thereby constructing the feature points of the whole object surface. Of course, the feature point information may also be obtained by other prior-art methods, which are not described again.
Further, in this embodiment, after the feature point information has been acquired, the uniform point information of the object surface is measured and calculated from the feature point information. The uniform points are evenly distributed points that are constructed on the object surface and supplement the feature point information. The uniform point information can be measured from the acquired feature point information, for example by taking a feature point as a reference, placing a uniform point at a fixed distance from it, and using the edge of the object surface as the limit, thereby measuring out the uniform point information of the object surface.
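The following sketch illustrates one possible way to measure such uniform points, assuming the object surface is delimited by the 2D bounding box of its feature points and sampling on a fixed grid step; both assumptions go beyond what the patent specifies.

```python
# Sketch: generate evenly distributed uniform points bounded by the feature points.
import numpy as np

def uniform_points_from_features(feature_points, step=0.05):
    """feature_points: (N, 2) array of surface feature coordinates."""
    pts = np.asarray(feature_points, dtype=float)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    # Sample a regular grid at a fixed distance, limited by the feature point extent.
    xs = np.arange(x_min, x_max + step, step)
    ys = np.arange(y_min, y_max + step, step)
    grid_x, grid_y = np.meshgrid(xs, ys)
    return np.stack([grid_x.ravel(), grid_y.ravel()], axis=1)
```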
In this embodiment, after the coordinates of the feature points and the uniform points have been obtained, these coordinates are combined to recognize the object, so as to obtain the object's position coordinates and type.
In a preferred embodiment of the present invention, as shown in fig. 5, the uniform point extraction unit 22 specifically includes:
a uniform point calculation component 221, configured to measure and calculate, from the plurality of feature points of the object, the coordinate values of a plurality of uniform points evenly distributed over the object surface;
a depth calculation component 222, connected to the uniform point calculation component 221 and used for judging, for each uniform point, whether its depth coordinate value deviates too much from the other uniform points and exceeds a preset depth threshold, and outputting a judgment result;
and a uniform point culling component 223, connected to the depth calculation component 222 and used for culling the corresponding uniform point when the judgment result indicates that its depth coordinate value deviates too much from the other uniform points and exceeds the preset depth threshold;
the uniform point extraction unit 22 outputs all the uniform points remaining after culling.
Specifically, in this embodiment, since the uniform points are obtained by measurement and calculation, the obtained uniform points need to be screened and culled in order to avoid keeping invalid uniform points. The screening method may be as follows: first acquire the depth information (Z-axis coordinate) of each uniform point in the XYZ frame, then judge according to the Z-axis coordinate; if the Z-axis coordinate of a uniform point satisfies the following two conditions at the same time, the uniform point needs to be culled:
1) the depth value of the uniform point exceeds a preset depth threshold, which can be set in advance to a reasonable value;
2) the depth value of the uniform point differs too much from the depth values of the other surrounding uniform points; this can be checked by calculating the differences between the depth value of the uniform point and the depth values of the surrounding uniform points and comparing them with a preset difference threshold. If the proportion of differences larger than the difference threshold exceeds a preset percentage (for example, 80%), the depth value of the uniform point is considered to differ too much from the surrounding uniform points.
Finally, the uniform points remaining after culling are kept and output as the points participating in subsequent calculation; this screening rule is sketched in the example below.
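A sketch of this two-condition screening rule follows; the depth threshold, difference threshold and percentage are illustrative values only.

```python
# Sketch: cull a uniform point only when BOTH conditions described above hold.
import numpy as np

def cull_uniform_points(points_xyz, depth_threshold=5.0,
                        diff_threshold=0.3, percentage=0.8):
    """points_xyz: (N, 3) array; the Z column is the depth coordinate."""
    pts = np.asarray(points_xyz, dtype=float)
    depths = pts[:, 2]
    keep = np.ones(len(pts), dtype=bool)
    for i, z in enumerate(depths):
        # Condition 1: the depth value exceeds the preset depth threshold.
        cond1 = z > depth_threshold
        # Condition 2: the depth differs from more than `percentage` of the
        # other uniform points by more than the difference threshold.
        others = np.delete(depths, i)
        frac_large_diff = np.mean(np.abs(others - z) > diff_threshold)
        cond2 = frac_large_diff > percentage
        if cond1 and cond2:
            keep[i] = False
    return pts[keep]
```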
In a preferred embodiment of the present invention, as shown in fig. 6, the feature point extraction unit 21 specifically includes:
a feature point extraction component 211, used for extracting a plurality of feature points of the object surface;
an object type judgment component 212, connected to the feature point extraction component 211 and used for judging the object type of the object from the feature points;
a first judgment component 213, connected to the feature point extraction component 211 and the object type judgment component 212 respectively, used for judging whether there are a plurality of objects having the same object type and similar feature points, and outputting a first judgment result;
a second judgment component 214, connected to the first judgment component 213, used for further judging, when the first judgment result indicates that a plurality of such objects exist, whether they can be combined, and outputting a second judgment result;
and a processing component 215, connected to the second judgment component 214, used for, according to the second judgment result:
merging the feature points of the plurality of objects when they can be combined; and
deleting the feature points of the plurality of objects when they cannot be combined.
Specifically, in this embodiment, when feature points are extracted for surrounding objects, the recognition of an object's contour is sometimes inaccurate, which leads to the object being wrongly segmented and ultimately reduces the accuracy of the recognition result. To avoid this problem, the feature point information needs to be merged after it has been acquired. For example, two wardrobes placed together in the surrounding environment might be split into two cabinets during recognition, which is inconsistent with the actual environment, so the feature points need to be merged before recognition. The specific merging process is as follows:
First, it is judged from the acquired feature point information whether objects of the same type exist in the environment; in other words, the object types are recognized once from the acquired object feature points.
If objects of the same type exist and their feature point information is similar (the numbers and coordinate values of the feature points are close), it is further judged whether these same-type objects can be combined; the feature points of combinable objects are merged. Whether objects can be combined may be judged as follows: check whether the coordinates of the objects are close; if so, the objects can be combined, otherwise they cannot.
If the two conditions "same object type" and "similar feature point information" cannot both be satisfied, the objects cannot be combined and are independent of each other, and their feature points are processed normally.
Finally, the feature points of the objects remaining after the merging process are output for subsequent recognition, as sketched below.
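The merging step can be sketched as follows, assuming each candidate object carries a type label, a rough center and its feature points; the distance threshold used to decide whether coordinates are "close" is an illustrative assumption.

```python
# Sketch: merge feature points of same-type objects whose centers are close together.
import numpy as np

def merge_same_type_objects(objects, centre_dist_threshold=0.3):
    """objects: list of dicts {"type": int, "centre": (x, y, z), "features": (N, 3) array}."""
    merged, used = [], set()
    for i, a in enumerate(objects):
        if i in used:
            continue
        group = [a]
        for j in range(i + 1, len(objects)):
            if j in used:
                continue
            b = objects[j]
            same_type = a["type"] == b["type"]
            close = np.linalg.norm(np.subtract(a["centre"], b["centre"])) < centre_dist_threshold
            if same_type and close:      # combinable: merge the feature points
                group.append(b)
                used.add(j)
        merged.append({
            "type": a["type"],
            "features": np.concatenate([g["features"] for g in group], axis=0),
        })
    return merged
```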
In a preferred embodiment of the present invention, the fusion positioning system can be mounted entirely on the mobile device, or all functions other than image acquisition can be processed in the cloud, in which case the mobile device only needs to acquire images of the surrounding environment and upload them to the cloud.
In a preferred embodiment of the present invention, based on the fusion positioning system described above, a fusion positioning method is provided, as shown in fig. 7, comprising:
step S1, processing with VSLAM technology to obtain the feature point coordinates of a plurality of objects in the surrounding environment and form a first feature point cloud map, and processing with AI object recognition technology to obtain the position coordinates and object types of the plurality of objects in the surrounding environment and form an object recognition result;
and step S2, fusing the first feature point cloud map and the object recognition result into a second feature point cloud map, and making a decision on the second feature point cloud map to obtain the coordinates of the current center point of the mobile device.
Further, in a preferred embodiment of the present invention, as shown in fig. 8, step S2 specifically includes:
step S21, fusing the first feature point cloud map and the object recognition result to form the second feature point cloud map;
step S22, extracting a plurality of matched historical point cloud maps from the historical point cloud map library according to the second feature point cloud map;
and step S23, processing the second feature point cloud map and the extracted historical point cloud maps to obtain the coordinates of the current center point of the mobile device.
In a preferred embodiment of the present invention, as shown in fig. 9, in step S1 the processing with AI object recognition technology to obtain the position coordinates and object types of the plurality of objects in the surrounding environment and form the object recognition result specifically includes the following steps:
step S11, extracting the coordinate values of a plurality of feature points on the surfaces of objects in the surrounding environment;
step S12, measuring and calculating, from the plurality of feature points of an object, the coordinate values of a plurality of uniform points evenly distributed over the object surface;
and step S13, recognizing the object according to the feature points and the uniform points to obtain the object recognition result.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (10)

1. A fusion positioning system, adapted to locate the coordinates of the center point of a mobile device as the mobile device moves;
the fusion positioning system comprising:
a visual navigation module, used for processing with VSLAM technology to obtain the feature point coordinates of a plurality of objects in the surrounding environment and to form a first feature point cloud map;
an AI object recognition module, used for processing with AI object recognition technology to obtain the position coordinates and object types of a plurality of objects in the surrounding environment and to form an object recognition result;
and a positioning module, connected to the visual navigation module and the AI object recognition module respectively, used for fusing the first feature point cloud map and the object recognition result into a second feature point cloud map and for making a decision on the second feature point cloud map so as to obtain the coordinates of the current center point of the mobile device.
2. The fusion positioning system of claim 1, wherein the positioning module specifically comprises:
a fusion unit, configured to fuse the first feature point cloud map and the object recognition result into the second feature point cloud map;
and a decision logic unit, connected to the fusion unit and used for performing decision recognition on the second feature point cloud map so as to determine the coordinates of the current center point of the mobile device.
3. The fusion positioning system of claim 2, wherein the decision logic unit specifically comprises:
a storage component, used for pre-constructing a historical point cloud map library in which a plurality of historical point cloud maps are stored;
a matching component, connected to the storage component and used for extracting a plurality of matched historical point cloud maps from the historical point cloud map library according to the second feature point cloud map;
and a decision component, connected to the matching component and used for processing the second feature point cloud map and the extracted historical point cloud maps to obtain the coordinates of the current center point of the mobile device.
4. The fusion positioning system of claim 3, wherein the matching component matches a plurality of similar historical point cloud maps from the historical point cloud map library according to the types and numbers of objects included in the second feature point cloud map, and outputs them;
the decision component is configured to compare the extracted historical point cloud maps with the second feature point cloud map and, according to the comparison result, calculate the coordinates of the current center point of the mobile device using a camera pose estimation algorithm.
5. The fusion positioning system of claim 1, wherein the AI object recognition module specifically comprises:
a feature point extraction unit, used for extracting the coordinate values of a plurality of feature points on the surfaces of objects in the surrounding environment;
a uniform point extraction unit, connected to the feature point extraction unit and used for measuring and calculating, from the plurality of feature points of an object, the coordinate values of a plurality of uniform points evenly distributed over the object surface;
and a recognition unit, connected to the feature point extraction unit and the uniform point extraction unit respectively and used for recognizing the object according to the feature points and the uniform points so as to obtain the object recognition result.
6. The fusion positioning system of claim 5, wherein the uniform point extraction unit specifically comprises:
a uniform point calculation component, used for measuring and calculating, from the plurality of feature points of the object, the coordinate values of a plurality of uniform points evenly distributed over the object surface;
a depth calculation component, connected to the uniform point calculation component and used for judging, for each uniform point, whether its depth coordinate value deviates too much from the other uniform points and exceeds a preset depth threshold, and outputting a judgment result;
and a uniform point culling component, connected to the depth calculation component and used for culling the corresponding uniform point when the judgment result indicates that its depth coordinate value deviates too much from the other uniform points and exceeds the preset depth threshold;
the uniform point extraction unit outputs all the uniform points remaining after culling.
7. The fusion positioning system of claim 5, wherein the feature point extraction unit specifically comprises:
a feature point extraction component, configured to extract a plurality of feature points of the object surface;
an object type judgment component, connected to the feature point extraction component and used for judging the object type of the object according to the feature points;
a first judgment component, connected to the feature point extraction component and the object type judgment component respectively, used for judging whether there are a plurality of objects having the same object type and similar feature points, and outputting a first judgment result;
a second judgment component, connected to the first judgment component, used for further judging, when the first judgment result indicates that such objects exist, whether the plurality of objects having the same object type and similar feature points can be combined, and outputting a second judgment result;
and a processing component, connected to the second judgment component, used for, according to the second judgment result:
merging the feature points of the plurality of objects when they can be combined; and
deleting the feature points of the plurality of objects when they cannot be combined.
8. A fusion positioning method, applied to the fusion positioning system according to any one of claims 1 to 7 and comprising:
step S1, processing with VSLAM technology to obtain the feature point coordinates of a plurality of objects in the surrounding environment and form a first feature point cloud map, and processing with AI object recognition technology to obtain the position coordinates and object types of the plurality of objects in the surrounding environment and form an object recognition result;
and step S2, fusing the first feature point cloud map and the object recognition result into a second feature point cloud map, and making a decision on the second feature point cloud map to obtain the coordinates of the current center point of the mobile device.
9. The fusion positioning method according to claim 8, wherein a historical point cloud map library storing a plurality of historical point cloud maps is constructed in advance;
step S2 specifically comprises:
step S21, fusing the first feature point cloud map and the object recognition result to form the second feature point cloud map;
step S22, extracting a plurality of matched historical point cloud maps from the historical point cloud map library according to the second feature point cloud map;
and step S23, processing the second feature point cloud map and the extracted historical point cloud maps to obtain the coordinates of the current center point of the mobile device.
10. The fusion positioning method according to claim 8, wherein in step S1 the processing with AI object recognition technology to obtain the position coordinates and object types of the plurality of objects in the surrounding environment and form the object recognition result specifically comprises the following steps:
step S11, extracting the coordinate values of a plurality of feature points on the surfaces of objects in the surrounding environment;
step S12, measuring and calculating, from the plurality of feature points of an object, the coordinate values of a plurality of uniform points evenly distributed over the object surface;
and step S13, recognizing the object according to the feature points and the uniform points to obtain the object recognition result.
CN202110121774.XA 2021-01-28 2021-01-28 Fusion positioning system and method Pending CN112902966A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110121774.XA CN112902966A (en) 2021-01-28 2021-01-28 Fusion positioning system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110121774.XA CN112902966A (en) 2021-01-28 2021-01-28 Fusion positioning system and method

Publications (1)

Publication Number Publication Date
CN112902966A true CN112902966A (en) 2021-06-04

Family

ID=76120028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110121774.XA Pending CN112902966A (en) 2021-01-28 2021-01-28 Fusion positioning system and method

Country Status (1)

Country Link
CN (1) CN112902966A (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107230218A (en) * 2016-03-24 2017-10-03 德尔菲技术公司 Method and apparatus for generating the confidence level measurement to estimating derived from the image of the cameras capture on delivery vehicle
US20170278014A1 (en) * 2016-03-24 2017-09-28 Delphi Technologies, Inc. Method and a device for generating a confidence measure for an estimation derived from images captured by a camera mounted on a vehicle
CN107742311A (en) * 2017-09-29 2018-02-27 北京易达图灵科技有限公司 A kind of method and device of vision positioning
WO2019210978A1 (en) * 2018-05-04 2019-11-07 Huawei Technologies Co., Ltd. Image processing apparatus and method for an advanced driver assistance system
CN109102547A (en) * 2018-07-20 2018-12-28 上海节卡机器人科技有限公司 Robot based on object identification deep learning model grabs position and orientation estimation method
WO2020126596A1 (en) * 2018-12-18 2020-06-25 Robert Bosch Gmbh Method for determining an integrity range
CN111666797A (en) * 2019-03-08 2020-09-15 深圳市速腾聚创科技有限公司 Vehicle positioning method and device and computer equipment
CN110147095A (en) * 2019-03-15 2019-08-20 广东工业大学 Robot method for relocating based on mark information and Fusion
CN110807774A (en) * 2019-09-30 2020-02-18 广东工业大学 Point cloud classification and semantic segmentation method
CN110942515A (en) * 2019-11-26 2020-03-31 北京迈格威科技有限公司 Point cloud-based target object three-dimensional computer modeling method and target identification method
CN110956651A (en) * 2019-12-16 2020-04-03 哈尔滨工业大学 Terrain semantic perception method based on fusion of vision and vibrotactile sense
CN111369610A (en) * 2020-03-05 2020-07-03 山东交通学院 Point cloud data gross error positioning and eliminating method based on credibility information
CN111563442A (en) * 2020-04-29 2020-08-21 上海交通大学 Slam method and system for fusing point cloud and camera image data based on laser radar
CN111798475A (en) * 2020-05-29 2020-10-20 浙江工业大学 Indoor environment 3D semantic map construction method based on point cloud deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
何松 et al., "Semantic map construction based on laser SLAM and deep learning", 《计算机研究与发展》 (Journal of Computer Research and Development), vol. 30, no. 9, 30 September 2020 (2020-09-30), pages 1-7 *
杨海清; 唐怡豪; 许倩倩; 孙道洋, "Target tracking algorithm combining discriminative correlation filtering with depth information", 小型微型计算机系统 (Journal of Chinese Computer Systems), no. 04, 9 April 2020 (2020-04-09) *

Similar Documents

Publication Publication Date Title
CN107833236B (en) Visual positioning system and method combining semantics under dynamic environment
JP6760114B2 (en) Information processing equipment, data management equipment, data management systems, methods, and programs
CN112233177B (en) Unmanned aerial vehicle pose estimation method and system
CN106960449B (en) Heterogeneous registration method based on multi-feature constraint
CN112883820B (en) Road target 3D detection method and system based on laser radar point cloud
CN111340922A (en) Positioning and mapping method and electronic equipment
CN114279433B (en) Automatic map data production method, related device and computer program product
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN114677323A (en) Semantic vision SLAM positioning method based on target detection in indoor dynamic scene
CN113011285B (en) Lane line detection method and device, automatic driving vehicle and readable storage medium
CN111998862A (en) Dense binocular SLAM method based on BNN
JP6922348B2 (en) Information processing equipment, methods, and programs
CN115830070A (en) Infrared laser fusion positioning method for inspection robot of traction substation
CN110851978A (en) Camera position optimization method based on visibility
CN114723811A (en) Stereo vision positioning and mapping method for quadruped robot in unstructured environment
CN116147618B (en) Real-time state sensing method and system suitable for dynamic environment
CN112731503A (en) Pose estimation method and system based on front-end tight coupling
CN111553342A (en) Visual positioning method and device, computer equipment and storage medium
CN116862832A (en) Three-dimensional live-action model-based operator positioning method
CN112902966A (en) Fusion positioning system and method
CN114399532A (en) Camera position and posture determining method and device
CN109827578B (en) Satellite relative attitude estimation method based on profile similitude
CN114373144B (en) Automatic identification method for circular identification points in high-speed video
CN118314162B (en) Dynamic visual SLAM method and device for time sequence sparse reconstruction
CN117576218B (en) Self-adaptive visual inertial navigation odometer output method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination