CN115719446A - Feature point integration positioning system and feature point integration positioning method - Google Patents

Feature point integration positioning system and feature point integration positioning method

Info

Publication number
CN115719446A
CN115719446A
Authority
CN
China
Prior art keywords
feature points
image
deep learning
experiment
integration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110967264.4A
Other languages
Chinese (zh)
Inventor
王俞芳
林義傑
王正楷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Automotive Research and Testing Center
Original Assignee
Automotive Research and Testing Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Automotive Research and Testing Center filed Critical Automotive Research and Testing Center
Priority to CN202110967264.4A priority Critical patent/CN115719446A/en
Publication of CN115719446A publication Critical patent/CN115719446A/en
Pending legal-status Critical Current

Abstract

The invention provides a feature point integration positioning system and a feature point integration positioning method. An image input source captures an environment to obtain a sequence of image data that includes a plurality of images. An analysis module is in signal connection with the image input source to receive the sequence of image data, and includes a machine vision detection unit, a deep learning detection unit, and an integration unit. The machine vision detection unit generates a plurality of first feature points from each image, the deep learning detection unit generates a plurality of second feature points from each image, and the integration unit integrates the first feature points and the second feature points into integrated feature points. A positioning module receives the integrated feature points of each image to confirm the position of a moving body relative to the environment at each time point, thereby improving positioning stability.

Description

Feature point integration positioning system and feature point integration positioning method
Technical Field
The present invention relates to a feature point integration positioning system and a feature point integration positioning method, and more particularly to a feature point integration positioning system and method applied to image-based (visual) SLAM.
Background
Simultaneous Localization and Mapping (hereinafter referred to as SLAM) is a technique in which an object senses features of its surrounding environment while moving, so as to build a map of that environment and simultaneously localize itself relative to it. Because SLAM can localize and build a map at the same time, demand for it has grown in recent years, with applications such as indoor automated parking, warehouse logistics management, and mobile-phone exhibition-hall guidance. Owing to sensor cost, image-based SLAM, which mainly detects images, is more widely used in the market than LiDAR SLAM, which mainly detects point clouds.
For image-based SLAM, positioning stability is as important as positioning accuracy. The biggest problem of conventional image SLAM is insufficient stability: the current position is easily lost during positioning, or it takes too long to recover the position after it is lost. The problem is especially obvious in scenes with severe environmental changes, such as turns and places where the lighting changes. In addition, conventional image SLAM has poor positioning accuracy outdoors and is easily affected by lighting variation such as front lighting and backlighting, road turns, or environmental variation caused by vehicles being parked differently, which leads to failures in map construction or positioning.
In view of this, improving the positioning stability of image-based SLAM is a goal of manufacturers in the field.
Disclosure of Invention
In order to solve the above problems, the present invention provides a feature point integrated positioning system and a feature point integrated positioning method, which can effectively improve positioning stability through feature point integration.
According to an embodiment of the present invention, a feature point integration positioning system includes a moving body, an image input source, an analysis module, and a positioning module. The image input source is disposed on the moving body and is used to capture an environment to obtain a sequence of image data, wherein the sequence of image data includes a plurality of images that correspond one-to-one to a plurality of time points. The analysis module is in signal connection with the image input source to receive the sequence of image data, and includes a machine vision detection unit, a deep learning detection unit, and an integration unit. The machine vision detection unit generates a plurality of first feature points of each image according to that image, the deep learning detection unit generates a plurality of second feature points of each image according to that image, and the integration unit integrates the first feature points and the second feature points of each image into a plurality of integrated feature points of that image. The positioning module is in signal connection with the analysis module and receives the integrated feature points of each image to confirm a position of the moving body relative to the environment at each time point.
Therefore, the second feature points generated by the deep learning detection unit can compensate for the deficiencies of the first feature points, making positioning more accurate and improving positioning stability.
In an embodiment of the feature point integration positioning system according to the foregoing embodiment, the machine vision detection unit may obtain the first feature points of each image by an ORB algorithm or a SIFT algorithm.
An embodiment of the feature point integration positioning system according to the foregoing embodiments may further include a map building module for building a map of the environment.
According to an embodiment of the feature point integration positioning system in the foregoing embodiment, the deep learning detection unit may be trained and matched with a plurality of environment difference images in advance to establish a deep learning model, and the deep learning model is used to determine the plurality of second feature points.
According to another embodiment of the present invention, a feature point integration positioning method is provided, which includes a capturing step, an analyzing step, an integration step, and a positioning step. In the capturing step, an image input source captures an environment to obtain a sequence of image data, wherein the sequence of image data includes a plurality of images that correspond one-to-one to a plurality of time points. In the analyzing step, a machine vision detection unit generates a plurality of first feature points of each image according to that image, and a deep learning detection unit generates a plurality of second feature points of each image according to that image. In the integration step, an integration unit integrates the first feature points and the second feature points of each image into a plurality of integrated feature points of that image. In the positioning step, a moving body is positioned according to the integrated feature points of each image.
In an embodiment of the feature point integration positioning method according to the foregoing embodiment, in the integration step, the integration unit obtains three-dimensional point group data for the integrated feature points of each image through stereo geometry.
In an embodiment of the feature point integrated positioning method according to the foregoing embodiment, in the positioning step, a map of an environment may be constructed according to the foregoing integrated feature points of each image.
According to an embodiment of the feature point integration positioning method of the foregoing embodiment, the method may further include a pre-matching step, which includes: training the deep learning detection unit with a plurality of environment difference images and establishing a deep learning model of the deep learning detection unit; enabling the machine vision detection unit to generate a plurality of previous-frame first experimental feature points and a plurality of following-frame first experimental feature points from two time-sequential experimental images, and enabling the deep learning detection unit to use the deep learning model to generate a plurality of previous-frame second experimental feature points and a plurality of following-frame second experimental feature points for the two experimental images; enabling the integration unit to integrate the previous-frame first experimental feature points and the previous-frame second experimental feature points into a plurality of previous-frame integrated experimental feature points, and to integrate the following-frame first experimental feature points and the following-frame second experimental feature points into a plurality of following-frame integrated experimental feature points; and matching the following-frame integrated experimental feature points with the previous-frame integrated experimental feature points to obtain a similarity. If the similarity is greater than or equal to a threshold, the deep learning model is used by the deep learning detection unit in the analyzing step; if the similarity is lower than the threshold, the pre-matching step is repeated: the deep learning detection unit is retrained to establish another deep learning model of the deep learning detection unit, and the following-frame and previous-frame integrated experimental feature points are updated to obtain another similarity.
In an embodiment of the feature point integration positioning method according to the foregoing embodiment, a plurality of Euclidean distances may be calculated when the following-frame integrated experimental feature points are matched with the previous-frame integrated experimental feature points.
In an embodiment of the feature point integration positioning method according to the foregoing embodiments, a plurality of included angles may be calculated when the following-frame integrated experimental feature points are matched with the previous-frame integrated experimental feature points.
According to an embodiment of the feature point integration positioning method of the foregoing embodiment, a plurality of objects in each environment difference image may have a light difference or a position difference.
Drawings
FIG. 1 is a block diagram of a feature point integration positioning system according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the first feature points of an image generated by the machine vision detection unit in the embodiment of FIG. 1;
FIG. 3 is a schematic diagram of the second feature points of an image generated by the deep learning detection unit in the embodiment of FIG. 1;
FIG. 4 is a schematic diagram of the integrated feature points of an image synthesized by the integration unit in the embodiment of FIG. 1;
FIG. 5 is a diagram illustrating the relationship between positioning error and time for the feature point integration positioning system of the embodiment of FIG. 1 and a comparative example;
FIG. 6 is a flow block diagram illustrating a feature point integration positioning method according to another embodiment of the present invention; and
FIG. 7 is a flowchart illustrating the pre-matching step of the embodiment shown in FIG. 6.
[ description of symbols ]
100 feature point integration positioning system
110 moving body
120 image input source
130 analysis module
131 machine vision detection unit
132 deep learning detection unit
133 integration unit
140 positioning module
150 map building module
200 feature point integration positioning method
210 capturing step
220 analyzing step
230 integration step
240 positioning step
250 pre-matching step
251,252,253,254,255,256,257,258 sub-steps
F1 first feature point
F2 second feature point
F3 integrated feature point
Detailed Description
Embodiments of the present invention will be described below with reference to the accompanying drawings. For the purposes of clarity, numerous implementation details are set forth in the following description. However, the reader should understand that these implementation details should not be used to limit the invention. That is, in some embodiments of the invention, these implementation details are not necessary. In addition, for the sake of simplicity, some conventional structures and elements are shown in the drawings in a simplified schematic manner; and repeated elements will likely be referred to using the same reference number or similar reference numbers.
In addition, when an element (or a mechanism, module, etc.) is "connected", "disposed on", or "coupled to" another element, it can be directly connected, disposed on, or coupled to that element, or it can be indirectly connected, disposed on, or coupled to it, with other elements in between. When an element is described as being directly connected, disposed on, or coupled to another element, no intervening elements are present. The terms first, second, third, etc. are used only to distinguish elements or components and do not limit the elements or components themselves, so a first element/component could also be referred to as a second element/component. Moreover, the combinations of elements/components/mechanisms/modules described herein are not combinations that are commonly known, conventional, or familiar in the art, and whether such a combination is known therefore cannot be readily determined by a person skilled in the art.
Referring to fig. 1, fig. 1 is a block diagram illustrating a feature point integration positioning system 100 according to an embodiment of the invention. The feature point integration positioning system 100 includes a moving body 110, an image input source 120, an analysis module 130, and a positioning module 140. The image input source 120 is disposed on the moving body 110 and is configured to capture an environment to obtain a sequence of image data, where the sequence of image data includes a plurality of images that correspond one-to-one to a plurality of time points. The analysis module 130 is in signal connection with the image input source 120 to receive the sequence of image data, and the analysis module 130 includes a machine vision detection unit 131, a deep learning detection unit 132, and an integration unit 133. The machine vision detection unit 131 generates a plurality of first feature points F1 (shown in fig. 2) of each image according to that image, the deep learning detection unit 132 generates a plurality of second feature points F2 (shown in fig. 3) of each image according to that image, and the integration unit 133 integrates the first feature points F1 and the second feature points F2 of each image into a plurality of integrated feature points F3 (shown in fig. 4) of that image. The positioning module 140 is in signal connection with the analysis module 130 and receives the integrated feature points F3 of each image to determine a position of the moving body 110 relative to the environment at each time point.
Therefore, the second feature point F2 generated by the deep learning detection unit 132 can make up for the deficiency of the first feature point F1, so as to enable positioning to be more accurate and improve positioning stability. Details of the feature point integration positioning system 100 will be described later.
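To make the data flow concrete, the following is a minimal Python sketch of the described pipeline; all class, function, and attribute names are illustrative placeholders rather than terms from the patent, and the detector internals are left abstract:

from dataclasses import dataclass
from typing import List, Tuple

import numpy as np

@dataclass
class Frame:
    timestamp: float          # the time point of this image
    image: np.ndarray         # one image of the sequence of image data

class AnalysisModule:
    """Holds a machine vision detector and a deep learning detector."""
    def __init__(self, machine_vision_unit, deep_learning_unit):
        self.mv = machine_vision_unit   # produces the first feature points
        self.dl = deep_learning_unit    # produces the second feature points

    def integrated_feature_points(self, frame: Frame) -> List[Tuple[float, float]]:
        # Integration unit: superimpose both feature point sets of the same image
        return list(self.mv.detect(frame.image)) + list(self.dl.detect(frame.image))

class PositioningModule:
    def locate(self, prev_points, curr_points):
        # Confirm the position of the moving body relative to the environment
        # from the integrated feature points of two consecutive frames.
        raise NotImplementedError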
The image input source 120 may include at least one camera, and a movable carrier such as a vehicle or a robot on which the image input source 120 is mounted can be defined as the moving body 110. While the moving body 110 moves, the image input source 120 continuously captures images at a series of adjacent time points: it captures one image of the environment at a first time point, captures another image of the environment at a second time point after the first time point, and so on, producing a plurality of images that form the sequence of image data.
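As a hedged illustration only (the patent does not specify a capture interface), a sequence of image data paired with time points could be collected from a single OpenCV-readable camera like this:

import time
import cv2

def capture_sequence(camera_index=0, num_frames=100):
    # Returns a list of (time point, image) pairs forming the sequence of image data.
    cap = cv2.VideoCapture(camera_index)
    frames = []
    try:
        for _ in range(num_frames):
            ok, image = cap.read()
            if not ok:
                break
            frames.append((time.time(), image))
    finally:
        cap.release()
    return frames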
When the analysis module 130 receives the sequence of image data, the images can be analyzed in real time, and each image can be analyzed by the machine vision detection unit 131 and the deep learning detection unit 132 simultaneously or sequentially to generate the first feature points F1 and the second feature points F2, respectively. It should be noted that a feature point here refers to, for example, a point in an image where the gray-scale value changes significantly, or a point on an object edge with large curvature; the definition of a feature point is well known in the art and is not repeated here.
Referring to fig. 2 and fig. 1, fig. 2 illustrates a schematic diagram of the first feature points F1 of an image generated by the machine vision detection unit 131 in the embodiment of fig. 1; only 2 first feature points F1 are indicated in fig. 2 for illustration, which is not intended to limit the present invention. The machine vision detection unit 131 can obtain the plurality of first feature points F1 of each image by using a conventional feature extraction algorithm, such as the ORB (Oriented FAST and Rotated BRIEF) algorithm or the SIFT (Scale-Invariant Feature Transform) algorithm, but is not limited thereto. As shown in fig. 2, the machine vision detection unit 131 can identify objects in the image, such as lane markings on the road surface, vehicles beside the road, and buildings, and can generate corresponding first feature points F1. However, due to large light and shadow changes, the trees in front of the road cannot be identified, and the buildings beside the road lose their boundaries where the light and shadow change greatly.
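A minimal sketch of such a feature extractor using OpenCV's ORB implementation (SIFT could be used instead via cv2.SIFT_create()); the function name and parameter values are illustrative assumptions:

import cv2

def detect_first_feature_points(image, n_features=1000):
    # Machine-vision detection: ORB keypoints and binary descriptors for one image.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors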
Referring to fig. 3, and referring to fig. 1 and fig. 2 together, fig. 3 illustrates a schematic diagram of the second feature points F2 of the image generated by the deep learning detection unit 132 in the embodiment of fig. 1; only 2 second feature points F2 are indicated in fig. 3 for illustration, which is not intended to limit the present invention. The deep learning detection unit 132 is trained in advance and recognizes the image with the established deep learning model; it uses a large number of environment difference images with large environmental changes (strong front lighting or backlighting, or changes at turns) as learning sources in advance, so as to train a deep learning model that can adapt to environmental changes. As shown in fig. 3, the recognized image is the same as the image of fig. 2, and the deep learning detection unit 132 can clearly recognize the trees in front of the road and the boundaries of the buildings beside the road where the light and shadow change greatly.
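The patent does not name a specific network architecture; one hedged way to realize a learned detector is a small convolutional network that outputs a per-pixel "keypointness" heatmap, with responses above a threshold taken as second feature points (a SuperPoint-style idea, shown here only as an assumption):

import torch
import torch.nn as nn

class HeatmapDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),   # per-pixel keypoint probability
        )

    def forward(self, gray_batch):               # (N, 1, H, W), values in [0, 1]
        return self.net(gray_batch)

def detect_second_feature_points(model, gray_tensor, threshold=0.5):
    # gray_tensor: (1, H, W) grayscale image tensor
    with torch.no_grad():
        heatmap = model(gray_tensor.unsqueeze(0)).squeeze()
    ys, xs = torch.nonzero(heatmap > threshold, as_tuple=True)
    return list(zip(xs.tolist(), ys.tolist()))   # (x, y) pixel coordinates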
Referring to fig. 4 and fig. 1 to 3, fig. 4 is a schematic diagram illustrating the integrated feature points F3 of an image synthesized by the integration unit 133 in the embodiment of fig. 1; only 2 integrated feature points F3 are indicated in fig. 4 for illustration, which is not intended to limit the present invention. After the first feature points F1 and the second feature points F2 are generated, the integration unit 133 integrates them by superimposing all the first feature points F1 and all the second feature points F2 on the image to form the integrated feature points F3. That is, the integrated feature points F3 include all the first feature points F1 and all the second feature points F2, retaining both the result recognized by the machine vision detection unit 131 and the result recognized by the deep learning detection unit 132.
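A minimal sketch of the integration described above; the integrated set is simply the superposition (union) of both point sets detected on the same image:

def integrate_feature_points(first_points, second_points):
    # first_points / second_points: lists of (x, y) coordinates from the
    # machine vision detector and the deep learning detector, respectively.
    integrated = list(first_points)
    integrated.extend(second_points)
    return integrated          # integrated feature points F3 of this image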
Conventional feature point extraction methods are limited when the environment changes greatly; for example, excessive front lighting may cause some feature points visible to the naked eye to be missed, while dimming the whole image may discard feature points that were originally captured. Therefore, in the present invention, the machine vision detection unit 131 and the deep learning detection unit 132 both perform feature point recognition on the same frame, and the deep learning detection unit 132 focuses on the positions where the machine vision detection unit 131 is more likely to fail (i.e., cannot find the first feature points F1), finds correctly usable second feature points F2, and compensates for the deficiency of the machine vision detection unit 131. The integrated feature points F3 are thus less affected by light, shadow, or environmental differences and can completely present the features of each object in the image. After the integrated feature points F3 are formed, the positioning module 140 can determine the position of the moving body 110 relative to the environment from two consecutive frames, thereby completing positioning.
Referring to fig. 5 and fig. 1 to 4, fig. 5 is a diagram illustrating the relationship between positioning error and time for the feature point integration positioning system 100 of the embodiment of fig. 1 and a comparative example; the comparative example is positioned based on the first feature points F1 only and simulates a positioning system that uses a conventional feature point extraction method. As shown in fig. 5, the positioning system of the comparative example has a large positioning error and insufficient positioning stability; in contrast, the embodiment of fig. 1 of the present invention maintains a stable positioning error and good positioning stability.
In addition, the feature point integration positioning system 100 may further include a map building module 150. The map building module 150 can build a map of the environment and can place each object of the environment in the map according to the integrated feature points F3, with each mapped object corresponding to an object in the image.
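An illustrative (assumed, not patent-specified) interface for such a map building module that accumulates 3D points derived from the integrated feature points into a global point map:

import numpy as np

class MapBuilder:
    def __init__(self):
        self.points = np.empty((0, 3))     # global 3D point map

    def add(self, world_points_3d):
        # world_points_3d: (N, 3) array of points already expressed in the map frame,
        # e.g. triangulated from integrated feature points of consecutive frames.
        self.points = np.vstack([self.points, world_points_3d])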
Referring to fig. 6 in conjunction with fig. 1 to 4, fig. 6 is a flow block diagram illustrating a feature point integration positioning method 200 according to another embodiment of the present invention. The feature point integration positioning method 200 includes a capturing step 210, an analyzing step 220, an integration step 230, and a positioning step 240; the details of the feature point integration positioning method 200 are described below with reference to the feature point integration positioning system 100.
In the capturing step 210, an image input source 120 captures an environment to obtain a sequence of image data, where the sequence of image data includes a plurality of images, and the plurality of images correspond to a plurality of time points one by one.
In the analyzing step 220, a machine vision detecting unit 131 generates a plurality of first feature points F1 belonging to each image according to each image, and a deep learning detecting unit 132 generates a plurality of second feature points F2 belonging to each image according to each image.
In the integration step 230, an integration unit 133 integrates the first feature points F1 and the second feature points F2 of each image into a plurality of integrated feature points F3 of each image.
In the positioning step 240, a moving body 110 is positioned according to the integrated feature points F3 of each image.
Therefore, the moving body 110 can move in an unknown environment while the capturing step 210 is executed to continuously capture an image at each time point. The images are then transmitted to the machine vision detection unit 131 and the deep learning detection unit 132 by wired or wireless signals, so that the analyzing step 220 can generate the first feature points F1 and the second feature points F2 for the same images. Next, in the integration step 230, the integration unit 133 obtains the first feature points F1 and the second feature points F2 in a wired or wireless manner and superimposes all the first feature points F1 and all the second feature points F2 to generate the integrated feature points F3 of each image. In the integration step 230, the integration unit 133 can further obtain three-dimensional point group data for the integrated feature points F3 of each image through stereo geometry; each feature point in the point group is calculated and extracted by the algorithms in the machine vision detection unit 131 and the deep learning detection unit 132, and both feature descriptions include positions, feature vectors, and the like. Finally, in the positioning step 240, the positional relationship between the moving body 110 and the environment can be found from two consecutive frames to complete positioning, and a map of the environment can further be constructed according to the integrated feature points F3 of each image.
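A hedged sketch of this positioning step with OpenCV: matched integrated feature points of two consecutive frames give a relative pose via the essential matrix, and triangulation gives the three-dimensional point group. The camera intrinsic matrix K and the prior matching of the points are assumptions; the patent does not fix a particular algorithm:

import cv2
import numpy as np

def relative_pose_and_points(pts_prev, pts_curr, K):
    # pts_prev, pts_curr: (N, 2) float arrays of already-matched integrated feature points
    E, inlier_mask = cv2.findEssentialMat(pts_prev, pts_curr, K, method=cv2.RANSAC)
    _, R, t, inlier_mask = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inlier_mask)

    # Triangulate to obtain the three-dimensional point group data
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P0, P1, pts_prev.T, pts_curr.T)
    pts3d = (pts4d[:3] / pts4d[3]).T
    return R, t, pts3d         # pose of the current frame relative to the previous one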
The feature point integration positioning method 200 may further include a pre-matching step 250, which includes: training the deep learning detection unit 132 with a plurality of environment difference images and establishing a deep learning model of the deep learning detection unit 132; causing the machine vision detection unit 131 to generate a plurality of previous-frame first experimental feature points and a plurality of following-frame first experimental feature points from two time-sequential experimental images, and causing the deep learning detection unit 132 to use the deep learning model to generate a plurality of previous-frame second experimental feature points and a plurality of following-frame second experimental feature points for the two experimental images; causing the integration unit 133 to integrate the previous-frame first experimental feature points and the previous-frame second experimental feature points into a plurality of previous-frame integrated experimental feature points, and to integrate the following-frame first experimental feature points and the following-frame second experimental feature points into a plurality of following-frame integrated experimental feature points; and matching the following-frame integrated experimental feature points with the previous-frame integrated experimental feature points to obtain a similarity. If the similarity is greater than or equal to a threshold, the deep learning model is used by the deep learning detection unit 132 in the analyzing step 220; if the similarity is lower than the threshold, the pre-matching step 250 is repeated: the deep learning detection unit 132 is retrained to establish another deep learning model, and the following-frame and previous-frame integrated experimental feature points are updated to obtain another similarity. In other words, the present invention uses the pre-matching step 250 to find the best deep learning model; when the feature point integration positioning system 100 actually operates, the deep learning detection unit 132 captures the second feature points F2 with that model, and the integration unit 133 can directly integrate them to generate the integrated feature points F3 without further matching.
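The pre-matching step can be read as the following loop; this is only a structural sketch with placeholder function names (training, extraction, and matching details are not prescribed here):

def prematching(train_model, extract_integrated, match_similarity,
                frame_prev, frame_next, threshold=0.75, max_rounds=10):
    # Retrain until the integrated experimental feature points of two
    # time-sequential frames match with a similarity above the threshold.
    for _ in range(max_rounds):
        model = train_model()                        # (re)train on environment difference images
        prev_pts = extract_integrated(frame_prev, model)
        next_pts = extract_integrated(frame_next, model)
        if match_similarity(prev_pts, next_pts) >= threshold:
            return model                             # accepted for the analyzing step
    raise RuntimeError("no deep learning model reached the similarity threshold")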
Referring to fig. 7, and referring to fig. 1 to 6 together, fig. 7 is a flowchart illustrating the pre-matching step 250 of the embodiment of fig. 6. Specifically, in the pre-matching step 250, sub-step 251 trains the deep learning detection unit 132. Each environment difference image used for training may include a plurality of objects, and the objects may have a light difference or a position difference. Part of an environment difference image may, for example, resemble fig. 2 and include objects such as sky, lanes, trees, and buildings, where the light difference between the sky and the trees is so large that the tree boundaries are washed out and hard to detect; in another part, feature points detected in a previous frame disappear from the following frame because the position difference of an object at a corner is too large. Training the deep learning detection unit 132 intensively on such environment difference images, with emphasis on the positions where the machine vision detection unit 131 tends to lose recognition, increases the number and accuracy of the second feature points F2 captured by the deep learning detection unit 132 in scenes with strong environmental change. The training focuses on these specific scenes rather than on commonly known features, so that correct and usable feature points can be found to compensate for the deficiency of the machine vision detection unit 131.
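The patent does not prescribe how the environment difference images are produced; one hedged way to synthesize the two kinds of differences mentioned here is to perturb an existing frame with a strong brightness change (light difference) or a shift (position difference):

import numpy as np
import cv2

def light_difference(image, gain=1.8):
    # Simulate strong front/back lighting by scaling pixel intensities.
    return cv2.convertScaleAbs(image, alpha=gain, beta=0)

def position_difference(image, dx=40, dy=0):
    # Simulate an object shifting between frames (e.g. at a corner) by translating the view.
    h, w = image.shape[:2]
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(image, M, (w, h))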
Thereafter, sub-step 252 obtains two experimental images, which may come from the image input source 120 in real time or from files stored in a database, but are not limited thereto. In sub-step 253, the machine vision detection unit 131 analyzes the two experimental images to generate the previous-frame first experimental feature points and the following-frame first experimental feature points; in sub-step 254, the deep learning detection unit 132 analyzes the two experimental images to generate the previous-frame second experimental feature points and the following-frame second experimental feature points. The method then proceeds to sub-step 255, in which the integration unit 133 generates the previous-frame integrated experimental feature points and the following-frame integrated experimental feature points. It should be noted that sub-steps 253 and 254 can be executed simultaneously, or the previous-frame feature points can be generated first and the following-frame feature points afterwards; the invention is not limited in this respect. The previous-frame and following-frame first experimental feature points are equivalent to the first feature points F1 when the feature point integration positioning system 100 actually operates; the previous-frame and following-frame second experimental feature points are equivalent to the second feature points F2; and the previous-frame and following-frame integrated experimental feature points are equivalent to the integrated feature points F3. The extraction and integration methods are the same, and only the names differ.
Next, sub-step 256 performs the matching. When the following-frame integrated experimental feature points are matched with the previous-frame integrated experimental feature points, a plurality of Euclidean distances can be calculated, or a plurality of included angles can be calculated. The similarity is then computed from the differences of the Euclidean distances or the changes of the included angles: the higher the similarity, the easier the matching and the positioning, in other words, the higher the overall stability. The threshold of the similarity may be set to 75%, for example, but is not limited thereto.
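A hedged sketch of this matching score: for descriptor pairs taken from the previous-frame and following-frame integrated experimental feature points, compare either the Euclidean distance or the included angle, and report the fraction of pairs that agree closely enough; the 75% threshold is the example value given above, and the tolerance is an assumption:

import numpy as np

def similarity(prev_desc, next_desc, tol=0.5, use_angle=False):
    # prev_desc, next_desc: (N, D) arrays of already-paired feature descriptors
    if use_angle:
        cos = np.sum(prev_desc * next_desc, axis=1) / (
            np.linalg.norm(prev_desc, axis=1) * np.linalg.norm(next_desc, axis=1))
        good = np.arccos(np.clip(cos, -1.0, 1.0)) < tol     # small included angle
    else:
        good = np.linalg.norm(prev_desc - next_desc, axis=1) < tol
    return float(good.mean())    # compared against a threshold such as 0.75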
Finally, sub-step 257 determines whether the similarity is greater than or equal to the threshold. If so, the previous-frame and following-frame integrated experimental feature points match well and are not easily lost, which means the deep learning model is suitable and can be used when the feature point integration positioning system 100 actually operates; sub-step 258 is then executed to complete the pre-matching step 250. Otherwise, the method returns to sub-step 251 to retrain the deep learning detection unit 132.
Although the present invention has been described with reference to the above embodiments, it should be understood that various changes and modifications can be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. A feature point integration positioning system, comprising:
a moving body;
an image input source disposed on the moving body for capturing an environment to obtain a sequence of image data, wherein the sequence of image data comprises a plurality of images, and the plurality of images correspond one-to-one to a plurality of time points;
an analysis module in signal connection with the image input source for receiving the sequence of image data, the analysis module comprising:
a machine vision detection unit for generating a plurality of first feature points belonging to each image according to each image;
a deep learning detection unit for generating a plurality of second feature points belonging to each image according to each image; and
an integration unit for integrating the plurality of first feature points and the plurality of second feature points of each image into a plurality of integrated feature points of each image; and
a positioning module in signal connection with the analysis module for receiving the plurality of integrated feature points of each image to confirm a position of the moving body relative to the environment at each time point.
2. The system of claim 1, wherein the machine vision detection unit obtains the first feature points of each image by an ORB algorithm or a SIFT algorithm.
3. The system of claim 1, further comprising a map construction module for constructing a map of the environment.
4. The system of claim 1, wherein the deep learning detection unit is trained and matched with a plurality of environment difference images in advance to create a deep learning model, and the deep learning model is used to determine the second feature points.
5. A feature point integration positioning method, comprising:
a capturing step, enabling an image input source to capture an environment to obtain a sequence of image data, wherein the sequence of image data comprises a plurality of images, and the plurality of images correspond one-to-one to a plurality of time points;
an analyzing step, enabling a machine vision detection unit to generate a plurality of first feature points belonging to each image according to each image, and enabling a deep learning detection unit to generate a plurality of second feature points belonging to each image according to each image;
an integration step, enabling an integration unit to integrate the plurality of first feature points and the plurality of second feature points of each image into a plurality of integrated feature points of each image; and
a positioning step, wherein a moving body is positioned according to the plurality of integrated feature points of each image.
6. The feature point integration positioning method of claim 5, wherein in the integration step, the integration unit obtains three-dimensional point group data for the plurality of integrated feature points of each image through stereo geometry.
7. The feature point integration positioning method of claim 5, wherein in the positioning step, a map of the environment is constructed according to the integrated feature points of each image.
8. The feature point integration positioning method of claim 5, further comprising a pre-matching step, comprising:
training the deep learning detection unit by using a plurality of environment difference images, and establishing a deep learning model of the deep learning detection unit;
enabling the machine vision detection unit to generate a plurality of previous-frame first experimental feature points and a plurality of following-frame first experimental feature points from two time-sequential experimental images, respectively, and enabling the deep learning detection unit to use the deep learning model to generate a plurality of previous-frame second experimental feature points and a plurality of following-frame second experimental feature points for the two experimental images, respectively;
enabling the integration unit to integrate the plurality of previous-frame first experimental feature points and the plurality of previous-frame second experimental feature points into a plurality of previous-frame integrated experimental feature points, and to integrate the plurality of following-frame first experimental feature points and the plurality of following-frame second experimental feature points into a plurality of following-frame integrated experimental feature points; and
matching the plurality of following-frame integrated experimental feature points with the plurality of previous-frame integrated experimental feature points to obtain a similarity;
wherein if the similarity is greater than or equal to a threshold, the deep learning model is used by the deep learning detection unit in the analyzing step; if the similarity is lower than the threshold, the pre-matching step is repeated, the deep learning detection unit is retrained to establish another deep learning model of the deep learning detection unit, and the plurality of following-frame integrated experimental feature points and the plurality of previous-frame integrated experimental feature points are updated to obtain another similarity.
9. The feature point integration positioning method of claim 8, wherein a plurality of Euclidean distances are calculated when the plurality of following-frame integrated experimental feature points are matched with the plurality of previous-frame integrated experimental feature points.
10. The feature point integration positioning method of claim 8, wherein a plurality of included angles are calculated when the plurality of following-frame integrated experimental feature points are matched with the plurality of previous-frame integrated experimental feature points.
11. The feature point integration positioning method of claim 8, wherein a plurality of objects in each of the environment difference images have a light difference or a position difference.
CN202110967264.4A 2021-08-23 2021-08-23 Feature point integration positioning system and feature point integration positioning method Pending CN115719446A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110967264.4A CN115719446A (en) 2021-08-23 2021-08-23 Feature point integration positioning system and feature point integration positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110967264.4A CN115719446A (en) 2021-08-23 2021-08-23 Feature point integration positioning system and feature point integration positioning method

Publications (1)

Publication Number Publication Date
CN115719446A true CN115719446A (en) 2023-02-28

Family

ID=85253341

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110967264.4A Pending CN115719446A (en) 2021-08-23 2021-08-23 Feature point integration positioning system and feature point integration positioning method

Country Status (1)

Country Link
CN (1) CN115719446A (en)

Similar Documents

Publication Publication Date Title
CN108171247B (en) Vehicle re-identification method and system
CN110458025B (en) Target identification and positioning method based on binocular camera
Wang et al. A unified framework for mutual improvement of SLAM and semantic segmentation
US20220148292A1 (en) Method for glass detection in real scenes
CN104615986A (en) Method for utilizing multiple detectors to conduct pedestrian detection on video images of scene change
CN111310728B (en) Pedestrian re-identification system based on monitoring camera and wireless positioning
CN111402331B (en) Robot repositioning method based on visual word bag and laser matching
CN115376034A (en) Motion video acquisition and editing method and device based on human body three-dimensional posture space-time correlation action recognition
Yang et al. Recognition and localization system of the robot for harvesting Hangzhou White Chrysanthemums
CN114170304B (en) Camera positioning method based on multi-head self-attention and replacement attention
CN113591735A (en) Pedestrian detection method and system based on deep learning
CN112669615A (en) Parking space detection method and system based on camera
CN115719446A (en) Feature point integration positioning system and feature point integration positioning method
US20230169747A1 (en) Feature point integration positioning system and feature point integration positioning method
CN114937233A (en) Identification method and identification device based on multispectral data deep learning
TWI773476B (en) Feature point integration positioning system and feature point integration positioning method
CN108876849B (en) Deep learning target identification and positioning method based on auxiliary identification
CN113592917A (en) Camera target handover method and handover system
CN110969659A (en) Space positioning device and method for passive marking point
CN112288817A (en) Three-dimensional reconstruction processing method and device based on image
CN112667832B (en) Vision-based mutual positioning method in unknown indoor environment
CN111160115A (en) Video pedestrian re-identification method based on twin double-flow 3D convolutional neural network
KR102249380B1 (en) System for generating spatial information of CCTV device using reference image information
CN113642430B (en) VGG+ NetVLAD-based high-precision visual positioning method and system for underground parking garage
US20230298203A1 (en) Method for selecting surface points from a cad model for locating industrial 3d objects, application of this method to the location of industrial 3d objects, and augmented reality system usi

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination