WO2022016803A1 - Visual positioning method and apparatus, electronic device, and computer readable storage medium - Google Patents
- Publication number
- WO2022016803A1 (PCT/CN2020/139166)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Definitions
- the embodiments of the present disclosure are based on the Chinese patent application with application number 202010710996.0, filed on July 22, 2020, and claim priority to that Chinese patent application.
- the entire content of the Chinese patent application is incorporated herein by reference.
- the embodiments of the present disclosure relate to the technical field of computer vision, and in particular, to a visual positioning method and apparatus, an electronic device, and a computer-readable storage medium.
- GPS (Global Positioning System)-based positioning, positioning technology based on wireless local area network or Bluetooth, and positioning technology based on ultra-wideband (UWB) all have certain limitations. Among them, the GPS signal has poor penetration ability, so it is difficult to achieve effective and accurate positioning in a densely built or indoor environment. In addition, even in an open scene, high-precision positioning requires costly professional GPS equipment, so consumer-level applications are difficult to achieve.
- the positioning technology based on wireless local area network or Bluetooth needs to pre-arrange relevant devices in the to-be-located area, the layout process is cumbersome, the reliability and accuracy are poor, and the positioning range is small.
- UWB-based positioning technology can achieve relatively high accuracy, but it requires at least three receivers, and the space between the transmitter and the receivers needs to be kept open, which limits the application scenarios of UWB-based positioning technology.
- moreover, to extend the positioning range, the UWB-based positioning technology often needs to multiply the number of receivers, resulting in poor system reliability.
- the above-mentioned positioning technology can usually only obtain position information, and it is difficult to obtain reliable attitude information.
- the visual positioning technology not only has a simpler way of obtaining information, but also does not require much modification to the positioning scene.
- the visual positioning technology can not only locate the position information, but also locate the attitude information, so that the positioning results can not only serve the needs of conventional position information acquisition, but also realize more intelligent applications, such as augmented reality.
- however, the speed of existing visual localization methods is low.
- the embodiments of the present disclosure provide a technical solution for visual positioning.
- a visual positioning method including:
- searching, according to the feature vectors of the feature points of the query image, for database feature points matching the feature points of the query image, wherein the database feature points represent feature points of a database image;
- determining the visual positioning result of the query image according to the matched database feature points.
- the extracting of feature vectors of feature points of the query image includes: transforming the query image to obtain at least one transformed image corresponding to the query image; and performing feature extraction on at least two images among the query image and the at least one transformed image to obtain the feature vectors of the feature points of the query image.
- in this way, the obtained feature vectors of the feature points of the query image can reflect richer and more comprehensive information in the query image, and the extracted feature vectors are strongly robust to environmental changes such as illumination, thereby helping to improve the accuracy of visual positioning.
- the performing feature extraction on at least two images among the query image and the at least one transformed image to obtain the feature vectors of the feature points of the query image includes: inputting the at least two images into a first neural network, and outputting feature maps of the at least two images via the first neural network; performing grouped convolution on the feature maps to obtain at least two grouped convolution results; and performing feature fusion on the at least two grouped convolution results to obtain the feature vectors of the feature points of the query image.
- in this way, the deep features of the query image can be obtained, thereby improving the robustness of subsequent feature point matching and the reliability of visual positioning.
- the searching for database feature points matching the feature points of the query image according to the feature vectors of the feature points of the query image includes: decomposing the feature vector of a feature point of the query image to obtain a plurality of sub-feature vectors, wherein the dimension of each sub-feature vector is less than the dimension of the feature vector; searching for database class centers matching the plurality of sub-feature vectors, wherein a database class center represents a class center of sub-feature vectors of database feature points; and determining, according to the database class centers matched with the plurality of sub-feature vectors of the feature points of the query image, a first group of database feature points matching the feature points of the query image.
- the feature vector of the feature points of the query image is decomposed into lower-dimensional sub-feature vectors before matching, thereby improving the speed of determining the database feature points matching the feature points of the query image.
- before the searching for database class centers matching the plurality of sub-feature vectors of the feature points of the query image, the method further includes: extracting feature vectors of a plurality of database feature points;
- for any database feature point among the plurality of database feature points, decomposing the feature vector of that database feature point to obtain a plurality of sub-feature vectors of that database feature point, wherein the dimension of each sub-feature vector is less than the dimension of the feature vector of that database feature point;
- clustering the sub-feature vectors of the plurality of database feature points to obtain the database class centers, and establishing a correspondence between each database feature point and the database class centers.
- the determining of the first group of database feature points matching the feature points of the query image according to the database class centers matched with the plurality of sub-feature vectors includes: determining candidate database feature points corresponding to the feature points of the query image according to the matched database class centers; and performing geometric verification on the candidate database feature points to determine the first group of database feature points matching the feature points of the query image.
- in this way, the candidate database feature points corresponding to the feature points of the query image are first determined according to the database class centers matched with the plurality of sub-feature vectors, and geometric verification is then performed on the candidate database feature points, so that the first group of database feature points matching the feature points of the query image can be determined quickly and accurately.
- the performing geometric verification on the candidate database feature points to determine the first group of database feature points matching the feature points of the query image includes: determining, according to the candidate database feature points corresponding to the similarity transformation matrices in a target matrix interval, the first group of database feature points matching the feature points of the query image.
- geometric verification is performed by means of matrix interval voting, so that the feature points matching the feature points of the query image can be quickly determined, thereby improving the visual positioning speed.
- the determining of the first group of database feature points matching the feature points of the query image according to the candidate database feature points corresponding to the similarity transformation matrices in the target matrix interval includes: determining the first group of database feature points according to the candidate database feature points in the database images whose number of candidate database feature points satisfies a second quantity condition.
- in this way, the candidate database feature points can be filtered to obtain the first group of database feature points, and the visual positioning result of the query image is then determined based on the first group of database feature points, which helps to improve the accuracy of the determined visual positioning result.
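The matrix interval voting mentioned above can be illustrated with a hedged sketch. The patent does not disclose the bin layout or the transformation parameterisation; this toy version assumes each candidate match is summarised by a single scalar parameter of its similarity transformation (for example a log-scale value) and treats the most-voted histogram bin as the target interval. The function name `interval_voting` and all numeric choices are assumptions for illustration only.

```python
import numpy as np

def interval_voting(matches, num_bins=10, lo=-1.0, hi=1.0):
    """Keep only the matches whose transformation parameter falls in the
    most-voted interval; a sketch of geometric verification by voting."""
    params = np.array([m[2] for m in matches])
    # Map each parameter to a histogram bin (a "matrix interval").
    bins = np.clip(((params - lo) / (hi - lo) * num_bins).astype(int), 0, num_bins - 1)
    target = np.bincount(bins, minlength=num_bins).argmax()
    return [m for m, b in zip(matches, bins) if b == target]

# Three geometrically consistent matches and one outlier.
ms = [('q1', 'd1', 0.05), ('q2', 'd2', 0.07), ('q3', 'd9', 0.9), ('q4', 'd4', 0.06)]
kept = interval_voting(ms)
print(len(kept))  # 3: the outlier at 0.9 lands in a different interval
```

Matches surviving the vote would then be filtered per database image by the quantity condition described above.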
- the searching for database feature points matching the feature points of the query image according to the feature vectors of the feature points of the query image further includes: searching, in reverse, for a second group of database feature points, wherein the second group of database feature points are database feature points, other than the first group of database feature points, among the multiple groups of database feature points corresponding to the three-dimensional coordinates; and determining the visual positioning result of the query image according to the first group of database feature points and the second group of database feature points.
- This implementation can increase the number of associated point pairs through reverse search, thereby improving the robustness of visual positioning.
- a visual positioning device comprising:
- a first extraction part configured to extract feature vectors of feature points of the query image
- a search part configured to search for database feature points matching the feature points of the query image according to the feature vectors of the feature points of the query image, wherein the database feature points represent feature points of a database image;
- the determining part is configured to determine the visual positioning result of the query image according to the matched database feature points.
- the first extraction part is configured to: transform the query image to obtain at least one transformed image corresponding to the query image; and perform feature extraction on at least two images among the query image and the at least one transformed image to obtain feature vectors of feature points of the query image.
- the first extraction part is further configured to: perform grouped convolution on the feature maps of the at least two images to obtain at least two grouped convolution results, and perform feature fusion on the at least two grouped convolution results to obtain the feature vectors of the feature points of the query image.
- the search part is further configured to: decompose the feature vector of the feature points of the query image to obtain a plurality of sub-feature vectors; search for database class centers matching the plurality of sub-feature vectors; and determine, according to the matched database class centers, a first group of database feature points matching the feature points of the query image.
- the apparatus further includes:
- the second extraction part is configured to extract feature vectors of a plurality of database feature points;
- the decomposition part is configured to, for any database feature point among the plurality of database feature points, decompose the feature vector of that database feature point to obtain a plurality of sub-feature vectors, wherein the dimension of each sub-feature vector is less than the dimension of the feature vector of that database feature point;
- the clustering part is configured to cluster the sub-feature vectors of the plurality of database feature points to obtain the database class centers;
- the establishing part is configured to establish the correspondence between each database feature point and the database class centers.
- the search part is further configured to: determine candidate database feature points corresponding to the feature points of the query image according to the matched database class centers, and perform geometric verification on the candidate database feature points to determine the first group of database feature points matching the feature points of the query image.
- the search part is further configured to determine, according to the candidate database feature points corresponding to the similarity transformation matrices in the target matrix interval, the first group of database feature points matching the feature points of the query image.
- the search part is further configured to determine the first group of database feature points according to the candidate database feature points in the database images whose number of candidate database feature points satisfies the second quantity condition.
- the search part is further configured to: search, in reverse, for a second group of database feature points, wherein the second group of database feature points are database feature points, other than the first group of database feature points, among the multiple groups of database feature points corresponding to the three-dimensional coordinates; and determine the visual positioning result of the query image according to the first group of database feature points and the second group of database feature points.
- an electronic device comprising: one or more processors; a memory configured to store executable instructions; wherein the one or more processors are configured to invoke the executable instructions stored in the memory to perform the above method.
- a computer-readable storage medium having computer program instructions stored thereon, the computer program instructions implementing the above method when executed by a processor.
- a computer program including computer-readable codes, wherein, when the computer-readable codes are executed in an electronic device, a processor in the electronic device implements the above method.
- the positioning process is more direct and effective, the memory consumption is lower, the time-consuming of visual positioning can be reduced, and the positioning process is more reliable.
- FIG. 1 shows a flowchart of a visual positioning method provided by an embodiment of the present disclosure.
- FIG. 2 shows an exemplary schematic diagram of performing transformation processing on a query image to obtain multiple transformed images corresponding to the query image provided by an embodiment of the present disclosure.
- FIG. 3 is a schematic diagram illustrating an exemplary input of a query image and multiple transformed images into a vanilla CNN, and outputting feature maps of the query image and multiple transformed images via the vanilla CNN provided by an embodiment of the present disclosure.
- FIG. 4 shows an exemplary schematic diagram, provided by an embodiment of the present disclosure, of performing grouped convolution on the feature maps of a query image and multiple transformed images through two convolutional neural networks and performing bilinear pooling on the grouped convolution results to obtain the feature vectors of the feature points.
- FIG. 5 shows a block diagram of a visual positioning apparatus provided by an embodiment of the present disclosure.
- FIG. 6 shows a block diagram of an electronic device 800 provided by an embodiment of the present disclosure.
- FIG. 7 shows a block diagram of an electronic device 1900 provided by an embodiment of the present disclosure.
- in the embodiments of the present disclosure, database feature points matching the feature points of the query image are searched for according to the feature vectors of the feature points of the query image, and the visual positioning result of the query image is determined according to the matched database feature points. Therefore, there is no need to retrieve a local map during visual positioning; feature point matching is performed directly, and the visual positioning result of the query image is determined from the matched database feature points, so that the positioning process is more direct and effective, the memory consumption is lower, the time consumed by visual positioning can be reduced, and the positioning process is more reliable.
- FIG. 1 shows a flowchart of a visual positioning method provided by an embodiment of the present disclosure.
- the execution subject of the visual positioning method may be a visual positioning device.
- the visual positioning method may be executed by a terminal device or a cloud server or other processing device.
- the terminal device may be a robot, a user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device, etc.
- the visual positioning method may be implemented by a processor invoking computer-readable instructions stored in a memory. As shown in FIG. 1 , the visual positioning method includes steps S11 to S13.
- in step S11, feature vectors of feature points of the query image are extracted.
- the feature points of the query image may correspond to the pixels of the query image, that is, the position of any feature point of the query image in the query image may be uniquely determined according to the pixel corresponding to the feature point.
- the extracted feature vector of the feature points of the query image can provide pixel-level description information.
- the number of feature points of the query image may be less than or equal to the number of pixels of the query image. In one example, the number of feature points of the query image may be less than the number of pixels of the query image. For example, the value range of the number of feature points of the query image may be 500-2500.
- in some embodiments, the extracting of the feature vectors of the feature points of the query image includes: transforming the query image to obtain at least one transformed image corresponding to the query image; and performing feature extraction on at least two images among the query image and the at least one transformed image to obtain the feature vectors of the feature points of the query image.
- the transformation processing may be at least one of rotation, scaling, mirroring, distortion, and the like.
- FIG. 2 shows a schematic diagram of transforming a query image to obtain a plurality of transformed images corresponding to the query image.
- a query image may be transformed to obtain multiple transformed images corresponding to the query image, and feature extraction is performed on the query image and the multiple transformed images to obtain the feature vectors of the feature points of the query image.
- in this way, the obtained feature vectors of the feature points of the query image can reflect richer and more comprehensive information in the query image, and the extracted feature vectors are strongly robust to environmental changes such as illumination, thereby helping to improve the accuracy of visual positioning.
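As an editor's illustrative sketch (not part of the patent disclosure), the transformation processing described above can be pictured as building a small group of rotated and rescaled copies of the query image. The function name `build_transformed_images`, the particular angles and scales, and the nearest-neighbour resampling are all assumptions for illustration.

```python
import numpy as np

def build_transformed_images(query, angles=(90, 180, 270), scales=(0.5, 2.0)):
    """Return the query image plus rotated and rescaled variants."""
    images = [query]
    for a in angles:
        images.append(np.rot90(query, k=a // 90))  # rotation by multiples of 90 degrees
    for s in scales:
        h, w = query.shape[:2]
        # Nearest-neighbour resampling as a stand-in for proper interpolation.
        ys = (np.arange(int(h * s)) / s).astype(int).clip(0, h - 1)
        xs = (np.arange(int(w * s)) / s).astype(int).clip(0, w - 1)
        images.append(query[np.ix_(ys, xs)])
    return images

imgs = build_transformed_images(np.zeros((64, 64), dtype=np.uint8))
print(len(imgs))  # 1 original + 3 rotations + 2 rescalings = 6
```

Feature extraction would then be run on two or more of these images, as the text describes.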
- in some embodiments, the performing feature extraction on at least two images among the query image and the at least one transformed image to obtain the feature vectors of the feature points of the query image includes: respectively inputting at least two images among the query image and the at least one transformed image into a first neural network, and outputting feature maps of the at least two images via the first neural network; performing grouped convolution on the feature maps of the at least two images to obtain at least two grouped convolution results; and performing feature fusion on the at least two grouped convolution results to obtain the feature vectors of the feature points of the query image.
- the first neural network may be a convolutional neural network, such as vanilla CNN or the like.
- FIG. 3 shows a schematic diagram of inputting a query image (not shown in FIG. 3 ) and a plurality of transformed images into a vanilla CNN, and outputting feature maps of the query image and the plurality of transformed images via the vanilla CNN.
- in this way, dense image description features can be obtained, that is, feature vectors of a large number of feature points (e.g., 500-2500 feature points) can be extracted.
- the feature vectors of the feature points of the query image extracted in this example may be called GIFT (Group Invariant Feature Transform) features.
- a conventional neural network usually only uses the information of a single image to extract features, and its pooling process causes the loss of a certain amount of image information. In contrast, GIFT uses multiple transformed images and obtains the deep structure of the features through grouped convolution instead of the original pooling process, so the extracted feature information is more comprehensive, the robustness of subsequent feature point matching can be improved, and the reliability of visual positioning can be improved.
- in some embodiments, the performing grouped convolution on the feature maps of the at least two images to obtain at least two grouped convolution results includes: dividing the feature maps of the at least two images into a first feature map group and a second feature map group; inputting the first feature map group into a second neural network, and outputting the grouped convolution result of the first feature map group via the second neural network; and inputting the second feature map group into a third neural network, and outputting the grouped convolution result of the second feature map group via the third neural network.
- the first feature map group includes part of the feature maps in the feature maps of the at least two images
- the second feature map group includes another part of the feature maps in the feature maps of the at least two images.
- the second neural network and the third neural network may both be convolutional neural networks.
- in this way, the first feature map group and the second feature map group are respectively processed by the second neural network and the third neural network to obtain grouped convolution results, so that the deep structure of the features can be obtained, the obtained feature information is more comprehensive, and the overall matching effect is more robust.
- the number of neural networks used for group convolution may also be more than three.
- in some embodiments, the performing feature fusion on the at least two grouped convolution results to obtain the feature vectors of the feature points of the query image includes: performing a bilinear pooling operation on the at least two grouped convolution results to obtain the feature vectors of the feature points of the query image.
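As a hedged illustration of the bilinear pooling fusion step (the function name and the per-point L2 normalisation are assumptions; the patent does not fix these details), two grouped-convolution results with C1 and C2 channels at N feature-point locations can be fused into an (N, C1*C2) descriptor by taking the outer product of the two branch responses at each location:

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Fuse two grouped-convolution results by bilinear pooling.

    feat_a: (C1, N) responses from branch 1 at N feature-point locations
    feat_b: (C2, N) responses from branch 2 at the same locations
    Returns an (N, C1*C2) per-point descriptor: the outer product of the
    two branch responses at each location, flattened and L2-normalised.
    """
    c1, n = feat_a.shape
    c2, _ = feat_b.shape
    desc = np.einsum('in,jn->nij', feat_a, feat_b).reshape(n, c1 * c2)
    norms = np.linalg.norm(desc, axis=1, keepdims=True)
    return desc / np.maximum(norms, 1e-12)

a = np.random.default_rng(0).normal(size=(4, 10))  # 4 channels, 10 locations
b = np.random.default_rng(1).normal(size=(6, 10))  # 6 channels, same locations
d = bilinear_pool(a, b)
print(d.shape)  # (10, 24)
```

A concatenation ("concat") fusion, as the next paragraph notes, would simply stack the two responses instead of taking their outer product.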
- the fusion may also be performed in a manner such as concat, which is not limited in this embodiment of the present disclosure.
- FIG. 4 shows an example in which grouped convolution is performed on the feature maps of the query image and multiple transformed images through two convolutional neural networks, and bilinear pooling is performed on the grouped convolution results to obtain the feature vectors of the feature points of the query image.
- an activation function can be used for processing after each convolutional layer of the second neural network and the third neural network.
- the activation function can be ReLU (Rectified Linear Unit).
- in step S12, database feature points matching the feature points of the query image are searched for according to the feature vectors of the feature points of the query image, wherein the database feature points represent feature points of a database image.
- the database feature points matching the feature points of the query image may be determined according to the similarity between the feature vectors of the feature points.
- in some embodiments, the searching for database feature points matching the feature points of the query image according to the feature vectors of the feature points of the query image includes: decomposing the feature vector of a feature point of the query image to obtain a plurality of sub-feature vectors of the feature point, wherein the dimension of each sub-feature vector is less than the dimension of the feature vector; searching for database class centers matching the plurality of sub-feature vectors of the feature points of the query image, wherein a database class center represents a class center of sub-feature vectors of database feature points; and determining, according to the database class centers matched with the plurality of sub-feature vectors, a first group of database feature points matching the feature points of the query image.
- for example, if the feature vector of a feature point of the query image is [s1, s2, s3, s4, s5, s6, s7, s8, s9] and it is decomposed into 3 sub-feature vectors, the sub-feature vectors [s1, s2, s3], [s4, s5, s6], and [s7, s8, s9] are obtained.
- the dimension of the feature vector of the feature points of the query image can be much higher.
- the embodiments of the present disclosure do not limit the dimension of the feature vector of the feature points of the query image, the number of sub-feature vectors of the feature points of the query image, and the dimension of the sub-feature vectors of the feature points of the query image.
- the feature vector of the feature points of the query image is decomposed into lower-dimensional sub-feature vectors before matching, thereby improving the speed of determining the database feature points matching the feature points of the query image.
- in some embodiments, before the searching for database class centers matching the plurality of sub-feature vectors of the feature points of the query image, the method further includes: extracting feature vectors of a plurality of database feature points; for any database feature point among the plurality of database feature points, decomposing the feature vector of that database feature point to obtain a plurality of sub-feature vectors of that database feature point, wherein the dimension of each sub-feature vector is less than the dimension of the feature vector of that database feature point; clustering the sub-feature vectors of the plurality of database feature points to obtain the database class centers; and establishing a correspondence between each database feature point and the database class centers.
- a plurality of database images may be included in the database, wherein the database images represent images in the database.
- the feature vector of the database feature points in the database image can be extracted in a manner similar to the manner of extracting the feature vector of the feature points of the query image described above.
- the value range of the number of database feature points extracted from each database image may be 500-2500.
- the feature vector of any database feature point may be decomposed into a plurality of sub-feature vectors.
- for example, if the feature vector of a database feature point is [1, 3, 2, 3, 4, 5, 3, 2, 1], it can be decomposed into 3 sub-feature vectors [1, 3, 2], [3, 4, 5], and [3, 2, 1].
- the dimension of the feature vector of the database feature point may be much higher, which is not limited in this embodiment of the present disclosure.
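The decomposition step above can be sketched in a few lines; `split_into_subvectors` is a hypothetical helper name, and the requirement that the dimension divide evenly is an assumption for this sketch (a real system could pad or use unequal splits):

```python
import numpy as np

def split_into_subvectors(vec, num_sub):
    """Split a D-dimensional descriptor into num_sub contiguous, equally
    sized sub-vectors (D must be divisible by num_sub in this sketch)."""
    vec = np.asarray(vec)
    assert vec.size % num_sub == 0, "dimension must divide evenly"
    return vec.reshape(num_sub, vec.size // num_sub)

# The 9-dimensional descriptor from the example, split into 3 sub-vectors.
subs = split_into_subvectors([1, 3, 2, 3, 4, 5, 3, 2, 1], 3)
print(subs.tolist())  # [[1, 3, 2], [3, 4, 5], [3, 2, 1]]
```

In practice the descriptors are much higher-dimensional, but the same reshaping applies.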
- methods such as K-means, KD tree, or vocabulary tree can be used to cluster sub-feature vectors of multiple database feature points to obtain database class centers.
- the correspondence between the database feature points and the database class center is recorded.
- in some embodiments, the correspondence between a database class center and the database feature points whose sub-feature vectors belong to the class of that database class center may be recorded. For example, if the sub-feature vectors of database feature point 5, database feature point 6, and database feature point 7 belong to the class of database class center 1, the correspondence between database class center 1 and database feature points 5, 6, and 7 can be recorded, for example, as (1:5,6,7).
- conversely, the correspondence between a database feature point and the database class centers to which its sub-feature vectors belong may also be recorded. For example, if database feature point 1 corresponds to database class center 2, database class center 5, and database class center 8, the correspondence can be recorded as (1:2,5,8).
- an indexer can be built from all database class centers.
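A hedged sketch of clustering the database sub-feature vectors and recording the class-center-to-feature-point correspondence: the text mentions K-means, KD trees, or vocabulary trees, so this toy K-means and dictionary index (`build_index` is a hypothetical name) are illustrative only, not the patent's indexer.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means; a production system might use a KD tree or vocabulary tree."""
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # leave empty clusters unchanged
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

def build_index(sub_vectors, point_ids, k):
    """Cluster sub-feature vectors and map each class center to the database
    feature points whose sub-vectors fall in that class, e.g. {0: {5, 6, 7}}."""
    centers, labels = kmeans(sub_vectors, k)
    index = {}
    for lab, pid in zip(labels, point_ids):
        index.setdefault(int(lab), set()).add(pid)
    return centers, index

# Two well-separated groups of 2-D sub-vectors from six database feature points.
subs = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                 [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
centers, index = build_index(subs, point_ids=[1, 2, 3, 4, 5, 6], k=2)
print(sorted(sorted(v) for v in index.values()))  # [[1, 2, 3], [4, 5, 6]]
```

At query time, each query sub-vector is matched to its nearest class center, and the recorded correspondence yields candidate database feature points.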
- the determining the first group of database feature points matching the feature points of the query image according to the database class centers matched with multiple sub-feature vectors of the feature points of the query image includes: determining candidate database feature points corresponding to the feature points of the query image according to the database class centers matching the multiple sub-feature vectors of the feature points of the query image; and performing geometric verification on the candidate database feature points to determine the first group of database feature points matching the feature points of the query image.
- the corresponding feature points of the query image can be determined.
- a Cartesian product method can be used to determine the candidate database feature points corresponding to the feature points of the query image.
- for example, suppose the feature vector of feature point A of the query image corresponds to 3 sub-feature vectors A1, A2 and A3, the database class center matching sub-feature vector A1 is database class center P1, the database class center matching sub-feature vector A2 is database class center P2, and the database class center matching sub-feature vector A3 is database class center P3; then the database feature points corresponding to all the sub-feature vectors in the classes to which database class centers P1, P2, and P3 belong can be determined as the candidate database feature points corresponding to feature point A of the query image.
- for example, if the database feature points corresponding to all sub-feature vectors in the class to which database class center P1 belongs include database feature points D1, D2, D5, and D6, the database feature points corresponding to all sub-feature vectors in the class to which database class center P2 belongs include database feature points D1, D7, D8, and D9, and the database feature points corresponding to all sub-feature vectors in the class to which database class center P3 belongs include database feature points D3, D4, and D10, then D1-D10 can all be determined as candidate database feature points corresponding to feature point A of the query image.
- as another example, suppose again that the feature vector of feature point A of the query image corresponds to 3 sub-feature vectors A1, A2 and A3, the database class center matching sub-feature vector A1 is database class center P1, the database class center matching sub-feature vector A2 is database class center P2, and the database class center matching sub-feature vector A3 is database class center P3; then the database feature points corresponding to database class centers P1, P2 and P3 together can be determined as the candidate database feature points corresponding to feature point A of the query image.
- for example, if the database feature points corresponding to all sub-feature vectors in the class to which database class center P1 belongs include database feature points D1, D2, D3, D5, and D6, the database feature points corresponding to all sub-feature vectors in the class to which database class center P2 belongs include database feature points D1, D2, D3, D7, D8, and D9, and the database feature points corresponding to all sub-feature vectors in the class to which database class center P3 belongs include database feature points D1, D3, D4, and D10, then D1 and D3, which belong to all three classes, are determined as the candidate database feature points corresponding to feature point A of the query image.
- the number of candidate database feature points corresponding to each feature point of the query image may be one or more, for example, at most 25. That is, for any feature point of the query image, the multiple database feature points closest to the feature point may be used as the candidate database feature points corresponding to that feature point.
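The candidate lookup illustrated by the two examples above can be sketched with set operations; the function name and the union/intersection switch are illustrative assumptions, not terms from the disclosure:

```python
def candidate_points(matched_centers, center_to_points, mode="union"):
    # Gather candidate database feature points from the classes of the matched
    # class centers. "union" keeps every point seen in any matched class;
    # "intersection" keeps only points present in every matched class.
    sets = [set(center_to_points[c]) for c in matched_centers]
    if not sets:
        return set()
    return set.intersection(*sets) if mode == "intersection" else set.union(*sets)

# The second example from the text: sub-vectors A1, A2, A3 match centers P1, P2, P3.
index = {"P1": {"D1", "D2", "D3", "D5", "D6"},
         "P2": {"D1", "D2", "D3", "D7", "D8", "D9"},
         "P3": {"D1", "D3", "D4", "D10"}}
print(sorted(candidate_points(["P1", "P2", "P3"], index, "intersection")))  # ['D1', 'D3']
```

The union form reproduces the first example (D1-D10 all become candidates); the intersection form reproduces the second (only D1 and D3 survive).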
- in this way, the candidate database feature points corresponding to the feature points of the query image are determined according to the database class centers that match multiple sub-feature vectors of the feature points of the query image, and geometric verification is then performed on the candidate database feature points to determine the first group of database feature points matching the feature points of the query image, so that the first group of database feature points can be determined quickly and accurately.
- performing geometric verification on the candidate database feature points and determining a first group of database feature points matching the feature points of the query image includes: determining the similarity transformation matrix between each candidate database feature point and the corresponding feature point of the query image; determining, among a plurality of preset matrix intervals, the matrix interval to which each similarity transformation matrix belongs; determining a matrix interval in which the number of similarity transformation matrices satisfies a first quantity condition as a target matrix interval; and determining, according to the candidate database feature points corresponding to the similarity transformation matrices in the target matrix interval, the first group of database feature points matching the feature points of the query image.
- for any candidate database feature point, the corresponding feature point of the query image is the feature point of the query image that matches that candidate database feature point.
- a similarity transformation matrix between the candidate database feature point and the corresponding query image feature point can be constructed according to the coordinates, scale and rotation angle of the candidate database feature point.
- the similarity transformation matrix consists of 4 elements
- the preset matrix interval includes the value range of each element in the similarity transformation matrix.
- each preset matrix interval includes a value range for each element of the similarity transformation matrix. According to the value of each element in a similarity transformation matrix and the value ranges of the elements in each preset matrix interval, the matrix interval to which each similarity transformation matrix belongs can be determined, and thus the number of similarity transformation matrices in each matrix interval can be determined.
- the first quantity condition may be that the number of similarity transformation matrices is greater than or equal to a second preset value; a matrix interval in which the number of similarity transformation matrices satisfies this condition is determined as a target matrix interval.
- alternatively, the first quantity condition may be belonging to the M matrix intervals containing the largest numbers of similarity transformation matrices, where M is a positive integer. According to the number of similarity transformation matrices in each matrix interval, the M matrix intervals with the largest numbers of similarity transformation matrices can be determined and respectively determined as target matrix intervals.
- the first quantity condition may also be that the number of similarity transformation matrices is greater than or equal to the second preset value and the interval belongs to the M matrix intervals with the largest numbers of similarity transformation matrices; a matrix interval satisfying both conditions can be determined as a target matrix interval.
- in this way, geometric verification is performed by means of matrix interval voting, so that the database feature points matching the feature points of the query image can be determined quickly, thereby improving the visual positioning speed.
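A minimal sketch of this matrix-interval voting, assuming for illustration that each similarity transformation matrix is summarized by its 4 elements (e.g. scale, rotation, and two translation components) and that the preset matrix intervals are uniform bins:

```python
from collections import defaultdict

def vote_matrix_intervals(params, bin_widths, min_votes):
    # Hough-style voting: quantize each 4-element similarity-transform
    # parameter vector into a matrix interval and keep the matches that fall
    # into intervals with at least min_votes votes (the first quantity
    # condition, with min_votes as the second preset value).
    bins = defaultdict(list)
    for i, p in enumerate(params):
        key = tuple(int(v // w) for v, w in zip(p, bin_widths))
        bins[key].append(i)
    return sorted(i for members in bins.values() if len(members) >= min_votes
                  for i in members)

# Matches 0-2 imply a consistent similarity transform; 3 and 4 are outliers.
params = [(1.0, 5.0, 10.0, 20.0), (1.1, 4.0, 11.0, 21.0), (1.05, 5.5, 10.5, 21.0),
          (3.0, 90.0, 50.0, 60.0), (0.2, 180.0, 5.0, 5.0)]
print(vote_matrix_intervals(params, (0.5, 15.0, 10.0, 10.0), 3))  # [0, 1, 2]
```

Matches whose transforms agree land in the same interval and survive; inconsistent matches fall into sparsely populated intervals and are discarded.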
- the determining the first group of database feature points matching the feature points of the query image according to the candidate database feature points corresponding to the similarity transformation matrices in the target matrix interval includes: determining the database images to which the candidate database feature points belong, wherein the candidate database feature points here represent the candidate database feature points corresponding to the similarity transformation matrices in the target matrix interval; and determining the first group of database feature points according to the candidate database feature points in the database images whose candidate database feature points satisfy a second quantity condition. Accordingly, the candidate database feature points can be filtered to obtain the first group of database feature points, and the visual positioning result of the query image is then determined based on the first group of database feature points, which helps to improve the accuracy of the determined visual positioning result.
- the second quantity condition may be greater than or equal to the first preset value, for example, the first preset value is equal to 12.
- if the candidate database feature points in any database image satisfy the second quantity condition, it may be determined that the candidate database feature points in that database image belong to the first group of database feature points.
- alternatively, from the database images whose candidate database feature points satisfy the second quantity condition, the first N database images with the most candidate database feature points may be selected, and the candidate database feature points in these N database images are determined as belonging to the first group of database feature points, where N is a positive integer, for example, N equals 30.
- alternatively, the candidate database feature points in all database images whose candidate database feature points meet the second quantity condition may be determined as the first group of database feature points.
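The image-level filtering described above can be sketched as follows; the default thresholds echo the example values in the text (first preset value 12, N equals 30), and all names are illustrative:

```python
from collections import Counter

def filter_by_image(candidates, point_to_image, min_points=12, top_n=30):
    # Group candidate database feature points by their database image, keep
    # images with at least min_points candidates (the second quantity
    # condition), and return the candidates of the top_n images with the most
    # candidate points.
    counts = Counter(point_to_image[p] for p in candidates)
    kept_images = [img for img, n in counts.most_common() if n >= min_points][:top_n]
    kept = set(kept_images)
    return [p for p in candidates if point_to_image[p] in kept]

# Toy example: image A has 3 candidates, B has 2, C has 1.
point_to_image = {"a1": "A", "a2": "A", "a3": "A", "b1": "B", "b2": "B", "c1": "C"}
print(filter_by_image(["a1", "a2", "a3", "b1", "b2", "c1"], point_to_image,
                      min_points=2, top_n=1))  # ['a1', 'a2', 'a3']
```

The surviving candidates form the first group of database feature points; if no image passes the threshold, the result list is empty, corresponding to a positioning failure.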
- if no database image satisfies the second quantity condition, the visual positioning result of the query image may be a positioning failure.
- for example, mismatched points may be filtered using the Random Sample Consensus (RANSAC) algorithm.
- the searching for database feature points matching the feature points of the query image according to the feature vectors of the feature points of the query image further includes: determining the three-dimensional coordinates corresponding to the first group of database feature points; determining a second group of database feature points corresponding to the three-dimensional coordinates, where the second group of database feature points are the database feature points, among the multiple groups of database feature points corresponding to the three-dimensional coordinates, other than the first group of database feature points; and determining the visual positioning result of the query image according to the first group of database feature points and the second group of database feature points.
- the three-dimensional coordinates corresponding to the first group of database feature points may be determined.
- a three-dimensional coordinate can correspond to multiple database feature points.
- for any three-dimensional coordinate, all the database feature points corresponding to the three-dimensional coordinate can be determined, and the database feature points corresponding to the three-dimensional coordinate other than the first group of database feature points are determined as the second group of database feature points.
- in other words, the 2D feature points in the database images that match the 2D feature points in the query image are first determined, and the set of matches is then expanded according to the correspondence between the 2D feature points and the 3D points of the database images. Compared with the related art, which only uses 2D-to-3D feature point matching for positioning, the embodiment of the present disclosure can increase the number of associated point pairs through this reverse search, where an associated point pair represents a database feature point and its corresponding three-dimensional coordinate. Thus, the robustness of visual positioning can be improved.
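A sketch of this reverse search, assuming hypothetical lookup tables from database feature points to 3D coordinates and back (the table names are illustrative, not from the disclosure):

```python
def reverse_search(first_group, point_to_3d, threed_to_points):
    # For each matched database feature point, look up its 3D coordinate, then
    # collect all other database feature points observing the same coordinate
    # as the second group; every (point, coordinate) pair is an associated pair.
    second_group = set()
    for p in first_group:
        coord = point_to_3d.get(p)
        if coord is None:
            continue
        for q in threed_to_points.get(coord, ()):
            if q not in first_group:
                second_group.add(q)
    pairs = [(p, point_to_3d[p]) for p in list(first_group) + sorted(second_group)
             if point_to_3d.get(p) is not None]
    return second_group, pairs

# D1 and D2 observe the same 3D point; matching D1 pulls in D2 as well.
point_to_3d = {"D1": (0, 0, 0), "D2": (0, 0, 0), "D3": (1, 1, 1)}
threed_to_points = {(0, 0, 0): ["D1", "D2"], (1, 1, 1): ["D3"]}
second, pairs = reverse_search(["D1", "D3"], point_to_3d, threed_to_points)
print(second)  # {'D2'}
```

The expanded pair list (three associated point pairs instead of two in this toy case) is what feeds the pose solver below.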
- in step S13, the visual positioning result of the query image is determined according to the matched database feature points.
- the visual positioning result of the query image may include pose information corresponding to the query image.
- the pose information may include one or both of position information and attitude information.
- the position information may be represented by coordinates, and the attitude information may be represented by angles.
- the visual positioning result of the query image may include six degrees of freedom pose information of the query image.
- a method such as Perspective-n-Point (PnP) may be used to determine the visual positioning result of the query image, for example, Efficient Perspective-n-Point (EPnP), Perspective-3-Point (P3P), or Direct Least-Squares (DLS).
- the third preset value may be equal to 12.
- the pose information corresponding to the query image can be obtained.
- the inliers represent the feature points that are correctly matched when solving for the pose; for example, the inliers can be determined according to the inlier_mask of the RANSAC algorithm.
- the pose information corresponding to the query image can also be optimized by a nonlinear optimizer to obtain the final visual positioning result.
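The inlier check described above can be sketched as follows; the default threshold echoes the third preset value of 12, the function and variable names are illustrative, and the actual PnP/RANSAC solve that produces the inlier_mask is assumed to happen elsewhere:

```python
def check_pose_inliers(correspondences, inlier_mask, min_inliers=12):
    # Keep only the correspondences flagged as inliers (cf. the inlier_mask
    # returned by RANSAC-style solvers) and report a positioning failure when
    # fewer than min_inliers correctly matched points remain.
    inliers = [c for c, keep in zip(correspondences, inlier_mask) if keep]
    return len(inliers) >= min_inliers, inliers

# 15 associated point pairs, 12 of which the solver flagged as inliers.
corr = [("p%d" % i, (i, i, i)) for i in range(15)]
ok, inliers = check_pose_inliers(corr, [1] * 12 + [0] * 3)
print(ok, len(inliers))  # True 12
```

Only when the check passes is the pose accepted (and optionally refined by a nonlinear optimizer, as the text notes); otherwise the result is a positioning failure.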
- a query image may be collected by the user equipment, and the user equipment sends a visual positioning request to the cloud server, where the visual positioning request carries the query image; the cloud server uses the visual positioning method provided by the embodiment of the present disclosure to perform processing, to obtain the visual positioning result of the query image, and return the visual positioning result of the query image to the user equipment.
- the user equipment may be a device with a camera function, such as a mobile phone.
- the visual positioning request may also include camera intrinsic parameter information of the user equipment, for example, may include the focal length and the position of the principal point.
- the embodiments of the present disclosure can be applied to various application scenarios such as positioning and navigation systems, high-precision maps, and augmented reality products.
- the embodiments of the present disclosure can be used to provide visual positioning and navigation services in large indoor scenes such as shopping malls, airports, and museums, and solve the problem that effective positioning cannot be performed in indoor scenes because there is no GPS signal.
- high-precision maps can be enhanced to achieve higher-precision positioning combined with GPS signals, and visual positioning services can be provided in places with weak outdoor GPS signals.
- since the embodiments of the present disclosure can quickly obtain the six-degree-of-freedom position and attitude information of the user equipment, the embodiments of the present disclosure can be applied to augmented reality applications.
- multiple photos of shopping malls can be taken first as database images.
- the database images are transformed to obtain multiple transformed images corresponding to each database image; feature extraction is performed on each database image and its multiple transformed images to obtain the feature vectors of the feature points of the database image, that is, the feature vectors of the database feature points.
- the feature vector of the database feature point can be decomposed to obtain multiple sub-feature vectors of the database feature point.
- the sub-feature vectors of all database feature points are clustered to obtain multiple database class centers.
- for any database feature point, the correspondence between the database feature point and the database class center is established.
- the robot can use the currently collected image as a query image, transform the query image to obtain multiple transformed images corresponding to the query image, and perform feature extraction on the query image and the multiple transformed images corresponding to the query image to obtain feature vectors of the feature points of the query image.
- the feature vectors of the feature points of the query image are decomposed to obtain multiple sub-feature vectors of the feature points of the query image. The database class centers matching the multiple sub-feature vectors of the feature points of the query image are found, and the first group of database feature points matching the feature points of the query image is determined according to the matched database class centers.
- the visual positioning result of the query image is determined, that is, the current position information of the robot in the shopping mall and the current posture information of the robot are determined.
- the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the execution order of the steps should be determined by their functions and possible internal logic.
- embodiments of the present disclosure also provide visual positioning apparatuses, electronic devices, computer-readable storage media, and programs, all of which can be used to implement any of the visual positioning methods provided by the embodiments of the present disclosure; for the corresponding technical solutions and descriptions, reference may be made to the corresponding records in the method section, which will not be repeated.
- FIG. 5 shows a block diagram of a visual positioning apparatus provided by an embodiment of the present disclosure.
- the visual positioning apparatus includes: a first extraction part 51 configured to extract feature vectors of feature points of a query image; a searching part 52 configured to search, according to the feature vectors of the feature points of the query image, for database feature points matching the feature points of the query image, wherein the database feature points represent feature points of database images; and a determining part 53 configured to determine the visual positioning result of the query image based on the matched database feature points.
- the first extraction part 51 is configured to: perform transformation processing on the query image to obtain at least one transformed image corresponding to the query image; and perform feature extraction on at least two images among the query image and the at least one transformed image to obtain the feature vectors of the feature points of the query image.
- the first extraction part 51 is configured to: respectively input at least two images among the query image and the at least one transformed image into a first neural network, and output feature maps of the at least two images through the first neural network; perform grouped convolution on the feature maps of the at least two images to obtain at least two grouped convolution results; and perform feature fusion on the at least two grouped convolution results to obtain the feature vectors of the feature points of the query image.
- the searching part 52 is configured to: decompose the feature vectors of the feature points of the query image to obtain multiple sub-feature vectors of the feature points of the query image, wherein the dimension of a sub-feature vector of a feature point of the query image is less than the dimension of the feature vector of that feature point; find the database class centers matching the multiple sub-feature vectors of the feature points of the query image, wherein a database class center represents the class center of sub-feature vectors of database feature points; and determine, according to the database class centers matched with the multiple sub-feature vectors of the feature points of the query image, the first group of database feature points matching the feature points of the query image.
- the apparatus further includes: a second extraction part configured to extract feature vectors of multiple database feature points; a decomposition part configured to, for any database feature point of the multiple database feature points, decompose the feature vector of the database feature point to obtain multiple sub-feature vectors of the database feature point, wherein the dimension of a sub-feature vector of the database feature point is less than the dimension of the feature vector of the database feature point; a clustering part configured to cluster the sub-feature vectors of the multiple database feature points to obtain database class centers; and an establishing part configured to establish the correspondence between any database feature point and the database class centers.
- the searching part 52 is configured to: determine the candidate database corresponding to the feature points of the query image according to the database class centers matching a plurality of sub-feature vectors of the feature points of the query image Feature points; perform geometric verification on the candidate database feature points, and determine a first group of database feature points matching the feature points of the query image.
- the searching part 52 is configured to: determine the similarity transformation matrix between each candidate database feature point and the corresponding feature point of the query image; determine, among a plurality of preset matrix intervals, the matrix interval to which each similarity transformation matrix belongs; determine a matrix interval in which the number of similarity transformation matrices satisfies the first quantity condition as the target matrix interval; and determine, according to the candidate database feature points corresponding to the similarity transformation matrices in the target matrix interval, the first group of database feature points matching the feature points of the query image.
- the searching part 52 is configured to: determine the database images to which the candidate database feature points belong, wherein the candidate database feature points represent the candidate database feature points corresponding to the similarity transformation matrices in the target matrix interval; and determine the first group of database feature points according to the candidate database feature points in the database images whose candidate database feature points satisfy the second quantity condition.
- the searching part 52 is configured to: determine the three-dimensional coordinates corresponding to the first group of database feature points; determine a second group of database feature points corresponding to the three-dimensional coordinates, wherein the second group of database feature points are the database feature points, among the multiple groups of database feature points corresponding to the three-dimensional coordinates, other than the first group of database feature points; and determine the visual positioning result of the query image according to the first group of database feature points and the second group of database feature points.
- the positioning process is more direct and effective, the memory consumption is lower, the time consumed by visual positioning can be reduced, and the positioning process is more reliable.
- the functions or included parts of the apparatus provided in the embodiments of the present disclosure may be configured to execute the methods described in the above method embodiments.
- a "part" may be a part of a circuit, a part of a processor, a part of a program or software, etc., of course, a unit, a module or a non-modularity.
- Embodiments of the present disclosure further provide a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is implemented.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium, or may be a volatile computer-readable storage medium.
- Embodiments of the present disclosure also provide a computer program product, including computer-readable codes; when the computer-readable codes run on a device, a processor in the device executes instructions for implementing the visual positioning method provided by any of the above embodiments.
- Embodiments of the present disclosure further provide another computer program product for storing computer-readable instructions, which, when executed, cause the computer to perform the operations of the visual positioning method provided by any of the foregoing embodiments.
- Embodiments of the present disclosure further provide an electronic device, including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke executable instructions stored in the memory instruction to execute the above method.
- the electronic device may be provided as a terminal, server or other form of device.
- FIG. 6 shows a block diagram of an electronic device 800 provided by an embodiment of the present disclosure.
- electronic device 800 may be a terminal such as a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, or personal digital assistant.
- electronic device 800 may include one or more of the following components: processing component 802, memory 804, power supply component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814 , and the communication component 816 .
- the processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
- the processing component 802 can include one or more processors 820 to execute instructions to perform all or some of the steps of the methods described above.
- processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components.
- processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
- Memory 804 is configured to store various types of data to support operation at electronic device 800. Examples of such data include instructions for any application or method operating on electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. Memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
- Power supply assembly 806 provides power to various components of electronic device 800 .
- Power supply components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to electronic device 800 .
- Multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
- the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
- the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
- Audio component 810 is configured to output and/or input audio signals.
- audio component 810 includes a microphone (MIC) that is configured to receive external audio signals when electronic device 800 is in operating modes, such as call mode, recording mode, and voice recognition mode. The received audio signal may be further stored in memory 804 or transmitted via communication component 816 .
- audio component 810 also includes a speaker for outputting audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
- Sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of electronic device 800 .
- for example, the sensor assembly 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 can also detect a change in position of the electronic device 800 or a component thereof, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800.
- Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
- Sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- Communication component 816 is configured to facilitate wired or wireless communication between electronic device 800 and other devices.
- the electronic device 800 may access wireless networks based on communication standards, such as Wi-Fi, 2G, 3G, 4G/LTE, 5G, or a combination thereof.
- the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
- electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above method.
- a non-volatile computer-readable storage medium such as a memory 804 comprising computer program instructions executable by the processor 820 of the electronic device 800 to perform the above method is also provided.
- FIG. 7 shows a block diagram of an electronic device 1900 provided by an embodiment of the present disclosure.
- the electronic device 1900 may be provided as a server.
- the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and a memory resource, represented by memory 1932, for storing instructions, such as applications, executable by the processing component 1922.
- An application program stored in memory 1932 may include one or more modules, each corresponding to a set of instructions.
- the processing component 1922 is configured to execute instructions to perform the above-described methods.
- the electronic device 1900 may also include a power supply assembly 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958.
- Electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows or similar.
- a non-volatile computer-readable storage medium, such as the memory 1932 comprising computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described method, is also provided.
- a computer program comprising computer-readable code which, when run in an electronic device, causes a processor in the electronic device to perform the above method.
- Embodiments of the present disclosure may be systems, methods and/or computer program products.
- the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present disclosure.
- a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- Examples (a non-exhaustive list) of computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, mechanically encoded devices such as punch cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the foregoing.
- Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through electrical wires.
- the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device.
- the computer program instructions for carrying out the operations of the disclosed embodiments may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- the computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
- in some embodiments, custom electronic circuits, such as programmable logic circuits, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), may execute the computer-readable program instructions to implement various aspects of the embodiments of the present disclosure.
- These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- These computer-readable program instructions may also be stored in a computer-readable storage medium and may cause a computer, programmable data processing apparatus, and/or other equipment to operate in a particular manner, such that the computer-readable medium on which the instructions are stored comprises an article of manufacture including instructions that implement various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment, causing a series of operational steps to be performed thereon to produce a computer-implemented process, such that the instructions executing on the computer, other programmable data processing apparatus, or other equipment implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
- the computer program product can be implemented in hardware, software or a combination thereof.
- in some embodiments of the present disclosure, the computer program product is embodied as a computer storage medium, and in other embodiments, the computer program product is embodied as a software product, such as a software development kit (SDK).
- Embodiments of the present disclosure relate to a visual positioning method and apparatus, an electronic device, and a computer-readable storage medium.
- the method includes: extracting feature vectors of feature points of a query image; searching, according to the feature vectors of the feature points of the query image, for database feature points that match the feature points of the query image, wherein the database feature points represent feature points of database images; and determining a visual positioning result of the query image according to the matched database feature points.
- in this way, there is no need to retrieve a local map during visual positioning; feature point matching is performed directly, and the visual positioning result of the query image is determined according to the database feature points that match the feature points of the query image. The positioning process is therefore more direct and effective, consumes less memory, reduces the time cost of visual positioning, and is more reliable.
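As context for the matching step described above, a minimal brute-force sketch is given below (this is not the patent's sub-vector search, just an illustration): each query descriptor is compared against all database descriptors and kept only when the nearest neighbour is clearly better than the second nearest (Lowe's ratio test). The function name and the ratio threshold are illustrative assumptions.

```python
import numpy as np

def match_features(query_desc, db_desc, ratio=0.8):
    """Brute-force nearest-neighbour matching with a ratio test.

    query_desc: (M, D) feature vectors of the query image's feature points
    db_desc:    (N, D) feature vectors of the database feature points
    Returns a list of (query_idx, db_idx) matches.
    """
    # Squared Euclidean distances between every query/database pair
    d2 = ((query_desc[:, None, :] - db_desc[None, :, :]) ** 2).sum(-1)
    matches = []
    for qi, row in enumerate(d2):
        order = np.argsort(row)
        best, second = order[0], order[1]
        # Accept only unambiguous matches (best clearly beats runner-up)
        if row[best] < (ratio ** 2) * row[second]:
            matches.append((qi, int(best)))
    return matches
```

A real system would replace the quadratic distance matrix with the quantized sub-vector index described in the claims.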
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Library & Information Science (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Image Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
Claims (13)
- A visual positioning method, comprising: extracting feature vectors of feature points of a query image; searching, according to the feature vectors of the feature points of the query image, for database feature points that match the feature points of the query image, wherein the database feature points represent feature points of database images; and determining a visual positioning result of the query image according to the matched database feature points.
- The method according to claim 1, wherein the extracting feature vectors of feature points of a query image comprises: transforming the query image to obtain at least one transformed image corresponding to the query image; and performing feature extraction on at least two images among the query image and the at least one transformed image to obtain the feature vectors of the feature points of the query image.
- The method according to claim 2, wherein the performing feature extraction on at least two images among the query image and the at least one transformed image to obtain the feature vectors of the feature points of the query image comprises: inputting the at least two images among the query image and the at least one transformed image into a first neural network respectively, and outputting feature maps of the at least two images via the first neural network; performing grouped convolution on the feature maps of the at least two images to obtain at least two grouped convolution results; and performing feature fusion on the at least two grouped convolution results to obtain the feature vectors of the feature points of the query image.
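The group-convolution-and-fusion pipeline of this claim (echoed by the GIFT group-CNN descriptor cited in the search report) can be sketched with a toy numpy version: each image in the group (the query plus its transformed copies) contributes one feature map, each group is convolved only with its own 1x1 kernel, and the group responses are averaged. The shapes, the 1x1 kernels, and the averaging fusion are illustrative assumptions, not the claimed network.

```python
import numpy as np

def group_fuse(feature_maps, kernels):
    """Toy sketch: per-group 1x1 convolution, then average fusion.

    feature_maps: (G, H, W, C) - one C-channel map per image in the group
                  (the query image plus its transformed copies)
    kernels:      (G, C, C)    - one 1x1 convolution kernel per group
    Returns a fused (H, W, C) descriptor map.
    """
    G, H, W, C = feature_maps.shape
    # Grouped convolution: each group sees only its own kernel
    per_group = np.stack([feature_maps[g] @ kernels[g] for g in range(G)])
    # Feature fusion: average the group responses into one map
    return per_group.mean(axis=0)
```

The point of the grouping is that descriptors fused over transformed copies become more invariant to those transformations than a single-image descriptor.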
- The method according to any one of claims 1 to 3, wherein the searching, according to the feature vectors of the feature points of the query image, for database feature points that match the feature points of the query image comprises: decomposing the feature vector of a feature point of the query image to obtain a plurality of sub-feature vectors of the feature point, wherein the dimension of each sub-feature vector is smaller than the dimension of the feature vector of the feature point; searching for database class centers that match the plurality of sub-feature vectors of the feature point, wherein a database class center represents a class center of sub-feature vectors of database feature points; and determining, according to the database class centers that match the plurality of sub-feature vectors, a first group of database feature points that match the feature points of the query image.
- The method according to claim 4, wherein before the searching for database class centers that match the plurality of sub-feature vectors of the feature points of the query image, the method further comprises: extracting feature vectors of a plurality of database feature points; for any database feature point among the plurality of database feature points, decomposing the feature vector of the database feature point to obtain a plurality of sub-feature vectors of the database feature point, wherein the dimension of each sub-feature vector of the database feature point is smaller than the dimension of the feature vector of the database feature point; clustering the sub-feature vectors of the plurality of database feature points to obtain database class centers; and establishing a correspondence between each database feature point and the database class centers.
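Claims 4 and 5 describe a product-quantization-style index: descriptors are split into lower-dimensional sub-vectors, each sub-space is clustered, and the cluster centroids serve as the "database class centers", with a recorded correspondence between each database feature point and its centers. A minimal sketch with a naive Lloyd's k-means; the parameter names, the k-means variant, and the iteration count are assumptions:

```python
import numpy as np

def build_subvector_codebook(db_desc, n_sub, k, iters=20, seed=0):
    """Split each database descriptor into n_sub sub-vectors and cluster
    each sub-space with k-means; the k centroids per sub-space are the
    'class centers'. Also records, per sub-space, which center each
    database feature point falls into (the feature-point/class-center
    correspondence of claim 5).
    """
    rng = np.random.default_rng(seed)
    N, D = db_desc.shape
    sub = db_desc.reshape(N, n_sub, D // n_sub)
    centers, assignment = [], []
    for s in range(n_sub):
        x = sub[:, s, :]
        c = x[rng.choice(N, size=k, replace=False)]   # initial centroids
        for _ in range(iters):                        # Lloyd iterations
            d = ((x[:, None] - c[None]) ** 2).sum(-1)
            lbl = d.argmin(1)
            for j in range(k):
                pts = x[lbl == j]
                if len(pts):
                    c[j] = pts.mean(0)
        centers.append(c)
        assignment.append(lbl)
    # shapes: (n_sub, k, D // n_sub) and (n_sub, N)
    return np.stack(centers), np.stack(assignment)
```

At query time, each query sub-vector would be snapped to its nearest class center, and the database feature points sharing those centers become the candidate matches.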
- The method according to claim 4 or 5, wherein the determining, according to the database class centers that match the plurality of sub-feature vectors of the feature points of the query image, a first group of database feature points that match the feature points of the query image comprises: determining candidate database feature points corresponding to the feature points of the query image according to the database class centers that match the plurality of sub-feature vectors; and performing geometric verification on the candidate database feature points to determine the first group of database feature points that match the feature points of the query image.
- The method according to claim 6, wherein the performing geometric verification on the candidate database feature points to determine the first group of database feature points that match the feature points of the query image comprises: determining similarity transformation matrices between the candidate database feature points and the corresponding feature points of the query image; determining, among a plurality of preset matrix intervals, the matrix interval to which each similarity transformation matrix belongs; determining, as a target matrix interval, a matrix interval in which the number of similarity transformation matrices satisfies a first quantity condition; and determining the first group of database feature points that match the feature points of the query image according to the candidate database feature points corresponding to the similarity transformation matrices in the target matrix interval.
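The geometric verification of claim 7 is a Hough-style voting scheme: each candidate match yields a similarity transformation, the transformations are binned into preset intervals, and the candidates in the most populated interval are kept. The sketch below reduces the transformation matrix to a (scale, angle) pair and uses "most votes" as the first quantity condition; both simplifications are assumptions for illustration.

```python
import numpy as np

def geometric_verification(pairs, scale_bins, angle_bins):
    """Hough-style voting over similarity-transform intervals.

    pairs: list of (d_scale, d_angle, db_idx), where d_scale / d_angle are
           the relative scale and orientation between a query feature point
           and its candidate database feature point (a 2-DoF stand-in for
           the full similarity transformation matrix).
    Each pair votes for the (scale, angle) interval its transform falls
    into; the interval with the most votes is the target interval, and only
    the candidate database feature points inside it are kept.
    """
    votes = {}
    for d_scale, d_angle, db_idx in pairs:
        s = np.digitize(d_scale, scale_bins)   # which scale interval
        a = np.digitize(d_angle, angle_bins)   # which angle interval
        votes.setdefault((s, a), []).append(db_idx)
    # "First quantity condition" taken here as: the most-voted interval
    return max(votes.values(), key=len)
```

Consistent (inlier) matches agree on a common transform and pile into one bin, while wrong matches scatter, so the winning bin filters out most outliers cheaply.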
- The method according to claim 7, wherein the determining the first group of database feature points according to the candidate database feature points corresponding to the similarity transformation matrices in the target matrix interval comprises: determining the database images to which alternative database feature points belong, wherein the alternative database feature points are the candidate database feature points corresponding to the similarity transformation matrices in the target matrix interval; and determining the first group of database feature points according to the alternative database feature points in database images whose alternative database feature points satisfy a second quantity condition.
- The method according to any one of claims 4 to 8, wherein the searching for database feature points that match the feature points of the query image further comprises: determining three-dimensional coordinates corresponding to the first group of database feature points; determining a second group of database feature points corresponding to the three-dimensional coordinates, the second group of database feature points being the database feature points, other than the first group, among the groups of database feature points corresponding to the three-dimensional coordinates; and determining the visual positioning result of the query image according to the first group of database feature points and the second group of database feature points.
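Claim 9's expansion step can be read as: once the first group of matches is known, collect every other database feature point that observes the same 3D coordinates, so that the pose solver (e.g. PnP) receives more 2D-3D correspondences. A hypothetical sketch; the dict-based index and all names are assumptions, not the patent's data structures:

```python
def expand_matches_via_3d(first_group, point3d_of, observations_of):
    """Grow the match set through shared 3D points.

    first_group:     iterable of database feature-point ids already matched
    point3d_of:      dict db_feature_id -> 3D point id
    observations_of: dict 3D point id -> set of all db feature ids that
                     observe that 3D point
    Returns (first, second): the second group contains the other database
    feature points observing the same three-dimensional coordinates.
    """
    first = set(first_group)
    second = set()
    for fid in first:
        pid = point3d_of[fid]               # 3D point seen by this match
        second.update(observations_of[pid]) # all observers of that point
    return first, second - first
```

Both groups together determine the visual positioning result, since every member of either group carries the same known 3D coordinates.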
- A visual positioning apparatus, comprising: a first extraction part configured to extract feature vectors of feature points of a query image; a search part configured to search, according to the feature vectors of the feature points of the query image, for database feature points that match the feature points of the query image, wherein the database feature points represent feature points of database images; and a determination part configured to determine a visual positioning result of the query image according to the matched database feature points.
- An electronic device, comprising: one or more processors; and a memory configured to store executable instructions; wherein the one or more processors are configured to invoke the executable instructions stored in the memory to perform the method according to any one of claims 1 to 9.
- A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 9.
- A computer program comprising computer-readable code which, when run in an electronic device, causes a processor in the electronic device to perform the method according to any one of claims 1 to 9.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010710996.0 | 2020-07-22 | ||
CN202010710996.0A CN111859003B (en) | 2020-07-22 | 2020-07-22 | Visual positioning method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022016803A1 true WO2022016803A1 (en) | 2022-01-27 |
Family
ID=73002310
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/139166 WO2022016803A1 (en) | 2020-07-22 | 2020-12-24 | Visual positioning method and apparatus, electronic device, and computer readable storage medium |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN111859003B (en) |
TW (1) | TW202205206A (en) |
WO (1) | WO2022016803A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111859003B (en) * | 2020-07-22 | 2021-12-28 | 浙江商汤科技开发有限公司 | Visual positioning method and device, electronic equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104596519A (en) * | 2015-02-17 | 2015-05-06 | 哈尔滨工业大学 | RANSAC algorithm-based visual localization method |
CN104820718A (en) * | 2015-05-22 | 2015-08-05 | 哈尔滨工业大学 | Image classification and searching method based on geographic position characteristics and overall situation vision characteristics |
US20150286894A1 (en) * | 2012-11-16 | 2015-10-08 | Enswers Co., Ltd. | System and method for providing additional information using image matching |
CN110296686A (en) * | 2019-05-21 | 2019-10-01 | 北京百度网讯科技有限公司 | Localization method, device and the equipment of view-based access control model |
CN110390356A (en) * | 2019-07-03 | 2019-10-29 | Oppo广东移动通信有限公司 | Visual dictionary generation method and device, storage medium |
CN111859003A (en) * | 2020-07-22 | 2020-10-30 | 浙江商汤科技开发有限公司 | Visual positioning method and device, electronic equipment and storage medium |
-
2020
- 2020-07-22 CN CN202010710996.0A patent/CN111859003B/en active Active
- 2020-12-24 WO PCT/CN2020/139166 patent/WO2022016803A1/en active Application Filing
-
2021
- 2021-05-04 TW TW110116124A patent/TW202205206A/en unknown
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150286894A1 (en) * | 2012-11-16 | 2015-10-08 | Enswers Co., Ltd. | System and method for providing additional information using image matching |
CN104596519A (en) * | 2015-02-17 | 2015-05-06 | 哈尔滨工业大学 | RANSAC algorithm-based visual localization method |
CN104820718A (en) * | 2015-05-22 | 2015-08-05 | 哈尔滨工业大学 | Image classification and searching method based on geographic position characteristics and overall situation vision characteristics |
CN110296686A (en) * | 2019-05-21 | 2019-10-01 | 北京百度网讯科技有限公司 | Localization method, device and the equipment of view-based access control model |
CN110390356A (en) * | 2019-07-03 | 2019-10-29 | Oppo广东移动通信有限公司 | Visual dictionary generation method and device, storage medium |
CN111859003A (en) * | 2020-07-22 | 2020-10-30 | 浙江商汤科技开发有限公司 | Visual positioning method and device, electronic equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
YUAN LIU; ZEHONG SHEN; ZHIXUAN LIN; SIDA PENG; HUJUN BAO; XIAOWEI ZHOU: "GIFT: Learning Transformation-Invariant Dense Visual Descriptors via Group CNNs", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 14 November 2019 (2019-11-14), 201 Olin Library Cornell University Ithaca, NY 14853 , XP081532326 * |
Also Published As
Publication number | Publication date |
---|---|
CN111859003A (en) | 2020-10-30 |
TW202205206A (en) | 2022-02-01 |
CN111859003B (en) | 2021-12-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11120078B2 (en) | Method and device for video processing, electronic device, and storage medium | |
TWI753348B (en) | Pose determination method, pose determination device, electronic device and computer readable storage medium | |
US8391615B2 (en) | Image recognition algorithm, method of identifying a target image using same, and method of selecting data for transmission to a portable electronic device | |
TWI761851B (en) | Image processing method, image processing apparatus, electronic device, and computer-readable storage medium | |
EP3206163B1 (en) | Image processing method, mobile device and method for generating a video image database | |
US20200327353A1 (en) | Image processing method and apparatus, electronic device, and storage medium | |
US20140253592A1 (en) | Method for providing augmented reality, machine-readable storage medium, and portable terminal | |
CN111538855B (en) | Visual positioning method and device, electronic equipment and storage medium | |
JP2021524957A (en) | Image processing methods and their devices, terminals and computer programs | |
CN110059652B (en) | Face image processing method, device and storage medium | |
CN110909209B (en) | Live video searching method and device, equipment, server and storage medium | |
CN110781957A (en) | Image processing method and device, electronic equipment and storage medium | |
WO2023103377A1 (en) | Calibration method and apparatus, electronic device, storage medium, and computer program product | |
WO2022033111A1 (en) | Image information extraction method, training method and apparatus, medium, and electronic device | |
AU2014271204A1 (en) | Image recognition of vehicle parts | |
WO2023273499A1 (en) | Depth measurement method and apparatus, electronic device, and storage medium | |
WO2022016803A1 (en) | Visual positioning method and apparatus, electronic device, and computer readable storage medium | |
CN114581525A (en) | Attitude determination method and apparatus, electronic device, and storage medium | |
WO2023066373A1 (en) | Sample image determination method and apparatus, device, and storage medium | |
WO2023155393A1 (en) | Feature point matching method and apparatus, electronic device, storage medium and computer program product | |
WO2023155350A1 (en) | Crowd positioning method and apparatus, electronic device, and storage medium | |
WO2022217882A1 (en) | Pose data processing method, and interface, apparatus, system, device and medium | |
WO2022110801A1 (en) | Data processing method and apparatus, electronic device, and storage medium | |
US20220321798A1 (en) | Shooting method, apparatus, and electronic device | |
US20220345621A1 (en) | Scene lock mode for capturing camera images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20946117 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20946117 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 260723) |