WO2021057742A1 - Positioning method and apparatus, device, and storage medium - Google Patents


Publication number
WO2021057742A1
WO2021057742A1 (PCT/CN2020/116924)
Authority: WIPO (PCT)
Prior art keywords: point, image, target, sampling, feature
Application number: PCT/CN2020/116924
Other languages: French (fr), Chinese (zh)
Inventors: 金珂, 杨宇尘, 陈岩, 方攀
Original Assignee: Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Publication of WO2021057742A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443: Local feature extraction by matching or filtering
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures

Definitions

  • The embodiments of the present application relate to electronic technology, and in particular, though not exclusively, to positioning methods, apparatuses, devices, and storage media.
  • This positioning method is mainly based on the relative positional relationship between characters and fixed objects in the image. To realize positioning in this way, the electronic device requires recognizable characters and fixed objects in the image; otherwise, positioning will fail. The robustness of this positioning method is therefore poor.
  • The positioning method, apparatus, device, and storage medium provided by the embodiments of the present application do not depend on fixed objects or on the object to be positioned appearing in the image, and therefore offer better robustness.
  • the technical solutions of the embodiments of the present application are implemented as follows:
  • The positioning method provided by the embodiments of the present application includes: determining feature points in an image to be processed collected by an image acquisition device; acquiring attribute information of the feature points; and matching the attribute information of the feature points with the attribute information of multiple sampling points in a pre-built point cloud map to obtain a positioning result of the image acquisition device.
  • The positioning apparatus includes: a first determining module configured to determine feature points in an image to be processed collected by an image acquisition device; an attribute information obtaining module configured to obtain attribute information of the feature points; and a positioning module configured to match the attribute information of the feature points with the attribute information of a plurality of sampling points in a pre-built point cloud map to obtain a positioning result of the image acquisition device.
  • the electronic device provided by the embodiment of the present application includes a memory and a processor, the memory stores a computer program that can run on the processor, and the processor implements the steps in the positioning method when the program is executed.
  • the embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned positioning method are realized.
  • In this way, the positioning result of the image acquisition device used to collect the image to be processed can be determined. When positioning an object that carries the image acquisition device, the positioning method does not depend on fixed objects or on the object to be positioned appearing in the image to be processed, so better robustness can be obtained.
  • FIG. 1 is a schematic diagram of an implementation process of a positioning method according to an embodiment of this application.
  • FIG. 2 is a schematic diagram of determining camera coordinates of multiple first target sampling points according to an embodiment of the application
  • FIG. 3 is a schematic diagram of an implementation process of a positioning method according to an embodiment of the application.
  • FIG. 4 is a schematic diagram of the implementation process of the method for constructing a point cloud map according to an embodiment of the application
  • FIG. 5 is a schematic diagram of a feature point matching pair according to an embodiment of the application.
  • FIG. 6A is a schematic structural diagram of a positioning device according to an embodiment of the application.
  • FIG. 6B is a schematic structural diagram of another positioning device according to an embodiment of the application.
  • FIG. 7 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the application.
  • "First", "second", and "third" in the embodiments of this application are used to distinguish different objects and do not represent a specific order of those objects. Understandably, where permitted, the specific order or sequence can be interchanged, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
  • the embodiments of the application provide a positioning method, which can be applied to electronic devices, which can be mobile phones, tablet computers, notebook computers, desktop computers, servers, robots, drones and other devices with information processing capabilities.
  • the functions implemented by the positioning method can be implemented by a processor in the electronic device calling program code.
  • the program code can be stored in a computer storage medium. It can be seen that the electronic device includes at least a processor and a storage medium.
  • FIG. 1 is a schematic diagram of the implementation process of the positioning method according to the embodiment of this application. As shown in FIG. 1, the method may include the following steps S101 to S103:
  • Step S101 Determine the characteristic points in the image to be processed collected by the image collecting device.
  • the feature point is the pixel point with a certain feature in the image to be processed.
  • When the electronic device implements step S101, it usually takes the corner points in the image to be processed as the feature points.
  • the image to be processed is a two-dimensional image.
  • In some embodiments, the image to be processed is a red-green-blue (RGB) image. The image to be processed can also be an image in another format, for example, a grayscale image.
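Corner detection of the kind step S101 relies on can be sketched with a minimal Harris-style response (an illustrative assumption; the patent does not mandate any particular detector). Gradients come from central differences and the structure tensor is summed over a 3x3 window, so corner pixels score higher than edge or flat pixels:

```python
def harris_response(img, y, x, k=0.04):
    """Harris corner response at pixel (y, x) of a 2D grayscale image
    (list of rows). Larger values indicate stronger corners."""
    sxx = syy = sxy = 0.0
    for wy in range(y - 1, y + 2):          # 3x3 window around (y, x)
        for wx in range(x - 1, x + 2):
            ix = (img[wy][wx + 1] - img[wy][wx - 1]) / 2.0  # horizontal gradient
            iy = (img[wy + 1][wx] - img[wy - 1][wx]) / 2.0  # vertical gradient
            sxx += ix * ix
            syy += iy * iy
            sxy += ix * iy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# A bright square on a dark background: (3, 3) is a corner of the square,
# (3, 5) lies on its top edge, (5, 5) is in its interior.
img = [[0.0] * 10 for _ in range(10)]
for r in range(3, 7):
    for c in range(3, 7):
        img[r][c] = 1.0
```

On this synthetic image the response at the corner exceeds the response on the edge, which in turn exceeds the interior, which is why thresholding such a response yields corner-like feature points.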
  • the image acquisition device may be various.
  • the image acquisition device is a monocular camera or a multi-camera (for example, a binocular camera).
  • the electronic device may include an image capture device, that is, the image capture device is installed in the electronic device.
  • the electronic device is a smart phone with at least one camera.
  • the electronic device may not include an image capture device, and the image capture device may send the captured image to the electronic device.
  • Step S102 Obtain the attribute information of the characteristic point.
  • the attribute information of the feature point is unique information of the feature point.
  • the attribute information of the feature point includes at least one of the following: image feature and camera coordinates.
  • the attribute information of the feature point includes image features and camera coordinates.
  • the image feature can be a feature descriptor of the feature point, or other information that can describe the image feature of the feature point.
  • the camera coordinates of the feature point refer to the coordinates of the feature point in the camera coordinate system.
  • the camera coordinates can be two-dimensional coordinates or three-dimensional coordinates.
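The text does not fix how three-dimensional camera coordinates are obtained for a feature point. One common route, shown here as an assumption-laden sketch, is back-projecting a pixel with known depth through a pinhole camera model (the intrinsics fx, fy, cx, cy below are hypothetical values, not taken from the patent):

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a known depth (e.g. from a depth
    sensor or triangulation) into 3D camera coordinates using a pinhole
    model with focal lengths (fx, fy) and principal point (cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

A pixel at the principal point maps onto the optical axis, while offsets from the principal point scale with depth over focal length.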
  • Step S103 Match the attribute information of the characteristic point with the attribute information of multiple sampling points in a pre-built point cloud map to obtain a positioning result of the image acquisition device.
  • electronic devices can obtain sampling points on the surface of an object through image acquisition, and construct a point cloud map based on the world coordinates of these sampling points. That is, the world coordinates in the attribute information of the sampling point are the world coordinates of the sampling point in the sample image.
  • the point cloud map construction process can be implemented through steps S801 to S805 in the following embodiments.
  • the type of point cloud map can be sparse point cloud or dense point cloud.
  • In a sparse point cloud, the spacing between sampling points is greater than a spacing threshold, and the attribute information of a sampling point can include both its world coordinates and its image features. In a dense point cloud, the spacing between sampling points is less than the spacing threshold, and the attribute information of a sampling point may include its world coordinates but not its image features.
  • The dense point cloud can also include the image features of the sampling points, but this greatly increases the data volume of the dense point cloud and consumes a large amount of the memory's storage resources.
  • the number of sampling points of the sparse point cloud is much smaller than the number of sampling points of the dense point cloud.
  • A sampling point is actually a feature point in the sample image where it is located, and its attribute information is unique to that point. The same point is simply called a sampling point in the point cloud map and a feature point in an image, so that the reader can clearly distinguish whether a given point belongs to the point cloud map or to an image.
  • the attribute information of the sampling point includes at least one of the following: image characteristics and world coordinates. In some embodiments, the attribute information of the sampling point includes image features and world coordinates. In other embodiments, the attribute information of the sampling point includes world coordinates, but does not include image features. Understandably, the world coordinates of the sampling point refer to the coordinates of the sampling point in the world coordinate system. The world coordinates can be two-dimensional coordinates or three-dimensional coordinates.
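The two attribute layouts described above can be sketched as a small data structure (illustrative only; the field names are assumptions, and the 256-byte descriptor size echoes the storage discussion later in the text):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SamplingPoint:
    """Attribute information of one sampling point in the point cloud map."""
    world_xyz: Tuple[float, float, float]   # world coordinates (always stored)
    descriptor: Optional[bytes] = None      # image feature; omitted in a dense map

# Sparse map: fewer points, each carrying a (hypothetical) 256-byte descriptor.
sparse_map = [SamplingPoint((0.0, 0.0, 1.0), bytes(256))]

# Dense map: many points with world coordinates only, keeping the
# per-point payload small at the cost of losing image features.
dense_map = [SamplingPoint((i * 0.01, 0.0, 1.0)) for i in range(1000)]
```

Storing descriptors only in the sparse variant is what makes the image-feature matching embodiments possible, while the descriptor-free dense variant motivates the iterative coordinate-matching embodiments.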
  • The electronic device can match the attribute information of the feature points in the image to be processed against the attribute information of multiple sampling points in the pre-built point cloud map to determine the positioning result of the image acquisition device used to collect the image to be processed. In this way, when the electronic device locates the image acquisition device, the positioning method does not depend on fixed objects or on the object to be positioned appearing in the image to be processed, so better robustness can be obtained.
  • Depending on whether the attribute information of the sampling points in the point cloud map includes image features, the corresponding positioning methods differ.
  • the positioning method includes the following embodiments.
  • the embodiment of the present application provides a positioning method, and the method may include the following steps S201 to S204:
  • Step S201 determining feature points in the image to be processed collected by the image collecting device
  • Step S202 Acquire the image feature of the feature point and the camera coordinates of the feature point
  • Step S203 Match the image features of the feature points with the image features of multiple sampling points in a pre-built point cloud map to obtain a first target sampling point.
  • The purpose of matching is to find, from the multiple sampling points, the target sampling point that represents the same spatial location point as the feature point.
  • A sampling point in the point cloud map whose image feature is the same as or similar to that of the feature point is usually determined as the first target sampling point.
  • the electronic device determines the first target sampling point through step S303 and step S304 in the following embodiment.
  • Step S204 Determine the positioning result of the image acquisition device according to the camera coordinates of the feature point and the world coordinates of the first target sampling point.
  • The point cloud map includes the image features and world coordinates of multiple sampling points. Understandably, if the camera coordinates of a plurality of feature points and the world coordinates of the first target sampling point matching each of those feature points are known, the world coordinates and orientation (i.e., the posture) of the image acquisition device can be determined through steps S404 to S407 in the following embodiment.
  • In this way, based on the image features of the feature points and the image features of the multiple sampling points, the electronic device can more accurately determine the first target sampling point from the multiple sampling points, thereby improving positioning accuracy.
  • the embodiment of the present application further provides a positioning method, and the method may include the following steps S301 to S305:
  • Step S301 determining feature points in the image to be processed collected by the image collecting device
  • Step S302 acquiring the image feature of the feature point and the camera coordinates of the feature point;
  • Step S303 Determine the similarity between the sampling point and the feature point according to the image feature of the sampling point and the image feature of the feature point.
  • the similarity refers to the similarity between the image features of the sampling points and the image features of the feature points.
  • the electronic device may determine the similarity through various methods. For example, the Euclidean distance between the image feature of the sampling point and the image feature of the feature point is determined, and the Euclidean distance is determined as the similarity.
  • the Hamming distance or cosine similarity between the image feature of the sampling point and the image feature of the feature point may also be determined, and the Hamming distance or cosine similarity is determined as the similarity.
  • That is, the similarity measure may be the Euclidean distance, the Hamming distance, or the cosine similarity.
  • Step S304 Determine the sampling point whose similarity with the feature point meets the first condition as the first target sampling point.
  • When the electronic device implements step S304, it may determine, among the plurality of sampling points, a sampling point whose similarity measure with respect to the feature point is less than or equal to a similarity threshold as the first target sampling point. For example, the sampling point whose Euclidean distance to the feature point is less than or equal to a Euclidean distance threshold is determined as the first target sampling point; or, among the multiple sampling points, the sampling point with the smallest distance to (i.e., the highest similarity with) the feature point is determined as the first target sampling point. That is, the first condition is that the similarity measure is less than or equal to the similarity threshold, or that the similarity measure is the smallest.
  • Step S305 Determine the positioning result of the image acquisition device according to the camera coordinates of the feature point and the world coordinates of the first target sampling point.
  • When the electronic device implements step S305, it can determine at least one of the following positioning results according to the camera coordinates of the feature points and the world coordinates of the first target sampling points matching them: the world coordinates of the image acquisition device and the orientation (i.e., posture) of the image acquisition device. For example, through steps S404 to S407 in the following embodiment, the world coordinates of the image capture device and the orientation of the image capture device are determined.
  • the embodiment of the present application further provides a positioning method, and the method may include the following steps S401 to S407:
  • Step S401 determining feature points in the image to be processed collected by the image collecting device
  • Step S402 acquiring the image feature of the feature point and the camera coordinates of the feature point;
  • Step S403 Match the image features of the feature points with the image features of multiple sampling points in a pre-built point cloud map to obtain a first target sampling point.
  • the point cloud map includes attribute information of multiple sampling points, and the attribute information of each sampling point includes image features and world coordinates.
  • Step S404 Determine the camera coordinates of the multiple first target sampling points according to the world coordinates of the multiple first target sampling points and the camera coordinates of the feature points matching the multiple first target sampling points.
  • The plurality of first target sampling points comprises at least three first target sampling points. That is, in step S404, according to the camera coordinates of at least three feature points and the world coordinates of the first target sampling points that match those feature points, the camera coordinates of the matching first target sampling points can be accurately determined.
  • As shown in FIG. 2, point O is the origin of the camera coordinate system, i.e., the optical center of the image capture device, and the multiple first target sampling points are the three sampling points A, B, and C. In the image to be processed 20, the feature point that matches sampling point A is feature point a, the feature point that matches sampling point B is feature point b, and the feature point that matches sampling point C is feature point c.
  • Here, <a,b> refers to ∠aOb, <a,c> refers to ∠aOc, and <b,c> refers to ∠bOc.
  • OA, OB, and OC are the distances from the origin O to the target sampling points A, B, and C, respectively.
  • The corresponding rays run from point O through the feature points a, b, and c, respectively.
  • Step S405 Determine a first rotation relationship and a first translation relationship of the camera coordinate system relative to the world coordinate system according to the world coordinates of the multiple first target sampling points and the camera coordinates of the multiple first target sampling points.
  • the first rotation relationship and the first translation relationship of the camera coordinate system relative to the world coordinate system can be determined.
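Step S405 amounts to fitting a rigid transform to matched point pairs. A minimal 2D sketch of the least-squares solution (the patent works in 3D and does not fix a solver; the closed form used here, after centering both point sets, is the 2D analogue of the usual SVD-based approach):

```python
import math

def fit_rt_2d(cam_pts, world_pts):
    """Least-squares rotation angle and translation mapping 2D camera
    coordinates onto their matched 2D world coordinates."""
    n = len(cam_pts)
    cx = sum(p[0] for p in cam_pts) / n      # camera-side centroid
    cy = sum(p[1] for p in cam_pts) / n
    wx = sum(q[0] for q in world_pts) / n    # world-side centroid
    wy = sum(q[1] for q in world_pts) / n
    num = den = 0.0
    for (px, py), (qx, qy) in zip(cam_pts, world_pts):
        ax, ay = px - cx, py - cy            # centered camera point
        bx, by = qx - wx, qy - wy            # centered world point
        num += ax * by - ay * bx             # cross terms
        den += ax * bx + ay * by             # dot terms
    theta = math.atan2(num, den)             # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = wx - (c * cx - s * cy)              # translation aligns the centroids
    ty = wy - (s * cx + c * cy)
    return theta, (tx, ty)
```

With exact correspondences this recovers the pose exactly; with noisy matches it returns the least-squares optimum, which is what the rotation/translation estimation step needs.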
  • Step S406 Determine a positioning result of the image acquisition device according to the first translation relationship and the first rotation relationship; wherein the positioning result includes coordinate information and orientation information.
  • Once the first rotation relationship and the first translation relationship of the camera coordinate system relative to the world coordinate system are determined, both the coordinate information and the orientation information of the image acquisition device are known. For example, a robot can be instructed to perform its next action; knowing the user's orientation makes it possible to guide the user more accurately in which direction to walk or drive; and knowing the heading of a vehicle makes it possible to control more accurately in which direction the vehicle travels.
  • the positioning method includes the following embodiments.
  • FIG. 3 is a schematic diagram of the implementation process of the positioning method according to the embodiment of the present application. As shown in FIG. 3, the method may include the following steps S501 to S505:
  • Step S501 determining feature points in the image to be processed collected by the image collection device
  • Step S502 acquiring the camera coordinates of the feature point
  • Step S503 According to the iterative strategy, the camera coordinates of the multiple feature points are matched with the world coordinates of the multiple sampling points in the pre-built point cloud map, and the target rotation relationship and target translation of the camera coordinate system relative to the world coordinate system are obtained. relationship.
  • The point cloud map includes the world coordinates of the sampling points but does not include their image features. Understandably, when a point cloud map is stored, image features generally occupy a relatively large amount of storage space. For example, if the image feature is a feature descriptor, the feature descriptor of each sampling point normally occupies 256 bytes, which requires the electronic device to allocate at least 256 bytes of storage space per sampling point just for the descriptor. In some embodiments, the point cloud map does not include the image features of the sampling points; in this way, the data volume of the point cloud map can be greatly reduced, thereby saving storage space in the electronic device.
  • Since the point cloud map does not include the image features of the sampling points, under the premise that the camera coordinates of multiple feature points and the world coordinates of multiple sampling points are known, an iterative strategy is used to find the target rotation relationship and target translation relationship of the camera coordinate system relative to the world coordinate system. This realizes the positioning of the image acquisition device and obtains a high-precision positioning result.
  • As for the search for the target rotation relationship and the target translation relationship: for example, through steps S603 to S608 in the following embodiment, the sampling points that are closest to (i.e., best match) the multiple feature points are found iteratively, so as to obtain the target rotation relationship and the target translation relationship.
  • Step S504 Determine the orientation of the image acquisition device according to the target rotation relationship.
  • Step S505 Determine the world coordinates of the image acquisition device according to the target translation relationship and the target rotation relationship.
  • With this positioning method, there is no need to extract the image features of the feature points, nor to match them against the image features of sampling points in the point cloud map; instead, the camera coordinates of multiple feature points are matched with the world coordinates of multiple sampling points to position the image acquisition device. In this way, the image features of the sampling points need not be stored in the point cloud map, thereby greatly saving the storage space occupied by the point cloud map.
  • the embodiment of the present application further provides a positioning method, and the method may include the following steps S601 to S610:
  • Step S601 Determine feature points in the image to be processed collected by the image collection device
  • Step S602 acquiring the camera coordinates of the feature point
  • Step S603 Select an initial target sampling point that matches the characteristic point from a plurality of sampling points in the pre-built point cloud map.
  • When the electronic device implements step S603, it can first set an initial rotation relationship and an initial translation relationship of the camera coordinate system relative to the world coordinate system; then, according to the camera coordinates of the feature point, the initial rotation relationship, and the initial translation relationship, the feature point is matched with the multiple sampling points, and the initial target sampling point that matches the feature point is selected from the multiple sampling points.
  • the initial target sampling point can be selected through steps S703 to S705 in the following embodiments.
  • step S603 is used to select sampling points that may match the feature point.
  • The initial target sampling point may not be a point that truly matches the feature point; therefore, the following steps S604 to S608 are required to further determine whether the initial target sampling point truly matches the feature point.
  • Step S604 According to the camera coordinates of the multiple feature points and the world coordinates of the initial target sampling points matching them, determine the second rotation relationship and the second translation relationship of the camera coordinate system relative to the world coordinate system.
  • When the electronic device implements this step, it can construct an error function based on the camera coordinates of the multiple feature points and the world coordinates of the initial target sampling points that match them, and then solve for the currently optimal second rotation relationship and second translation relationship.
  • If the camera coordinates of the feature points are denoted by p_i and the world coordinates of the matching initial target sampling points by q_i, the following formula (7) can be written:

    E(R, T) = Σ_i ‖q_i − (R·p_i + T)‖²    (7)

  • where E(R, T) is the error function, and R and T are, respectively, the second rotation relationship and the second translation relationship to be solved. The optimal R and T in formula (7) can then be solved by the least squares method.
  • Step S605 Determine the first world coordinate of the feature point according to the second rotation relationship, the second translation relationship and the camera coordinates of the feature point.
  • Through the second rotation relationship, the second translation relationship, and the camera coordinates of the feature point, the camera coordinates can be converted into the first world coordinates of the feature point. If the selected initial target sampling point and the feature point represent the same location point, or two nearby location points, in actual physical space, then the first world coordinates determined in step S605 should be the same as, or close to, the world coordinates of the initial target sampling point. Conversely, if the two represent neither the same location point nor two nearby location points, then the first world coordinates determined in step S605 will differ substantially from the world coordinates of the initial target sampling point.
  • Therefore, the matching error of the multiple feature points can be determined through the following step S606, and based on the matching error and a first threshold it is determined whether the initial target sampling points truly match the feature points, whereupon the target rotation relationship and the target translation relationship are determined.
  • Step S606 Determine the matching error of the multiple feature points according to the first world coordinates of the multiple feature points and the world coordinates of the initial target sampling points matching the multiple feature points.
  • When the electronic device implements step S606, it can determine the matching error of the multiple feature points through steps S708 and S709 in the following embodiment. That is, the distance between each feature point and its initial target sampling point is determined according to the first world coordinates of the feature point and the world coordinates of the initial target sampling point; then the matching error is determined from the distances between the multiple feature points and their matched initial target sampling points.
  • Step S607 if the matching error is greater than the first threshold, return to step S603, reselect the initial target sampling point, and re-determine the matching error until the re-determined matching error is less than the first threshold.
  • If the matching error is greater than the first threshold, it means that the currently selected initial target sampling points are not the sampling points that match the feature points; that is, the two represent neither the same location points nor nearby location points in physical space.
  • If the matching error is less than or equal to the first threshold, the second rotation relationship and the second translation relationship obtained in the current iteration can be determined as the target rotation relationship and the target translation relationship, respectively. In other words, the orientation (i.e., attitude) of the image acquisition device is determined according to the second rotation relationship obtained in the current iteration, and the coordinates of the image acquisition device in the point cloud map (i.e., its world coordinates) are determined according to the second translation relationship obtained in the current iteration.
  • Step S608 Determine the second rotation relationship obtained when the matching error is less than or equal to the first threshold as the target rotation relationship, and determine the second translation relationship obtained when the matching error is less than or equal to the first threshold as the target translation relationship.
  • Step S609 Determine the orientation of the image acquisition device according to the target rotation relationship.
  • Step S610 Determine the world coordinates of the image acquisition device according to the target translation relationship and the target rotation relationship.
  • the embodiment of the present application further provides a positioning method, and the method may include the following steps S701 to S713:
  • Step S701 determining feature points in the image to be processed collected by the image collection device
  • Step S702 acquiring the camera coordinates of the feature point
  • Step S703 Acquire the third rotation relationship and the third translation relationship of the camera coordinate system relative to the world coordinate system.
  • the third rotation relationship and the third translation relationship may be set to an initial value respectively.
  • Step S704 determining the second world coordinates of the feature point according to the third rotation relationship, the third translation relationship, and the camera coordinates of the feature point;
  • Step S705 Match the second world coordinates of the feature point with the world coordinates of the multiple sampling points to obtain an initial target sampling point.
  • When the electronic device implements step S705, it can determine the distance between the second world coordinates of the feature point and the world coordinates of each sampling point, and then determine the sampling point closest to the feature point as the initial target sampling point; alternatively, a sampling point whose distance is less than or equal to a distance threshold is determined as the initial target sampling point. In some embodiments, the Euclidean distance between the second world coordinates of the feature point and the world coordinates of the sampling point can be determined and taken as the distance between the feature point and the sampling point.
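Step S705 can be sketched as a nearest-neighbour lookup in world coordinates (illustrative 2D code; the function name and the optional distance threshold are assumptions):

```python
def initial_target(second_world_xy, sampling_points, max_dist=float("inf")):
    """Return the sampling point whose world coordinates are closest
    (Euclidean) to the feature point's second world coordinates, or None
    if no sampling point lies within max_dist."""
    best, best_d2 = None, max_dist ** 2
    for q in sampling_points:
        d2 = (q[0] - second_world_xy[0]) ** 2 + (q[1] - second_world_xy[1]) ** 2
        if d2 <= best_d2:
            best, best_d2 = q, d2
    return best
```

Squared distances are compared so the square root is never needed, a common micro-optimisation when only the ordering of distances matters.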
• Step S706 According to the camera coordinates of the multiple feature points and the world coordinates of the initial target sampling points matching the multiple feature points, determine the second rotation relationship and the second translation relationship of the camera coordinate system relative to the world coordinate system;
  • Step S707 Determine the first world coordinate of the feature point according to the second rotation relationship, the second translation relationship and the camera coordinates of the feature point;
  • Step S708 Determine the distance between the feature point and the initial target sampling point according to the first world coordinates of the feature point and the world coordinates of the initial target sampling point.
  • the electronic device may also determine the Euclidean distance between the first world coordinates and the world coordinates of the initial target sampling point, and use the Euclidean distance as the distance between the feature point and the initial target sampling point.
  • Step S709 Determine the matching error according to the distance between the multiple feature points and the matched initial target sampling point.
  • the average distance between the multiple feature points and the matched initial target sampling point may be determined as the matching error.
• the first world coordinates of the feature points are represented by p′_i, and the world coordinates of the matched initial target sampling points are represented by q_i;
• the matching error d can be obtained by the following formula (8):
d = (1/n) ∑_{i=1}^{n} ‖p′_i − q_i‖  (8)
• where n is the number of feature points.
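The matching error of step S709 (the average distance of formula (8)) is straightforward to compute; a minimal sketch with an assumed function name:

```python
import numpy as np

def matching_error(p_world, q_world):
    """Average Euclidean distance between the first world coordinates p'_i of
    the feature points and the world coordinates q_i of their matched initial
    target sampling points (the average described for step S709)."""
    return np.linalg.norm(p_world - q_world, axis=1).mean()
```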
• Step S710 If the matching error is greater than the first threshold, use the second translation relationship as the third translation relationship, use the second rotation relationship as the third rotation relationship, return to step S704, reselect the initial target sampling points, and re-determine the matching error until the re-determined matching error is less than the first threshold.
  • the matching error is greater than the first threshold, it means that the acquired third rotation relationship and third translation relationship do not conform to reality.
  • the obtained initial target sampling point is not a point that really matches the feature point.
• the second translation relationship can be used as the third translation relationship, and the second rotation relationship can be used as the third rotation relationship, and steps S704 to S709 are repeated until the matching error is less than the first threshold.
• Step S711 Determine the second rotation relationship determined when the matching error is less than or equal to the first threshold as the target rotation relationship, and determine the second translation relationship determined when the matching error is less than or equal to the first threshold as the target translation relationship;
  • Step S712 Determine the orientation of the image acquisition device according to the target rotation relationship
  • Step S713 Determine the world coordinates of the image acquisition device according to the target translation relationship and the target rotation relationship.
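Step S706 estimates a rotation and translation from matched point pairs; the classical closed-form solution is an SVD of the cross-covariance of the centred point sets (Kabsch/Umeyama). A minimal sketch under that assumption (function name hypothetical, not the embodiment's solver):

```python
import numpy as np

def rigid_transform(camera_pts, world_pts):
    """Closed-form least-squares estimate of R, T with world ~= R @ camera + T,
    via SVD of the cross-covariance of the centred point sets."""
    cc = camera_pts.mean(axis=0)
    wc = world_pts.mean(axis=0)
    H = (camera_pts - cc).T @ (world_pts - wc)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = wc - R @ cc
    return R, T
```

With at least three non-collinear correspondences this recovers the exact transform for noise-free data and the least-squares optimum otherwise.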
  • the point cloud map construction process may include the following steps S801 to S805:
  • Step S801 Obtain multiple sample images.
  • the image acquisition device may be used to collect sample images at a preset frame rate.
• the sample image can be a two-dimensional sample image; for example, a monocular camera is used to collect red, green, and blue (RGB) images at a fixed frame rate; alternatively, the multiple sample images can be obtained from a pre-collected sample image library.
• In step S802, the multiple sample images are processed to obtain a first sampling point set, and the first sampling point set includes at least the world coordinates of the sampling points in the multiple sample images.
  • the image features and camera coordinates of the sampling points can be obtained, but the world coordinates of the sampling points in the sample image are unknown.
  • multiple sample images may be processed by a three-dimensional reconstruction method, so as to obtain the world coordinates of the sampling points.
  • the structure from motion (SFM) method is used to initialize multiple sample images to obtain the world coordinates of multiple sample points.
  • the first sampling point set includes the world coordinates of a plurality of sampling points, and does not include the image characteristics of the sampling points. In other embodiments, the first sampling point set includes not only the world coordinates of multiple sampling points, but also the image characteristics of the sampling points.
• When the electronic device implements step S802, it can determine the first sampling point set through steps S902 to S906 in the following embodiments.
  • Step S803 Obtain other sample images except for the multiple sample images.
  • the electronic device can use the image acquisition device to acquire sample images in real time at a preset frame rate, and execute the following steps S804 and S805; or it can also acquire the other sample images from a pre-established sample image library.
  • Step S804 Determine the world coordinates of the sampling points in the other sample images according to the first sampling point set and the acquired attribute information of the sampling points in the other sample images to obtain a second sampling point set.
  • step S804 is used to determine the world coordinates of the sampling points in the other sample images. In this way, the time cost of map construction can be greatly reduced.
• In step S804, in the case where the image features of the sampling points and the world coordinates of the sampling points are included in the first sampling point set, steps S201 to S204, steps S301 to S305, or steps S401 to S407 provided in the above-mentioned embodiments can be adopted to determine the world coordinates of the sampling points in the other sample images; the obtained second sampling point set includes the world coordinates and image features of the sampling points in the other sample images.
• Alternatively, the electronic device can perform steps similar to steps S501 to S505, steps S601 to S610, or steps S701 to S713 provided in the foregoing embodiments to determine the world coordinates of the sampling points in the other sample images; the obtained second sampling point set includes the world coordinates of the sampling points in the other sample images, but does not include image features.
  • Step S805 Construct a point cloud map according to the first sampling point set and the second sampling point set.
  • the first sampling point set and the second sampling point set may be combined to obtain a point cloud map.
  • the point cloud map is actually a data set, and the distance between sampling points in the data set is greater than the first threshold.
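The sparsity property just described (pairwise distances in the data set exceeding a threshold) can be enforced with a simple greedy filter; a hypothetical sketch, not the embodiment's actual construction:

```python
import numpy as np

def sparsify(points, min_dist):
    """Greedy filter: keep a point only if it is farther than min_dist from
    every point already kept, so all pairwise distances exceed min_dist."""
    kept = []
    for p in points:
        if all(np.linalg.norm(p - k) > min_dist for k in kept):
            kept.append(p)
    return np.array(kept)
```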
• In the initial stage of map construction, after the electronic device obtains, from multiple sample images, a first sampling point set including at least the world coordinates of a plurality of sampling points, it determines the world coordinates of the sampling points in other sample images according to the first sampling point set and the acquired attribute information of the sampling points in the other sample images, so as to obtain a second sampling point set; in this way, the world coordinates of the sampling points in the other sample images can be obtained quickly, thereby reducing the time cost of map construction.
  • the embodiment of the present application further provides a method for constructing a point cloud map, and the method may include the following steps S901 to S909:
  • Step S901 acquiring multiple sample images
  • Step S902 Obtain the image features and camera coordinates of the sampling points in the sample image.
  • the image acquisition device may be used to collect sample images at a preset frame rate, and process the collected sample images in real time, and extract the image characteristics and the image characteristics of the sampling points in the sample images. Camera coordinates.
  • Step S903 According to the image characteristics of the sampling points in the multiple sample images, select the first target image and the second target image that meet the second condition from the multiple sample images.
• the selected first target image and second target image are generally two sample images with relatively large parallax; in this way, the accuracy of determining the world coordinates of the sampling points in the first target image or the second target image can be improved, which is conducive to subsequently obtaining higher positioning accuracy.
  • the electronic device determines the first target image and the second target image through steps S113 to S116 in the following embodiment.
  • Step S904 Determine a fourth rotation relationship and a fourth translation relationship between the first target image and the second target image.
  • the four-point method in the Random Sample Consensus (RANSAC) algorithm may be used to process the first target image and the second target image, and calculate the homography matrix.
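The four-point step can be illustrated with the underlying direct linear transform (DLT) that RANSAC runs on each random 4-point sample before scoring inliers. A NumPy sketch under these assumptions, not the embodiment's code:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: estimate the 3x3 homography H (dst ~ H @ src)
    from >= 4 point correspondences; RANSAC would run this on random
    4-point samples and keep the H with the most inliers."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)     # null vector = flattened homography
    return H / H[2, 2]
```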
  • Step S905 Determine the world coordinates of the sampling points in the first target image according to the fourth rotation relationship, the fourth translation relationship, and the camera coordinates of the sampling points in the first target image.
  • the world coordinates of the sampling points in the first target image may be obtained by triangulation calculation.
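The triangulation of step S905 can be sketched as linear (DLT) triangulation of one point from two views. The projection matrices and function name below are illustrative assumptions:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its normalized image
    coordinates x1, x2 in two views with 3x4 projection matrices P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                    # homogeneous solution of A X = 0
    return X[:3] / X[3]
```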
  • Step S906 Determine a first set of sampling points according to the world coordinates of the sampling points in each of the first target sample images.
• the electronic device may determine the first sampling point set based on the world coordinates of the sampling points in each first target sample image or each second target sample image.
  • the first set of sampling points includes the world coordinates of the sampling points, but does not include the image features of the sampling points. In other embodiments, the first set of sampling points includes the world coordinates of the sampling points and the image features of the sampling points.
  • Step S907 acquiring other sample images except for the multiple sample images
  • Step S908 Determine the world coordinates of the sampling points in the other sample images according to the first sampling point set and the acquired attribute information of the sampling points in the other sample images to obtain a second sampling point set;
  • Step S909 Construct a point cloud map according to the first sampling point set and the second sampling point set.
  • the embodiment of the present application further provides a method for constructing a point cloud map, and the method may include the following steps S111 to S122:
  • Step S111 acquiring multiple sample images
  • Step S112 acquiring image features and camera coordinates of sampling points in the sample image
  • Step S113 Perform pairwise matching on the multiple sample images according to the image characteristics of the sampling points in the multiple sample images to obtain a first matching pair set of each pair of sample images.
  • each sample image is matched with other sample images.
• For example, the electronic device can match sample image 1 with sample image 2, match sample image 1 with sample image 3, match sample image 1 with sample image 4, and so on.
  • the obtained first matching pair set includes the matching relationship between the sampling points in the two images, that is, it includes multiple sampling point matching pairs.
• Step S114 The matching pairs of sampling points that do not meet the third condition in the first matching pair set are eliminated to obtain a second matching pair set.
• The elimination method can use the eight-point method in RANSAC to calculate the fundamental matrix, and the matching pairs that do not satisfy the fundamental matrix are eliminated; in this way, some sampling point matching pairs with poor robustness can be eliminated, thereby improving the robustness of the algorithm.
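The eight-point estimate used for elimination can be sketched as follows; RANSAC would run the estimator on random 8-point samples and eliminate pairs with large epipolar residuals. A NumPy illustration under those assumptions, not the embodiment's code:

```python
import numpy as np

def fundamental_8pt(x1, x2):
    """Least-squares eight-point estimate of F with x2^T F x1 = 0, followed by
    the rank-2 projection; RANSAC would run this on random 8-point samples."""
    A = np.array([[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)          # enforce rank 2
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt

def epipolar_residuals(F, x1, x2):
    """|x2^T F x1| per pair; pairs with large residuals are the ones eliminated."""
    h1 = np.hstack([x1, np.ones((len(x1), 1))])
    h2 = np.hstack([x2, np.ones((len(x2), 1))])
    return np.abs(np.einsum('ij,jk,ik->i', h2, F, h1))
```

A production implementation would also normalize the coordinates (Hartley normalization) before building A for numerical stability.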
  • Step S115 selecting a target matching pair set whose number of matching pairs meets the second condition from each of the second matching pair sets.
  • the second condition may be set as the number of matching pairs is greater than the first value and less than the second value.
  • Step S116 Determine the two sample images corresponding to the target matching pair set as the first target image and the second target image;
  • Step S117 determining a fourth rotation relationship and a fourth translation relationship between the first target image and the second target image
  • Step S118 determining the world coordinates of the sampling points in the first target image according to the fourth rotation relationship, the fourth translation relationship, and the camera coordinates of the sampling points in the first target image;
  • Step S119 Determine the first set of sampling points according to the world coordinates of the sampling points in each of the first target sample images
  • Step S120 acquiring other sample images except for the multiple sample images
  • Step S121 determining the world coordinates of the sampling points in the other sample images according to the first sampling point set and the acquired attribute information of the sampling points in the other sample images to obtain a second sampling point set;
  • Step S122 Construct a point cloud map according to the first sampling point set and the second sampling point set.
  • an indoor positioning technology based on sparse point clouds is implemented, which can help users locate their positions in real time.
  • This solution can extract image features and construct a sparse point cloud map (that is, an example of the point cloud map) for indoor scenes.
  • the positioning process does not depend on external base station equipment, with low cost, high positioning accuracy and strong robustness.
  • the program consists of two main parts: building a map and visual positioning.
  • the construction of the map part is mainly to collect RGB image information (ie, the sample image) through a monocular camera, and extract image features to construct a sparse point cloud map, which includes at least the following steps S11 to S15:
  • Step S11 using a monocular camera to collect RGB images at a fixed frame rate
  • Step S12 real-time extraction of attribute information in the RGB image (such as image features and camera coordinates of sampling points in the image) in the acquisition process;
  • Step S13 after collecting a certain number of RGB images, initialize the relative rotation and translation of the images using the SFM method;
  • Step S14 After the initialization is completed, calculate the three-dimensional world coordinates (ie some embodiments of the world coordinates) of the sparse points (ie sampling points) of the subsequent image through the PnP (Perspective-n-Point) algorithm to obtain a sparse point cloud map;
  • step S15 the sparse point cloud map and its corresponding image features are stored, for example, the information is serialized and stored locally as an offline map.
  • the process of feature extraction is actually the process of interpreting and labeling RGB images.
• FAST corner points are extracted from the RGB images, and the number of corner points extracted is generally fixed at 150 for image tracking; ORB descriptors are then extracted for the corner points and used as feature descriptors for sparse point matching.
  • 150 is an empirical value, and the number of corner points to be extracted is not limited in this application. Too few corner points will lead to a high tracking failure rate, while too many corner points will affect the efficiency of the algorithm.
  • the SFM algorithm includes at least the following steps S131 to S139:
  • Step S131 Perform pairwise matching on a certain number of images, and establish a matching relationship between the sparse points of the image using the method of Euclidean distance judgment;
• Step S132 Outlier matching pairs are eliminated; the elimination method adopts the RANSAC eight-point method to calculate the fundamental matrix, and the matching pairs that do not satisfy the fundamental matrix are eliminated;
• Step S133 After the matching relationship is established, a tracking list is generated; the tracking list refers to the collection of the image names in which points with the same name (i.e., corresponding points) appear;
  • Step S134 removing invalid matches in the tracking list
  • Step S135 searching for the initial image pair, the purpose is to find the image pair with the largest camera baseline, and use the RANSAC algorithm four-point method to calculate the homography matrix.
• the matching points that satisfy the homography matrix are called inliers, and those that do not are called outliers; the image pair with the smallest proportion of inliers is found;
  • Step S136 searching for the relative rotation and translation of the initialized image pair, the method is to calculate the essential matrix by the RANSAC eight-point method, and obtain the relative rotation and translation between the image pairs by SVD decomposition of the essential matrix;
  • Step S137 Obtain the three-dimensional world coordinates of the sparse points in the initialization image pair through triangulation calculation
  • Step S138 Repeating steps S136 and S137 on other images, the relative rotation and translation of all images, and the three-dimensional world coordinates of sparse points can be obtained;
• Step S139 The rotations and translations obtained between the images and the three-dimensional world coordinates of the sparse points are optimized through bundle adjustment; this is a non-linear optimization process whose purpose is to reduce the error of the SFM results.
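Step S136's SVD decomposition of the essential matrix into relative rotation and translation can be sketched as the textbook four-candidate decomposition; choosing the physically valid candidate (the cheirality check, via step S137's triangulation) is omitted here, and the function name is an assumption:

```python
import numpy as np

def decompose_essential(E):
    """SVD decomposition of the essential matrix into the four candidate
    (R, t) pairs; t is recovered only up to scale, and the valid candidate
    is the one whose triangulated points lie in front of both cameras."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:     # keep proper rotations (E is defined up to sign)
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                  # left null vector of E, up to sign and scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```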
  • an offline map based on the sparse point cloud can be constructed.
• the map stores the sparse point cloud and its image attribute information (including three-dimensional world coordinates and descriptor information) locally in a binary format; during the visual positioning process, the map is loaded and used.
  • the visual positioning part mainly collects the current RGB image through the monocular camera, loads the constructed offline map, and uses the descriptor matching to find the matching pair between the current feature point and the sparse point of the map. Finally, the PnP algorithm is used to solve the current camera's precise pose in the map to achieve the purpose of positioning.
• The visual positioning process may include the following steps S21 to S25:
  • Step S21 loading a pre-built offline map (ie a sparse point cloud map);
  • Step S22 using a monocular camera to collect RGB images
  • Step S23 extract the attribute information in the current frame image in real time during the acquisition process
  • Step S24 Find a matching pair between the current feature point and the sparse point on the map through descriptor matching
  • step S25 after finding enough matching pairs, the accurate pose of the current camera in the map coordinate system is solved through the PnP algorithm.
• For the real-time extraction of the attribute information in the current frame image in step S23, reference may be made to step S12 above.
  • the algorithm includes at least the following steps S241 to S244:
• Step S242 Take the N-th feature point F_{1N} of the current image and the M-th (initially the 0-th) feature point F_{2M} in the sparse point cloud, and calculate the Euclidean distance d_{NM} between the feature point descriptors;
  • step S244 the matching pairs between the feature points of the current image and the sparse points of the map are sorted out as the algorithm output, and the algorithm ends.
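The matching loop above can be sketched as brute-force nearest-neighbour descriptor matching with a distance threshold. ORB descriptors are binary, so Hamming distance is shown here, although the embodiment describes a Euclidean-distance judgment; the function name and threshold value are assumptions:

```python
import numpy as np

def match_descriptors(desc_cur, desc_map, max_dist=64):
    """Brute-force nearest-neighbour matching of binary descriptors; a match
    (n, m) is kept only when the smallest distance is within max_dist."""
    matches = []
    for n, d in enumerate(desc_cur):
        # Hamming distance = popcount of XOR over the descriptor bytes
        dists = np.unpackbits(np.bitwise_xor(desc_map, d), axis=1).sum(axis=1)
        m = int(dists.argmin())
        if dists[m] <= max_dist:
            matches.append((n, m))
    return matches
```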
• For solving the precise pose of the current camera in the map coordinate system through the PnP algorithm in step S25, in some embodiments, the process is as shown in FIG. 5:
• It is judged whether a matching pair sequence is formed in step S24 (in this example, the matching pair sequence is {F_0, F_1, F_2}); if the number of elements in the matching pair sequence is greater than TH_2, step S25 is performed; otherwise, the algorithm ends.
  • the SolvePnP function in OpenCV is called to find the pose of the current camera in the map coordinate system.
  • the principle of the PnP algorithm is as follows:
• the input of the PnP algorithm is three-dimensional (3D) points (that is, the three-dimensional world coordinates of the sparse points in the map coordinate system) and the 2D points obtained by projecting these 3D points into the current image (that is, the camera coordinates of the feature points in the current frame);
  • the output of the algorithm is the pose transformation of the current frame relative to the origin of the map coordinate system (that is, the pose of the current frame in the map coordinate system).
• the PnP algorithm does not directly obtain the camera pose matrix from the matching pair sequence; it first obtains the 3D coordinates of the corresponding 2D points in the current coordinate system, and then solves the camera pose according to the 3D coordinates in the map coordinate system and the 3D coordinates in the current coordinate system.
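As a rough illustration of solving pose from 3D-2D correspondences, the following is a direct-linear-transform (DLT) sketch with known intrinsics; OpenCV's SolvePnP uses more refined solvers (P3P and iterative refinement), so this is only an assumption-laden outline, not the library's internals:

```python
import numpy as np

def pnp_dlt(world_pts, image_pts, K):
    """DLT pose sketch: solve the 3x4 projection P = K [R|t] from >= 6
    non-coplanar 3D-2D correspondences, then orthonormalize R."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.array(A))
    P = Vt[-1].reshape(3, 4)
    Rt = np.linalg.inv(K) @ P            # proportional to [R | t]
    U, S, Vt = np.linalg.svd(Rt[:, :3])
    R = U @ Vt                           # nearest orthogonal matrix
    s = S.mean()                         # recover the unknown scale
    if np.linalg.det(R) < 0:             # fix the overall sign ambiguity
        R, s = -R, -s
    t = Rt[:, 3] / s
    return R, t
```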
  • visual features can be used to achieve the positioning purpose in a predefined sparse point cloud map, and to obtain its own position and posture in the map coordinate system (that is, the world coordinate system).
  • the positioning result has high accuracy, does not need to rely on external base station equipment, has low cost and strong robustness.
  • the camera movement is used to obtain the three-dimensional information of the feature point, and the position and posture can be provided in the positioning result at the same time, which improves the positioning accuracy compared with other indoor positioning methods;
  • the stored map is in the form of a sparse point cloud, which is equivalent to sparse sampling of the image, and the map size is compressed to a certain degree compared with the traditional method;
• the three-dimensional information of the image features is fully exploited and combined with a high-precision, highly robust image matching algorithm for indoor environment positioning.
• During map construction, the three-dimensional world coordinates and descriptor information of the feature points in the visual images are collected and stored as an offline map in the form of a sparse point cloud.
• During positioning, descriptor matching is used to find the matching pairs of the current feature points in the sparse point cloud, and the current position and posture are then accurately calculated through the PnP algorithm.
  • the combination of the two forms a low-cost, high-precision, and robust indoor positioning method.
• the sparse point cloud stores the three-dimensional world coordinates and descriptor information of the image feature points.
  • the descriptor information is used for matching with the feature points in the current image during the visual positioning process.
  • the image feature descriptor can be an ORB descriptor, and the descriptor information of each image feature point occupies 256 bytes of space.
  • each sparse point must be allocated 256 bytes as the storage space of the feature descriptor, which occupies a large proportion of the size of the final offline map.
  • the embodiments of the present application provide the following extension solutions.
  • the electronic device can serialize and store the three-dimensional world coordinates of the sparse point cloud.
  • the embodiment of the present application provides a positioning method, which may include the following steps S31 to S35:
  • Step S31 loading a pre-built offline map
  • Step S32 using a monocular camera to collect RGB images
  • Step S33 real-time extraction of attribute information (ie, camera coordinates of feature points) in the current frame image during the acquisition process;
  • Step S34 Calculate the three-dimensional camera coordinates of the feature points in the current image to form a local point cloud
  • step S35 the local point cloud and the sparse point cloud of the map are matched through the Iterative Closest Point (ICP) algorithm to solve the accurate pose of the current camera in the map coordinate system.
  • the ICP algorithm is essentially an optimal registration method based on the least squares method.
  • the algorithm repeatedly selects the corresponding point pairs and calculates the optimal rigid body transformation until the convergence accuracy requirements of the correct registration are met.
• The basic principle of the ICP algorithm is as follows: for the target point cloud P and the source point cloud Q to be matched, the nearest point pairs (p_i, q_i) are found according to certain constraints, and then the optimal rotation R and translation T are calculated so that the error function is minimized.
• the formula of the error function E(R, T) is:
E(R, T) = (1/n) ∑_{i=1}^{n} ‖q_i − (R·p_i + T)‖²
  • n is the number of adjacent point pairs
  • p i is a point in the target point cloud P
  • q i is the closest point in the source point cloud Q corresponding to p i
  • R is the rotation matrix (also known as the rotation relationship)
  • T is Translation vector (also called translation relationship).
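Putting the pieces together, a minimal ICP loop alternating nearest-point association with the closed-form SVD update for E(R, T) might look like this. It is illustrative only; real implementations add k-d trees, outlier rejection, and convergence tests:

```python
import numpy as np

def icp(P, Q, iters=20):
    """Minimal ICP loop minimizing E(R, T) = (1/n) * sum ||q_i - (R p_i + T)||^2:
    alternate nearest-point association with the closed-form SVD update."""
    R, T = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = P @ R.T + T
        # nearest point in the source cloud Q for every transformed target point
        idx = np.linalg.norm(moved[:, None] - Q[None, :], axis=2).argmin(axis=1)
        q = Q[idx]
        pc, qc = moved.mean(axis=0), q.mean(axis=0)
        H = (moved - pc).T @ (q - qc)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:        # avoid a reflection
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dT = qc - dR @ pc
        R, T = dR @ R, dR @ T + dT       # compose the incremental update
    return R, T
```

ICP converges to the correct pose only when the initial alignment is close enough for the nearest-point association to be mostly correct, which is why the embodiment's iterative reselection of initial target sampling points matters.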
• Through steps S31 to S35, visual features can be used to achieve positioning in the predefined sparse point cloud map and to obtain the device's own position and posture in the map coordinate system; moreover, the predefined sparse point cloud map does not need to store additional feature point descriptor information, which reduces the size of the offline map.
  • the embodiment of the present application provides a positioning device, which includes each module and each unit included in each module, which can be implemented by a processor in a terminal; of course, it can also be implemented by a specific logic circuit.
  • the processor can be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field programmable gate array (FPGA).
  • FIG. 6A is a schematic structural diagram of a positioning device according to an embodiment of the application.
  • the device 600 includes a first determining module 601, an attribute information acquiring module 602, and a positioning module 603.
• the first determining module 601 is configured to determine the feature points in the image to be processed collected by the image acquisition device;
• the attribute information acquisition module 602 is configured to acquire the attribute information of the feature points;
• the positioning module 603 is configured to match the attribute information of the feature points with the attribute information of multiple sampling points in the pre-built point cloud map to obtain the positioning result of the image acquisition device.
  • the attribute information of the feature point includes at least one of the following: the image feature of the feature point, and the camera coordinates of the feature point;
• the attribute information of the sampling point includes at least one of the following: the image feature of the sampling point, and the world coordinates of the sampling point.
• the positioning module 603 includes: a matching unit configured to match the image features of the feature points with the image features of the multiple sampling points to obtain first target sampling points; and a positioning unit configured to determine the positioning result of the image acquisition device according to the camera coordinates of the feature points and the world coordinates of the first target sampling points.
• the matching unit is configured to: determine the similarity between the sampling point and the feature point according to the image feature of the sampling point and the image feature of the feature point; and determine a sampling point whose similarity with the feature point meets the first condition as the first target sampling point.
• the positioning unit is configured to: determine the camera coordinates of the plurality of first target sampling points according to the world coordinates of the plurality of first target sampling points and the camera coordinates of the feature points matching the plurality of first target sampling points; determine the first rotation relationship and the first translation relationship of the camera coordinate system relative to the world coordinate system according to the world coordinates and the camera coordinates of the plurality of first target sampling points; determine the world coordinates of the image acquisition device according to the first translation relationship and the first rotation relationship; and determine the orientation of the image acquisition device according to the first rotation relationship.
• the matching unit is configured to match the camera coordinates of the multiple feature points with the world coordinates of the multiple sampling points according to an iterative strategy to obtain the target rotation relationship and the target translation relationship of the camera coordinate system relative to the world coordinate system; the positioning unit is further configured to: determine the orientation of the image acquisition device according to the target rotation relationship; and determine the world coordinates of the image acquisition device according to the target translation relationship and the target rotation relationship.
• the matching unit includes: a selection subunit configured to select, from the plurality of sampling points, initial target sampling points that match the feature points; a transformation relationship determination subunit configured to determine the second rotation relationship and the second translation relationship of the camera coordinate system relative to the world coordinate system according to the camera coordinates of the multiple feature points and the world coordinates of the initial target sampling points that match the multiple feature points; a world coordinate determination subunit configured to determine the first world coordinates of the feature points according to the second rotation relationship, the second translation relationship, and the camera coordinates of the feature points; a matching error determination subunit configured to determine the matching error of the multiple feature points according to the first world coordinates of the multiple feature points and the world coordinates of the initial target sampling points that match the multiple feature points; and an iterative subunit configured to, if the matching error is greater than the first threshold, reselect the initial target sampling points and re-determine the matching error until the re-determined matching error is less than the first threshold.
• the selection subunit is configured to: obtain the third rotation relationship and the third translation relationship of the camera coordinate system relative to the world coordinate system; determine the second world coordinates of the feature point according to the third rotation relationship, the third translation relationship, and the camera coordinates of the feature point; and match the second world coordinates of the feature point with the world coordinates of the multiple sampling points to obtain the initial target sampling point.
  • the matching error determination subunit is configured to: determine the distance between the feature point and the initial target sampling point according to the first world coordinates of the feature point and the world coordinates of the initial target sampling point; and determine the matching error according to the distances between the multiple feature points and their matching initial target sampling points.
  • the iterative subunit is configured to, if the matching error is greater than the first threshold, use the second translation relationship as the third translation relationship and the second rotation relationship as the third rotation relationship, and reselect the initial target sampling points.
  • the device 600 further includes: an image acquisition module 604, configured to acquire multiple sample images; an image processing module 605, configured to process the multiple sample images to obtain a first sampling point set, where the first sampling point set includes at least the world coordinates of the sampling points in the multiple sample images; the image acquisition module 604 is further configured to acquire sample images other than the multiple sample images; a second determination module 606, configured to determine the world coordinates of the sampling points in the other sample images according to the first sampling point set and the acquired attribute information of the sampling points in the other sample images, to obtain a second sampling point set; and a map construction module 607, configured to construct the point cloud map according to the first sampling point set and the second sampling point set.
  • the image processing module 605 includes: an attribute information acquisition unit, configured to acquire the image features and camera coordinates of the sampling points in the sample images; a target image determination unit, configured to select, according to the image features of the sampling points, a first target image and a second target image that satisfy a second condition from the multiple sample images; a transformation relationship determination unit, configured to determine a fourth rotation relationship and a fourth translation relationship between the first target image and the second target image; a world coordinate determination unit, configured to determine the world coordinates of the sampling points in the first target image according to the fourth rotation relationship, the fourth translation relationship, and the camera coordinates of the sampling points in the first target image; and a set determination unit, configured to determine the first sampling point set according to the world coordinates of the sampling points in each first target image.
  • the target image determination unit is configured to: perform pairwise matching on the multiple sample images according to the image features of the sampling points in the multiple sample images, to obtain the first pair of sample images.
  • If the above positioning method is implemented in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
  • In essence, the technical solutions of the embodiments of the present application, or the parts that contribute to the related technologies, can be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions to enable an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a server, a robot, a drone, etc.) to execute all or part of the method described in each embodiment of the present application.
  • The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a magnetic disk, an optical disc, and other media that can store program code.
  • FIG. 7 is a schematic diagram of a hardware entity of the electronic device according to an embodiment of the application.
  • the hardware entity of the electronic device 700 includes: a memory 701 and a processor 702.
  • the memory 701 stores a computer program that can run on the processor 702, and the processor 702 implements the steps of the positioning method provided in the foregoing embodiments when executing the program.
  • the memory 701 is configured to store instructions and applications executable by the processor 702, and can also cache data to be processed or already processed by the processor 702 and each module in the electronic device 700 (for example, image data, audio data, voice communication data, and video communication data); it can be implemented by a flash memory (FLASH) or a random access memory (Random Access Memory, RAM).
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the positioning method provided in the foregoing embodiment are implemented.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
  • the functional units in the embodiments of the present application can all be integrated into one processing unit, each unit can be used individually as a unit, or two or more units can be integrated into one unit;
  • the integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the foregoing program can be stored in a computer-readable storage medium.
  • when the program is executed, the steps of the foregoing method embodiments are performed; and the foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (Read-Only Memory, ROM), a magnetic disk, or an optical disc.
  • If the above-mentioned integrated unit of the present application is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • In essence, the technical solutions of the embodiments of the present application, or the parts that contribute to the related technologies, can be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes a number of instructions to enable an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a server, a robot, a drone, etc.) to execute all or part of the method described in each embodiment of the present application.
  • The aforementioned storage media include: removable storage devices, ROMs, magnetic disks, optical discs, and other media that can store program code.
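The iterative matching loop summarized in the embodiments above (select initial target sampling points, estimate a second rotation and translation relationship, recompute the matching error, reselect until the error no longer exceeds the first threshold) resembles an iterative-closest-point scheme. The following is a hedged NumPy sketch, not the patent's implementation: the Kabsch algorithm stands in for the pose estimation step, and the function names, the world = R · cam + t convention, and the default threshold are illustrative assumptions.

```python
import numpy as np

def kabsch(cam_pts, world_pts):
    """Least-squares rotation R and translation t with world ≈ R @ cam + t
    (playing the role of the 'second rotation/translation relationship')."""
    ca, wa = cam_pts.mean(axis=0), world_pts.mean(axis=0)
    H = (cam_pts - ca).T @ (world_pts - wa)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, wa - R @ ca

def iterative_match(cam_pts, map_pts, threshold=1e-3, max_iter=50):
    """Alternate nearest-neighbour association and pose re-estimation until the
    mean matching error drops to the threshold (the 'first threshold')."""
    R, t = np.eye(3), np.zeros(3)          # initial (third) rotation/translation guess
    err = np.inf
    for _ in range(max_iter):
        world_est = cam_pts @ R.T + t      # 'second world coordinates' of the features
        # associate each feature with its nearest map point (initial target sampling points)
        idx = np.argmin(np.linalg.norm(
            world_est[:, None] - map_pts[None], axis=2), axis=1)
        R, t = kabsch(cam_pts, map_pts[idx])
        err = np.linalg.norm(cam_pts @ R.T + t - map_pts[idx], axis=1).mean()
        if err <= threshold:
            break
    return R, t, err
```

With exact correspondences the Kabsch step alone recovers the transform; the outer loop only matters when the association itself must be refined, as in the iterative subunit described above.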

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed in embodiments of the present application are a positioning method and apparatus, a device, and a storage medium. The method comprises: determining a feature point in an image to be processed collected by an image collection device; obtaining attribute information of the feature point; and matching the attribute information of the feature point with attribute information of a plurality of sampling points in a pre-constructed point cloud map to obtain a positioning result of the image collection device, wherein world coordinates in the attribute information of the sampling points are the world coordinates of sampling points in a sample image.

Description

Positioning method and apparatus, device, and storage medium
Cross-reference to related applications
This application is filed on the basis of, and claims priority to, the Chinese patent application No. 201910921484.6 filed on September 27, 2019, the entire content of which is incorporated herein by reference.
Technical field
The embodiments of the present application relate to electronic technology, and relate to, but are not limited to, positioning methods and apparatuses, devices, and storage media.
Background
Among the related technologies for positioning based on image information, positioning is currently achieved mainly by recognizing a person and a fixed object in an image captured by a camera module, so as to determine the position of the person. This solution matches the fixed object in the image against a pre-built indoor map to determine the corresponding indoor position of the fixed object, and then determines the indoor position of the person according to the position of the fixed object. The overall idea of determining the person's position is: recognize the fixed object in the image by image recognition, and determine the person's position according to the relative positional relationship between the fixed object and the person in the image and the indoor position of the fixed object.
However, this positioning method performs positioning mainly according to the relative positional relationship between the person and the fixed object in the image; as a result, when an electronic device performs positioning, the image must contain a recognizable person and a recognizable fixed object, otherwise the positioning fails. This positioning method therefore has poor robustness.
Summary
In view of this, the positioning method, apparatus, device, and storage medium provided by the embodiments of the present application do not depend on the image containing a fixed object and an object to be positioned, and therefore have better robustness. The technical solutions of the embodiments of the present application are implemented as follows:
The positioning method provided by the embodiments of the present application includes: determining feature points in an image to be processed collected by an image acquisition device; acquiring attribute information of the feature points; and matching the attribute information of the feature points with attribute information of a plurality of sampling points in a pre-built point cloud map to obtain a positioning result of the image acquisition device.
The positioning apparatus provided by the embodiments of the present application includes: a first determination module, configured to determine feature points in an image to be processed collected by an image acquisition device; an attribute information acquisition module, configured to acquire attribute information of the feature points; and a positioning module, configured to match the attribute information of the feature points with attribute information of a plurality of sampling points in a pre-built point cloud map to obtain a positioning result of the image acquisition device.
The electronic device provided by the embodiments of the present application includes a memory and a processor; the memory stores a computer program that can run on the processor, and the processor implements the steps of the above positioning method when executing the program.
The embodiments of the present application provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above positioning method are implemented.
In the positioning method provided by the embodiments of the present application, the positioning result of the image acquisition device that collects the image to be processed can be determined according to the attribute information of the feature points in the image to be processed and the attribute information of multiple sampling points in a pre-built point cloud map. In this way, when positioning an object carrying the image acquisition device, the positioning method does not depend on the image to be processed containing a fixed object and the object to be positioned, and therefore achieves better robustness.
Brief description of the drawings
The drawings here are incorporated into and constitute a part of the specification; they illustrate embodiments consistent with the present application and, together with the specification, serve to explain the technical solutions of the present application.
FIG. 1 is a schematic flowchart of the implementation of a positioning method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of determining the camera coordinates of multiple first target sampling points according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of the implementation of a positioning method according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of the implementation of a point cloud map construction method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of feature point matching pairs according to an embodiment of the present application;
FIG. 6A is a schematic structural diagram of a positioning apparatus according to an embodiment of the present application;
FIG. 6B is a schematic structural diagram of another positioning apparatus according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the present application.
Detailed description
In order to make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the specific technical solutions of the present application are described in further detail below in conjunction with the drawings of the embodiments. The following embodiments are used to illustrate the present application, but not to limit its scope.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present application. The terms used herein are only for the purpose of describing the embodiments of the present application and are not intended to limit the present application.
In the following description, "some embodiments" describes a subset of all possible embodiments; it should be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
It should be pointed out that the terms "first/second/third" in the embodiments of the present application are used to distinguish different objects and do not represent a specific ordering of the objects. Where permitted, "first/second/third" may be interchanged in a specific order or sequence, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
The embodiments of the present application provide a positioning method, which can be applied to an electronic device; the electronic device may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a server, a robot, a drone, or another device with information processing capabilities. The functions implemented by the positioning method can be realized by a processor in the electronic device calling program code; of course, the program code can be stored in a computer storage medium. It can be seen that the electronic device includes at least a processor and a storage medium.
FIG. 1 is a schematic flowchart of the implementation of the positioning method according to an embodiment of the present application. As shown in FIG. 1, the method may include the following steps S101 to S103:
Step S101: determine feature points in an image to be processed collected by an image acquisition device.
Understandably, a feature point is a pixel with certain distinctive characteristics in the image to be processed. When implementing step S101, the electronic device usually takes corner points in the image to be processed as feature points. Generally, the image to be processed is a two-dimensional image, for example a red-green-blue (RGB) image; of course, the image to be processed may also be in other formats, for example a grayscale image.
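Corner points of the kind referred to above are commonly located with a corner-response measure. As a hedged illustration only (the patent does not name a detector; the window radius and the constant k below are conventional example values), a minimal Harris corner response in pure NumPy:

```python
import numpy as np

def harris_response(img, k=0.05, r=2):
    """Harris corner response: positive at corners, negative on edges, ~0 on flat areas."""
    gy, gx = np.gradient(img.astype(float))        # image gradients (rows, cols)
    Ixx, Iyy, Ixy = gx * gx, gy * gy, gx * gy

    def box(a):
        # sum over a (2r+1) x (2r+1) window (np.roll wraps at borders; fine away from them)
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy                    # determinant of the structure tensor
    trace = Sxx + Syy
    return det - k * trace * trace

# Feature points can then be taken as local maxima of the response map.
```

On a synthetic bright square, the response is strongly positive at the square's corners, negative along its edges, and zero in flat regions, which is exactly the property that makes corners good feature points.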
In the embodiments of the present application, the image acquisition device may take many forms. For example, the image acquisition device is a monocular camera or a multi-lens camera (for example, a binocular camera). It should be noted that the electronic device may include the image acquisition device, that is, the image acquisition device is installed in the electronic device; for example, the electronic device is a smartphone with at least one camera. Of course, in some embodiments, the electronic device may not include the image acquisition device, in which case the image acquisition device may send the captured images to the electronic device.
Step S102: acquire attribute information of the feature points.
Understandably, the attribute information of a feature point is information unique to that feature point. The attribute information of a feature point includes at least one of the following: an image feature, camera coordinates. In some embodiments, the attribute information of the feature point includes both an image feature and camera coordinates. The image feature may be a feature descriptor of the feature point, or other information that can describe its image characteristics. Understandably, the camera coordinates of a feature point refer to its coordinates in the camera coordinate system; the camera coordinates may be two-dimensional or three-dimensional.
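Under a pinhole camera model, the three-dimensional camera coordinates mentioned above can be recovered from a feature point's pixel position and a depth value. A minimal sketch (the intrinsics fx, fy, cx, cy below are hypothetical example values, not from the patent):

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (along the optical axis) into
    3-D camera coordinates [x, y, z]."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def camera_to_pixel(p, fx, fy, cx, cy):
    """Project a 3-D camera-frame point back onto the image plane."""
    x, y, z = p
    return fx * x / z + cx, fy * y / z + cy
```

The two functions are inverses of each other, which is why a feature point's attribute information can equivalently carry 2-D or 3-D camera coordinates when depth is available.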
Step S103: match the attribute information of the feature points with the attribute information of multiple sampling points in a pre-built point cloud map to obtain a positioning result of the image acquisition device.
In physical space, the electronic device can obtain sampling points on the surfaces of objects through image acquisition and construct a point cloud map based on the world coordinates of these sampling points. That is, the world coordinates in the attribute information of a sampling point are the world coordinates of that sampling point in a sample image. The point cloud map construction process can be implemented through steps S801 to S805 in the following embodiments. The point cloud map may be a sparse point cloud or a dense point cloud. In a sparse point cloud, the spacing between sampling points is greater than a spacing threshold, and the attribute information of a sampling point may include its world coordinates and image feature; in a dense point cloud, the spacing between sampling points is less than the spacing threshold, and the attribute information may include the world coordinates of the sampling point but not its image feature. Of course, a dense point cloud may also include the image features of the sampling points, but this greatly increases its data volume and consumes a large amount of storage resources. For a point cloud map covering the same physical area, the number of sampling points in a sparse point cloud is far smaller than that in a dense point cloud.
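The sparse-versus-dense trade-off described above comes down to whether each map point carries a descriptor. A rough storage sketch (the class names, point counts, and the 128-float descriptor size are illustrative assumptions, not the patent's data layout):

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class SamplePoint:
    world_xyz: np.ndarray                    # world coordinates: stored in both map types
    descriptor: Optional[np.ndarray] = None  # image feature: kept in a sparse map,
                                             # usually dropped in a dense map

def payload_bytes(points):
    """Approximate per-map payload, showing why dense maps tend to omit descriptors."""
    return sum(p.world_xyz.nbytes
               + (p.descriptor.nbytes if p.descriptor is not None else 0)
               for p in points)

# a few descriptor-bearing points vs. many coordinate-only points
sparse = [SamplePoint(np.zeros(3), np.zeros(128, dtype=np.float32)) for _ in range(10)]
dense = [SamplePoint(np.zeros(3)) for _ in range(1000)]
```

Per point, the descriptor dominates the footprint (512 bytes here against 24 bytes of coordinates), which matches the text's observation that storing image features in a dense cloud greatly increases its data volume.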
Understandably, a sampling point is actually also a feature point in the sample image where it is located, and its attribute information is information unique to it. The point is simply called a sampling point in the point cloud map and a feature point in an image, so that the reader can clearly distinguish whether a point belongs to the point cloud map or to an image. The attribute information of a sampling point includes at least one of the following: an image feature, world coordinates. In some embodiments, the attribute information of the sampling point includes both an image feature and world coordinates; in other embodiments, it includes world coordinates but not an image feature. Understandably, the world coordinates of a sampling point refer to its coordinates in the world coordinate system; the world coordinates may be two-dimensional or three-dimensional.
In the positioning method provided by the embodiments of the present application, the electronic device can determine the positioning result of the image acquisition device that collected the image to be processed according to the attribute information of the feature points in that image and the attribute information of multiple sampling points in the pre-built point cloud map. In this way, when the electronic device positions the image acquisition device, the method does not depend on the image to be processed containing a fixed object and an object to be positioned, so better robustness can be obtained.
It should be noted that the corresponding positioning methods differ depending on whether the attribute information of the sampling points in the point cloud map includes image features. For the case where the attribute information of the sampling points includes image features and world coordinates, the positioning method includes the following embodiments.
The embodiments of the present application provide a positioning method, which may include the following steps S201 to S204:
Step S201: determine feature points in an image to be processed collected by an image acquisition device;
Step S202: acquire the image features of the feature points and the camera coordinates of the feature points;
Step S203: match the image features of the feature points with the image features of multiple sampling points in a pre-built point cloud map to obtain first target sampling points.
Understandably, the purpose of matching is to find, among the multiple sampling points, a target sampling point that represents the same spatial position as the feature point. In some embodiments, a sampling point in the point cloud map whose image feature is the same as or similar to that of the feature point is determined as the first target sampling point. For example, the electronic device determines the first target sampling point through steps S303 and S304 in the following embodiment.
Step S204: determine the positioning result of the image acquisition device according to the camera coordinates of the feature points and the world coordinates of the first target sampling points.
In some embodiments, the point cloud map includes the image features and world coordinates of multiple sampling points. Understandably, if the camera coordinates of multiple feature points and the world coordinates of the first target sampling point matching each feature point are known, the world coordinates and orientation of the image acquisition device (that is, its pose) can be determined through steps S404 to S407 in the following embodiment.
In the positioning method provided by the embodiments of the present application, based on the image features of the feature points and the image features of the multiple sampling points, the electronic device can more accurately determine, from the multiple sampling points, the first target sampling points that match the feature points, thereby improving the positioning accuracy.
The embodiments of the present application further provide a positioning method, which may include the following steps S301 to S305:
Step S301: determine feature points in an image to be processed collected by an image acquisition device;
Step S302: acquire the image features of the feature points and the camera coordinates of the feature points;
Step S303: determine the similarity between a sampling point and a feature point according to the image feature of the sampling point and the image feature of the feature point.
Understandably, the similarity refers to the degree of closeness between the image feature of a sampling point and the image feature of a feature point. When implementing step S303, the electronic device can determine the similarity in several ways. For example, it can compute the Euclidean distance between the image feature of the sampling point and the image feature of the feature point, and take that distance as the similarity. In some embodiments, the Hamming distance or the cosine similarity between the two image features may be determined instead and used as the similarity. The type of parameter used to characterize the similarity is not limited here; it may be the Euclidean distance, the Hamming distance, the cosine similarity, or the like.
Step S304: determine a sampling point whose similarity with the feature point satisfies a first condition as a first target sampling point.
In some embodiments, when implementing step S304, the electronic device may determine, among the multiple sampling points, a sampling point whose similarity measure with the feature point is less than or equal to a similarity threshold as the first target sampling point. For example, a sampling point whose Euclidean distance from the feature point is less than or equal to a Euclidean distance threshold is determined as the first target sampling point; alternatively, the sampling point with the smallest similarity measure with respect to the feature point among the multiple sampling points is determined as the first target sampling point. That is, the first condition is that the similarity measure is less than or equal to the similarity threshold, or that the similarity measure is the smallest.
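Steps S303 and S304 amount to a nearest-neighbour search over descriptors. A minimal sketch using Euclidean distance as the similarity measure (the function name and the optional threshold are illustrative; Hamming distance or cosine similarity could be substituted, as the text notes):

```python
import numpy as np

def first_target_sampling_point(feat_desc, map_descs, max_dist=None):
    """Return the index of the sampling point whose descriptor is closest to
    feat_desc, or None when the best distance exceeds max_dist (first condition)."""
    dists = np.linalg.norm(map_descs - feat_desc, axis=1)  # Euclidean similarity measure
    best = int(np.argmin(dists))
    if max_dist is not None and dists[best] > max_dist:
        return None   # first condition not met: no match in the map
    return best
```

Passing `max_dist` implements the "less than or equal to a threshold" form of the first condition; omitting it implements the "smallest similarity measure" form.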
Step S305: determine a positioning result of the image acquisition device according to the camera coordinates of the feature point and the world coordinates of the first target sampling point.

In some embodiments, when implementing step S305, the electronic device may determine, according to the camera coordinates of the feature points and the world coordinates of the first target sampling points matched with the feature points, at least one of the following positioning results: the world coordinates of the image acquisition device, and the orientation (that is, the attitude) of the image acquisition device. For example, the world coordinates and the orientation of the image acquisition device may be determined through steps S404 to S407 in the following embodiment.
An embodiment of the present application further provides a positioning method, which may include the following steps S401 to S407:

Step S401: determine feature points in an image to be processed collected by an image acquisition device;

Step S402: acquire the image features and the camera coordinates of the feature points;

Step S403: match the image features of the feature points with the image features of multiple sampling points in a pre-built point cloud map to obtain first target sampling points.

In some embodiments, the point cloud map includes attribute information of multiple sampling points, and the attribute information of each sampling point includes an image feature and world coordinates.

Step S404: determine the camera coordinates of multiple first target sampling points according to the world coordinates of the multiple first target sampling points and the camera coordinates of the feature points matched with the multiple first target sampling points.

In some embodiments, the multiple first target sampling points are at least three first target sampling points. That is to say, in step S404, the camera coordinates of the first target sampling points matched with the feature points can be accurately determined only from the camera coordinates of at least three feature points and the world coordinates of the first target sampling points matched with those feature points.
For example, as shown in FIG. 2, point O is the origin of the camera coordinate system, that is, the optical center of the image acquisition device, and the multiple first target sampling points are the three sampling points A, B, and C shown in FIG. 2. In the image to be processed 20, the feature point matching the sampling point A is the feature point a, the feature point matching the sampling point B is the feature point b, and the feature point matching the sampling point C is the feature point c.

According to the law of cosines, the following formula (1) can be listed:

OA² + OB² - 2·OA·OB·cos<a,b> = AB²
OA² + OC² - 2·OA·OC·cos<a,c> = AC²      (1);
OB² + OC² - 2·OB·OC·cos<b,c> = BC²

In formula (1), <a,b> refers to ∠aOb, <a,c> refers to ∠aOc, and <b,c> refers to ∠bOc. OA, OB, and OC are the distances from the origin O to the points A, B, and C, respectively.

Performing elimination on the above formulas, dividing both sides by OC², and letting

x = OA/OC, y = OB/OC,

the following formula (2) can be obtained:

x² + y² - 2·x·y·cos<a,b> = AB²/OC²
x² + 1 - 2·x·cos<a,c> = AC²/OC²      (2);
y² + 1 - 2·y·cos<b,c> = BC²/OC²

A further substitution is then performed, letting

u = AB²/OC², w = AC²/AB², v = BC²/AB²,

so that AC²/OC² = w·u and BC²/OC² = v·u, and the following formula (3) can be obtained:

x² + y² - 2·x·y·cos<a,b> = u
x² + 1 - 2·x·cos<a,c> = w·u      (3);
y² + 1 - 2·y·cos<b,c> = v·u

Substituting u from the first equation of formula (3) into the other two equations, the following formula (4) can be obtained:

(1 - w)·x² - w·y² - 2·x·cos<a,c> + 2·w·x·y·cos<a,b> + 1 = 0
(1 - v)·y² - v·x² - 2·y·cos<b,c> + 2·v·x·y·cos<a,b> + 1 = 0      (4);

In formula (4), w, v, cos<a,c>, cos<b,c>, and cos<a,b> are all known quantities, so the only unknowns are x and y. The values of x and y can therefore be obtained from the two equations in formula (4), and then the values of OA, OB, and OC can be solved according to the three equations in the following formula (5):

OC = AB / √(x² + y² - 2·x·y·cos<a,b>)
OA = x·OC      (5);
OB = y·OC

Finally, the camera coordinates of the three sampling points A, B, and C are solved. According to the vector formula (6):

A = OA·e_a, B = OB·e_b, C = OC·e_c      (6);

In formula (6), e_a is the unit vector whose direction is from point O to point a, e_b is the unit vector whose direction is from point O to point b, and e_c is the unit vector whose direction is from point O to point c.
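The geometric relationships behind formulas (1) to (5) can be checked numerically with a synthetic configuration. The following is a hedged sketch: the specific points and the NumPy-based layout are assumptions for illustration only.

```python
import numpy as np

# Synthetic configuration: camera optical center O at the origin, and
# three sampling points A, B, C with assumed camera coordinates.
A = np.array([1.0, 0.0, 4.0])
B = np.array([0.0, 1.5, 5.0])
C = np.array([-1.0, 0.5, 6.0])

OA, OB, OC = np.linalg.norm(A), np.linalg.norm(B), np.linalg.norm(C)
AB, AC, BC = np.linalg.norm(A - B), np.linalg.norm(A - C), np.linalg.norm(B - C)

# The angles <a,b>, <a,c>, <b,c> are measured between the viewing rays,
# i.e. the directions from O toward the image points a, b, c.
cos_ab = A @ B / (OA * OB)
cos_ac = A @ C / (OA * OC)
cos_bc = B @ C / (OB * OC)

# Formula (1): the law of cosines in triangles OAB, OAC, OBC.
assert np.isclose(OA**2 + OB**2 - 2 * OA * OB * cos_ab, AB**2)
assert np.isclose(OA**2 + OC**2 - 2 * OA * OC * cos_ac, AC**2)
assert np.isclose(OB**2 + OC**2 - 2 * OB * OC * cos_bc, BC**2)

# Substitutions x = OA/OC, y = OB/OC, u = AB^2/OC^2: first line of (3).
x, y, u = OA / OC, OB / OC, AB**2 / OC**2
assert np.isclose(x**2 + y**2 - 2 * x * y * cos_ab, u)
```

Once x and y have been solved from formula (4), the last assertion shows how OC is recovered as AB/√u, after which OA = x·OC and OB = y·OC.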
Step S405: determine a first rotation relationship and a first translation relationship of the camera coordinate system relative to the world coordinate system according to the world coordinates of the multiple first target sampling points and the camera coordinates of the multiple first target sampling points.

Understandably, once the world coordinates and the camera coordinates of the multiple first target sampling points are known, the first rotation relationship and the first translation relationship of the camera coordinate system relative to the world coordinate system can be determined.

Step S406: determine a positioning result of the image acquisition device according to the first translation relationship and the first rotation relationship, where the positioning result includes coordinate information and orientation information.

In the positioning method provided by this embodiment of the present application, the first rotation relationship and the first translation relationship of the camera coordinate system relative to the world coordinate system are determined according to the world coordinates and the camera coordinates of the multiple first target sampling points. In this way, the orientation of the image acquisition device can be determined according to the first translation relationship and the first rotation relationship, so that the positioning method can be applied to more application scenarios. For example, according to the current orientation of a robot, the robot is instructed to perform the next action. For another example, in a navigation application, knowing the user's orientation makes it possible to guide the user more accurately as to which direction to walk or drive in. For yet another example, in an autonomous driving application, knowing the orientation of the vehicle makes it possible to control more accurately which direction the vehicle travels in.
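For the navigation-style applications above, one common way to turn a rotation relationship into a usable orientation is to extract the heading (yaw) angle about the vertical axis. The convention below (Z-up, ZYX Euler yaw) and the function name are assumptions, not part of the embodiments.

```python
import numpy as np

def heading_from_rotation(R):
    # Extract the yaw (heading) angle, in degrees, from a 3x3 rotation
    # matrix under a Z-up convention: the angle of the rotated x-axis
    # projected onto the horizontal plane.
    return float(np.degrees(np.arctan2(R[1, 0], R[0, 0])))
```

For a device rotated 30 degrees about the vertical axis, this returns a heading of 30 degrees.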
In the case where the attribute information of the sampling points in the point cloud map includes world coordinates but does not include image features, the positioning method includes the following embodiments.

An embodiment of the present application provides a positioning method. FIG. 3 is a schematic flowchart of the implementation of the positioning method according to this embodiment of the present application. As shown in FIG. 3, the method may include the following steps S501 to S505:
Step S501: determine feature points in an image to be processed collected by an image acquisition device;

Step S502: acquire the camera coordinates of the feature points;

Step S503: according to an iterative strategy, match the camera coordinates of multiple feature points with the world coordinates of multiple sampling points in a pre-built point cloud map to obtain a target rotation relationship and a target translation relationship of the camera coordinate system relative to the world coordinate system.
In some embodiments, the point cloud map includes the world coordinates of the sampling points but does not include the image features of the sampling points. Understandably, when a point cloud map is stored, image features generally occupy a relatively large amount of storage space. For example, an image feature is a feature descriptor; typically, the feature descriptor of each sampling point occupies 256 bytes, which requires the electronic device to allocate at least 256 bytes of storage space for each sampling point to store its feature descriptor. In some embodiments, the point cloud map does not include the image features of the multiple sampling points; in this way, the data volume of the point cloud map can be greatly reduced, thereby saving the storage space occupied by the point cloud map in the electronic device.
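A back-of-envelope calculation illustrates the scale of this saving. The one-million-point map size and the 12-byte coordinate record are assumed figures for illustration; only the 256-byte descriptor size comes from the text above.

```python
# Assumed map size and per-point record layout.
num_points = 1_000_000
descriptor_bytes = 256      # one feature descriptor per sampling point
coordinate_bytes = 3 * 4    # three float32 world coordinates

# Map storage with and without the per-point image features.
with_features = num_points * (descriptor_bytes + coordinate_bytes)
without_features = num_points * coordinate_bytes

print(with_features // 2**20, "MiB vs", without_features // 2**20, "MiB")
# → 255 MiB vs 11 MiB
```

Under these assumptions, dropping the descriptors shrinks the map by more than an order of magnitude.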
In the case where the point cloud map does not include the image features of the sampling points, that is, on the premise that the camera coordinates of multiple feature points and the world coordinates of multiple sampling points are known, an iterative strategy is used to find the target rotation relationship and the target translation relationship of the camera coordinate system relative to the world coordinate system. In this way, the image acquisition device can be positioned and a high-precision positioning result can be obtained.
As for finding the target rotation relationship and the target translation relationship: for example, through steps S603 to S608 in the following embodiment, the sampling points nearest to (that is, best matching) the multiple feature points are found iteratively, so as to obtain the target rotation relationship and the target translation relationship.

Step S504: determine the orientation of the image acquisition device according to the target rotation relationship.

Step S505: determine the world coordinates of the image acquisition device according to the target translation relationship and the target rotation relationship.

In the positioning method provided by this embodiment of the present application, there is no need to extract the image features of the feature points, let alone match the image features of the feature points with the image features of multiple sampling points in the point cloud map. Instead, through an iterative strategy, the camera coordinates of multiple feature points are matched with the world coordinates of multiple sampling points, thereby positioning the image acquisition device. In this way, the point cloud map does not need to store the image features of the multiple sampling points, which greatly saves the storage space of the point cloud map.
An embodiment of the present application further provides a positioning method, which may include the following steps S601 to S610:

Step S601: determine feature points in an image to be processed collected by an image acquisition device;

Step S602: acquire the camera coordinates of the feature points;

Step S603: select, from multiple sampling points in a pre-built point cloud map, initial target sampling points matching the feature points.

In some embodiments, when implementing step S603, the electronic device may first set an initial rotation relationship and an initial translation relationship of the camera coordinate system relative to the world coordinate system; then, according to the camera coordinates of a feature point, the initial rotation relationship, and the initial translation relationship, match the feature point against the multiple sampling points, so as to select, from the multiple sampling points, an initial target sampling point matching the feature point. In some embodiments, the initial target sampling points may be selected through steps S703 to S705 in the following embodiment.

In fact, the purpose of step S603 is to select sampling points that may match the feature points; an initial target sampling point may not be a point that truly matches the feature point. Therefore, the following steps S604 to S608 are needed to further determine whether the initial target sampling points truly match the feature points.
Step S604: determine a second rotation relationship and a second translation relationship of the camera coordinate system relative to the world coordinate system according to the camera coordinates of the multiple feature points and the world coordinates of the initial target sampling points matched with the multiple feature points.
In some embodiments, when implementing step S604, the electronic device may construct an error function according to the camera coordinates of the multiple feature points and the world coordinates of the initial target sampling points matched with the multiple feature points, and then solve the currently optimal second rotation relationship and second translation relationship by the least squares method. For example, the set of camera coordinates of n feature points is expressed as P = {p_1, p_2, ..., p_i, ..., p_n}, where p_i denotes the camera coordinates of a feature point, and the set of world coordinates of the initial target sampling points matched with the n feature points is expressed as Q = {q_1, q_2, ..., q_i, ..., q_n}, where q_i denotes the world coordinates of an initial target sampling point. Then the following formula (7) can be listed:

E(R, T) = (1/n)·Σ_{i=1}^{n} ||q_i - (R·p_i + T)||²      (7);

In formula (7), E(R, T) is the error function, and R and T are the second rotation relationship and the second translation relationship to be solved, respectively. The optimal solution of R and T in formula (7) can then be obtained by the least squares method.
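The least-squares minimization of formula (7) admits a closed-form solution based on the singular value decomposition (the Kabsch/Umeyama construction). The sketch below is one possible implementation under that assumption, not the embodiments' own code; the function name and the NumPy usage are illustrative.

```python
import numpy as np

def solve_rt(P, Q):
    """Closed-form least-squares solution of formula (7): find R, T
    minimizing (1/n) * sum ||q_i - (R p_i + T)||^2.
    P: n x 3 camera coordinates p_i; Q: n x 3 world coordinates q_i."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = q_mean - R @ p_mean
    return R, T
```

With noise-free correspondences, this recovers the rotation and translation exactly; with noisy correspondences it returns the least-squares optimum of formula (7).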
Step S605: determine the first world coordinates of the feature point according to the second rotation relationship, the second translation relationship, and the camera coordinates of the feature point.

After the optimal solution, that is, the second rotation relationship and the second translation relationship, is obtained, the camera coordinates of the feature point can be converted into the first world coordinates of the feature point. If the selected initial target sampling point and the feature point represent the same position point in the actual physical space, or two position points close to each other, the first world coordinates determined in step S605 should be the same as, or close to, the world coordinates of the initial target sampling point. Conversely, if the two represent neither the same position point nor two close position points, the first world coordinates determined in step S605 are neither the same as nor close to the world coordinates of the initial target sampling point. Based on this, the matching error of the multiple feature points can be determined through the following step S606, so that, based on the matching error and a first threshold, it can be determined whether the initial target sampling points truly match the feature points, and the target rotation relationship and the target translation relationship can then be determined.
Step S606: determine the matching error of the multiple feature points according to the first world coordinates of the multiple feature points and the world coordinates of the initial target sampling points matched with the multiple feature points.

In some embodiments, when implementing step S606, the electronic device may determine the matching error of the multiple feature points through steps S708 and S709 in the following embodiment. That is, the distance between a feature point and its initial target sampling point is determined according to the first world coordinates of the feature point and the world coordinates of the initial target sampling point; then, the matching error is determined according to the distances between the multiple feature points and the matched initial target sampling points.

Step S607: if the matching error is greater than the first threshold, return to step S603, reselect initial target sampling points, and re-determine the matching error until the re-determined matching error is less than the first threshold.

Understandably, if the matching error is greater than the first threshold, the currently selected initial target sampling points are not the sampling points matching the feature points; the two do not refer to the same position points, or close position points, in the physical space. In this case, it is necessary to return to step S603, reselect initial target sampling points, and re-execute steps S604 to S606 based on the reselected initial target sampling points to re-determine the matching error, until the re-determined matching error is less than the first threshold. At that point, it is determined that the initial target sampling points selected in the current iteration truly match the feature points, and the second rotation relationship and the second translation relationship obtained in the current iteration can be determined as the target rotation relationship and the target translation relationship, respectively.

Conversely, in some embodiments, if the matching error is less than or equal to the first threshold, the orientation (that is, the attitude) of the image acquisition device is determined according to the second rotation relationship obtained in the current iteration, and the coordinates of the image acquisition device in the point cloud map (that is, its world coordinates) are determined according to the second translation relationship obtained in the current iteration.
Step S608: determine the second rotation relationship determined when the matching error is less than or equal to the first threshold as the target rotation relationship, and determine the second translation relationship determined when the matching error is less than or equal to the first threshold as the target translation relationship.

Step S609: determine the orientation of the image acquisition device according to the target rotation relationship.

Step S610: determine the world coordinates of the image acquisition device according to the target translation relationship and the target rotation relationship.
An embodiment of the present application further provides a positioning method, which may include the following steps S701 to S713:

Step S701: determine feature points in an image to be processed collected by an image acquisition device;

Step S702: acquire the camera coordinates of the feature points;

Step S703: acquire a third rotation relationship and a third translation relationship of the camera coordinate system relative to the world coordinate system. In some embodiments, the third rotation relationship and the third translation relationship may each be set to an initial value.

Step S704: determine the second world coordinates of the feature point according to the third rotation relationship, the third translation relationship, and the camera coordinates of the feature point;

Step S705: match the second world coordinates of the feature point with the world coordinates of the multiple sampling points to obtain an initial target sampling point.

In some embodiments, when implementing step S705, the electronic device may determine the distance between the second world coordinates of the feature point and the world coordinates of each sampling point, and then determine the sampling point closest to the feature point as the initial target sampling point, or determine a sampling point whose distance is less than or equal to a distance threshold as the initial target sampling point. In some embodiments, the Euclidean distance between the second world coordinates of the feature point and the world coordinates of a sampling point may be determined and taken as the distance between the feature point and the sampling point.
Step S706: determine a second rotation relationship and a second translation relationship of the camera coordinate system relative to the world coordinate system according to the camera coordinates of the multiple feature points and the world coordinates of the initial target sampling points matched with the multiple feature points;

Step S707: determine the first world coordinates of the feature point according to the second rotation relationship, the second translation relationship, and the camera coordinates of the feature point;

Step S708: determine the distance between the feature point and the initial target sampling point according to the first world coordinates of the feature point and the world coordinates of the initial target sampling point.

In some embodiments, the electronic device may also determine the Euclidean distance between the first world coordinates and the world coordinates of the initial target sampling point, and take the Euclidean distance as the distance between the feature point and the initial target sampling point.

Step S709: determine the matching error according to the distances between the multiple feature points and the matched initial target sampling points.
In some embodiments, when implementing step S709, the electronic device may determine the average distance between the multiple feature points and the matched initial target sampling points as the matching error. For example, the set of first world coordinates of n feature points is expressed as P′ = {p′_1, p′_2, ..., p′_i, ..., p′_n}, where p′_i denotes the first world coordinates of a feature point, and the set of world coordinates of the initial target sampling points matched with the n feature points is expressed as Q = {q_1, q_2, ..., q_i, ..., q_n}, where q_i denotes the world coordinates of an initial target sampling point. Then the matching error d can be obtained by the following formula (8):

d = (1/n)·Σ_{i=1}^{n} ||p′_i - q_i||_2      (8);

In formula (8), ||p′_i - q_i||_2 represents the Euclidean distance between a feature point and the matched initial target sampling point.
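Formula (8) translates directly into code. The following is an illustrative NumPy sketch; the function name is an assumption.

```python
import numpy as np

def matching_error(P_world, Q):
    """Formula (8): mean Euclidean distance between the first world
    coordinates p'_i of the feature points (rows of P_world) and the
    world coordinates q_i of their matched initial target sampling
    points (rows of Q)."""
    return float(np.mean(np.linalg.norm(P_world - Q, axis=1)))
```

In step S710 this scalar is compared against the first threshold to decide whether another iteration is needed.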
Step S710: if the matching error is greater than the first threshold, take the second translation relationship as the third translation relationship and the second rotation relationship as the third rotation relationship, return to step S704, reselect initial target sampling points, and re-determine the matching error until the re-determined matching error is less than the first threshold.

Understandably, if the matching error is greater than the first threshold, the acquired third rotation relationship and third translation relationship do not conform to reality. In other words, the obtained initial target sampling points are not points that truly match the feature points. In this case, the second translation relationship can be taken as the third translation relationship and the second rotation relationship as the third rotation relationship, and steps S704 to S709 are re-executed until the matching error is less than the first threshold.

Step S711: determine the second rotation relationship determined when the matching error is less than or equal to the first threshold as the target rotation relationship, and determine the second translation relationship determined when the matching error is less than or equal to the first threshold as the target translation relationship;

Step S712: determine the orientation of the image acquisition device according to the target rotation relationship;

Step S713: determine the world coordinates of the image acquisition device according to the target translation relationship and the target rotation relationship.
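Steps S703 to S711 together form an iterative-closest-point style loop. The following compact sketch is written under simplifying assumptions (brute-force nearest-neighbour matching for step S705, an SVD-based closed-form solver for the second rotation and translation relationships in step S706); all names are illustrative, not the embodiments' implementation.

```python
import numpy as np

def best_fit_rt(P, Q):
    # Closed-form least-squares R, T with Q ≈ P @ R.T + T (Kabsch/SVD).
    pm, qm = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - pm).T @ (Q - qm))
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # reflection guard
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, qm - R @ pm

def icp_localize(features_cam, map_points, R, T, threshold=1e-4, max_iter=50):
    """Sketch of steps S703-S711: iteratively match feature points (camera
    coordinates) against map sampling points (world coordinates) and
    refine R, T until the matching error of formula (8) drops below the
    first threshold."""
    err = np.inf
    for _ in range(max_iter):
        world = features_cam @ R.T + T                    # step S704
        d = np.linalg.norm(world[:, None] - map_points[None], axis=2)
        matched = map_points[np.argmin(d, axis=1)]        # step S705
        R, T = best_fit_rt(features_cam, matched)         # step S706
        world = features_cam @ R.T + T                    # step S707
        err = np.mean(np.linalg.norm(world - matched, axis=1))  # S708-S709
        if err <= threshold:                              # steps S710-S711
            break
    return R, T, err
```

The returned R gives the orientation (step S712) and, together with T, the world coordinates of the image acquisition device (step S713). In practice the brute-force distance matrix would be replaced by a spatial index such as a k-d tree.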
In some embodiments, the construction process of the point cloud map, as shown in FIG. 4, may include the following steps S801 to S805:

Step S801: acquire multiple sample images.

In some embodiments, when implementing step S801, the electronic device may use the image acquisition device to collect sample images at a preset frame rate. A sample image may be a two-dimensional sample image; for example, a monocular camera is used to collect red, green, blue (RGB) images at a fixed frame rate. Alternatively, the multiple sample images may be acquired from a pre-collected sample image library.

Step S802: process the multiple sample images to obtain a first sampling point set, where the first sampling point set includes at least the world coordinates of the sampling points in the multiple sample images.

In the initial stage of point cloud map construction, the image features and camera coordinates of the sampling points can be obtained, but the world coordinates of the sampling points in the sample images are unknown. In some embodiments, the multiple sample images may be processed by a three-dimensional reconstruction method to obtain the world coordinates of the sampling points. For example, a structure from motion (SFM) method is used to perform initialization processing on the multiple sample images, so as to obtain the world coordinates of multiple sampling points. In some embodiments, the first sampling point set includes the world coordinates of the multiple sampling points but does not include the image features of the sampling points. In other embodiments, the first sampling point set includes not only the world coordinates of the multiple sampling points but also the image features of the sampling points.
In some embodiments, when implementing step S802, the electronic device may determine the first sampling point set through steps S902 to S906 in the following embodiment.

Step S803: acquire other sample images in addition to the multiple sample images.

Similarly, the electronic device may use the image acquisition device to collect sample images in real time at a preset frame rate and execute the following steps S804 and S805; alternatively, it may acquire the other sample images from a pre-established sample image library.
步骤S804,根据所述第一采样点集合和获取的所述其他样本图像中采样点的属性信息,确定所述其他样本图像中的采样点的世界坐标,得到第二采样点集合。Step S804: Determine the world coordinates of the sampling points in the other sample images according to the first sampling point set and the acquired attribute information of the sampling points in the other sample images to obtain a second sampling point set.
事实上,通过步骤S802确定多个采样点的世界坐标,其时间复杂度是比较高的。因此,在地图构建的初始阶段,获得多个采样点的世界坐标之后,通过步骤S804来确定所述其他样本图像中的采样点的世界坐标。如此,可以大大降低地图构建的时间成本。In fact, the time complexity of determining the world coordinates of multiple sampling points through step S802 is relatively high. Therefore, in the initial stage of map construction, after obtaining the world coordinates of multiple sampling points, step S804 is used to determine the world coordinates of the sampling points in the other sample images. In this way, the time cost of map construction can be greatly reduced.
In some embodiments, when implementing step S804, if the first sampling point set includes both the image features and the world coordinates of the sampling points, the electronic device may determine the world coordinates of the sampling points in the other sample images through steps similar to steps S201 to S204, steps S301 to S305, or steps S401 to S407 provided in the foregoing embodiments; the resulting second sampling point set includes the world coordinates and the image features of the sampling points in the other sample images.
If the first sampling point set includes the world coordinates of the sampling points but not their image features, the electronic device may determine the world coordinates of the sampling points in the other sample images through steps similar to steps S501 to S505, steps S601 to S610, or steps S701 to S713 provided in the foregoing embodiments; the resulting second sampling point set includes the world coordinates of the sampling points in the other sample images but not their image features.
Step S805: construct a point cloud map according to the first sampling point set and the second sampling point set.
In some embodiments, when implementing step S805, the electronic device may merge the first sampling point set and the second sampling point set to obtain the point cloud map. In other words, the point cloud map is essentially a data set in which the distance between sampling points is greater than a first threshold.
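As one illustration of merging the two sets under such a spacing constraint, the following sketch (a hypothetical helper, not part of the application) greedily keeps a sampling point only when it is farther than the first threshold from every point already kept:

```python
import math

def thin_points(points, min_gap):
    """Greedily keep a point only if it is farther than min_gap
    (the 'first threshold') from every point kept so far."""
    kept = []
    for p in points:
        if all(math.dist(p, q) > min_gap for q in kept):
            kept.append(p)
    return kept

# Merging a first and a second sampling point set (world coordinates only).
first_set = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
second_set = [(0.1, 0.0, 0.0), (3.0, 0.0, 0.0)]  # (0.1, 0, 0) is too close
map_points = thin_points(first_set + second_set, min_gap=0.5)
# map_points keeps (0, 0, 0), (1, 0, 0) and (3, 0, 0)
```

The greedy pass is only one way to enforce the spacing; the application does not specify how the threshold is applied.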
In the embodiments of the present application, in the initial stage of map construction, after obtaining through multiple sample images a first sampling point set that includes at least the world coordinates of multiple sampling points, the electronic device determines the world coordinates of the sampling points in other sample images according to the first sampling point set and the acquired attribute information of the sampling points in those other sample images, to obtain a second sampling point set. In this way, the world coordinates of the sampling points in the other sample images can be obtained quickly, thereby reducing the time cost of map construction.
An embodiment of the present application further provides a method for constructing a point cloud map. The method may include the following steps S901 to S909:
Step S901: obtain multiple sample images.
Step S902: obtain the image features and camera coordinates of the sampling points in the sample images.
In some embodiments, when implementing step S902, the electronic device may use the image acquisition device to collect sample images at a preset frame rate, process the collected sample images in real time, and extract the image features and camera coordinates of the sampling points in the sample images.
Step S903: according to the image features of the sampling points in the multiple sample images, select from the multiple sample images a first target image and a second target image that satisfy a second condition.
In some embodiments, when implementing step S903, the selected first target image and second target image are generally two sample images with a relatively large parallax; this improves the accuracy of the world coordinates determined for the sampling points in the first target image or the second target image, which in turn helps achieve higher positioning accuracy later. For example, the electronic device determines the first target image and the second target image through steps S113 to S116 in the following embodiment.
Step S904: determine a fourth rotation relationship and a fourth translation relationship between the first target image and the second target image.
In some embodiments, when implementing step S904, the electronic device may process the first target image and the second target image using the four-point method of the Random Sample Consensus (RANSAC) algorithm to compute a homography matrix, thereby obtaining the fourth rotation relationship and the fourth translation relationship.
Step S905: determine the world coordinates of the sampling points in the first target image according to the fourth rotation relationship, the fourth translation relationship, and the camera coordinates of the sampling points in the first target image. In some embodiments, when implementing step S905, the electronic device may obtain the world coordinates of the sampling points in the first target image through triangulation.
Step S906: determine a first sampling point set according to the world coordinates of the sampling points in each first target sample image.
Understandably, a sampling point in a first target sample image and its matched sampling point in the second target sample image are actually the same physical point; therefore, in some embodiments, the electronic device may determine the first sampling point set according to the world coordinates of the sampling points in each first target sample image or each second target sample image. In some embodiments, the first sampling point set includes the world coordinates of the sampling points but not their image features. In other embodiments, the first sampling point set includes both the world coordinates and the image features of the sampling points.
Step S907: obtain other sample images in addition to the multiple sample images.
Step S908: determine the world coordinates of the sampling points in the other sample images according to the first sampling point set and the acquired attribute information of the sampling points in the other sample images, to obtain a second sampling point set.
Step S909: construct a point cloud map according to the first sampling point set and the second sampling point set.
An embodiment of the present application further provides a method for constructing a point cloud map. The method may include the following steps S111 to S122:
Step S111: obtain multiple sample images.
Step S112: obtain the image features and camera coordinates of the sampling points in the sample images.
Step S113: match the multiple sample images pairwise according to the image features of the sampling points in the multiple sample images, to obtain a first matching pair set for each pair of sample images.
Pairwise matching means that each sample image is matched against every other sample image. For example, if the multiple sample images include sample images 1 to 6, the electronic device matches sample image 1 with sample images 2, 3, 4, 5, and 6; matches sample image 2 with sample images 3, 4, 5, and 6; and so on, up to matching sample image 5 with sample image 6. The resulting first matching pair set contains the matching relationships between the sampling points of the two images, that is, multiple sampling point matching pairs.
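The pairwise enumeration described above can be sketched as follows (a minimal illustration; the image names are placeholders, and the per-pair matching of sampling point descriptors is elided):

```python
from itertools import combinations

# Placeholder identifiers for the six sample images in the example above.
sample_images = [f"sample_image_{i}" for i in range(1, 7)]

# Every unordered pair (image i, image j) with i < j: image 1 with 2..6,
# image 2 with 3..6, and so on, up to image 5 with image 6.
pairs = list(combinations(sample_images, 2))

# For 6 images this yields C(6, 2) = 15 candidate pairs; a first matching
# pair set would then be built for each pair by matching sampling points.
```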
Step S114: eliminate the sampling point matching pairs that do not satisfy a third condition from the first matching pair set, to obtain a second matching pair set.
When implementing step S114, the elimination method may use the eight-point method in RANSAC to compute a fundamental matrix, and the matching pairs that do not satisfy the fundamental matrix are eliminated. In this way, some sampling point matching pairs with poor robustness can be removed, improving the robustness of the algorithm.
Step S115: select from each second matching pair set a target matching pair set whose number of matching pairs satisfies the second condition.
Generally speaking, when the number of matching pairs is too large, the parallax between the two images is relatively small; however, when the number of matching pairs is too small, the fourth rotation relationship and the fourth translation relationship between the two images cannot be determined. In some embodiments, the second condition may therefore be set as the number of matching pairs being greater than a first value and less than a second value.
Step S116: determine the two sample images corresponding to the target matching pair set as the first target image and the second target image.
Step S117: determine a fourth rotation relationship and a fourth translation relationship between the first target image and the second target image.
Step S118: determine the world coordinates of the sampling points in the first target image according to the fourth rotation relationship, the fourth translation relationship, and the camera coordinates of the sampling points in the first target image.
Step S119: determine the first sampling point set according to the world coordinates of the sampling points in each first target sample image.
Step S120: obtain other sample images in addition to the multiple sample images.
Step S121: determine the world coordinates of the sampling points in the other sample images according to the first sampling point set and the acquired attribute information of the sampling points in the other sample images, to obtain a second sampling point set.
Step S122: construct a point cloud map according to the first sampling point set and the second sampling point set.
An exemplary application of the embodiments of the present application in an actual application scenario is described below.
In the embodiments of the present application, an indoor positioning technique based on a sparse point cloud is implemented, which can help users locate their own position in real time. For indoor scenes, this solution extracts image features and constructs a sparse point cloud map (an example of the point cloud map). The positioning process does not depend on external base station equipment, and has low cost, high positioning accuracy, and strong robustness. The solution comprises two main parts: map construction and visual positioning.
In the embodiments of the present application, the map construction part mainly collects RGB image information (i.e., the sample images) through a monocular camera and extracts image features to construct a sparse point cloud map, including at least the following steps S11 to S15:
Step S11: use a monocular camera to collect RGB images at a fixed frame rate.
Step S12: during collection, extract attribute information from the RGB images in real time (for example, the image features and camera coordinates of the sampling points in the images).
Step S13: after a certain number of RGB images have been collected, initialize the relative rotations and translations of the images using the SFM method.
Step S14: after initialization is completed, calculate the three-dimensional world coordinates (i.e., some embodiments of the world coordinates) of the sparse points (i.e., the sampling points) of subsequent images through the Perspective-n-Point (PnP) algorithm, to obtain the sparse point cloud map.
Step S15: store the sparse point cloud map and its corresponding image features; for example, serialize this information and store it locally as an offline map.
Regarding the extraction of image features from the RGB images in step S12, the following explanation is given. The process of feature extraction is essentially a process of interpreting and annotating the RGB images. In some embodiments, FAST corner points are extracted from the RGB images; the number of extracted corner points is generally fixed at 150 and is used for image tracking. ORB descriptors are then extracted for these corner points and used for feature-descriptor matching of the sparse points. It should be noted that 150 is an empirical value, and the number of extracted corner points is not limited in the present application: too few corner points lead to a high tracking failure rate, while too many corner points reduce the efficiency of the algorithm.
Regarding initializing the relative rotations and translations of the images using the SFM method in step S13, the following explanation is given. After a certain number of images have been collected, the relative rotations and translations of the images are initialized through the SFM method, and the three-dimensional world coordinates of the sparse points are obtained. The SFM algorithm includes at least the following steps S131 to S139:
Step S131: match the certain number of images pairwise, and establish matching relationships between the sparse points of the images using Euclidean distance judgment.
Step S132: eliminate matching pairs; the elimination method uses the RANSAC eight-point method to compute a fundamental matrix, and matching pairs that do not satisfy the fundamental matrix are eliminated.
Step S133: after the matching relationships are established, generate a tracking list, where the tracking list refers to the set of image names in which the same physical point appears.
Step S134: eliminate invalid matches from the tracking list.
Step S135: find the initialization image pair, with the aim of finding the image pair with the largest camera baseline. The RANSAC four-point method is used to compute a homography matrix; matching points that satisfy the homography matrix are called inliers, and those that do not are called outliers. Find the image pair with the smallest proportion of inliers.
Step S136: find the relative rotation and translation of the initialization image pair, by computing the essential matrix through the RANSAC eight-point method and obtaining the relative rotation and translation between the image pair through singular value decomposition (SVD) of the essential matrix.
Step S137: obtain the three-dimensional world coordinates of the sparse points in the initialization image pair through triangulation.
Step S138: repeat steps S136 and S137 for the other images to obtain the relative rotations and translations of all images, as well as the three-dimensional world coordinates of the sparse points.
Step S139: optimize the obtained inter-image rotations and translations and the three-dimensional world coordinates of the sparse points through bundle adjustment. This is a nonlinear optimization process whose purpose is to reduce the error of the SFM results.
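The triangulation in step S137 can be sketched with the midpoint method, one common variant. This is an assumption-laden illustration: the camera centres and viewing-ray directions would come from back-projecting the matched sparse points with the recovered rotation and translation, which is elided here.

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: given two camera centres c1, c2 and the
    viewing-ray directions d1, d2 of one matched sparse point, return the
    3D point midway between the closest points of the two rays."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = [a - b for a, b in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # zero when the rays are parallel (no parallax)
    s = (b * e - c * d) / denom  # parameter of the closest point on ray 1
    t = (a * e - b * d) / denom  # parameter of the closest point on ray 2
    p1 = [x + s * u for x, u in zip(c1, d1)]
    p2 = [x + t * u for x, u in zip(c2, d2)]
    return [(x + y) / 2 for x, y in zip(p1, p2)]

# Two cameras at (0,0,0) and (2,0,0), both observing the point (1,1,5):
point = triangulate_midpoint((0, 0, 0), (1, 1, 5), (2, 0, 0), (-1, 1, 5))
```

Linear (DLT) triangulation is the other common choice; the midpoint form is used here only because it has a simple closed form.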
Based on steps S11 to S15, an offline map based on the sparse point cloud can be constructed. The map stores the sparse point cloud and its image attribute information (including the three-dimensional world coordinates and descriptor information) locally in a binary format; during the visual positioning process, this map is loaded and used.
In the embodiments of the present application, the visual positioning part mainly collects the current RGB image through the monocular camera, loads the constructed offline map, uses descriptor matching to find matching pairs between the current feature points and the sparse points of the map, and finally solves the precise pose of the current camera in the map through the PnP algorithm to achieve positioning. In some embodiments, the visual positioning may include the following steps S21 to S25:
Step S21: load the pre-built offline map (i.e., the sparse point cloud map).
Step S22: use the monocular camera to collect RGB images.
Step S23: during collection, extract the attribute information of the current frame image in real time.
Step S24: find matching pairs between the current feature points and the sparse points of the map through descriptor matching.
Step S25: after enough matching pairs have been found, solve the precise pose of the current camera in the map coordinate system through the PnP algorithm.
For the real-time extraction of the attribute information of the current frame image in step S23, reference may be made to step S12 above.
Regarding finding matching pairs between the current feature points and the sparse points of the map through descriptor matching in step S24, the algorithm includes at least the following steps S241 to S244:
Step S241: for the N-th (initially 0) feature point F_1N extracted from the current image, set the minimum Euclidean distance d_min = d_TH, and set the matching point of F_1N to empty (no match yet).
Step S242: take F_1N and the M-th (initially 0) feature point F_2M in the sparse point cloud, and calculate the Euclidean distance d_NM between the feature point descriptors.
Step S243: compare the Euclidean distance d_NM with the minimum Euclidean distance d_min; if d_NM < d_min, then set d_min = d_NM and record F_2M as the matching point of F_1N. Then let M = M + 1; if the sparse points in the sparse point cloud have not all been traversed, jump back to step S242; otherwise let N = N + 1 and jump back to step S241. Once all feature points of the current image have been traversed, jump to step S244.
Step S244: collect the matching pairs between the feature points of the current image and the sparse points of the map as the algorithm output, and the algorithm ends.
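Steps S241 to S244 amount to a brute-force nearest-neighbour search. A minimal sketch, using low-dimensional toy descriptors in place of real 256-byte ORB descriptors:

```python
import math

def match_descriptors(current_desc, map_desc, d_th=1.0):
    """Steps S241-S244: for each feature point of the current image, scan
    every sparse point of the map, keep the descriptor with the smallest
    Euclidean distance, and accept it only if it beats the threshold d_TH."""
    matches = []
    for n, f1 in enumerate(current_desc):
        d_min, best = d_th, None               # S241: d_min = d_TH, no match yet
        for m, f2 in enumerate(map_desc):      # S242: distance to the M-th point
            d_nm = math.dist(f1, f2)
            if d_nm < d_min:                   # S243: keep the closer candidate
                d_min, best = d_nm, m
        if best is not None:
            matches.append((n, best))          # S244: collect accepted pairs
    return matches

# Toy 2-D descriptors: only the first query point has a map point within d_TH.
pairs = match_descriptors([(0.0, 0.0), (5.0, 5.0)], [(0.1, 0.0), (10.0, 10.0)])
```

The threshold value and descriptor dimensionality here are illustrative; in practice a ratio test or Hamming distance over binary ORB descriptors is also common.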
Regarding solving the precise pose of the current camera in the map coordinate system through the PnP algorithm in step S25, in some embodiments, as shown in FIG. 5:
First, it is judged whether a matching pair sequence has been formed in step S24 (in this example, the matching pair sequence is {F_0, F_1, F_2}); if the number of elements in the matching pair sequence is greater than TH_2, step S25 is performed; otherwise the algorithm ends. In this embodiment, based on the matching pair sequence, the SolvePnP function in OpenCV is called to solve the pose of the current camera in the map coordinate system. The principle of the PnP algorithm is as follows:
The input of the PnP algorithm is three-dimensional (3D) points (i.e., the three-dimensional world coordinates of the sparse points in the map coordinate system) and the 2D points obtained by projecting these 3D points into the current image (i.e., the camera coordinates of the feature points in the current frame); the output of the algorithm is the pose transformation of the current frame relative to the origin of the map coordinate system (i.e., the pose of the current frame in the map coordinate system).
The PnP algorithm does not obtain the camera pose matrix directly from the matching pair sequence; instead, it first obtains the 3D coordinates of the corresponding 2D points in the current coordinate system, and then solves the camera pose from the 3D coordinates in the map coordinate system and the 3D coordinates in the current coordinate system.
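As an illustration of the 3D-2D correspondences that PnP inverts, the sketch below projects a map point into a camera with a known pose. The pinhole model with unit focal length is an assumption for illustration; a PnP solver recovers (R, t) from enough such pairs.

```python
def project(point_3d, R, t, f=1.0):
    """Transform a 3D map point into the camera frame with (R, t) and
    project it onto the image plane of a pinhole camera with focal f."""
    x, y, z = (sum(R[i][j] * point_3d[j] for j in range(3)) + t[i]
               for i in range(3))
    return (f * x / z, f * y / z)  # the 2D point handed to a PnP solver

# Identity pose: the camera frame coincides with the map frame.
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
uv = project((2.0, 4.0, 2.0), I3, (0.0, 0.0, 0.0))
```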
Based on steps S21 to S25, the positioning purpose can be achieved in the predefined sparse point cloud map through visual features, obtaining the device's own position and attitude in the map coordinate system (i.e., the world coordinate system). The positioning result has high accuracy, does not depend on external base station equipment, has low cost, and is robust.
In the embodiments of the present application, camera motion is exploited to obtain the three-dimensional information of the feature points, so the positioning result can provide both position and attitude, improving positioning accuracy compared with other indoor positioning methods.
In the embodiments of the present application, the stored map takes the form of a sparse point cloud, which is equivalent to a sparse sampling of the images, so the map size is compressed to a certain degree compared with traditional methods.
In the embodiments of the present application, only an ordinary mobile terminal device is needed during mapping and positioning, and no other external base station equipment needs to be introduced, so the cost is low.
In the embodiments of the present application, there is no need to introduce algorithms with relatively high error rates such as object recognition, so the positioning success rate is high and the robustness is strong.
In the embodiments of the present application, the three-dimensional information of image features is fully exploited and combined with a high-precision, highly robust image matching algorithm for indoor positioning. In map construction, the three-dimensional world coordinates and descriptor information of the feature points in visual images are collected and stored as an offline map in the form of a sparse point cloud. In the positioning method, descriptor matching is used to find the matching pairs of the current feature points in the sparse point cloud, and the current position and attitude are then accurately calculated through the PnP algorithm. The combination of the two forms a low-cost, high-precision, and robust indoor positioning method.
In map construction, the sparse point cloud stores the three-dimensional world coordinates and descriptor information of the image feature points. The descriptor information is used for matching against the feature points in the current image during the visual positioning process. The image feature descriptor may be an ORB descriptor, and the descriptor information of each image feature point occupies 256 bytes. For an offline map stored in the form of a sparse point cloud, each sparse point must therefore be allocated 256 bytes of storage for its feature descriptor, which accounts for a considerable proportion of the size of the final offline map. To reduce the size of the offline map, the embodiments of the present application provide the following extension solution.
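The saving can be put in rough numbers. The figures below are illustrative assumptions only: the 256-byte descriptor size is stated above, while the point count and the 12 bytes (three 4-byte floats) per world coordinate are hypothetical.

```python
DESC_BYTES = 256      # per-point descriptor size, as stated above
COORD_BYTES = 3 * 4   # assumed: three 4-byte floats per world coordinate

def map_size(n_points, with_descriptors):
    """Rough serialized size of a sparse point cloud map, in bytes."""
    per_point = COORD_BYTES + (DESC_BYTES if with_descriptors else 0)
    return n_points * per_point

n = 100_000  # hypothetical number of sparse points
full = map_size(n, True)          # 26_800_000 bytes with descriptors
coords_only = map_size(n, False)  # 1_200_000 bytes: over 20x smaller
```

This is why dropping the descriptors, as the extension solution below does, shrinks the offline map so markedly.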
In the map construction part, the electronic device may serialize and store the three-dimensional world coordinates of the sparse point cloud.
In the visual positioning part, an embodiment of the present application provides a positioning method, which may include the following steps S31 to S35:
Step S31: load the pre-built offline map.
Step S32: use a monocular camera to collect RGB images.
Step S33: during collection, extract the attribute information of the current frame image (i.e., the camera coordinates of the feature points) in real time.
Step S34: calculate the three-dimensional camera coordinates of the feature points in the current image to form a local point cloud.
Step S35: match the local point cloud against the sparse point cloud of the map through the Iterative Closest Point (ICP) algorithm, to solve the precise pose of the current camera in the map coordinate system.
Regarding matching the local point cloud against the sparse point cloud of the map through the ICP algorithm in step S35, the following explanation is given.
The ICP algorithm is essentially an optimal registration method based on least squares. The algorithm repeatedly selects corresponding point pairs and computes the optimal rigid transformation until the convergence accuracy requirement for correct registration is met. The basic principle of the ICP algorithm is: in the target point cloud P and the source point cloud Q to be matched, find the nearest point pairs (p_i, q_i) under certain constraints, and then compute the optimal rotation R and translation T that minimize the error function. The error function E(R, T) is:
E(R, T) = (1/n) Σ_{i=1}^{n} ||q_i − (R·p_i + T)||²
where n is the number of nearest point pairs, p_i is a point in the target point cloud P, q_i is the point in the source point cloud Q nearest to p_i, R is the rotation matrix (also called the rotation relationship), and T is the translation vector (also called the translation relationship).
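For fixed correspondences, the optimal (R, T) of this error function has a closed form. A minimal 2D sketch of one such ICP iteration (the 3D case uses an SVD of the cross-covariance matrix, elided here):

```python
import math

def best_rigid_2d(P, Q):
    """Closed-form least-squares rigid transform for one 2D ICP step:
    find the angle theta and translation T minimizing
    E(R, T) = (1/n) * sum_i ||q_i - (R p_i + T)||^2 for fixed pairs."""
    n = len(P)
    px = sum(x for x, _ in P) / n; py = sum(y for _, y in P) / n
    qx = sum(x for x, _ in Q) / n; qy = sum(y for _, y in Q) / n
    s_cross = s_dot = 0.0
    for (x1, y1), (x2, y2) in zip(P, Q):
        cx1, cy1 = x1 - px, y1 - py   # centred point p_i
        cx2, cy2 = x2 - qx, y2 - qy   # centred point q_i
        s_cross += cx1 * cy2 - cy1 * cx2
        s_dot += cx1 * cx2 + cy1 * cy2
    theta = math.atan2(s_cross, s_dot)   # optimal rotation angle in 2D
    c, s = math.cos(theta), math.sin(theta)
    tx = qx - (c * px - s * py)          # T = q_bar - R p_bar
    ty = qy - (s * px + c * py)
    return theta, (tx, ty)

# Recover a known motion: rotate P by 0.5 rad, then translate by (2, -1).
P = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
c, s = math.cos(0.5), math.sin(0.5)
Q = [(c * x - s * y + 2.0, s * x + c * y - 1.0) for x, y in P]
theta, (tx, ty) = best_rigid_2d(P, Q)
```

The full ICP loop would re-establish nearest-neighbour pairs after each such update and iterate until the error converges.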
Based on steps S31 to S35, the positioning purpose can be achieved in the predefined sparse point cloud map through visual features, obtaining the device's own position and attitude in the map coordinate system. Moreover, the predefined sparse point cloud map does not need to store additional feature point descriptor information, which compresses the size of the offline map.
基于前述的实施例,本申请实施例提供一种定位装置,该装置包括各模块、以及各模块所包括的各单元,可以通过终端中的处理器来实现;当然也可通过具体的逻辑电路实现;在实施的过程中,处理器可以为中央处理器(CPU)、微处理器(MPU)、数字信号处理器(DSP)或现场可编程门阵列(FPGA)等。Based on the foregoing embodiment, the embodiment of the present application provides a positioning device, which includes each module and each unit included in each module, which can be implemented by a processor in a terminal; of course, it can also be implemented by a specific logic circuit. In the implementation process, the processor can be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field programmable gate array (FPGA).
图6A为本申请实施例定位装置的结构示意图，如图6A所示，所述装置600包括第一确定模块601、属性信息获取模块602和定位模块603，其中：第一确定模块601，配置为确定图像采集设备所采集的待处理图像中的特征点；属性信息获取模块602，配置为获取所述特征点的属性信息；定位模块603，配置为将所述特征点的属性信息与预先构建的点云地图中多个采样点的属性信息进行匹配，得出所述图像采集设备的定位结果。FIG. 6A is a schematic structural diagram of a positioning device according to an embodiment of the application. As shown in FIG. 6A, the device 600 includes a first determining module 601, an attribute information acquiring module 602, and a positioning module 603, wherein: the first determining module 601 is configured to determine feature points in an image to be processed collected by an image acquisition device; the attribute information acquiring module 602 is configured to acquire attribute information of the feature points; and the positioning module 603 is configured to match the attribute information of the feature points with attribute information of a plurality of sampling points in a pre-built point cloud map to obtain a positioning result of the image acquisition device.
在一些实施例中，所述特征点的属性信息至少包括以下之一：所述特征点的图像特征、所述特征点的相机坐标；所述采样点的属性信息至少包括以下之一：所述采样点的图像特征、所述采样点的世界坐标。In some embodiments, the attribute information of a feature point includes at least one of: the image feature of the feature point, the camera coordinates of the feature point; and the attribute information of a sampling point includes at least one of: the image feature of the sampling point, the world coordinates of the sampling point.
在一些实施例中，定位模块603，包括：匹配单元，配置为将所述特征点的图像特征与所述多个采样点的图像特征进行匹配，得出第一目标采样点；定位单元，配置为根据所述特征点的相机坐标和所述第一目标采样点的世界坐标，确定所述图像采集设备的定位结果。In some embodiments, the positioning module 603 includes: a matching unit configured to match the image features of the feature points with the image features of the plurality of sampling points to obtain first target sampling points; and a positioning unit configured to determine the positioning result of the image acquisition device according to the camera coordinates of the feature points and the world coordinates of the first target sampling points.
在一些实施例中，所述匹配单元，配置为根据所述采样点的图像特征和所述特征点的图像特征，确定所述采样点与所述特征点之间的相似度；将与所述特征点之间的相似度满足第一条件的采样点，确定为所述第一目标采样点。In some embodiments, the matching unit is configured to determine the similarity between a sampling point and a feature point according to the image feature of the sampling point and the image feature of the feature point, and to determine a sampling point whose similarity to the feature point satisfies a first condition as the first target sampling point.
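As a hedged sketch of this matching unit (all names and the threshold value are illustrative assumptions, not from the patent), nearest-neighbor matching under a Euclidean-distance similarity — consistent with the later embodiment in which similarity is the Euclidean distance between descriptors, so a smaller value means a better match — might look like:

```python
import numpy as np

def match_descriptors(feat_desc, samp_desc, max_dist=0.7):
    """For each feature descriptor, find the sampling point whose descriptor has
    the smallest Euclidean distance; keep the pair only if that distance
    satisfies the first condition (here assumed: distance <= max_dist)."""
    matches = []
    for i, f in enumerate(feat_desc):
        d = np.linalg.norm(samp_desc - f, axis=1)   # Euclidean distances
        j = int(d.argmin())                         # most similar sampling point
        if d[j] <= max_dist:                        # first condition
            matches.append((i, j, float(d[j])))
    return matches
```

The returned `(feature_index, sampling_point_index, distance)` triples would then feed the positioning unit, which pairs each feature point's camera coordinates with the matched sampling point's world coordinates.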
在一些实施例中，所述定位单元，配置为：根据多个第一目标采样点的世界坐标和与所述多个第一目标采样点匹配的特征点的相机坐标，确定所述多个第一目标采样点的相机坐标；根据所述多个第一目标采样点的世界坐标和所述多个第一目标采样点的相机坐标，确定相机坐标系相对于世界坐标系的第一旋转关系和第一平移关系；根据所述第一平移关系和所述第一旋转关系，确定所述图像采集设备的世界坐标；根据所述第一旋转关系，确定所述图像采集设备的朝向。In some embodiments, the positioning unit is configured to: determine the camera coordinates of a plurality of first target sampling points according to the world coordinates of the plurality of first target sampling points and the camera coordinates of the feature points matching them; determine a first rotation relationship and a first translation relationship of the camera coordinate system relative to the world coordinate system according to the world coordinates and the camera coordinates of the plurality of first target sampling points; determine the world coordinates of the image acquisition device according to the first translation relationship and the first rotation relationship; and determine the orientation of the image acquisition device according to the first rotation relationship.
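For illustration only — the convention X_cam = R·X_world + T, the function name, and the choice of the camera z-axis as the viewing direction are all assumptions of this sketch, not stated in the patent — once the first rotation relationship R and first translation relationship T are known, the device's world coordinates and orientation can be recovered as follows:

```python
import numpy as np

def device_pose_from_extrinsics(R, T):
    """Given X_cam = R @ X_world + T, the camera centre C satisfies
    0 = R @ C + T, i.e. C = -R.T @ T (device world coordinates); the
    orientation is the camera z-axis expressed in world coordinates."""
    C = -R.T @ T
    forward = R.T @ np.array([0.0, 0.0, 1.0])
    return C, forward
```

The identity R·C + T = 0 is a quick sanity check: the device's own position must map to the camera-frame origin.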
在一些实施例中，所述匹配单元，配置为：根据迭代策略，将多个特征点的相机坐标与所述多个采样点的世界坐标进行匹配，得出相机坐标系相对于世界坐标系的目标旋转关系和目标平移关系；所述定位单元，还配置为：根据所述目标旋转关系，确定所述图像采集设备的朝向；根据所述目标平移关系和所述目标旋转关系，确定所述图像采集设备的世界坐标。In some embodiments, the matching unit is configured to match the camera coordinates of a plurality of feature points with the world coordinates of the plurality of sampling points according to an iterative strategy, to obtain a target rotation relationship and a target translation relationship of the camera coordinate system relative to the world coordinate system; the positioning unit is further configured to determine the orientation of the image acquisition device according to the target rotation relationship, and to determine the world coordinates of the image acquisition device according to the target translation relationship and the target rotation relationship.
在一些实施例中，所述匹配单元包括：选取子单元，配置为从所述多个采样点中选取与所述特征点匹配的初始目标采样点；变换关系确定子单元，配置为根据所述多个特征点的相机坐标和与所述多个特征点匹配的初始目标采样点的世界坐标，确定所述相机坐标系相对于所述世界坐标系的第二旋转关系和第二平移关系；第一世界坐标确定子单元，配置为根据所述第二旋转关系、所述第二平移关系和所述特征点的相机坐标，确定所述特征点的第一世界坐标；匹配误差确定子单元，配置为根据所述多个特征点的第一世界坐标和与所述多个特征点匹配的初始目标采样点的世界坐标，确定所述多个特征点的匹配误差；迭代子单元，配置为如果所述匹配误差大于第一阈值，重新选取初始目标采样点，并重新确定匹配误差，直至重新确定的匹配误差小于所述第一阈值为止；目标变换关系确定子单元，配置为将所述匹配误差小于或等于所述第一阈值时确定的第二旋转关系，确定为所述目标旋转关系；将所述匹配误差小于或等于所述第一阈值时确定的第二平移关系，确定为所述目标平移关系。In some embodiments, the matching unit includes: a selection subunit configured to select, from the plurality of sampling points, initial target sampling points that match the feature points; a transformation relationship determination subunit configured to determine a second rotation relationship and a second translation relationship of the camera coordinate system relative to the world coordinate system according to the camera coordinates of the plurality of feature points and the world coordinates of the initial target sampling points matching them; a first world coordinate determination subunit configured to determine the first world coordinates of a feature point according to the second rotation relationship, the second translation relationship, and the camera coordinates of the feature point; a matching error determination subunit configured to determine the matching error of the plurality of feature points according to their first world coordinates and the world coordinates of the matching initial target sampling points; an iteration subunit configured to, if the matching error is greater than a first threshold, reselect initial target sampling points and re-determine the matching error until the re-determined matching error is less than the first threshold; and a target transformation relationship determination subunit configured to determine, as the target rotation relationship, the second rotation relationship determined when the matching error is less than or equal to the first threshold, and to determine, as the target translation relationship, the second translation relationship determined when the matching error is less than or equal to the first threshold.
在一些实施例中，所述选取子单元，配置为：获取所述相机坐标系相对于所述世界坐标系的第三旋转关系和第三平移关系；根据所述第三旋转关系、所述第三平移关系和所述特征点的相机坐标，确定所述特征点的第二世界坐标；将所述特征点的第二世界坐标与所述多个采样点的世界坐标进行匹配，得出所述初始目标采样点。In some embodiments, the selection subunit is configured to: acquire a third rotation relationship and a third translation relationship of the camera coordinate system relative to the world coordinate system; determine the second world coordinates of a feature point according to the third rotation relationship, the third translation relationship, and the camera coordinates of the feature point; and match the second world coordinates of the feature point with the world coordinates of the plurality of sampling points to obtain the initial target sampling points.
在一些实施例中，所述匹配误差确定子单元，配置为：根据所述特征点的第一世界坐标和所述初始目标采样点的世界坐标，确定所述特征点与所述初始目标采样点之间的距离；根据所述多个特征点与匹配的初始目标采样点之间的距离，确定所述匹配误差。In some embodiments, the matching error determination subunit is configured to: determine the distance between a feature point and its initial target sampling point according to the first world coordinates of the feature point and the world coordinates of the initial target sampling point; and determine the matching error according to the distances between the plurality of feature points and their matching initial target sampling points.
所述迭代子单元，配置为如果所述匹配误差大于第一阈值，将所述第二平移关系作为所述第三平移关系，将所述第二旋转关系作为所述第三旋转关系，重新选取初始目标采样点。The iteration subunit is configured to, if the matching error is greater than the first threshold, use the second translation relationship as the third translation relationship and the second rotation relationship as the third rotation relationship, and reselect the initial target sampling points.
在一些实施例中，如图6B所示，所述装置600，还包括：图像获取模块604，配置为获取多张样本图像；图像处理模块605，配置为对所述多张样本图像进行处理，得到第一采样点集合，所述第一采样点集合中至少包括所述多张样本图像中的采样点的世界坐标；图像获取模块604，还配置为获取除所述多张样本图像外的其他样本图像；第二确定模块606，配置为根据所述第一采样点集合和获取的所述其他样本图像中采样点的属性信息，确定所述其他样本图像中的采样点的世界坐标，得到第二采样点集合；地图构建模块607，配置为根据所述第一采样点集合和所述第二采样点集合，构建所述点云地图。In some embodiments, as shown in FIG. 6B, the device 600 further includes: an image acquisition module 604 configured to acquire a plurality of sample images; an image processing module 605 configured to process the plurality of sample images to obtain a first sampling point set, the first sampling point set including at least the world coordinates of sampling points in the plurality of sample images; the image acquisition module 604 being further configured to acquire other sample images besides the plurality of sample images; a second determining module 606 configured to determine the world coordinates of the sampling points in the other sample images according to the first sampling point set and the acquired attribute information of the sampling points in the other sample images, to obtain a second sampling point set; and a map construction module 607 configured to construct the point cloud map according to the first sampling point set and the second sampling point set.
在一些实施例中，图像处理模块605，包括：属性信息获取单元，配置为获取所述样本图像中采样点的图像特征和相机坐标；目标图像确定单元，配置为根据所述多张样本图像中采样点的图像特征，从所述多张样本图像中挑选出满足第二条件的第一目标图像和第二目标图像；变换关系确定单元，配置为确定所述第一目标图像与所述第二目标图像之间的第四旋转关系和第四平移关系；世界坐标确定单元，配置为根据所述第四旋转关系、所述第四平移关系和所述第一目标图像中的采样点的相机坐标，确定所述第一目标图像中的采样点的世界坐标；集合确定单元，配置为根据每一所述第一目标样本图像中的采样点的世界坐标，确定所述第一采样点集合。In some embodiments, the image processing module 605 includes: an attribute information acquiring unit configured to acquire the image features and camera coordinates of sampling points in the sample images; a target image determining unit configured to select, from the plurality of sample images according to the image features of their sampling points, a first target image and a second target image that satisfy a second condition; a transformation relationship determining unit configured to determine a fourth rotation relationship and a fourth translation relationship between the first target image and the second target image; a world coordinate determining unit configured to determine the world coordinates of the sampling points in the first target image according to the fourth rotation relationship, the fourth translation relationship, and the camera coordinates of the sampling points in the first target image; and a set determining unit configured to determine the first sampling point set according to the world coordinates of the sampling points in each first target sample image.
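One common way to obtain world coordinates of sampling points from two target images whose relative rotation and translation are known is linear (DLT) triangulation. The sketch below assumes normalized image coordinates and 3×4 projection matrices under the convention X_cam = R·X_world + T; these conventions and all names are assumptions of this example, not the patent's prescribed method:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 projection matrices; x1, x2: normalized image coords (u, v).
    Solves A @ X_h = 0 for the homogeneous world point X_h via SVD."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # homogeneous -> world coordinates
```

With P1 built from the first view's pose and P2 from the fourth rotation/translation relationship, applying this per matched sampling point would yield the world coordinates gathered into the first sampling point set.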
在一些实施例中，所述目标图像确定单元，配置为：根据所述多张样本图像中采样点的图像特征，将所述多张样本图像进行两两匹配，得到每一对样本图像的第一匹配对集合；剔除所述第一匹配对集合中不满足第三条件的采样点匹配对，得到第二匹配对集合；从每一所述第二匹配对集合中挑选出匹配对数目满足所述第二条件的目标匹配对集合；将所述目标匹配对集合对应的两张样本图像，确定为第一目标图像和第二目标图像。In some embodiments, the target image determining unit is configured to: match the plurality of sample images pairwise according to the image features of their sampling points to obtain a first matching pair set for each pair of sample images; cull the sampling point matching pairs that do not satisfy a third condition from the first matching pair set to obtain a second matching pair set; select, from all the second matching pair sets, a target matching pair set whose number of matching pairs satisfies the second condition; and determine the two sample images corresponding to the target matching pair set as the first target image and the second target image.
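A minimal sketch of the pairwise matching and culling described above, assuming the third condition is a mutual nearest-neighbour (cross-check) test and the second condition is a largest-match-count criterion — both assumptions of this example, since the patent leaves the conditions open:

```python
import numpy as np
from itertools import combinations

def cross_checked_matches(d1, d2):
    """Keep only mutual nearest-neighbour descriptor matches between two images
    (the assumed third condition): i matches j iff each is the other's nearest."""
    dist = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)
    fwd = dist.argmin(axis=1)               # best match in image 2 for each i
    bwd = dist.argmin(axis=0)               # best match in image 1 for each j
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

def pick_target_pair(descs):
    """Match all image pairs and return the pair with the most surviving
    matches (the assumed second condition), plus its match count."""
    best, best_pair = -1, None
    for a, b in combinations(range(len(descs)), 2):
        m = cross_checked_matches(descs[a], descs[b])
        if len(m) > best:
            best, best_pair = len(m), (a, b)
    return best_pair, best
```

The returned pair would then serve as the first and second target images for computing the fourth rotation and translation relationships.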
以上装置实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本申请装置实施例中未披露的技术细节,请参照本申请方法实施例的描述而理解。The description of the above device embodiment is similar to the description of the above method embodiment, and has similar beneficial effects as the method embodiment. For technical details not disclosed in the device embodiments of the present application, please refer to the description of the method embodiments of the present application for understanding.
需要说明的是，本申请实施例中，如果以软件功能模块的形式实现上述的定位方法，并作为独立的产品销售或使用时，也可以存储在一个计算机可读取存储介质中。基于这样的理解，本申请实施例的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得电子设备(可以是手机、平板电脑、笔记本电脑、台式计算机、服务器、机器人、无人机等)执行本申请各个实施例所述方法的全部或部分。而前述的存储介质包括：U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。这样，本申请实施例不限制于任何特定的硬件和软件结合。It should be noted that, in the embodiments of the present application, if the above positioning method is implemented in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium. Based on this understanding, the parts of the technical solutions of the embodiments of the present application that in essence contribute over the related art can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to enable an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a server, a robot, a drone, etc.) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage media include: U disk, removable hard disk, read-only memory (Read Only Memory, ROM), magnetic disk, optical disc, and other media that can store program code. In this way, the embodiments of the present application are not limited to any specific combination of hardware and software.
对应地，本申请实施例提供一种电子设备，图7为本申请实施例电子设备的一种硬件实体示意图，如图7所示，该电子设备700的硬件实体包括：存储器701和处理器702，所述存储器701存储有可在处理器702上运行的计算机程序，所述处理器702执行所述程序时实现上述实施例中提供的定位方法中的步骤。Correspondingly, an embodiment of the present application provides an electronic device. FIG. 7 is a schematic diagram of a hardware entity of the electronic device according to an embodiment of the application. As shown in FIG. 7, the hardware entity of the electronic device 700 includes a memory 701 and a processor 702. The memory 701 stores a computer program that can run on the processor 702, and when the processor 702 executes the program, the steps of the positioning method provided in the foregoing embodiments are implemented.
存储器701配置为存储由处理器702可执行的指令和应用，还可以缓存待处理器702以及电子设备700中各模块待处理或已经处理的数据(例如，图像数据、音频数据、语音通信数据和视频通信数据)，可以通过闪存(FLASH)或随机访问存储器(Random Access Memory,RAM)实现。The memory 701 is configured to store instructions and applications executable by the processor 702, and can also cache data to be processed or already processed by the processor 702 and each module in the electronic device 700 (for example, image data, audio data, voice communication data, and video communication data). It can be implemented by a flash memory (FLASH) or a random access memory (Random Access Memory, RAM).
对应地,本申请实施例提供一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现上述实施例中提供的定位方法中的步骤。Correspondingly, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the positioning method provided in the foregoing embodiment are implemented.
这里需要指出的是:以上存储介质和设备实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本申请存储介质和设备实施例中未披露的技术细节,请参照本申请方法实施例的描述而理解。It should be pointed out here that the description of the above storage medium and device embodiments is similar to the description of the above method embodiment, and has similar beneficial effects as the method embodiment. For technical details not disclosed in the storage medium and device embodiments of this application, please refer to the description of the method embodiments of this application for understanding.
应理解，说明书通篇中提到的“一个实施例”或“一些实施例”意味着与实施例有关的特定特征、结构或特性包括在本申请的至少一个实施例中。因此，在整个说明书各处出现的“在一个实施例中”或“在一些实施例中”未必一定指相同的实施例。此外，这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。应理解，在本申请的各种实施例中，上述各过程的序号的大小并不意味着执行顺序的先后，各过程的执行顺序应以其功能和内在逻辑确定，而不应对本申请实施例的实施过程构成任何限定。上述本申请实施例序号仅仅为了描述，不代表实施例的优劣。It should be understood that "one embodiment" or "some embodiments" mentioned throughout the specification means that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present application. Therefore, appearances of "in one embodiment" or "in some embodiments" in various places throughout the specification do not necessarily refer to the same embodiment. In addition, these particular features, structures, or characteristics may be combined in one or more embodiments in any suitable manner. It should be understood that, in the various embodiments of the present application, the size of the sequence numbers of the above processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application. The serial numbers of the foregoing embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
上文对各个实施例的描述倾向于强调各个实施例之间的不同之处,其相同或相似之处可以互相参考,为了简洁,本文不再赘述。The above description of the various embodiments tends to emphasize the differences between the various embodiments, and the same or similarities can be referred to each other. For the sake of brevity, the details are not repeated herein.
本文中术语“和/或”，仅仅是一种描述关联对象的关联关系，表示可以存在三种关系，例如对象A和/或对象B，可以表示：单独存在对象A，同时存在对象A和对象B，单独存在对象B这三种情况。The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, object A and/or object B may mean three cases: object A exists alone, object A and object B exist at the same time, or object B exists alone.
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。It should be noted that in this article, the terms "include", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements not only includes those elements, It also includes other elements not explicitly listed, or elements inherent to the process, method, article, or device. If there are no more restrictions, the element defined by the sentence "including a..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
在本申请所提供的几个实施例中，应该理解到，所揭露的设备和方法，可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的，例如，所述单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，如：多个单元或组件可以结合，或可以集成到另一个系统，或一些特征可以忽略，或不执行。另外，所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口，设备或单元的间接耦合或通信连接，可以是电性的、机械的或其它形式的。In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other divisions in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元;既可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed on multiple network units; Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
另外，在本申请各实施例中的各功能单元可以全部集成在一个处理单元中，也可以是各单元分别单独作为一个单元，也可以两个或两个以上单元集成在一个单元中；上述集成的单元既可以采用硬件的形式实现，也可以采用硬件加软件功能单元的形式实现。In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may exist separately as a unit, or two or more units may be integrated into one unit; the above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
本领域普通技术人员可以理解：实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成，前述的程序可以存储于计算机可读取存储介质中，该程序在执行时，执行包括上述方法实施例的步骤；而前述的存储介质包括：移动存储设备、只读存储器(Read Only Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。A person of ordinary skill in the art can understand that all or part of the steps of the above method embodiments can be implemented by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed. The aforementioned storage media include: removable storage devices, read-only memory (Read Only Memory, ROM), magnetic disks, optical discs, and other media that can store program code.
或者,本申请上述集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技 术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得电子设备(可以是手机、平板电脑、笔记本电脑、台式计算机、服务器、机器人、无人机等)执行本申请各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、磁碟或者光盘等各种可以存储程序代码的介质。Alternatively, if the above-mentioned integrated unit of the present application is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application can be embodied in the form of a software product in essence or a part that contributes to related technologies. The computer software product is stored in a storage medium and includes a number of instructions to enable An electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a server, a robot, a drone, etc.) executes all or part of the method described in each embodiment of the present application. The aforementioned storage media include: removable storage devices, ROMs, magnetic disks or optical discs and other media that can store program codes.
本申请所提供的几个方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。The methods disclosed in the several method embodiments provided in this application can be combined arbitrarily without conflict to obtain new method embodiments.
本申请所提供的几个产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。The features disclosed in the several product embodiments provided in this application can be combined arbitrarily without conflict to obtain new product embodiments.
本申请所提供的几个方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的方法实施例或设备实施例。The features disclosed in the several method or device embodiments provided in this application can be combined arbitrarily without conflict to obtain a new method embodiment or device embodiment.
以上所述，仅为本申请的实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到变化或替换，都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以所述权利要求的保护范围为准。The above are only implementation manners of this application, but the protection scope of this application is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed in this application, which shall all be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (20)

  1. 一种定位方法,所述方法包括:A positioning method, the method includes:
    确定图像采集设备所采集的待处理图像中的特征点;Determine the feature points in the image to be processed collected by the image acquisition device;
    获取所述特征点的属性信息;Acquiring attribute information of the characteristic point;
将所述特征点的属性信息与预先构建的点云地图中多个采样点的属性信息进行匹配，得出所述图像采集设备的定位结果；其中，所述采样点的属性信息中的世界坐标是样本图像中的采样点的世界坐标。Matching the attribute information of the feature points with attribute information of a plurality of sampling points in a pre-built point cloud map to obtain a positioning result of the image acquisition device; wherein the world coordinates in the attribute information of a sampling point are the world coordinates of the sampling point in a sample image.
  2. 根据权利要求1所述的方法,其中,所述将所述特征点的属性信息与预先构建的点云地图中多个采样点的属性信息进行匹配,得出所述图像采集设备的定位结果,包括:The method according to claim 1, wherein the matching the attribute information of the characteristic point with the attribute information of a plurality of sampling points in a pre-built point cloud map to obtain a positioning result of the image acquisition device, include:
    将所述特征点的图像特征与所述多个采样点的图像特征进行匹配,得出第一目标采样点;Matching the image features of the feature points with the image features of the multiple sampling points to obtain a first target sampling point;
    根据所述特征点的相机坐标和所述第一目标采样点的世界坐标,确定所述图像采集设备的定位结果。The positioning result of the image acquisition device is determined according to the camera coordinates of the characteristic point and the world coordinates of the first target sampling point.
  3. 根据权利要求2所述的方法,其中,所述将所述特征点的图像特征与所述多个采样点的图像特征进行匹配,得出第一目标采样点,包括:The method according to claim 2, wherein the matching the image features of the feature point with the image features of the multiple sampling points to obtain the first target sampling point comprises:
    根据所述采样点的图像特征和所述特征点的图像特征,确定所述采样点与所述特征点之间的相似度;Determining the similarity between the sampling point and the feature point according to the image feature of the sampling point and the image feature of the feature point;
    将所述多个采样点中与所述特征点之间的相似度满足第一条件的采样点,确定为所述第一目标采样点。A sampling point whose similarity with the characteristic point among the plurality of sampling points meets a first condition is determined as the first target sampling point.
  4. 根据权利要求3所述的方法，其中，所述将所述多个采样点中与所述特征点之间的相似度满足第一条件的采样点，确定为所述第一目标采样点，包括：The method according to claim 3, wherein the determining, as the first target sampling point, a sampling point among the plurality of sampling points whose similarity to the feature point satisfies the first condition includes:
将所述多个采样点中与所述特征点之间的相似度小于或等于相似度阈值的采样点，确定为所述第一目标采样点；或者，将所述多个采样点中与所述特征点之间的相似度最小的采样点，确定为所述第一目标采样点。Determining, as the first target sampling point, a sampling point among the plurality of sampling points whose similarity to the feature point is less than or equal to a similarity threshold; or determining, as the first target sampling point, the sampling point among the plurality of sampling points having the smallest similarity value with respect to the feature point.
  5. 根据权利要求3或4所述的方法,其中,所述根据所述采样点的图像特征和所述特征点的图像特征,确定所述采样点与所述特征点之间的相似度,包括:The method according to claim 3 or 4, wherein the determining the similarity between the sampling point and the feature point according to the image feature of the sampling point and the image feature of the feature point comprises:
    确定所述采样点的图像特征与所述特征点的图像特征之间的欧氏距离,将所述欧式距离确定为所述相似度。The Euclidean distance between the image feature of the sampling point and the image feature of the feature point is determined, and the Euclidean distance is determined as the similarity.
  6. 根据权利要求2所述的方法，其中，所述定位结果包括世界坐标和朝向；所述根据所述特征点的相机坐标和所述第一目标采样点的世界坐标，确定所述图像采集设备的定位结果，包括：The method according to claim 2, wherein the positioning result includes world coordinates and an orientation; and the determining the positioning result of the image acquisition device according to the camera coordinates of the feature points and the world coordinates of the first target sampling points includes:
    根据多个第一目标采样点的世界坐标和与所述多个第一目标采样点匹配的特征点的相机坐标,确定所述多个第一目标采样点的相机坐标;Determining the camera coordinates of the multiple first target sampling points according to the world coordinates of the multiple first target sampling points and the camera coordinates of the feature points matching the multiple first target sampling points;
    根据所述多个第一目标采样点的世界坐标和所述多个第一目标采样点的相机坐标,确定相机坐标系相对于世界坐标系的第一旋转关系和第一平移关系;Determine the first rotation relationship and the first translation relationship of the camera coordinate system relative to the world coordinate system according to the world coordinates of the plurality of first target sampling points and the camera coordinates of the plurality of first target sampling points;
    根据所述第一平移关系和所述第一旋转关系,确定所述图像采集设备的世界坐标;根据所述第一旋转关系,确定所述图像采集设备的朝向。Determine the world coordinates of the image capture device according to the first translation relationship and the first rotation relationship; determine the orientation of the image capture device according to the first rotation relationship.
  7. 根据权利要求1所述的方法,其中,所述特征点的属性信息包括相机坐标;所述采样点的属性信息包括世界坐标;所述定位结果包括世界坐标和朝向;The method according to claim 1, wherein the attribute information of the feature point includes camera coordinates; the attribute information of the sampling point includes world coordinates; and the positioning result includes world coordinates and orientation;
    所述将所述特征点的属性信息与预先构建的点云地图中多个采样点的属性信息进行匹配,得出所述图像采集设备的定位结果,包括:The matching the attribute information of the characteristic point with the attribute information of multiple sampling points in a pre-built point cloud map to obtain the positioning result of the image acquisition device includes:
    根据迭代策略,将多个特征点的相机坐标与所述多个采样点的世界坐标进行匹配,得出相机坐标系相对于世界坐标系的目标旋转关系和目标平移关系;According to the iterative strategy, the camera coordinates of the multiple feature points are matched with the world coordinates of the multiple sampling points to obtain the target rotation relationship and the target translation relationship of the camera coordinate system relative to the world coordinate system;
根据所述目标旋转关系，确定所述图像采集设备的朝向；根据所述目标平移关系和所述目标旋转关系，确定所述图像采集设备的世界坐标。Determining the orientation of the image acquisition device according to the target rotation relationship; and determining the world coordinates of the image acquisition device according to the target translation relationship and the target rotation relationship.
  8. 根据权利要求7所述的方法，其中，所述根据迭代策略，将多个特征点的相机坐标与所述多个采样点的世界坐标进行匹配，得出相机坐标系相对于世界坐标系的目标旋转关系和目标平移关系，包括：The method according to claim 7, wherein the matching the camera coordinates of the plurality of feature points with the world coordinates of the plurality of sampling points according to the iterative strategy, to obtain the target rotation relationship and the target translation relationship of the camera coordinate system relative to the world coordinate system, includes:
    从所述多个采样点中选取与所述特征点匹配的初始目标采样点;Selecting an initial target sampling point matching the characteristic point from the plurality of sampling points;
根据所述多个特征点的相机坐标和与所述多个特征点匹配的初始目标采样点的世界坐标，确定所述相机坐标系相对于所述世界坐标系的第二旋转关系和第二平移关系；Determining a second rotation relationship and a second translation relationship of the camera coordinate system relative to the world coordinate system according to the camera coordinates of the plurality of feature points and the world coordinates of the initial target sampling points matching the plurality of feature points;
    根据所述第二旋转关系、所述第二平移关系和所述特征点的相机坐标,确定所述特征点的第一世界坐标;Determine the first world coordinates of the feature point according to the second rotation relationship, the second translation relationship, and the camera coordinates of the feature point;
    根据所述多个特征点的第一世界坐标和与所述多个特征点匹配的初始目标采样点的世界坐标,确定所述多个特征点的匹配误差;Determine the matching error of the multiple feature points according to the first world coordinates of the multiple feature points and the world coordinates of the initial target sampling points matching the multiple feature points;
    如果所述匹配误差大于第一阈值,重新选取初始目标采样点,并重新确定匹配误差,直至重新确定的匹配误差小于所述第一阈值为止;If the matching error is greater than the first threshold, reselect the initial target sampling point and re-determine the matching error until the re-determined matching error is less than the first threshold;
    将所述匹配误差小于或等于所述第一阈值时确定的第二旋转关系,确定为所述目标旋转关系;将所述匹配误差小于或等于所述第一阈值时确定的第二平移关系,确定为所述目标平移关系。Determining the second rotation relationship determined when the matching error is less than or equal to the first threshold value as the target rotation relationship; determining the second translation relationship when the matching error is less than or equal to the first threshold value, Determined as the target translation relationship.
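The iteration in claims 8 to 11 closely resembles the classic iterative-closest-point (ICP) loop. A hedged numpy sketch, assuming nearest-neighbour selection of the initial target sampling points and a mean-distance matching error (both are one possible reading of the claims, and all names are illustrative):

```python
import numpy as np

def best_fit_transform(src, dst):
    # Least-squares rigid transform (Kabsch) with dst ≈ R @ src + t,
    # playing the role of the second rotation/translation relationship.
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def iterative_match(cam_pts, map_pts, first_threshold=1e-6, max_iter=50):
    # R, t act as the third rotation/translation relationship (the prior).
    R, t = np.eye(3), np.zeros(3)
    err = np.inf
    for _ in range(max_iter):
        # Second world coordinates of the feature points under the prior.
        guess = cam_pts @ R.T + t
        # Initial target sampling point: nearest map point to each guess.
        idx = np.linalg.norm(guess[:, None] - map_pts[None], axis=2).argmin(axis=1)
        targets = map_pts[idx]
        # Second rotation/translation from the current correspondences.
        R, t = best_fit_transform(cam_pts, targets)
        # Matching error: mean distance between the first world
        # coordinates and the matched sampling points.
        err = np.linalg.norm(cam_pts @ R.T + t - targets, axis=1).mean()
        if err <= first_threshold:
            break
    return R, t, err
```

The loop re-selects correspondences with the latest pose (claim 11) and stops once the error falls to or below the first threshold, returning the target rotation and translation.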
  9. The method according to claim 8, wherein selecting, from the plurality of sampling points, the initial target sampling point that matches the feature point comprises:
    acquiring a third rotation relationship and a third translation relationship of the camera coordinate system relative to the world coordinate system;
    determining second world coordinates of the feature point according to the third rotation relationship, the third translation relationship, and the camera coordinates of the feature point; and
    matching the second world coordinates of the feature point with the world coordinates of the plurality of sampling points to obtain the initial target sampling point.
  10. The method according to claim 8, wherein determining the matching error of the plurality of feature points according to the first world coordinates of the plurality of feature points and the world coordinates of the initial target sampling points matching the plurality of feature points comprises:
    determining a distance between the feature point and the initial target sampling point according to the first world coordinates of the feature point and the world coordinates of the initial target sampling point; and
    determining the matching error according to the distances between the plurality of feature points and the matched initial target sampling points.
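Claim 10 leaves open how the per-point distances are aggregated into one matching error; the mean is one natural choice (maximum or root-mean-square would also fit the claim). A sketch under that assumption:

```python
import numpy as np

def matching_error(first_world, target_world):
    # Euclidean distance between each feature point's first world
    # coordinates and its matched initial target sampling point,
    # aggregated by the mean (an assumption, not fixed by the claim).
    d = np.linalg.norm(first_world - target_world, axis=1)
    return d.mean()
```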
  11. The method according to claim 9, wherein reselecting the initial target sampling points if the matching error is greater than the first threshold comprises:
    if the matching error is greater than the first threshold, taking the second translation relationship as the third translation relationship, taking the second rotation relationship as the third rotation relationship, and reselecting the initial target sampling points.
  12. The method according to any one of claims 1 to 11, wherein the construction process of the point cloud map comprises:
    acquiring a plurality of sample images;
    processing the plurality of sample images to obtain a first sampling point set, the first sampling point set comprising at least the world coordinates of the sampling points in the plurality of sample images;
    acquiring further sample images other than the plurality of sample images;
    determining the world coordinates of the sampling points in the further sample images according to the first sampling point set and the acquired attribute information of the sampling points in the further sample images, to obtain a second sampling point set; and
    constructing the point cloud map according to the first sampling point set and the second sampling point set.
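The incremental construction of claim 12 — registering a further sample image against the existing point set and lifting its points into world coordinates — can be sketched as follows, assuming the sampling points carry 3-D camera-frame coordinates (e.g. from a depth sensor) and descriptor attributes, and using a Kabsch pose solve; all of these are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def kabsch(src, dst):
    # Least-squares rigid transform with dst ≈ R @ src + t.
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def extend_map(map_pts, map_desc, img_desc, img_cam_pts):
    # Match the new image's points to map points by nearest descriptor
    # (the "attribute information" here is assumed to be a descriptor).
    d = np.linalg.norm(img_desc[:, None] - map_desc[None], axis=2)
    matched = map_pts[d.argmin(axis=1)]
    # Pose of the new image from the matched correspondences.
    R, t = kabsch(img_cam_pts, matched)
    # Second sampling point set: the new image's points in world
    # coordinates, appended to the first set.
    return np.vstack([map_pts, img_cam_pts @ R.T + t])
```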
  13. The method according to claim 12, wherein processing the plurality of sample images to obtain the first sampling point set comprises:
    acquiring the image features and camera coordinates of the sampling points in the sample images;
    selecting, from the plurality of sample images, a first target image and a second target image that satisfy a second condition according to the image features of the sampling points in the plurality of sample images;
    determining a fourth rotation relationship and a fourth translation relationship between the first target image and the second target image;
    determining the world coordinates of the sampling points in the first target image according to the fourth rotation relationship, the fourth translation relationship, and the camera coordinates of the sampling points in the first target image; and
    determining the first sampling point set according to the world coordinates of the sampling points in each first target sample image.
  14. The method according to claim 13, wherein selecting, from the plurality of sample images, the first target image and the second target image that satisfy the second condition according to the image features of the sampling points in the plurality of sample images comprises:
    matching the plurality of sample images pairwise according to the image features of the sampling points in the plurality of sample images, to obtain a first matching pair set for each pair of sample images;
    removing, from the first matching pair set, sampling point matching pairs that do not satisfy a third condition, to obtain a second matching pair set;
    selecting, from the second matching pair sets, a target matching pair set whose number of matching pairs satisfies the second condition; and
    determining the two sample images corresponding to the target matching pair set as the first target image and the second target image.
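The pairwise selection in claim 14 can be sketched with descriptor matching. Here the "third condition" is assumed to be a mutual cross-check and the "second condition" to be the largest surviving match count — both common filters, neither fixed by the claim:

```python
import numpy as np

def cross_checked_matches(desc_a, desc_b):
    # First matching pair set: nearest neighbour in descriptor space;
    # keep only mutual matches (assumed "third condition").
    d = np.linalg.norm(desc_a[:, None] - desc_b[None], axis=2)
    ab = d.argmin(axis=1)               # best match in b for each a
    ba = d.argmin(axis=0)               # best match in a for each b
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]

def best_pair(image_descs):
    # Assumed "second condition": the pair of images whose second
    # matching pair set has the most surviving matches.
    best, best_matches = None, []
    for i in range(len(image_descs)):
        for j in range(i + 1, len(image_descs)):
            m = cross_checked_matches(image_descs[i], image_descs[j])
            if len(m) > len(best_matches):
                best, best_matches = (i, j), m
    return best, best_matches
```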
  15. A positioning apparatus, comprising:
    a first determining module, configured to determine a feature point in an image to be processed captured by an image acquisition device;
    an attribute information acquisition module, configured to acquire attribute information of the feature point; and
    a positioning module, configured to match the attribute information of the feature point with attribute information of a plurality of sampling points in a pre-built point cloud map to obtain a positioning result of the image acquisition device.
  16. The apparatus according to claim 15, wherein the positioning module comprises:
    a matching unit, configured to match the image feature of the feature point with the image features of the plurality of sampling points to obtain a first target sampling point; and
    a positioning unit, configured to determine the positioning result of the image acquisition device according to the camera coordinates of the feature point and the world coordinates of the first target sampling point.
  17. The apparatus according to claim 16, wherein the matching unit is configured to:
    determine a similarity between the sampling point and the feature point according to the image feature of the sampling point and the image feature of the feature point; and
    determine a sampling point whose similarity with the feature point satisfies a first condition as the first target sampling point.
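One possible reading of claim 17's "first condition" is highest cosine similarity above a threshold; a sketch under that assumption (the similarity measure, threshold, and names are illustrative):

```python
import numpy as np

def first_target(feat_desc, samp_descs, min_sim=0.8):
    # Cosine similarity between the feature point's descriptor and
    # every sampling point's descriptor.
    a = feat_desc / np.linalg.norm(feat_desc)
    B = samp_descs / np.linalg.norm(samp_descs, axis=1, keepdims=True)
    sims = B @ a
    best = int(sims.argmax())
    # Assumed "first condition": best similarity, and above a threshold;
    # otherwise no first target sampling point is returned.
    return best if sims[best] >= min_sim else None
```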
  18. The apparatus according to claim 16, wherein the positioning result comprises world coordinates and an orientation; the matching unit is configured to match, according to an iterative strategy, the camera coordinates of a plurality of feature points with the world coordinates of the plurality of sampling points to obtain a target rotation relationship and a target translation relationship of the camera coordinate system relative to the world coordinate system; and
    the positioning unit is configured to: determine the orientation of the image acquisition device according to the target rotation relationship; and determine the world coordinates of the image acquisition device according to the target translation relationship and the target rotation relationship.
  19. An electronic device, comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the program, implements the steps of the positioning method according to any one of claims 1 to 14.
  20. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the positioning method according to any one of claims 1 to 14.
PCT/CN2020/116924 2019-09-27 2020-09-22 Positioning method and apparatus, device, and storage medium WO2021057742A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910921484.6A CN110705574B (en) 2019-09-27 2019-09-27 Positioning method and device, equipment and storage medium
CN201910921484.6 2019-09-27

Publications (1)

Publication Number Publication Date
WO2021057742A1 true WO2021057742A1 (en) 2021-04-01

Family

ID=69197854

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/116924 WO2021057742A1 (en) 2019-09-27 2020-09-22 Positioning method and apparatus, device, and storage medium

Country Status (2)

Country Link
CN (1) CN110705574B (en)
WO (1) WO2021057742A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705574B (en) * 2019-09-27 2023-06-02 Oppo广东移动通信有限公司 Positioning method and device, equipment and storage medium
CN111652934B (en) * 2020-05-12 2023-04-18 Oppo广东移动通信有限公司 Positioning method, map construction method, device, equipment and storage medium
WO2022087916A1 (en) * 2020-10-28 2022-05-05 华为技术有限公司 Positioning method and apparatus, and electronic device and storage medium
CN114241045A (en) * 2021-11-04 2022-03-25 广西电网有限责任公司南宁供电局 Power transmission line forest fire distance measuring method and device, computer equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140064602A1 (en) * 2012-09-05 2014-03-06 Industrial Technology Research Institute Method and apparatus for object positioning by using depth images
CN106940704A (en) * 2016-11-25 2017-07-11 北京智能管家科技有限公司 A kind of localization method and device based on grating map
CN109146932A (en) * 2018-07-17 2019-01-04 北京旷视科技有限公司 Determine the methods, devices and systems of the world coordinates of target point in image
CN109461211A (en) * 2018-11-12 2019-03-12 南京人工智能高等研究院有限公司 Semantic vector map constructing method, device and the electronic equipment of view-based access control model point cloud
CN110705574A (en) * 2019-09-27 2020-01-17 Oppo广东移动通信有限公司 Positioning method and device, equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019169540A1 (en) * 2018-03-06 2019-09-12 斯坦德机器人(深圳)有限公司 Method for tightly-coupling visual slam, terminal and computer readable storage medium
CN110047142A (en) * 2019-03-19 2019-07-23 中国科学院深圳先进技术研究院 No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160317A (en) * 2021-04-29 2021-07-23 福建汇川物联网技术科技股份有限公司 Pan-tilt target tracking control method and device, pan-tilt control equipment and storage medium
CN113160317B (en) * 2021-04-29 2024-04-16 福建汇川物联网技术科技股份有限公司 PTZ target tracking control method and device, PTZ control equipment and storage medium
CN113379663A (en) * 2021-06-18 2021-09-10 特斯联科技集团有限公司 Space positioning method and device
CN115248430A (en) * 2021-09-23 2022-10-28 上海仙途智能科技有限公司 Target object positioning method, device, terminal and medium
CN115248430B (en) * 2021-09-23 2023-08-25 上海仙途智能科技有限公司 Target object positioning method, device, terminal and medium
CN114136316A (en) * 2021-12-01 2022-03-04 珠海一微半导体股份有限公司 Inertial navigation error elimination method based on point cloud characteristic points, chip and robot
CN114155242A (en) * 2022-02-08 2022-03-08 天津聚芯光禾科技有限公司 Automatic identification method and positioning method based on automatic identification method
CN114416764A (en) * 2022-02-24 2022-04-29 上海商汤临港智能科技有限公司 Map updating method, device, equipment and storage medium
CN114416764B (en) * 2022-02-24 2024-10-29 上海商汤临港智能科技有限公司 Map updating method, device, equipment and storage medium
CN114563687B (en) * 2022-02-25 2024-01-23 苏州浪潮智能科技有限公司 PCB fixing jig, automatic positioning method, system and storage medium
CN114563687A (en) * 2022-02-25 2022-05-31 苏州浪潮智能科技有限公司 PCB fixing jig, automatic positioning method and system and storage medium
CN114913352B (en) * 2022-05-05 2023-05-26 山东高速建设管理集团有限公司 Multi-source information space-time registration method and system based on joint similarity matching
CN114913352A (en) * 2022-05-05 2022-08-16 山东高速建设管理集团有限公司 Multi-source information space-time registration method and system based on joint similarity matching
CN116026342A (en) * 2023-03-29 2023-04-28 中国科学院西安光学精密机械研究所 Space target pose measurement method based on cluster elastic dispersion
CN116026342B (en) * 2023-03-29 2023-08-18 中国科学院西安光学精密机械研究所 Space target pose measurement method based on cluster elastic dispersion
CN116403380A (en) * 2023-06-08 2023-07-07 北京中科慧眼科技有限公司 Overrun monitoring method and device based on road side binocular camera
CN116974400A (en) * 2023-09-14 2023-10-31 深圳市磐鼎科技有限公司 Screen touch recognition method, device, equipment and storage medium
CN116974400B (en) * 2023-09-14 2024-01-16 深圳市磐鼎科技有限公司 Screen touch recognition method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN110705574B (en) 2023-06-02
CN110705574A (en) 2020-01-17

Similar Documents

Publication Publication Date Title
WO2021057742A1 (en) Positioning method and apparatus, device, and storage medium
WO2021057744A1 (en) Positioning method and apparatus, and device and storage medium
WO2021057743A1 (en) Map fusion method, apparatus, device and storage medium
WO2021057739A1 (en) Positioning method and device, apparatus, and storage medium
TWI777538B (en) Image processing method, electronic device and computer-readable storage media
CN105283905B (en) Use the robust tracking of Points And lines feature
US10033941B2 (en) Privacy filtering of area description file prior to upload
WO2021057745A1 (en) Map fusion method and apparatus, device and storage medium
US20160335275A1 (en) Privacy-sensitive query for localization area description file
EP4056952A1 (en) Map fusion method, apparatus, device, and storage medium
TWI745818B (en) Method and electronic equipment for visual positioning and computer readable storage medium thereof
CN111323024A (en) Positioning method and device, equipment and storage medium
WO2023151251A1 (en) Map construction method and apparatus, pose determination method and apparatus, and device and computer program product
JP7336653B2 (en) Indoor positioning method using deep learning
KR20220055072A (en) Method for indoor localization using deep learning
CN112750164A (en) Lightweight positioning model construction method, positioning method and electronic equipment
Sui et al. An accurate indoor localization approach using cellphone camera
WO2022252036A1 (en) Method and apparatus for acquiring obstacle information, movable platform and storage medium
Dong et al. A rgb-d slam algorithm combining orb features and bow
KR102384177B1 (en) Auto topology mapping method based on omni-directional image and system thereof
KR102431122B1 (en) Method and system for map tartet tracking
KR102495005B1 (en) Method for indoor localization using deep learning
JP2018198030A (en) Information processor and program
KR20210082027A (en) Auto topology mapping method based on omni-directional image and system thereof
WO2022089723A1 (en) Mapping server and device communicating with mapping server

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20868569; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20868569; Country of ref document: EP; Kind code of ref document: A1)