Detailed Description
The invention will be described in detail hereinafter with reference to the drawings and embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method provided by the first embodiment of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking the mobile terminal as an example, fig. 1 is a block diagram of a hardware structure of the mobile terminal of a pose determination method for a mobile device according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal 10 may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and optionally may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to the pose determination method of the mobile device in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the above-described method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
As an alternative embodiment, the following explains terms referred to in the present application:
Odometer: a device, typically a wheel encoder, used to make a preliminary estimate of the angle and distance variation of the mobile robot, that is, to estimate the pose of the robot at the current moment from the pose at the previous moment and the pose variation since the previous moment;
Two-dimensional lidar: a sensor used for acquiring two-dimensional plane information, that is, for detecting the two-dimensional contour information of the surrounding environment;
in this embodiment, a pose determination method for a mobile device running on the mobile terminal is provided, and fig. 2 is a flowchart of the pose determination method for a mobile device according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, under the condition that the mobile equipment is determined to be lost in positioning, a first image shot by the mobile equipment is obtained;
step S204, searching a second image with the similarity larger than or equal to a first preset threshold value with the first image and a first pose associated with the second image in a pre-established image library, wherein the second image is an image shot by the mobile equipment in the first pose;
step S206, rotating and/or translating the first pose to obtain N first candidate poses, wherein N is an integer greater than or equal to 1, and the N first candidate poses comprise the first pose;
and step S208, determining the current pose of the mobile equipment according to the laser positioning scores of the N first candidate poses.
Through the above steps, when it is determined that the mobile device has lost its positioning, a first image captured by the mobile device is acquired; a second image whose similarity to the first image is greater than or equal to a first preset threshold, together with a first pose associated with the second image, is searched for in a pre-established image library; the first pose is rotated and/or translated to obtain N first candidate poses; and the current pose of the mobile device is determined according to the laser positioning scores of the N first candidate poses. The current pose of the mobile device is thus determined through laser positioning scores, which solves the problem of large robot pose positioning errors and improves the accuracy of robot pose positioning.
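The flow of steps S202 to S208 can be sketched as follows. This is a minimal illustrative sketch in Python; the helper names, the similarity callable, and the 45-degree angle step are assumptions for illustration, not the embodiment's actual interfaces.

```python
import math

# Minimal illustrative sketch of steps S202-S208; the helper names, the
# similarity callable, and the 45-degree angle step are assumptions.

def find_similar_image(image_library, similarity, first_image, threshold=0.8):
    """Step S204: image_library is a list of (image, pose) pairs; return the
    best-matching entry if its similarity meets the threshold, else None."""
    best = max(image_library, key=lambda entry: similarity(first_image, entry[0]))
    return best if similarity(first_image, best[0]) >= threshold else None

def candidate_poses(first_pose, angle_step_deg=45):
    """Step S206: rotate the first pose by multiples of a preset angle
    difference; the first pose itself (angle 0) is among the candidates."""
    x, y, theta = first_pose
    return [(x, y, (theta + math.radians(a)) % (2 * math.pi))
            for a in range(0, 360, angle_step_deg)]

def determine_current_pose(candidates, laser_score):
    """Step S208: keep the candidate with the highest laser positioning score."""
    return max(candidates, key=laser_score)
```

A 45-degree step yields eight rotation candidates including the first pose itself, consistent with the requirement that the N first candidate poses comprise the first pose.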
Alternatively, the execution subject of the above steps may be a terminal or the like, but is not limited thereto.
As an alternative implementation, the mobile device may be a mobile robot. As shown in fig. 3, the mobile robot in this application includes a mobile chassis, a 2D lidar sensor, and a camera, and a motion controller, a motor, a battery, an embedded computer, an odometer, and the like are disposed in the mobile chassis.
As an optional implementation, a pre-established image library stores images captured in advance by the mobile robot at various poses. In this embodiment, when it is determined that the mobile robot has lost its positioning, a first image is captured by the robot's camera, a second image with similarity greater than or equal to a first preset threshold is found in the image library by matching feature points and descriptors, the first pose associated with the second image is used as an initial value, and the current pose of the robot is then further determined from that pose by laser relocation.
As an optional implementation, when the robot loses its positioning, feature points are extracted and descriptors computed for the current image frame (the first image) collected by the robot, and the loop-closure detection algorithm DBOW3 is then used to detect whether the pre-established image library contains an image frame with very high similarity. A first preset threshold may be set, and an image frame whose similarity is greater than or equal to the first preset threshold is taken as the second image similar to the first image. The first pose of the second image is taken as a preliminary estimate of the current pose of the robot.
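The similarity search described above can be illustrated with a minimal bag-of-visual-words comparison. This is a hypothetical sketch: the real DBOW3 library scores weighted vocabulary-tree vectors built over binary descriptors, whereas here quantized descriptor "words" are assumed to be given directly.

```python
import math
from collections import Counter

# Hypothetical bag-of-visual-words similarity; real DBoW3 scores weighted
# vocabulary-tree vectors, while here quantized descriptor "words" are given.

def bow_vector(words):
    """Term-frequency vector over quantized descriptor words."""
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def bow_similarity(v1, v2):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(weight * v2.get(word, 0.0) for word, weight in v1.items())
    n1 = math.sqrt(sum(w * w for w in v1.values()))
    n2 = math.sqrt(sum(w * w for w in v2.values()))
    return dot / (n1 * n2) if n1 > 0 and n2 > 0 else 0.0
```

Image frames whose similarity to the query vector meets the first preset threshold would then be returned as second-image candidates.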
As an alternative embodiment, feature point matching may be performed via descriptors between the feature points on the second image and the feature points of the first image. Feature points on the second image are searched for only within a certain range around the feature points of the first image, which speeds up the matching computation. Outliers among the matched feature points in the first and second images are removed using the epipolar geometric constraint, and the first pose, as a preliminary estimate of the pose of the first image, is solved with the PnP algorithm using the 3D points corresponding to the matched points on the second image and the 2D points of the first image.
The epipolar geometric constraint formula is:

x_0^T E x_1 = 0

where x_0 represents the normalized coordinates of the matched feature points on the first image, x_1 represents the normalized coordinates of the matched feature points on the second image, and E denotes the essential matrix.
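The epipolar outlier rejection above can be sketched as follows, assuming matches are already expressed in normalized homogeneous coordinates and that the essential matrix E is known; the residual threshold is an illustrative value.

```python
# Epipolar outlier test on matches in normalized homogeneous coordinates;
# E is assumed known and the residual threshold is an illustrative value.
def epipolar_inliers(matches, E, tol=1e-3):
    """Keep matches (x0, x1) whose residual |x0^T E x1| is below tol."""
    def residual(x0, x1):
        Ex1 = [sum(E[i][j] * x1[j] for j in range(3)) for i in range(3)]
        return abs(sum(x0[i] * Ex1[i] for i in range(3)))
    return [(x0, x1) for x0, x1 in matches if residual(x0, x1) < tol]
```

For a perfect match the residual is exactly zero; in practice a small tolerance absorbs noise in the normalized coordinates.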
Optionally, the determining the current pose of the mobile device according to the laser positioning scores of the N first candidate poses comprises: determining the current pose of the mobile equipment by using a branch and bound method and through scores of N groups of laser point clouds of the mobile equipment on N first candidate poses respectively falling into a group of probability grid maps with different resolutions, wherein the group of probability grid maps with different resolutions are obtained by down-sampling an original probability grid map, the original probability grid map comprises a plurality of grids, and a numerical value marked on each grid is used for representing the score of the laser falling onto the grid.
As an alternative embodiment, a prior map created by laser Simultaneous Localization and Mapping (SLAM) is saved as probability grid maps at different resolutions. The value identified on each grid in the probability grid map may be a probability value indicating the probability that the grid is occupied by an obstacle; the higher the probability, the higher the grid's score. In this embodiment, as shown in fig. 4, which is a schematic diagram of an original probability grid map according to an alternative embodiment of the present invention, each box in the map represents a grid, and the value identified on the grid, such as 0.4 or 0.7, represents the probability that the grid is occupied by an obstacle; this probability value can also serve as the score of a laser point cloud falling on the grid. In this embodiment, pose score calculation means projecting the laser point cloud into the probability grid map based on the current pose: if a laser point falls on a probability grid, the value of that grid is taken as the score of that point, and finally the average score over all points in the point cloud data is taken as the score of the current pose.
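The pose-score computation described above can be sketched as follows; the sparse-dictionary grid representation and the zero score for unmapped cells are simplifying assumptions.

```python
import math

# Sketch of the pose-score computation: transform each laser point into the
# world frame with the candidate pose, look up the probability of the grid
# cell it falls in, and average; the sparse dict grid and the zero score
# for unmapped cells are simplifying assumptions.
def pose_score(pose, scan, grid, resolution):
    """pose = (x, y, theta); scan: (px, py) points in the robot frame;
    grid: dict mapping (ix, iy) cell indices to probability values."""
    x, y, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    total = 0.0
    for px, py in scan:
        wx = x + c * px - s * py  # rotate into the world frame, then translate
        wy = y + s * px + c * py
        cell = (int(wx // resolution), int(wy // resolution))
        total += grid.get(cell, 0.0)  # unmapped cells contribute 0
    return total / len(scan) if scan else 0.0
```

With the fig. 4 example values, a scan hitting a 0.4 cell and a 0.7 cell scores 0.55, the average over the two points.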
As an alternative embodiment, a search range of a certain size may be added automatically around the position of the preliminarily estimated first pose; for example, a search box of a predetermined size, such as 2 m × 2 m, may be generated by extending predetermined ranges up, down, left, and right with the position of the first pose as the center. A series of rotation candidate poses is generated from the first pose and a first preset angle difference, where the first preset angle difference may be determined according to actual conditions and may be 30 degrees, 45 degrees, 60 degrees, and so on. In this embodiment, the first pose may also be translated based on a probability grid map of any resolution (for example, the minimum-resolution probability grid map), stepping from the first pose to the search box boundary to obtain translation candidate poses, where the N first candidate poses include the rotation candidate poses and/or the translation candidate poses. In this embodiment, each obtained rotation pose may likewise be translated within the search box according to a probability grid map of any resolution, stepping from the position of each rotation pose to the search box boundary to generate a series of translation candidate poses.
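Candidate generation over the search box can be sketched as follows; the box size, translation step, and angle step are illustrative values, not the embodiment's actual parameters.

```python
import math

# Candidate generation sketch: rotation candidates at a preset angle step,
# each translated across a search box; box size, step, and angle step are
# illustrative values.
def generate_candidates(first_pose, box_half=1.0, step=0.32, angle_step_deg=45):
    """Return (x, y, theta) candidates covering the search box around first_pose."""
    x0, y0, theta0 = first_pose
    n = int(box_half / step)  # translation steps per side of the box
    candidates = []
    for a in range(0, 360, angle_step_deg):
        theta = (theta0 + math.radians(a)) % (2 * math.pi)
        for i in range(-n, n + 1):
            for j in range(-n, n + 1):
                candidates.append((x0 + i * step, y0 + j * step, theta))
    return candidates
```

The step would naturally be tied to the resolution of the coarsest probability grid map, so that each translation candidate lands on a distinct coarse cell.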
As an alternative, scores of candidate poses are determined on probability grid maps at different resolutions. Suppose an original probability grid map with 8 cm resolution is down-sampled by 4 layers to obtain a map with a resolution of 1.28 m; the number of down-sampling layers is determined by the size of the prior map. Similarly, the 2 cm high-resolution map is down-sampled by 2 layers to obtain an 8 cm resolution map, so that the result searched on the 8 cm resolution prior map can transition accurately to the 2 cm high-resolution prior map.
Based on the search range represented by the search box on the minimum-resolution probability grid map, the score S_i1 of every candidate pose (x, y, θ) in each grid of the search box at the minimum resolution can be calculated, and the optimal score S_B over all grids in the search range can then be found. Fig. 5 is a schematic diagram of a branch-and-bound method according to an alternative embodiment of the present invention. The optimal pose is selected as the current pose of the robot by the branch-and-bound method, which specifically includes the following steps:
Suppose the grid corresponding to the optimal score S_B is G_B. With grid G_B as the parent node, the upper bounds of the leaf nodes of its subtree are solved. That is, G_B is branched: among its four branches (down-sampling layer 1), the grid with the highest score S_B1 is denoted G_B2; G_B2 is then branched in turn (down-sampling layer 2), and so on until the original map resolution (down-sampling layer 4) is reached, where the highest leaf-node score S_B4 is determined. At this point, the scores S_i3 of the other nodes at layer 3 of the subtree are compared with S_B4. If S_i3 < S_B4, the subtree with that node as its parent is pruned, and no pose scores are calculated for any candidates under that node. If S_i3 > S_B4, the node is branched to determine whether any node in the branch scores higher than S_B4, and if so, the upper bound is updated. The same steps are repeated from layer 3 through layer 2 to layer 1, updating the upper bound B1 of the subtree T1. The scores S_i1 of the other nodes at layer 1 are then compared with B1: if S_i1 < B1, that node and the entire subtree with it as parent are pruned, quickly discarding those candidates. If S_i1 > B1, the node is branched; if a branch node's score is less than B1, all subtrees with that branch node as parent are pruned, and if a branch node's score is greater than B1, branching continues, and so on until the original resolution (layer 4) is reached, where it is judged whether the leaf-node score is greater than B1: if so, the incumbent best is updated, otherwise nothing is done. The same operation is performed in turn on the other nodes at layer 1, finally yielding the optimal pose on the 8 cm resolution prior map, which is taken as the current pose of the robot.
Based on the pose found on the 8 cm resolution prior map, with that pose as the center, a search box of 0.2 m × 0.2 m and a search angle limited to ±10 degrees are used to perform a fast search on the 2 cm resolution prior map with the same branch-and-bound procedure, yielding a fine search result on the 2 cm resolution map. Here, the 2 cm resolution map is down-sampled by 2 layers to obtain an 8 cm resolution map, so that the pose found on the 8 cm coarse-resolution prior map transitions accurately. Through this branch-and-bound method on the multi-resolution map, the search result on the coarse-resolution map narrows the search range on the high-resolution map and improves the search speed on the high-resolution map.
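The pruning logic above can be reduced to the following generic branch-and-bound sketch, in which `score(node)` is assumed to be an upper bound on the scores of all leaves below that node (exactly the property the max-pooled coarse maps provide), and the four-way branching mirrors the four spatial children of a coarse cell.

```python
# Generic branch-and-bound sketch: score(node) is assumed to be an upper
# bound on all leaves below node (the property the max-pooled coarse maps
# provide); each node has four spatial children, and leaves sit at `depth`.
def branch_and_bound(score, depth, node=(), best=(-1.0, None)):
    """Return (best_score, best_leaf) over all leaves below `node`."""
    best_score, _ = best
    s = score(node)
    if s <= best_score:
        return best  # prune: no leaf below this node can beat the incumbent
    if len(node) == depth:
        return (s, node)  # leaf at full resolution becomes the new incumbent
    for child in (0, 1, 2, 3):
        best = branch_and_bound(score, depth, node + (child,), best)
    return best
```

Because coarse-level scores upper-bound fine-level scores, an entire subtree is skipped as soon as its coarse score falls at or below the incumbent, which is what makes the multi-resolution search fast.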
Optionally, the down-sampling the original probability grid map to obtain the set of probability grid maps with different resolutions includes: performing down-sampling on the original probability grid map through a bidirectional sliding window to obtain a probability grid map with a preset resolution, wherein the resolution of the probability grid map with the preset resolution is smaller than that of the original probability grid map; and sequencing the probability grid map with the preset resolution and the original probability grid map according to the resolution to obtain a group of probability grid maps with different resolutions.
As an optional implementation, the bidirectional sliding window includes an up-down sliding window and a left-right sliding window; for example, fig. 6 is a schematic diagram of a probability map obtained after the original probability grid map is slid left and right according to an optional embodiment of the present invention, and fig. 7 is a schematic diagram of a probability map obtained after the original probability grid map is slid up and down according to an optional embodiment of the present invention. Each grid probability of the down-sampled resolution map is the maximum value over all grids in the corresponding sliding-window region of the original resolution map; that is, the pose score at the down-sampled resolution is guaranteed to be an upper bound on the pose score at the original resolution. Pose score calculation means projecting the laser point cloud into the probability grid map based on the current pose, taking the probability value of the corresponding grid as the score of the current point when the laser falls on that grid, and finally taking the average score of all points in the frame of point cloud data as the score of the current pose.
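The sliding-window maximum described above amounts to max-pooling the fine grid into coarse cells, which can be sketched as follows (sparse-dictionary grid representation assumed):

```python
# Max-pooling sketch of the sliding-window down-sampling: each coarse cell
# keeps the maximum probability of the fine cells it covers, so coarse
# pose scores upper-bound fine pose scores (sparse dict grid assumed).
def downsample_max(grid, factor):
    """grid: dict (ix, iy) -> probability at fine resolution."""
    coarse = {}
    for (ix, iy), p in grid.items():
        key = (ix // factor, iy // factor)
        if p > coarse.get(key, 0.0):
            coarse[key] = p
    return coarse
```

Applying this repeatedly with factor 2 yields the pyramid of resolutions used by the branch-and-bound search.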
As an optional implementation, let the pose of the robot at time t be

x_t = [x  y  θ]^T,

and let the k-th beam of laser data from the lidar have coordinates [x_{k,sens}  y_{k,sens}  θ_{k,sens}]^T relative to the robot coordinate system. The coordinates of the k-th beam of laser data in the world coordinate system are then:

x_k^w = [x + x_{k,sens} cos θ − y_{k,sens} sin θ,   y + x_{k,sens} sin θ + y_{k,sens} cos θ,   θ + θ_{k,sens}]^T

where x_k^w is the coordinate of the k-th laser beam in the world coordinate system.
Optionally, the method further comprises: acquiring laser point cloud of the mobile equipment at the current pose; determining the matching rate of the laser point cloud of the mobile device at the current pose and the original probability grid map; and under the condition that the matching rate is smaller than or equal to a second preset threshold value, updating the original probability grid map according to the laser point cloud of the mobile equipment in the current pose.
As an alternative embodiment, the environment in which the robot operates may change, requiring an update of the prior map. In this embodiment, during robot positioning, whether the original probability grid map still matches the laser data at the current pose is determined by judging whether the matching score between the current laser data and the prior map (the original probability grid map) exceeds a certain threshold (for example, 40 points); if it does not, the original probability grid map may be updated according to the laser data at the current pose. Updating the prior map in real time improves the accuracy of the current pose calculation.
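The map-update rule can be sketched as follows; the overwrite-style update and the threshold value are assumptions for illustration, since the embodiment does not specify the exact update operator.

```python
# Map-update sketch: when the scan's match score against the prior map is
# too low, write the scan's observed cells back into the map; the simple
# overwrite update and the threshold value are assumptions.
def maybe_update_map(grid, scan_cells, match_score, threshold=0.4):
    """grid and scan_cells: dicts mapping (ix, iy) to occupancy probability."""
    if match_score <= threshold:
        grid.update(scan_cells)  # overwrite prior cells with fresh observations
        return True
    return False
```

A production system would more likely blend observations into the map probabilistically rather than overwrite them, but the trigger condition is the same.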
Optionally, before searching the pre-established image library for a second image whose similarity to the first image is greater than or equal to the first preset threshold, the method further includes: when it is determined, during movement of the mobile device in the original probability grid map, that the moving distance or rotation angle is greater than or equal to a preset threshold, taking the pose of the mobile device as a key pose; and establishing an association between the key frame acquired by the mobile device at the key pose and the corresponding key pose and storing it in the image library, wherein the key poses include the first pose and the key frames include the first image.
As an optional implementation, a probability grid map of the robot's current operating environment is constructed based on laser simultaneous localization and mapping (SLAM), and a positioning algorithm is then run along a fixed route in the map; the precision of the laser positioning algorithm is higher than that of visual positioning. As the robot travels along the fixed route, the positioning algorithm estimates its current pose in real time; a key frame is selected according to whether the translation or rotation variation between poses, or the elapsed time, exceeds a certain threshold; the feature points extracted from the key frame and their computed descriptors are added to the visual feature library; and the threshold may be determined according to actual conditions.
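The keyframe-selection criterion can be sketched as follows; the distance and angle thresholds are illustrative values to be determined according to actual conditions, as noted above.

```python
import math

# Keyframe-selection sketch: take a keyframe when translation or rotation
# since the last keyframe exceeds a threshold; the threshold values are
# illustrative and would be tuned to actual conditions.
def is_keyframe(last_pose, pose, d_thresh=0.5, a_thresh=math.radians(30)):
    """Poses are (x, y, theta); the angle difference is wrapped to [-pi, pi]."""
    dx, dy = pose[0] - last_pose[0], pose[1] - last_pose[1]
    da = abs((pose[2] - last_pose[2] + math.pi) % (2 * math.pi) - math.pi)
    return math.hypot(dx, dy) >= d_thresh or da >= a_thresh
```

Wrapping the angle difference avoids spuriously large deltas when the heading crosses the ±π boundary.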
As an optional embodiment, for the feature points extracted from the keyframe, the 3D point coordinates corresponding to the feature points are estimated by using the camera perspective projection model. Let the pose of the keyframe be T, the pixel coordinates of an extracted feature point on the frame be p, the corresponding 3D point coordinates be P_W, and the camera intrinsics be K. Then:

s p = K T P_W   (1)

where s is the depth of the point in the camera frame.
in the above equation, the pose T and the pixel coordinate p of the keyframe are known, and the 3D point coordinate corresponding to the feature point can be obtained. And calculating a descriptor of each feature point for the extracted feature points, and adding the frame feature points into an image library. And for adjacent key frames, extracting feature points and calculating descriptors on the adjacent frames, searching for matching points according to the descriptors, and performing outlier rejection according to epipolar geometric constraints.
x_0^T E x_1 = 0   (2)

where x_0 represents the normalized coordinates of matched feature points on the current frame, x_1 represents the normalized coordinates of matched feature points on the adjacent frame, and E represents the essential matrix. For feature points on the current frame that can be matched, the 3D coordinates of the corresponding feature points in the previous frame's camera are used as the 3D coordinates of the matched feature points of this frame, and the 3D point coordinates corresponding to unmatched feature points are calculated by formula (1). In later visual repositioning, the pose of the current frame is estimated using the pixel coordinates matched between the current frame and the visual image library together with the corresponding 3D point coordinates.
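Recovering a 3D point for a keyframe feature can be sketched as follows, assuming the point's depth d in the camera frame is available (for example from a matched adjacent frame) and that the keyframe pose T = (R, t) maps the camera frame into the world frame; all numeric values in the test are hypothetical.

```python
# Back-projection sketch: lift pixel p to a camera-frame point at assumed
# depth d via the intrinsics, then map it to the world frame with the
# keyframe pose (R, t); the depth-availability assumption is ours.
def backproject(p, d, K, R, t):
    """p = (u, v); K = ((fx, 0, cx), (0, fy, cy), (0, 0, 1))."""
    fx, cx, fy, cy = K[0][0], K[0][2], K[1][1], K[1][2]
    xc = ((p[0] - cx) / fx * d, (p[1] - cy) / fy * d, d)  # camera-frame point
    # world point = R * camera point + t
    return tuple(sum(R[i][j] * xc[j] for j in range(3)) + t[i] for i in range(3))
```

This is the inverse direction of equation (1): projection maps P_W to p, while back-projection with a known depth maps p back to P_W.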
Optionally, the method further comprises: acquiring a third image shot by the mobile equipment at the current pose; determining the matching rate of a third image and the second image; and under the condition that the matching rate is smaller than or equal to a third preset threshold value, updating the second image in the image library to be the third image, and updating the first pose corresponding to the second image to be the current pose.
As an alternative embodiment, the environment in which the robot operates may change, requiring an update of the image library used for visual repositioning. In this embodiment, during robot positioning, whether to update the image library is determined by judging whether the matching score of the current laser data against the prior map (the original probability grid map) exceeds a certain threshold (for example, 40 points), where the threshold may be determined according to actual conditions and is not limited here. When the score is below the threshold and the image frames in the library whose poses are close to the current pose have low similarity with the current image, the features corresponding to those nearby image frames are deleted from the library and the features extracted from the current frame are added. In this embodiment, after the robot's current pose is determined, the robot captures an image at the current pose to obtain a third image. The third image is compared with the second image in the image library; if the matching rate is below a preset threshold, indicating that the stored second image no longer matches what the robot sees at the current pose, the second image stored in the library is updated to the image captured by the robot at the current pose. Updating the images in the image library in real time improves the accuracy of the preliminary estimate of the robot's current pose.
Optionally, the method further comprises: if the second image is not found in the pre-established image library, controlling the mobile equipment to move for a preset distance or rotate for a second preset angle; acquiring a fourth image shot by the mobile equipment; searching a fifth image with the similarity greater than or equal to a first preset threshold value and a second pose associated with the fifth image in a pre-established image library, wherein the fifth image is an image shot by the mobile equipment in the second pose; rotating the second pose according to the first preset angle difference to obtain N second candidate poses, wherein N is an integer greater than or equal to 1, and the N second candidate poses comprise the second pose; determining a current pose of the mobile device according to the laser positioning scores of the N second candidate poses.
As an optional embodiment, when visual relocation is used and no image frame with a high matching degree can be found for the current frame image in the visual image library, the robot is first rotated or moved (with the obstacle-avoidance function enabled) to change its field of view, and it is judged again whether an image frame with a high matching degree can be found in the image library. If an image frame with high similarity can be found, feature matching is performed, outliers are rejected, and the current pose is estimated as the initial value for laser local relocation.
If no image frame with high similarity can be found, the current initial pose is recovered using the pose from a period shortly before positioning was lost together with the odometer data recorded over that period, or using the pose at machine start-up together with the odometer data accumulated since start-up, and this is used as the initial value for laser local relocation. Let the pose at the previous moment be x_{t-1} = [x  y  θ]^T, where x and y are the horizontal and vertical coordinates in the world coordinate system, representing the position of the robot in the two-dimensional plane of the x and y axes, and θ represents the heading of the robot in that plane. The pose at the current moment can then be predicted using the motion model, whose equation is shown in equation (4).
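Recovering the current initial pose from recorded odometer data can be sketched as a simple dead-reckoning recursion; the increment format (forward distance and heading change per step) is an assumed simplification of the motion model.

```python
import math

# Dead-reckoning sketch for recovering an initial pose from odometer data;
# the increment format (forward distance, heading change) is an assumed
# simplification of the motion model.
def dead_reckon(pose, odom_increments):
    """pose = (x, y, theta); increments are (d_forward, d_theta) per step."""
    x, y, theta = pose
    for d, dtheta in odom_increments:
        x += d * math.cos(theta)
        y += d * math.sin(theta)
        theta = (theta + dtheta) % (2 * math.pi)
    return (x, y, theta)
```

Starting from the last known pose x_{t-1} and replaying the stored increments yields the initial value handed to laser local relocation.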
As an alternative implementation, FIG. 8 is a flow chart of vision and laser repositioning according to an alternative embodiment of the present invention. When the visual bag-of-words model is constructed, key frames are selected based on the laser-positioning pose and its variation, which facilitates building the visual image library and makes the poses stored in the library more accurate. When the robot's working environment changes, the images in the image library are updated appropriately according to the laser positioning scores and the matching degree between images acquired at the current pose and images at nearby poses in the library, improving the success rate of visual repositioning under environmental change.
The robot pose estimated by visual repositioning has a certain error; using the visual repositioning result as the initial value of laser repositioning improves the accuracy of the robot's initial position, makes the positioning converge to the optimum more quickly, and guarantees positioning accuracy. When the device loses positioning or has just started and visual repositioning fails, the device stores the last correct pose before positioning was lost together with the odometer data from that pose to the current pose (or the pose before shutdown) as the initial value for laser local repositioning, ensuring the success rate of relocation and the speed of the pose search. Alternatively, when visual repositioning fails, the device, with obstacle avoidance enabled, rotates or moves in an arc to change its field of view while recording the odometer data along the route, and uses the visual repositioning result, or the pose recovered from the previous correct pose and inertial navigation information, as the initial value for laser repositioning, ensuring successful relocation.
The branch-and-bound method based on the multi-resolution map is adopted to accelerate laser local repositioning: the coarse-resolution map reduces the number of candidate solutions, speeding up laser repositioning, and map down-sampling based on the coarse-resolution map reduces map memory usage. On the basis of the relocation search result on the coarse-resolution map, the search range on the high-resolution prior map is narrowed, so the search result on the high-resolution map is obtained quickly; and since the resolution of the high-resolution prior map is consistent with that of the prior map used for robot positioning, the accuracy of local laser relocation is guaranteed.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a pose determination apparatus for a mobile device is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, which have already been described and are not described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 9 is a block diagram of a configuration of a pose determination apparatus of a mobile device according to an embodiment of the present invention, as shown in fig. 9, the apparatus including: an obtaining module 92, configured to obtain a first image captured by a mobile device when it is determined that the mobile device is lost in positioning; a searching module 94, configured to search, in a pre-established image library, a second image with a similarity greater than or equal to a first preset threshold with the first image, and a first pose associated with the second image, where the second image is an image captured by the mobile device in the first pose; a rotation module 96, configured to perform rotation and/or translation processing on the first pose to obtain N first candidate poses, where N is an integer greater than or equal to 1, and the N first candidate poses include the first pose; a determining module 98, configured to determine the current pose of the mobile device according to the laser positioning scores of the N first candidate poses.
Optionally, the apparatus is further configured to determine the current pose of the mobile device according to the laser positioning scores of the N first candidate poses in the following manner: determining the current pose of the mobile device by a branch and bound method, using the scores obtained when the N laser point clouds of the mobile device at the N first candidate poses respectively fall into a set of probability grid maps of different resolutions, where the set of probability grid maps of different resolutions is obtained by down-sampling an original probability grid map, the original probability grid map includes a plurality of grids, and the value marked on each grid represents the score of a laser point falling onto that grid.
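The branch and bound idea can be sketched as follows. Because each coarse cell stores the maximum of the fine cells it covers, a candidate's score on the coarse map is an upper bound on its score on the original map, so candidates whose bound cannot beat the incumbent are pruned without fine evaluation. The grid contents, block factor, and candidate scans below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def coarse_from_fine(fine, factor):
    """Down-sample by taking the max over factor x factor blocks (upper bound)."""
    h, w = fine.shape
    coarse = np.zeros((h // factor, w // factor))
    for i in range(coarse.shape[0]):
        for j in range(coarse.shape[1]):
            coarse[i, j] = fine[i*factor:(i+1)*factor, j*factor:(j+1)*factor].max()
    return coarse

def score(grid, points):
    """Sum the grid values that the (already transformed) laser points fall on."""
    return sum(grid[i, j] for i, j in points
               if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1])

def branch_and_bound(fine, factor, candidate_points):
    """candidate_points: {pose_id: fine-resolution cell indices of its scan}.
    Bound every candidate on the coarse map first; evaluate on the fine map
    only those whose upper bound beats the best fine score found so far."""
    coarse = coarse_from_fine(fine, factor)
    bound_of = lambda p: score(coarse, [(i // factor, j // factor)
                                        for i, j in candidate_points[p]])
    order = sorted(candidate_points, key=bound_of, reverse=True)
    best_pose, best_score = None, -np.inf
    for pose in order:
        if bound_of(pose) <= best_score:
            continue  # prune: this branch cannot beat the incumbent
        s = score(fine, candidate_points[pose])
        if s > best_score:
            best_pose, best_score = pose, s
    return best_pose, best_score
```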
Optionally, the apparatus is further configured to perform the down-sampling on the original probability grid map to obtain the set of probability grid maps of different resolutions in the following manner: down-sampling the original probability grid map through a bidirectional sliding window to obtain a probability grid map of a preset resolution, where the preset resolution is lower than the resolution of the original probability grid map; and sorting the probability grid map of the preset resolution and the original probability grid map by resolution to obtain the set of probability grid maps of different resolutions.
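One way to read the "bidirectional sliding window" is a two-pass sliding-window maximum: running a 1-D window max along the rows and then along the columns gives each cell the maximum over a window-by-window neighbourhood, after which the grid can be subsampled to the lower resolution. The sketch below follows that reading; the window size and subsampling stride are illustrative assumptions.

```python
import numpy as np

def window_max_1d(row, window):
    """Forward sliding-window maximum over a 1-D array."""
    out = np.empty_like(row)
    n = len(row)
    for i in range(n):
        out[i] = row[i:min(i + window, n)].max()
    return out

def downsample_probability_grid(grid, window):
    """Window max along rows, then along columns, then subsample by the
    window stride to obtain the lower-resolution probability grid map."""
    pass1 = np.apply_along_axis(window_max_1d, 1, grid, window)
    pass2 = np.apply_along_axis(window_max_1d, 0, pass1, window)
    return pass2[::window, ::window]
```

Because every output cell is a maximum over original cells, a score computed on the down-sampled map never underestimates the score on the original map, which is what the branch and bound search relies on.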
Optionally, the apparatus is further configured to acquire a laser point cloud of the mobile device at the current pose; determine the matching rate between the laser point cloud of the mobile device at the current pose and the original probability grid map; and, when the matching rate is less than or equal to a second preset threshold, update the original probability grid map according to the laser point cloud of the mobile device at the current pose.
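A hedged sketch of this map-maintenance check: here the matching rate is taken as the fraction of laser points that land on grid cells above an occupancy level, and the update simply raises the scanned cells toward occupied. The occupancy level, threshold, and update rule are placeholder assumptions.

```python
import numpy as np

OCCUPIED_LEVEL = 0.5  # assumed cut-off for treating a cell as occupied

def match_rate(grid, points):
    """Fraction of laser points falling on occupied cells of the grid map."""
    hits = sum(1 for i, j in points if grid[i, j] > OCCUPIED_LEVEL)
    return hits / len(points)

def maybe_update_map(grid, points, second_threshold):
    """If the matching rate is <= the second preset threshold, update the
    original grid map from the current laser point cloud."""
    if match_rate(grid, points) <= second_threshold:
        for i, j in points:
            grid[i, j] = max(grid[i, j], 0.9)  # raise hit cells toward occupied
        return True
    return False
```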
Optionally, the apparatus is further configured to, before searching the pre-established image library for a second image whose similarity to the first image is greater than or equal to the first preset threshold, determine, during movement of the mobile device in the original probability grid map, the pose of the mobile device as a key pose whenever the moving distance or the rotation angle is greater than or equal to a preset threshold; and to establish an association between the key frame acquired by the mobile device at each key pose and the corresponding key pose and store the association in the image library, where the key poses include the first pose and the key frames include the first image.
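The key-pose selection can be sketched as follows: while the device moves, a new key frame and its key pose are stored whenever the distance travelled or the angle turned since the last key pose reaches a threshold. The thresholds and the frame payload are placeholders; the real library would store images and would likely handle angle wrap-around.

```python
import math

class KeyframeLibrary:
    """Stores (key_frame, key_pose) associations built during mapping."""

    def __init__(self, dist_threshold, angle_threshold):
        self.dist_threshold = dist_threshold
        self.angle_threshold = angle_threshold
        self.entries = []        # list of (key_frame, key_pose) associations
        self._last_pose = None

    def observe(self, frame, pose):
        """pose = (x, y, theta); store the frame if motion since the last
        key pose exceeds either threshold. Returns True when stored."""
        if self._last_pose is None:
            self.entries.append((frame, pose))
            self._last_pose = pose
            return True
        lx, ly, lt = self._last_pose
        x, y, t = pose
        moved = math.hypot(x - lx, y - ly)
        turned = abs(t - lt)
        if moved >= self.dist_threshold or turned >= self.angle_threshold:
            self.entries.append((frame, pose))
            self._last_pose = pose
            return True
        return False
```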
Optionally, the apparatus is further configured to acquire a third image captured by the mobile device in the current pose; determine the matching rate between the third image and the second image; and, when the matching rate is less than or equal to a third preset threshold, update the second image in the image library to the third image and update the first pose associated with the second image to the current pose.
Optionally, the apparatus is further configured to control the mobile device to move a predetermined distance or rotate by a second preset angle if the second image is not found in the pre-established image library; acquire a fourth image captured by the mobile device; search the pre-established image library for a fifth image whose similarity to the fourth image is greater than or equal to the first preset threshold, and for a second pose associated with the fifth image, where the fifth image is an image captured by the mobile device in the second pose; rotate the second pose according to a first preset angle difference to obtain N second candidate poses, where N is an integer greater than or equal to 1 and the N second candidate poses include the second pose; and determine the current pose of the mobile device according to the laser positioning scores of the N second candidate poses.
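The fallback behaviour, reduced to its control flow, is a retry loop: if no sufficiently similar image is found, nudge the device (move the predetermined distance or rotate the preset angle), capture a new image, and search again. The callbacks and the attempt limit are placeholders for the real capture, search, and motion-control operations.

```python
def relocalize_with_retry(search, capture, nudge, max_attempts):
    """Return the matched (image, pose) from the image library, or None if
    no match is found after max_attempts nudges of the device."""
    match = search(capture())
    attempts = 0
    while match is None and attempts < max_attempts:
        nudge()  # move a predetermined distance or rotate a preset angle
        match = search(capture())
        attempts += 1
    return match
```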
It should be noted that the above modules may be implemented by software or by hardware; in the latter case, this may be achieved in, but is not limited to, the following manner: the modules are all located in the same processor, or the modules are located in different processors in any combination.
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for executing the following steps:
S1, acquiring a first image captured by a mobile device when it is determined that the mobile device has lost its positioning;
S2, searching a pre-established image library for a second image whose similarity to the first image is greater than or equal to a first preset threshold, and for a first pose associated with the second image, where the second image is an image captured by the mobile device in the first pose;
S3, rotating and/or translating the first pose to obtain N first candidate poses, where N is an integer greater than or equal to 1 and the N first candidate poses include the first pose;
S4, determining the current pose of the mobile device according to the laser positioning scores of the N first candidate poses.
Optionally, in this embodiment, the storage medium may include, but is not limited to, various media capable of storing a computer program, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic device may further include a transmission device and an input/output device, where both the transmission device and the input/output device are connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute, through the computer program, the following steps:
S1, acquiring a first image captured by a mobile device when it is determined that the mobile device has lost its positioning;
S2, searching a pre-established image library for a second image whose similarity to the first image is greater than or equal to a first preset threshold, and for a first pose associated with the second image, where the second image is an image captured by the mobile device in the first pose;
S3, rotating and/or translating the first pose to obtain N first candidate poses, where N is an integer greater than or equal to 1 and the N first candidate poses include the first pose;
S4, determining the current pose of the mobile device according to the laser positioning scores of the N first candidate poses.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementations, which are not repeated here.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device. They may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. They may also be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description presents only preferred embodiments of the present invention and is not intended to limit the present invention; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention shall fall within the protection scope of the present invention.