CN114998433A - Pose calculation method and device, storage medium and electronic equipment

Info

Publication number
CN114998433A
CN114998433A
Authority
CN
China
Prior art keywords
color image
target
frame color
characteristic information
pose
Prior art date
Legal status
Pending
Application number
CN202210619402.4A
Other languages
Chinese (zh)
Inventor
尹赫
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210619402.4A
Publication of CN114998433A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/10024: Color image
    • G06T 2207/10028: Range image; Depth image; 3D point clouds

Abstract

The application discloses a pose calculation method and apparatus, a storage medium and an electronic device, and relates to the technical field of device positioning. If it is determined that target characteristic information exists in the current frame color image and that target historical characteristic information matching the target characteristic information exists, this indicates that a historical frame color image contains the same target characteristic information as the current frame color image, so the rotation component in the current pose of the electronic device can be calculated directly from the target historical characteristic information and the target characteristic information in the current frame color image. Because the rotation component in the current pose is obtained from the target historical characteristic information, the error of the rotation component in the current pose is related only to the error of the rotation component of the historical pose of the historical frame color image and does not depend on the correlation between consecutive frame color images. The accumulated error of the rotation component in the current pose is therefore reduced, and the accuracy of the calculated current pose is improved.

Description

Pose calculation method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of device positioning technologies, and in particular, to a pose calculation method and apparatus, a storage medium, and an electronic device.
Background
With the rapid development of science and technology, people use various electronic devices more and more in daily life, and positioning technologies for electronic devices, such as indoor navigation and three-dimensional reconstruction, have become an important research focus for those skilled in the art. Calculating the current pose of the electronic device is a key part of positioning technologies such as indoor navigation and three-dimensional reconstruction, so a pose calculation scheme with high accuracy needs to be provided.
Disclosure of Invention
The application provides a pose calculation method, a pose calculation device, a storage medium and electronic equipment, which can solve the technical problem of low pose calculation accuracy in the related art.
In a first aspect, an embodiment of the present application provides a pose calculation method applied to an electronic device, where the method includes:
acquiring first plane information in a current frame depth image corresponding to a current frame color image; if it is determined, according to the first plane information, that target characteristic information exists in the current frame color image, determining whether target historical characteristic information matching the target characteristic information exists; if the target historical characteristic information matching the target characteristic information exists, calculating a rotation component in the current pose of the electronic device according to the target historical characteristic information and the target characteristic information; and calculating a translation component in the current pose of the electronic device, and calculating the current pose of the electronic device according to the rotation component and the translation component.
In a second aspect, an embodiment of the present application provides a pose calculation apparatus applied to an electronic device, the apparatus including:
the plane information acquisition module is used for acquiring first plane information in the current frame depth image corresponding to the current frame color image; a historical characteristic information determining module, configured to determine whether target historical characteristic information matching the target characteristic information exists or not if it is determined that the target characteristic information exists in the current frame color image according to the first plane information; the rotating component calculating module is used for calculating a rotating component in the current pose of the electronic equipment according to the target historical characteristic information and the target characteristic information if the target historical characteristic information matched with the target characteristic information exists; and the current pose calculation module is used for calculating a translation component in the current pose of the electronic equipment and calculating the current pose of the electronic equipment according to the rotation component and the translation component.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the steps of the above-mentioned method.
In a fourth aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program is adapted to be loaded by the processor and execute the steps of the method described above.
The beneficial effects brought by the technical scheme provided by some embodiments of the application at least comprise:
the application provides a pose calculation method, after first plane information in a current frame depth image corresponding to a current frame color image is acquired, if the target characteristic information exists in the current frame color image according to the first plane information and the target historical characteristic information matched with the target characteristic information is confirmed to exist, the representative historical frame color image has the same target feature information as the current frame color image, since the target historical characteristic information is the characteristic information of the target characteristic information in the historical frame color image in the preset reference frame, therefore, the rotation component in the current pose of the electronic equipment can be calculated directly according to the target historical characteristic information and the target characteristic information in the current frame color image, and then, calculating a translation component in the current pose of the electronic device, and calculating the current pose of the electronic device according to the rotation component and the translation component. Because the rotation component in the current pose is obtained based on the target historical characteristic information, namely the error of the rotation component in the current pose is related to the error of the rotation component in the historical pose in the historical frame color image, and the rotation component in the current pose does not depend on the correlation between the previous frame color image and the next frame color image, the accumulated error of the rotation component in the current pose is calculated, and the accuracy of calculating the current pose is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is an exemplary system architecture diagram of a pose calculation method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a pose calculation method according to an embodiment of the present application;
fig. 3 is a schematic diagram of target feature information provided in an embodiment of the present application;
fig. 4 is a schematic diagram of historical target feature information provided in an embodiment of the present application;
fig. 5 is a schematic flowchart of a pose calculation method according to another embodiment of the present application;
fig. 6 is a schematic flowchart of a pose calculation method according to another embodiment of the present application;
fig. 7 is a schematic flowchart of a pose calculation method according to another embodiment of the present application;
fig. 8 is a schematic structural diagram of a pose calculation apparatus according to another embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the features and advantages of the present application more obvious and understandable, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the claims that follow.
Nowadays, industries such as mobile robots, autonomous driving, mobile phone indoor navigation and three-dimensional reconstruction, and augmented reality are developing rapidly and have broad application prospects and commercial value. None of these industries or applications can do without a key technology, namely real-time positioning. A mobile device needs centimeter-level self-positioning to obtain the positional relationship between itself and the environment before subsequent tasks can be carried out, and clearly the existing positioning technology cannot meet this requirement on positioning accuracy.
Based on the above requirements, the Simultaneous Localization and Mapping (SLAM) technology was developed. SLAM achieves centimeter-level positioning in a completely unknown environment by combining the information provided by the sensors of a mobile device with corresponding algorithms, and it also builds a map of the environment while positioning. The way SLAM is realized differs with the sensors used; the current mainstream sensors are laser radar (LiDAR SLAM), cameras (visual SLAM) and cameras combined with an inertial measurement unit (visual-inertial SLAM). The sensor most commonly fitted on a mobile phone or mobile robot is the camera, and completing the positioning and mapping of the camera body from the image sequence acquired by the camera is generally called the visual SLAM technology.
At present, the algorithmic framework of visual SLAM is basically mature. The technology has many advantages: only a camera is used as the external sensor providing input to the algorithm, which greatly reduces cost, and the camera provides rich scene information, which is conducive to high-precision positioning and mapping. However, some problems remain in industrial applications of current visual SLAM. For example, positioning in visual SLAM, that is, calculating the current pose of the electronic device or of a camera in the electronic device, is generally realized continuously in a frame-to-frame manner, in which the pose corresponding to the current frame image is calculated from the current frame image and the previous frame image. Because each pose calculated in visual SLAM carries a certain error, the continuously accumulated pose calculation error leads to serious positioning drift during long-term positioning.
Based on the above technical problem, an embodiment of the present application provides a pose calculation method, which uses feature information composed of planes to establish an association constraint between the current frame and a historical frame so as to solve the pose of the electronic device, thereby avoiding the accumulated error produced by calculating the pose between successive frames in related positioning algorithms.
Referring to fig. 1, fig. 1 is a schematic diagram of an exemplary system architecture of a pose calculation method according to an embodiment of the present disclosure.
As shown in fig. 1, the system architecture may include an electronic device 101, a network 102, and a server 103. Network 102 is the medium used to provide communications links between electronic device 101 and server 103. Network 102 may include various types of wired or wireless communication links, such as: the wired communication link includes an optical fiber, a twisted pair wire or a coaxial cable, and the Wireless communication link includes a bluetooth communication link, a Wireless-Fidelity (Wi-Fi) communication link, a microwave communication link, or the like.
The electronic device 101 may interact with the server 103 via the network 102 to receive messages from the server 103 or to send messages to the server 103, or the electronic device 101 may interact with the server 103 via the network 102 to receive messages or data sent by other users to the server 103. The electronic device 101 may be hardware or software. When the electronic device 101 is hardware, it may be a variety of electronic devices including, but not limited to, a smart watch, a smart phone, a tablet computer, a laptop portable computer, a desktop computer, a smart robot, and the like. When the electronic device 101 is software, it may be installed in the electronic device listed above, and it may be implemented as multiple software or software modules (for example, for providing distributed services), or may be implemented as a single software or software module, and is not limited in this respect.
The server 103 may be a business server providing various services. The server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 103 is software, it may be implemented as a plurality of software or software modules (for example, to provide distributed services), or may be implemented as a single software or software module, and is not limited in particular herein.
It should be understood that the number of electronic devices, networks, and servers in FIG. 1 is merely illustrative, and that any number of electronic devices, networks, and servers may be used, as desired for an implementation.
Referring to fig. 2, fig. 2 is a schematic flowchart of a pose calculation method according to an embodiment of the present disclosure. The execution subject of the embodiment of the application may be an electronic device executing the pose calculation method, a processor in the electronic device executing the pose calculation method, or a pose calculation service in the electronic device executing the pose calculation method. For convenience of description, a specific implementation procedure of the pose calculation method is described below by taking an example in which the execution subject is a processor in an electronic device.
As shown in fig. 2, the pose calculation method includes:
s201, first plane information in the current frame depth image corresponding to the current frame color image is obtained.
When continuous positioning is realized using consecutive frame color images, the interval between the current frame color image and the previous frame color image is small, so the pose obtained through the constraint between the current frame color image and the previous frame color image carries a certain calculation error. When the pose corresponding to the next frame color image is calculated, the constraint between the next frame color image and the current frame color image and the pose corresponding to the current frame color image are used, so the error of the pose corresponding to the next frame color image accumulates on top of the error of the pose corresponding to the current frame color image. This continuously accumulated pose calculation error causes serious positioning drift during long-term positioning.
In a feasible solution, if, after the electronic device has moved, it is determined that the scene corresponding to the current frame color image contains the same feature as the scene corresponding to a historical frame color image, the electronic device can be considered to have moved to a position close to the previous position (i.e., the position of the electronic device corresponding to the historical frame color image); that is, the current frame color image is associated with the historical frame color image. Since the electronic device by default calculates a corresponding pose each time it moves to a position, the historical poses corresponding to the previous positions of the electronic device have already been calculated. The electronic device can therefore be configured so that, after it extracts a preset feature in a historical frame color image, the feature is transformed into a unified preset coordinate system based on the rotation component of the historical pose corresponding to that historical frame color image; the preset coordinate system may be a coordinate system such as the world coordinate system. After the electronic device extracts the same preset feature in a subsequent color image frame, it can check whether the same preset feature exists in the unified preset coordinate system, and if so, the feature transformed into the unified preset coordinate system and the preset feature can be used directly to calculate the rotation component of the current pose corresponding to the current color image frame.
Because the current pose corresponding to the current frame color image is calculated using the historical pose corresponding to a historical frame color image that contains the same feature, where the historical frame color image is some frame color image before the current frame color image, the error of the pose corresponding to the current frame color image is related only to the error of the pose corresponding to that historical frame color image; the error of the pose corresponding to any frame color image between the historical frame color image and the current frame color image does not influence the pose calculation for the current frame color image. In other words, the error of the pose corresponding to the current frame color image calculated from the historical pose of a historical frame color image with the same feature is far smaller than the error of the pose calculated from the current frame color image and its adjacent frame color images.
Based on the above idea, it is necessary to determine whether the current frame color image contains the same preset feature as some historical frame color image; that is, the historical frame color image cannot be just any frame color image before the current frame color image. Since a plane in a scene is easier to distinguish and identify than a single visual point, a feasible way of determining the historical frame color image corresponding to the current frame color image is to determine, according to the relationship between the planes in the current scene corresponding to the current frame color image of the electronic device, whether the preset feature (that is, the target characteristic information in the present application) exists, and then determine the historical frame color image corresponding to the current frame color image.
Therefore, first plane information in the current scene corresponding to the current frame color image of the electronic device needs to be acquired. In the embodiment of the application, an RGB camera may be disposed in the electronic device to capture color images of the scene where the electronic device is located, and an RGBD camera may further be disposed in the electronic device to capture depth images of that scene; the depth image provides depth information of the scene where the electronic device is located, so that the three-dimensional structure of the current scene can be restored quickly.
Because the RGBD camera disposed in the electronic device has a certain shooting range and shooting angle, after the RGBD camera captures a frame of depth image, the depth image corresponds to the current scene of the electronic device. That is, based on the current frame depth image captured by the RGBD camera in the electronic device, all planes in the current frame depth image, or the planes meeting a preset plane condition, can be extracted from the scene corresponding to the electronic device. These planes can be regarded as the first plane information in the current frame depth image, or as a part of it; that is, the first plane information includes at least the planes extracted from the current frame depth image. Plane extraction performed on the current frame depth image is fast, and the accuracy of the obtained planes is high.
Optionally, in the embodiment of the present application, the method for extracting all planes in the current frame depth image is not limited. For example, in one feasible implementation, all pixel points of the current frame depth image are back-projected into three-dimensional space according to the intrinsic matrix of the depth camera and the corresponding depth values, forming a three-dimensional point cloud of the current frame depth image. A plane detection algorithm is then used to extract planes from the three-dimensional point cloud (the plane detection algorithm is not particularly limited; it may be a method that fits plane models in the point cloud with the RANSAC algorithm, or a method that computes normal vector information of the three-dimensional points and then performs region growing).
The plane extracted from the current frame depth image is mathematically represented as {n, dis}, where n is a three-dimensional vector representing the normal vector of the plane, and dis represents the distance from the origin of the coordinate system of the three-dimensional space to the plane.
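As an illustration of the back-projection and plane-fitting procedure described above, the following minimal Python sketch (NumPy only) back-projects a depth image into a point cloud and fits one plane {n, dis} with a simple RANSAC loop; the function names, the intrinsic parameters fx, fy, cx, cy and the single-plane loop are assumptions made for this sketch rather than the exact procedure of the application:

    import numpy as np

    def backproject_depth(depth, fx, fy, cx, cy):
        """Back-project every pixel of a depth image into a 3D point cloud."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]            # keep pixels with a valid depth only

    def ransac_plane(points, iters=200, tol=0.01):
        """Fit one plane {n, dis} (convention n . X = dis) with a simple RANSAC loop."""
        best_inliers, best_plane = 0, None
        rng = np.random.default_rng(0)
        for _ in range(iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            if np.linalg.norm(n) < 1e-9:     # degenerate sample, skip
                continue
            n = n / np.linalg.norm(n)
            dis = n @ p0
            inliers = np.sum(np.abs(points @ n - dis) < tol)
            if inliers > best_inliers:
                best_inliers, best_plane = inliers, (n, dis)
        return best_plane                    # normal vector n and origin distance dis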
In the embodiment of the present application, the color image and the depth image are spatially and temporally aligned, so the first plane information in the current frame depth image may be regarded as the first plane information in the current frame color image.
S202, if the target characteristic information exists in the current frame color image according to the first plane information, whether the target historical characteristic information matched with the target characteristic information exists is determined.
Among the relationships between planes, the most salient are the angles between the planes and the number of planes. Therefore, after the first plane information in the current frame color image of the electronic device is acquired, the angular relationship between the planes may be determined based on the normal vectors of the respective planes in the first plane information, and whether target characteristic information exists in the scene corresponding to the electronic device may then be determined according to the angle relationship and the number relationship among the planes. In the embodiment of the present application, the color image and the depth image are spatially and temporally aligned, and the target characteristic information is the basis for determining whether the current frame color image is associated with some historical frame color image. To ensure the accuracy of this judgment, the target characteristic information must be as recognizable and distinguishable as possible.
In practical applications, the target feature information may include planes satisfying a preset angle relationship and a preset number relationship, where the preset angle relationship and the preset number relationship may be set according to the specific manner of calculating the pose of the electronic device.
In the embodiment of the present application, the target characteristic information may include three planes, so that the target characteristic information can be transformed into a three-dimensional coordinate system and uniquely determined in that coordinate system. This is because the rotation component in the pose of the electronic device is a variable in a three-dimensional coordinate system: in the subsequent calculation of the rotation component, the target characteristic information needs to be transformed into a preset reference frame to be matched against the corresponding historical characteristic information in that reference frame, and the rotation component in the pose of the electronic device is then calculated directly from the historical characteristic information and the target characteristic information; the preset reference frame itself must be a three-dimensional coordinate system.
Referring to fig. 3, fig. 3 is a schematic diagram of target feature information according to an embodiment of the present disclosure.
If the target feature information includes three planes satisfying the preset angle relationship, as shown in fig. 3, once three planes with normal vectors n1, n2 and n3 are detected in the current frame depth image, the included angles between the three planes may be calculated to determine whether they satisfy the preset angle relationship; specifically, the included angles between the normal vectors corresponding to the three planes may be calculated. The formulas for the included angles between the normal vectors of the three planes are:

θ1 = arccos(n1 · n2);

θ2 = arccos(n2 · n3);

θ3 = arccos(n1 · n3);

where θ1, θ2 and θ3 respectively represent the included angles between the three planes. If the three included angles satisfy the preset angle relationship, it is determined that the three planes constitute the target feature information.
Further, if the target feature information includes three planes satisfying the preset angle relationship, then since plane features such as wall corners in a real scene are distinctive, the target feature information may be set to include three mutually perpendicular planes. In that case θ1, θ2 and θ3 respectively represent the included angles between the three planes, and if θ1, θ2 and θ3 are all 90 degrees, or all within a preset range around 90 degrees (e.g., between 88 degrees and 92 degrees), the three planes may be determined to constitute the target feature information.
After it is determined from the planes that target feature information exists in the current frame depth image, the target feature information can be represented by the three normal vectors of its planes. In the embodiment of the present application, the symbol SF (structure feature) is used to denote such feature information, so the target feature information may be represented as:

SF: [n1, n2, n3].
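A minimal sketch of the perpendicularity check and the construction of SF described above is given below (Python with NumPy; the 88 to 92 degree tolerance follows the example given earlier, the function name is illustrative, and representing SF as a 3x3 matrix whose columns are the three normals is an assumption of this sketch):

    import numpy as np

    def build_structure_feature(n1, n2, n3, lo=88.0, hi=92.0):
        """Return SF = [n1, n2, n3] if the three plane normals are approximately
        mutually perpendicular, otherwise None."""
        def angle(a, b):
            c = np.clip(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0)
            return np.degrees(np.arccos(c))
        theta1, theta2, theta3 = angle(n1, n2), angle(n2, n3), angle(n1, n3)
        if all(lo <= t <= hi for t in (theta1, theta2, theta3)):
            return np.column_stack([n1, n2, n3])   # 3x3 matrix, one normal per column
        return None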
because the target characteristic information corresponds to special positions such as corners in an actual scene, and the special positions are not very common, whether the current frame color image is related to a certain historical frame color image can be judged based on the target characteristic information.
Specifically, in the embodiment of the present application, a historical characteristic information base is established in advance, and at least one piece of historical characteristic information is stored in it. Each piece of historical characteristic information is obtained by transforming the target characteristic information in a historical frame color image into the preset reference frame according to the rotation component of the historical pose corresponding to that historical frame color image. In other words, in the embodiment of the present application, a corresponding pose is calculated for every historical frame color image that is obtained, so every historical frame color image also corresponds to a historical pose. When it is determined that target characteristic information exists in a historical frame color image and that this target characteristic information appears for the first time, the target characteristic information in the historical frame color image is transformed into the preset reference frame according to the rotation component of the historical pose corresponding to that historical frame color image, so as to obtain the historical characteristic information corresponding to the historical frame color image.
The purpose of transforming the target characteristic information in the historical frame color image into the preset reference frame is that the position of the electronic device may differ each time it acquires a color image: although target characteristic information can be determined in two different frame color images, it is not necessarily the target characteristic information at the same position. The target characteristic information therefore needs to be uniformly transformed into the preset reference frame to obtain the historical characteristic information, so that the position of each piece of historical characteristic information in the preset reference frame is uniquely determined, which makes it possible to judge whether the target characteristic information in different frame color images is the target characteristic information at the same position.
After it is determined that target characteristic information exists in the current frame depth image of the electronic device, the historical characteristic information base needs to be searched for target historical characteristic information that matches the target characteristic information under the preset reference frame, so as to confirm whether such target historical characteristic information exists.
Optionally, one way of finding the target historical characteristic information matching the target characteristic information in the historical characteristic information base is as follows. First, the target characteristic information is transformed into the preset reference frame according to a rotation component from the current frame color image to the preset reference frame; this rotation component may be a temporary rotation component, that is, a rough estimate obtained, for example, through certain hardware devices or algorithms, or the rotation component of the pose of the previous frame color image may be used directly as the temporary rotation component. Then, the plane included angle relationship between the transformed target characteristic information and each piece of historical characteristic information in the base is calculated. Finally, if the plane included angle relationship satisfies the preset plane included angle relationship, it is confirmed that target historical characteristic information matching the target characteristic information has been found in the historical characteristic information base, that is, that target historical characteristic information matching the target characteristic information exists, and the historical characteristic information satisfying the preset plane included angle relationship is determined to be the target historical characteristic information; the preset plane included angle relationship may be set according to actual needs.
For example, the preset plane included angle relationship may be set as a plane included angle smaller than 10 degrees. When the target characteristic information includes three planes satisfying the preset angle relationship, if the included angle between each plane of the target characteristic information after transformation into the preset reference frame and the corresponding plane of a piece of historical characteristic information in the historical characteristic information base is smaller than 10 degrees, it may be considered that target historical characteristic information matching the target characteristic information has been found in the historical characteristic information base, and it may be determined that target historical characteristic information matching the target characteristic information exists.
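The matching step can be sketched as follows (illustrative Python; the 10 degree threshold follows the example above, R_temp stands for the temporary rotation component mentioned in the text, and storing the historical characteristic information base as a dictionary of 3x3 normal matrices is an assumption of this sketch):

    import numpy as np

    def angle_deg(a, b):
        c = np.clip(np.dot(a, b), -1.0, 1.0)       # unit normals assumed
        return np.degrees(np.arccos(c))

    def find_matching_history(sf_cur, history, R_temp, max_angle=10.0):
        """Transform the current structure feature into the preset reference frame
        with a rough rotation R_temp and look for a stored feature whose plane
        normals all differ by less than max_angle degrees.
        Assumes the three planes are stored in a consistent column order."""
        sf_ref = R_temp @ sf_cur                    # rotate each normal (column) of SF
        for frame_id, sf_hist in history.items():
            if all(angle_deg(sf_ref[:, k], sf_hist[:, k]) < max_angle for k in range(3)):
                return frame_id, sf_hist            # target historical feature information
        return None, None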
And S203, if the target historical characteristic information matched with the target characteristic information exists, calculating a rotation component in the current pose of the electronic equipment according to the target historical characteristic information and the target characteristic information.
If it is determined according to the first plane information that target characteristic information exists in the current frame color image, and target historical characteristic information matching the target characteristic information under the preset reference frame is found in the historical characteristic information base, that is, target historical characteristic information matching the target characteristic information exists, then the current frame color image corresponding to the target characteristic information is associated with the historical frame color image corresponding to the target historical characteristic information; in other words, the target characteristic information determined in the current frame color image and the target characteristic information determined in the historical frame color image are the characteristic information at the same position.
In the embodiment of the application, the target feature information composed of planes has higher discrimination and identification compared with visual feature points, and can be quickly and stably identified and matched with the same feature information observed among different frame color images.
Because the target historical characteristic information is obtained by transforming the target characteristic information into the preset reference frame according to the rotation component of the corresponding historical pose, the rotation component in the current pose of the electronic device can be calculated directly from the target historical characteristic information and the target characteristic information; specifically, it can be calculated from the transformation relationship between the target historical characteristic information and the target characteristic information in the current frame color image.
Referring to fig. 4, fig. 4 is a schematic diagram of historical target feature information according to an embodiment of the present disclosure.
As shown in fig. 4, three pieces of historical feature information exist in the historical feature information base, namely the first historical feature information (frame1), the sixth historical feature information (frame6) and the tenth historical feature information (frame10). The first historical feature information is obtained by transforming the target feature information in the first frame color image into the preset reference frame according to the rotation component of the historical pose corresponding to the first frame color image; the sixth historical feature information is obtained by transforming the target feature information in the sixth frame color image into the preset reference frame according to the rotation component of the historical pose corresponding to the sixth frame color image; and the tenth historical feature information is obtained by transforming the target feature information in the tenth frame color image into the preset reference frame according to the rotation component of the historical pose corresponding to the tenth frame color image.
Then, if after it is determined that target feature information exists in the current frame color image, the target historical feature information matching the target feature information under the preset reference frame found in the historical feature information base is the sixth historical feature information, the relationship between the target historical feature information and the target feature information is:

SF_frame6 = R * SF_cur;

where SF_frame6 is the sixth historical feature information (i.e., the target historical feature information), R is the rotation component of the current pose corresponding to the current frame color image, and SF_cur is the target feature information corresponding to the current frame color image.

Therefore, based on the relationship between the target historical feature information and the target feature information, the formula for the rotation component in the current pose of the electronic device is:

R = SF_frame6 * SF_cur^(-1);

that is, the rotation component in the current pose of the electronic device can be calculated from the transformation relationship between the target historical feature information and the target feature information in the current frame color image, namely as the product of the target historical feature information and the inverse of the target feature information in the current frame color image.
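Under the 3x3 matrix representation of SF used in the sketches above, the relationship SF_frame6 = R * SF_cur gives the rotation directly. The sketch below assumes the three normals are linearly independent so that SF_cur is invertible, and adds an SVD projection back onto a proper rotation as a common numerical safeguard that is not stated in the application:

    import numpy as np

    def rotation_from_structure_features(sf_hist, sf_cur):
        """R = SF_frame6 * SF_cur^(-1), then re-orthogonalized with an SVD."""
        R = sf_hist @ np.linalg.inv(sf_cur)
        U, _, Vt = np.linalg.svd(R)                # project onto the nearest rotation
        R = U @ Vt
        if np.linalg.det(R) < 0:                   # keep a proper rotation (det = +1)
            U[:, -1] *= -1
            R = U @ Vt
        return R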
And S204, calculating a translation component in the current pose of the electronic equipment, and calculating the current pose of the electronic equipment according to the rotation component and the translation component.
Because the current pose of the electronic device includes at least a rotation component and a translation component, after the rotation component in the current pose is calculated from the target historical characteristic information and the target characteristic information, the translation component in the current pose can then be calculated; the method of calculating the translation component is not limited here. After the rotation component and the translation component of the electronic device are obtained, they can be combined, for example through a corresponding transformation, so that the current pose of the electronic device is calculated from the rotation component and the translation component.
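For completeness, one common way of assembling the current pose from the two components is the homogeneous transform below; this is a generic sketch rather than a formula taken from the application:

    import numpy as np

    def compose_pose(R, t):
        """Stack the 3x3 rotation and 3x1 translation into a 4x4 pose matrix."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = np.asarray(t).reshape(3)
        return T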
In the embodiment of the application, after the first plane information in the current frame depth image corresponding to the current frame color image is acquired, if it is determined according to the first plane information that target characteristic information exists in the current frame color image and that target historical characteristic information matching the target characteristic information exists, this indicates that a historical frame color image contains the same target feature information as the current frame color image. Since the target historical feature information is the feature information of the target feature information of the historical frame color image in the preset reference frame, the rotation component in the current pose of the electronic device can be calculated directly from the target historical characteristic information and the target characteristic information in the current frame color image; the translation component in the current pose of the electronic device is then calculated, and the current pose of the electronic device is calculated from the rotation component and the translation component. Because the rotation component in the current pose is obtained from the target historical characteristic information, the error of the rotation component in the current pose is related only to the error of the rotation component of the historical pose of the historical frame color image and does not depend on the correlation between consecutive frame color images, so the accumulated error of the rotation component in the current pose is reduced and the accuracy of calculating the current pose is improved.
Referring to fig. 5, fig. 5 is a schematic flowchart of a pose calculation method according to another embodiment of the present application.
As shown in fig. 5, the method includes:
s501, acquiring first plane information in the current frame depth image corresponding to the current frame color image.
For step S501, please refer to the description in step S201, which is not described herein.
And S502, if the target characteristic information exists in the current frame color image according to the first plane information, determining whether the target historical characteristic information matched with the target characteristic information exists.
For step S502, please refer to the description in step S202, which is not repeated herein.
S503, if the target historical characteristic information matched with the target characteristic information exists, calculating a rotation component in the current pose of the electronic equipment according to the target historical characteristic information and the target characteristic information.
For step S503, please refer to the description in step S203, which is not described herein.
S504, solving a translation component in the current pose of the electronic equipment based on the rotation component in the current pose, and calculating the current pose of the electronic equipment according to the rotation component and the translation component.
Because the target historical feature information is obtained by transforming the target feature information in the historical frame color image into the preset reference frame according to the rotation component of the historical pose corresponding to that historical frame color image, only the rotation component in the current pose of the electronic device can be calculated from the target historical feature information and the target feature information; the translation component of the historical pose is not involved in the target historical feature information, so the translation component in the current pose cannot be obtained directly from the target historical feature information and the target feature information.
After the rotation component in the current pose of the electronic device is obtained, a constraint relation may be established based on the current frame color image and the reference frame color image, and then the rotation component in the current pose of the electronic device is used as a known quantity to obtain a translation component in the current pose.
Specifically, in one implementation, to establish the constraint relationship between the current frame color image and the reference frame color image, the current frame color image of the electronic device and the reference frame color image corresponding to it may first be obtained. The reference frame color image corresponding to the current frame color image may be a color image near the current frame color image; for example, it may be the frame color image immediately before the current frame color image, or several frame color images before the current frame color image, and the number of reference frame color images is not limited. A reprojection error of the feature points between the current frame color image and the reference frame color image is then constructed.
Specifically, in constructing the reprojection error of the feature points between the current frame color image and the reference frame color image, the feature points of the current frame color image and of the reference frame color image need to be extracted respectively; the type of feature point is not limited and may be any commonly used visual feature point, such as FAST corner points, Harris corner points or ORB feature points. The matching feature points that correspond to each other in the current frame color image and the reference frame color image are then determined, and finally the reprojection error of the feature points between the current frame color image and the reference frame color image is constructed based on the matching feature points.
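As an illustration of the extraction and matching step, the sketch below uses ORB feature points and a brute-force Hamming matcher from OpenCV, which is only one of the commonly used options mentioned above; the function name and parameter values are assumptions of this sketch:

    import cv2

    def match_feature_points(img_ref, img_cur, max_matches=200):
        """Detect ORB feature points in the reference and current frame color
        images and return the matched pixel coordinates in both images."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(img_ref, None)
        kp2, des2 = orb.detectAndCompute(img_cur, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
        pts_ref = [kp1[m.queryIdx].pt for m in matches]   # (u1, v1) in reference frame
        pts_cur = [kp2[m.trainIdx].pt for m in matches]   # (u2, v2) in current frame
        return pts_ref, pts_cur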
For example, suppose a feature point a(u1, v1) exists in the reference frame color image i and a feature point b(u2, v2) exists in the current frame color image j, where the feature point a(u1, v1) and the feature point b(u2, v2) are a pair of matched feature points, the depth corresponding to the feature point a is d_a, and the depth corresponding to the feature point b is d_b. Then the classical reprojection error between the feature points a and b takes the following form:

error_reprojection = T_ij * d_a * K^(-1) * (u1, v1, 1)^T - K^(-1) * (u2, v2, 1)^T    (1);

In formula (1), T_ij is the pose (including the rotation component and the translation component) between the reference frame color image i and the current frame color image j, K is the intrinsic matrix of the RGB camera, and K^(-1) is its inverse.
After the reprojection error is obtained, the two state variables T_ij and the feature point depth are optimized so that error_reprojection is minimized, yielding the optimal pose and the optimal depth value of the feature point. In other words, in the process of optimizing the reprojection error, both T_ij and the feature point depth need to be optimized to minimize error_reprojection and obtain the optimal pose and the optimal feature point depth value.
In the embodiment of the application, because the rotation component in the current pose has already been obtained, it can be used as a known quantity, and only the translation component of T_ij and the feature point depth are optimized to minimize error_reprojection; that is, the translation component in the current pose of the electronic device is solved based on the rotation component in the current pose and the reprojection error, so that the current pose of the electronic device can be calculated from the rotation component and the translation component.
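A minimal sketch of this step is given below (Python with SciPy). The residual follows the 3D-3D spirit of formula (1) with both matched feature points back-projected using their measured depths and the rotation held fixed; using the measured current-frame depths and all helper names are assumptions of this sketch rather than details taken from the application:

    import numpy as np
    from scipy.optimize import least_squares

    def solve_translation(R_fixed, K_inv, uv_i, d_i, uv_j, d_j):
        """Optimize only the translation of T_ij, keeping the rotation fixed.
        uv_i, uv_j: Nx2 matched pixel coordinates; d_i, d_j: their depths."""
        rays_i = (K_inv @ np.c_[uv_i, np.ones(len(uv_i))].T).T
        rays_j = (K_inv @ np.c_[uv_j, np.ones(len(uv_j))].T).T
        pts_i = rays_i * d_i[:, None]          # 3D points in reference frame i
        pts_j = rays_j * d_j[:, None]          # 3D points in current frame j
        residual = lambda t: ((pts_i @ R_fixed.T + t) - pts_j).ravel()
        return least_squares(residual, x0=np.zeros(3)).x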
When multiple local frame color images are optimized jointly, hundreds of feature points on each frame image lead to thousands of reprojection error terms, which means that the depth values of many feature points need to be optimized during the joint optimization, so the scale of the optimization problem becomes very large and solving for the optimal pose takes a long time.
In the embodiment of the application, besides the feature point information of the image, the plane information in the image can also be extracted. To address the above problem in the optimization process, a feasible approach is to represent the depth with the plane information when constructing the reprojection error of the feature points between the current frame color image and the reference frame color image, thereby reducing the dimension of the state variables to be optimized and the time consumed by the optimization.
Referring to fig. 6, fig. 6 is a schematic flowchart of a pose calculation method according to another embodiment of the present application.
As shown in fig. 6, the method for constructing the reprojection error of the feature point between the current frame color image and the reference frame color image may include:
s601, determining target feature points in the feature points of the color image of the reference frame according to the feature points of the color image of the reference frame and second plane information of the reference depth image corresponding to the color image of the reference frame.
Specifically, in constructing the reprojection error of the feature points between the current frame color image and the reference frame color image, the feature points of the reference frame color image corresponding to the current frame color image may first be obtained, and the target feature points among them are then determined according to the feature points of the reference frame color image and the second plane information in the reference depth image corresponding to the reference frame color image, where the target feature points lie within the preset range of the same plane to be optimized of the reference depth image.
S602, constructing a reprojection error of the feature points between the current frame color image and the reference frame color image based on the to-be-optimized plane parameters of the to-be-optimized plane and the to-be-optimized pose of the electronic equipment.
In order to reduce the dimension of the state variable to be optimized, in the embodiment of the application, the depth is represented by using the plane information, that is, the reprojection error of the feature point between the color image of the current frame and the color image of the reference frame is constructed based on the parameters of the plane to be optimized and the pose to be optimized of the electronic device.
For example, suppose the feature point a(u1, v1) in the reference frame color image i falls within the preset range of a plane (the feature point can be considered to lie on the plane), and the plane is p1{n1, dis1}. Then the feature point a(u1, v1) satisfies the plane equation:

n1 · (d_a * K^(-1) * (u1, v1, 1)^T) = dis1    (2);

Further, formula (2) can be rearranged so that d_a is expressed by the plane parameters n1 and dis1 to be optimized:

d_a = dis1 / (n1 · (K^(-1) * (u1, v1, 1)^T))    (3);

Formula (3) shows that the depth of any feature point falling within the preset range of the plane p1{n1, dis1} can be expressed by the parameter n1 and the parameter dis1 of the plane p1{n1, dis1}.

Then, if there are m target feature points in the reference frame color image falling within the preset range of the plane to be optimized p1{n1, dis1}, and the reprojection error constraint of formula (1) is adopted, the depth values of the m target feature points need to be optimized in addition to the pose. In the embodiment of the application, the depth values of the m target feature points are all expressed by the parameter n1 and the parameter dis1 of the plane to be optimized p1{n1, dis1}, that is, formula (3) is substituted into the reprojection error of formula (1); the joint reprojection error of the m target feature points, simplified by using the plane information, is:

error_reprojection = Σ_{k=1..m} [ T_ij * (dis1 / (n1 · (K^(-1) * (u1_k, v1_k, 1)^T))) * K^(-1) * (u1_k, v1_k, 1)^T - K^(-1) * (u2_k, v2_k, 1)^T ]    (4);
then, based on the rotation component and the reprojection error in the current pose, solving the translation component in the current pose of the electronic device may specifically include: and acquiring target matching feature points corresponding to the target feature points in the current frame color image, and solving the minimum value of the re-projection error based on the target matching feature points and the rotation component to obtain the translation component in the current pose of the electronic equipment. That is, after the reprojection error is obtained, the corresponding target matching feature points in the current frame color image can be extracted according to the target feature points, then the minimum value of the reprojection error is solved based on the target matching feature points and the rotation component, that is, the rotation component is used as a known quantity, and the minimum value of the reprojection error is solved based on the target matching feature points, so that the translation component in the current pose of the electronic device can be obtained.
As can be seen from formula (4), compared with the previous reprojection error optimization process, in which both the pose and the depth values of many feature points need to be optimized, now only the pose and the parameters n1 and dis1 of the plane p1{n1, dis1} need to be optimized. This design greatly reduces the dimension of the optimization variables, makes the optimization problem much lighter, greatly reduces its scale, improves the optimization efficiency and greatly increases the optimization speed.
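A sketch of the plane-parameterized residual described above is given below (Python with SciPy). The state vector packs the translation and the plane parameters n1 and dis1 while the rotation stays fixed, so the m individual depths disappear from the optimization as in formula (4); the use of measured current-frame depths on the right-hand side and all variable names are assumptions of this sketch:

    import numpy as np
    from scipy.optimize import least_squares

    def plane_parameterized_residual(state, R_fixed, K_inv, uv_i, uv_j, d_j):
        """Joint residual in the spirit of formula (4): the depths of the m target
        feature points in the reference frame are replaced by the plane parameters
        n1, dis1 via d = dis1 / (n1 . K^-1 [u, v, 1]^T), so only the translation
        and the plane {n1, dis1} are optimized instead of m individual depths."""
        t, n1, dis1 = state[:3], state[3:6], state[6]
        n1 = n1 / np.linalg.norm(n1)                         # keep the normal unit length
        rays_i = (K_inv @ np.c_[uv_i, np.ones(len(uv_i))].T).T
        rays_j = (K_inv @ np.c_[uv_j, np.ones(len(uv_j))].T).T
        depth_i = dis1 / (rays_i @ n1)                       # plane-induced depths, formula (3)
        pts_i = rays_i * depth_i[:, None]
        pts_j = rays_j * d_j[:, None]                        # measured depths in frame j (assumption)
        return ((pts_i @ R_fixed.T + t) - pts_j).ravel()

    # usage sketch (x0 packs an initial translation, plane normal and distance):
    # x0 = np.r_[np.zeros(3), n1_init, dis1_init]
    # sol = least_squares(plane_parameterized_residual, x0,
    #                     args=(R_fixed, K_inv, uv_i, uv_j, d_j))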
Referring to fig. 7, fig. 7 is a schematic flowchart illustrating a pose calculation method according to another embodiment of the present application.
As shown in fig. 7, the method includes:
S701, acquiring first plane information in the current frame depth image corresponding to the current frame color image.
S702, if it is determined according to the first plane information that target characteristic information exists in the current frame color image, determining whether target historical characteristic information matched with the target characteristic information exists.
Regarding steps S701 to S702, please refer to the descriptions in steps S201 to S202, which are not repeated here.
S703, if the target historical characteristic information matched with the target characteristic information does not exist, acquiring a current frame color image of the electronic equipment and a reference frame color image corresponding to the current frame color image, and constructing a reprojection error of the characteristic points between the current frame color image and the reference frame color image.
If it is determined according to each plane that target characteristic information exists in the current frame color image, but no target historical characteristic information matching the target characteristic information under the preset reference system is found in the historical characteristic information base, that is, no matching target historical characteristic information exists, then the target characteristic information in the current frame color image appears for the first time. In this case, the current frame color image of the electronic equipment and the reference frame color image corresponding to the current frame color image can be obtained directly, and the reprojection error of the characteristic points between the current frame color image and the reference frame color image can be constructed.
The method for constructing the reprojection error of the characteristic points between the current frame color image and the reference frame color image comprises the following steps: determining a target characteristic point in the characteristic points of the reference frame color image according to the characteristic points of the reference frame color image and second plane information in the reference depth image corresponding to the reference frame color image, wherein the target characteristic point is in a preset range of the same plane to be optimized of the reference depth image; and constructing a reprojection error of the feature points between the current frame color image and the reference frame color image based on the to-be-optimized plane parameters of the to-be-optimized plane and the to-be-optimized pose of the electronic equipment.
For a specific description of constructing the reprojection error of the feature point between the current frame color image and the reference frame color image, reference may be made to the descriptions in step S601 to step S602, which are not repeated herein.
S704, solving a rotation component and a translation component in the current pose of the electronic equipment based on the reprojection error, and calculating the current pose of the electronic equipment according to the rotation component and the translation component.
Since the target feature information in the current frame color image is the first-appearing target feature information, the rotational component in the current pose cannot be obtained according to the target feature information, and the rotational component and the translational component in the current pose of the electronic device need to be directly solved based on the reprojection error.
Specifically, solving a rotation component and a translation component in the current pose of the electronic device based on the reprojection error includes: and acquiring target matching feature points corresponding to the target feature points in the current frame color image, solving the minimum value of the reprojection error based on the target matching feature points to obtain a rotation component and a translation component in the current pose of the electronic equipment, and further calculating the current pose of the electronic equipment according to the rotation component and the translation component.
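For the case where the target feature information appears for the first time, one possible way to solve both components from the reprojection error is sketched below. The axis-angle parameterization of the rotation (via scipy.spatial.transform.Rotation) and the reuse of the joint_reprojection_residual helper from the earlier sketch are assumptions made for illustration; the embodiment does not prescribe a particular solver.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def solve_pose(uvs_ref, uvs_cur, K, n0, dis0):
    """Solve the rotation and translation components of the current pose from the
    joint reprojection error when no matching historical feature information exists."""
    # state: axis-angle rotation (3), translation (3), plane normal (3), plane distance (1)
    x0 = np.concatenate([np.zeros(3), np.zeros(3), n0, [dis0]])

    def residual(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t, n, dis = x[3:6], x[6:9], x[9]
        n = n / np.linalg.norm(n)
        return joint_reprojection_residual(uvs_ref, uvs_cur, K, R, t, n, dis)

    sol = least_squares(residual, x0)
    R_cur = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    t_cur = sol.x[3:6]
    return R_cur, t_cur   # rotation and translation components of the current pose
```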
Optionally, after the rotation component and the translation component in the current pose of the electronic device are solved based on the reprojection error, historical feature information matched with the target feature information can be obtained according to the target feature information of the current frame color image, and the historical feature information is saved.
Specifically, the target feature information in the current frame color image can be transformed into the preset reference system according to the rotation component in the current pose of the electronic device to obtain the historical feature information of the current frame color image, and this historical feature information is stored in the historical feature information base so that it can be used to calculate the rotation component in the pose of subsequent frames. Because the historical feature information base is built and continuously updated, structural association constraints between the current frame and historical frames are established from the feature information, so that a drift-free rotation can be solved. This avoids the accumulated error that a positioning algorithm incurs when the pose is calculated only between consecutive frames, and enables more accurate and robust positioning without increasing the burden of the optimization problem of the positioning system.
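A rough sketch of such a historical feature information base follows, under the assumption that the stored feature information is a set of plane normals expressed in the preset reference frame: a newly observed normal is transformed with the rotation component of the pose at the time it was observed and stored, and matching is done by the plane included angle against a threshold. The class name, the 5-degree threshold, and the direction of the transform (which depends on the pose convention) are all illustrative.

```python
import numpy as np

class PlaneFeatureDB:
    """Historical feature information base keyed by plane normals expressed in
    a common preset reference frame (illustrative layout)."""

    def __init__(self, angle_thresh_deg=5.0):
        self.normals = []                          # stored historical normals
        self.angle_thresh = np.deg2rad(angle_thresh_deg)

    def add(self, normal_cur, R_cur):
        """Transform a normal observed in the current frame into the preset
        reference frame using the current rotation component, then store it."""
        n_ref = R_cur.T @ (normal_cur / np.linalg.norm(normal_cur))
        self.normals.append(n_ref)

    def match(self, normal_candidate):
        """Return the stored normal whose included angle with the candidate is
        below the threshold, or None if no historical feature matches."""
        c = normal_candidate / np.linalg.norm(normal_candidate)
        for n_hist in self.normals:
            angle = np.arccos(np.clip(abs(n_hist @ c), 0.0, 1.0))
            if angle < self.angle_thresh:
                return n_hist
        return None
```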
Optionally, whereas a traditional positioning algorithm generally calculates the pose using point features, the embodiment of the application calculates the pose using plane features, which have a higher dimension than point features. Furthermore, features of even higher dimension, such as semantic features, can be introduced to add association constraints between different image frames and to provide more accurate feature matching, so that a more accurate pose is obtained and the positioning accuracy of the whole SLAM algorithm is further improved.
Optionally, the plane and scene structure information can also be used to make the construction of the environment map more accurate. Specifically, the use of the plane and scene structure information in the embodiment of the application is directed at the positioning link of the SLAM process, but this information also greatly helps environment mapping in the SLAM process. For example, if a certain region of the scene is known to be a plane, a plane model can be used for the three-dimensional reconstruction of that region instead of constructing small meshes from three-dimensional space points, so that the reconstructed map is smoother. Meanwhile, if such feature information exists, its constraint can be added to the map, so that in the regions with the feature information the reconstructed map is strictly vertical and no included-angle error occurs.
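As a small illustration of the plane-model reconstruction mentioned above (the exact mapping pipeline is not specified in the embodiment), a least-squares plane fit via SVD can replace per-point meshing for the points of a region known to be planar:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane {n, dis} through an (N, 3) array of 3D points of a
    planar region, so that n . p + dis is approximately 0 for every point p."""
    centroid = points.mean(axis=0)
    _, _, vh = np.linalg.svd(points - centroid)
    n = vh[-1]                 # normal = direction of least variance
    dis = -n @ centroid
    return n, dis
```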
In the embodiment of the application, when the rotation component in the current pose cannot be obtained according to the target characteristic information, the reprojection error of the characteristic point between the current frame color image and the reference frame color image can be constructed based on the to-be-optimized plane parameter of the to-be-optimized plane and the to-be-optimized pose of the electronic device, so that the optimization problem in the process of obtaining the rotation component and the translation component in the current pose is lighter, the scale of the optimization problem is greatly reduced, the optimization efficiency is improved, and the optimization speed is greatly improved.
Optionally, in another embodiment of the present application, as long as at least one plane is included in the plane information extracted from the current frame depth image corresponding to the current frame color image of the electronic device, even if the target feature information cannot be determined to exist in the current frame color image according to the plane information, the target feature point in the feature points of the reference frame color image may still be determined according to the feature point of the reference frame color image and the plane information in the reference depth image corresponding to the reference frame color image, where the target feature point is in a preset range of the same plane to be optimized of the reference depth image; then, based on the to-be-optimized plane parameter of the to-be-optimized plane and the to-be-optimized pose of the electronic device, constructing a reprojection error of the feature point between the current frame color image and the reference frame color image, wherein for specific description of constructing the reprojection error of the feature point between the current frame color image and the reference frame color image, reference may be made to the descriptions in step S601 to step S602, which are not described herein again; and finally, acquiring target matching feature points corresponding to the target feature points in the current frame color image, solving the minimum value of the reprojection error based on the target matching feature points to obtain a rotation component and a translation component in the current pose of the electronic equipment, and further calculating the current pose of the electronic equipment according to the rotation component and the translation component.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a pose calculation apparatus according to another embodiment of the present application.
As shown in fig. 8, the pose calculation apparatus 800 is applied to an electronic device and includes:
the plane extracting module 810 is configured to obtain first plane information in the current frame depth image corresponding to the current frame color image.
And a historical feature information determining module 820, configured to determine whether there is target historical feature information matching the target feature information if it is determined that there is target feature information in the current frame color image according to the first plane information.
And a rotation component calculation module 830, configured to calculate a rotation component in the current pose of the electronic device according to the target historical feature information and the target feature information if there is target historical feature information that matches the target feature information.
And the current pose calculating module 840 is used for calculating a translation component in the current pose of the electronic equipment and calculating the current pose of the electronic equipment according to the rotation component and the translation component.
Optionally, the target historical feature information is obtained by transforming the target feature information in the historical frame color image into a preset reference system according to the rotation component of the historical pose corresponding to the historical frame color image;
the rotation component calculating module 830 is further configured to calculate a rotation component in the current pose of the electronic device according to a transformation relationship between the target historical feature information and the target feature information in the current frame color image.
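The transformation relationship used by the rotation component calculation module is not spelled out here. One conventional way to recover a rotation from matched plane feature information, assuming that the feature information is a set of plane normals, is an SVD (Kabsch) alignment of the matched normals; the sketch below is an illustration of that assumption, not the module's prescribed implementation.

```python
import numpy as np

def rotation_from_matched_normals(normals_hist, normals_cur):
    """Rotation R minimizing sum ||R @ n_cur - n_hist||^2 over matched normals
    (Kabsch alignment); at least two non-parallel normal pairs are needed for
    the rotation to be uniquely determined."""
    H = np.zeros((3, 3))
    for n_h, n_c in zip(normals_hist, normals_cur):
        H += np.outer(n_c / np.linalg.norm(n_c), n_h / np.linalg.norm(n_h))
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T      # maps current-frame normals onto historical ones
```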
Optionally, the historical feature information determining module 820 is further configured to obtain a plane included angle relationship between the target feature information and the historical feature information after the target feature information is transformed to the preset reference frame; and if the plane included angle relation meets the preset plane included angle relation, determining that target historical characteristic information matched with the target characteristic information exists, and determining the historical characteristic information corresponding to the preset plane included angle relation as the target historical characteristic information.
Optionally, the current pose calculation module 840 is further configured to solve the translation component in the current pose of the electronic device based on the rotation component in the current pose.
Optionally, the current pose calculation module 840 is further configured to obtain a current frame color image of the electronic device and a reference frame color image corresponding to the current frame color image, and construct a reprojection error of a feature point between the current frame color image and the reference frame color image; and solving a translation component in the current pose of the electronic equipment based on the rotation component and the reprojection error in the current pose.
Optionally, the current pose calculation module 840 is further configured to determine a target feature point in the feature points of the reference frame color image according to the feature points of the reference frame color image and second plane information in the reference depth image corresponding to the reference frame color image, where the target feature point is within a preset range of the same to-be-optimized plane of the reference depth image; and constructing a reprojection error of the feature points between the current frame color image and the reference frame color image based on the to-be-optimized plane parameters of the to-be-optimized plane and the to-be-optimized pose of the electronic equipment.
Optionally, the current pose calculation module 840 is further configured to obtain target matching feature points corresponding to the target feature points in the current frame color image, and solve a minimum value of the reprojection error based on the target matching feature points and the rotation component to obtain a translation component in the current pose of the electronic device.
Optionally, the pose calculation apparatus 800 further includes:
and the first pose calculation module is used for acquiring the current frame color image and the reference frame color image corresponding to the current frame color image if the target historical characteristic information matched with the target characteristic information does not exist, and constructing a reprojection error of the characteristic points between the current frame color image and the reference frame color image.
And the second pose calculation module is used for solving the rotation component and the translation component in the current pose of the electronic equipment based on the reprojection error.
Optionally, the first pose calculation module is further configured to determine a target feature point in the feature points of the reference frame color image according to the feature points of the reference frame color image and second plane information in the reference depth image corresponding to the reference frame color image, where the target feature point is within a preset range of the same to-be-optimized plane of the reference depth image; and constructing a reprojection error of the feature points between the current frame color image and the reference frame color image based on the to-be-optimized plane parameters of the to-be-optimized plane and the to-be-optimized pose of the electronic equipment.
Optionally, the second pose calculation module is further configured to obtain a target matching feature point corresponding to the target feature point in the current frame color image, and solve a minimum value of the reprojection error based on the target matching feature point to obtain a rotation component and a translation component in the current pose of the electronic device.
Optionally, the pose calculation apparatus 800 further includes:
and the historical characteristic information base updating module is used for obtaining the historical characteristic information matched with the target characteristic information according to the target characteristic information of the current frame color image and storing the historical characteristic information.
In an embodiment of the present application, a pose calculation apparatus includes: the plane information acquisition module is used for acquiring first plane information in the current frame depth image corresponding to the current frame color image; the historical characteristic information determining module is used for determining whether target historical characteristic information matched with the target characteristic information exists or not if the target characteristic information exists in the current frame color image according to the first plane information; the rotating component calculating module is used for calculating a rotating component in the current pose of the electronic equipment according to the target historical characteristic information and the target characteristic information if the target historical characteristic information matched with the target characteristic information exists; and the current pose calculation module is used for calculating a translation component in the current pose of the electronic equipment and calculating the current pose of the electronic equipment according to the rotation component and the translation component. After first plane information in a current frame depth image corresponding to a current frame color image is obtained, if target characteristic information exists in the current frame color image according to the first plane information and the existence of target historical characteristic information matched with the target characteristic information is confirmed, the target historical characteristic information represents that the historical frame color image has the same target characteristic information as that in the current frame color image. Because the rotation component in the current pose is obtained based on the target historical characteristic information, namely the error of the rotation component in the current pose is related to the error of the rotation component in the historical pose in the historical frame color image, and the rotation component in the current pose does not depend on the correlation between the previous frame color image and the next frame color image, the accumulated error of the rotation component in the current pose is reduced, and the accuracy of calculating the current pose is improved.
Embodiments of the present application also provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the steps of the method according to any of the above embodiments.
Further, please refer to fig. 9, where fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 9, the electronic device 900 may include: at least one central processor 901, at least one network interface 904, a user interface 903, a memory 905, at least one communication bus 902.
Wherein a communication bus 902 is used to enable the connective communication between these components.
The user interface 903 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 903 may also include a standard wired interface and a wireless interface.
The network interface 904 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
The central processor 901 may include one or more processing cores. The central processor 901 uses various interfaces and lines to connect the various parts of the electronic device 900, and performs the various functions of the electronic device 900 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 905 and by invoking data stored in the memory 905. Optionally, the central processor 901 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The central processor 901 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is used for rendering and drawing the content to be displayed on the display screen; and the modem is used to handle wireless communications. It is to be understood that the modem may also not be integrated into the central processor 901 but be implemented separately by a single chip.
The memory 905 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 905 includes a non-transitory computer-readable medium. The memory 905 may be used to store instructions, programs, code, sets of codes, or sets of instructions. The memory 905 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above-described method embodiments, and the like; the storage data area may store data and the like referred to in the above respective method embodiments. The memory 905 may optionally be at least one storage device located remotely from the central processor 901. As shown in fig. 9, the memory 905, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a pose calculation program.
In the electronic device 900 shown in fig. 9, the user interface 903 is mainly used for providing an input interface for a user to obtain data input by the user; the central processor 901 may be configured to invoke the pose calculation program stored in the memory 905, and specifically perform the following operations:
acquiring first plane information in a current frame depth image corresponding to a current frame color image; if the target characteristic information exists in the current frame color image according to the first plane information, whether target historical characteristic information matched with the target characteristic information exists is determined; if the target historical characteristic information matched with the target characteristic information exists, calculating a rotation component in the current pose of the electronic equipment according to the target historical characteristic information and the target characteristic information; and calculating a translation component in the current pose of the electronic equipment, and calculating the current pose of the electronic equipment according to the rotation component and the translation component.
Optionally, the target historical feature information is obtained by transforming the target feature information in the historical frame color image into a preset reference system according to the rotation component of the historical pose corresponding to the historical frame color image;
calculating a rotation component in the current pose of the electronic equipment according to the target historical characteristic information and the target characteristic information, wherein the rotation component comprises the following steps: and calculating a rotation component in the current pose of the electronic equipment according to the transformation relation between the target historical characteristic information and the target characteristic information in the current frame color image.
Optionally, the determining whether there is target history feature information matching the target feature information includes: acquiring a plane included angle relation between the target characteristic information and the historical characteristic information after the target characteristic information is converted into a preset reference system; and if the plane included angle relation meets the preset plane included angle relation, determining that target historical characteristic information matched with the target characteristic information exists, and determining the historical characteristic information corresponding to the preset plane included angle relation as the target historical characteristic information.
Optionally, calculating a translation component in the current pose of the electronic device comprises: solving a translation component in the current pose of the electronic device based on the rotation component in the current pose.
Optionally, solving for a translation component in the current pose of the electronic device based on the rotation component in the current pose comprises: acquiring a current frame color image and a reference frame color image corresponding to the current frame color image, and constructing a reprojection error of characteristic points between the current frame color image and the reference frame color image; and solving a translation component in the current pose of the electronic equipment based on the rotation component and the reprojection error in the current pose.
Optionally, constructing a reprojection error of the feature point between the current frame color image and the reference frame color image includes: determining a target characteristic point in the characteristic points of the reference frame color image according to the characteristic points of the reference frame color image and second plane information of the reference depth image corresponding to the reference frame color image, wherein the target characteristic point is in a preset range of the same plane to be optimized of the reference depth image; constructing a reprojection error of characteristic points between a current frame color image and a reference frame color image based on the to-be-optimized plane parameters of the to-be-optimized plane and the to-be-optimized pose of the electronic equipment; solving a translation component in the current pose of the electronic device based on the rotation component and the reprojection error in the current pose, comprising: and acquiring target matching feature points corresponding to the target feature points in the current frame color image, and solving the minimum value of the re-projection error based on the target matching feature points and the rotation component to obtain the translation component in the current pose of the electronic equipment.
Optionally, if there is no target historical feature information matched with the target feature information, obtaining a current frame color image and a reference frame color image corresponding to the current frame color image, and constructing a reprojection error of feature points between the current frame color image and the reference frame color image; and solving a rotation component and a translation component in the current pose of the electronic equipment based on the reprojection error.
Optionally, constructing a reprojection error of the feature point between the current frame color image and the reference frame color image includes: determining a target characteristic point in the characteristic points of the reference frame color image according to the characteristic points of the reference frame color image and second plane information of the reference depth image corresponding to the reference frame color image, wherein the target characteristic point is in a preset range of the same plane to be optimized of the reference depth image; constructing a reprojection error of characteristic points between a current frame color image and a reference frame color image based on the to-be-optimized plane parameters of the to-be-optimized plane and the to-be-optimized pose of the electronic equipment; solving a rotation component and a translation component in a current pose of the electronic device based on the reprojection error, comprising: and acquiring target matching feature points corresponding to the target feature points in the current frame color image, and solving the minimum value of the reprojection error based on the target matching feature points to obtain a rotation component and a translation component in the current pose of the electronic equipment.
Optionally, after solving the rotation component and the translation component in the current pose of the electronic device based on the reprojection error, the method further includes: and obtaining historical characteristic information matched with the target characteristic information according to the target characteristic information of the current frame color image, and storing the historical characteristic information.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules is merely a division of logical functions, and an actual implementation may have another division, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
Modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present application, which are essential or part of the technical solutions contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In view of the above description of the pose calculation method, apparatus, storage medium and electronic device provided by the present application, those skilled in the art will appreciate that the embodiments of the present application can be modified in various ways, and therefore, the present disclosure should not be construed as limiting the present application.

Claims (12)

1. A pose calculation method applied to an electronic device, the method comprising:
acquiring first plane information in a current frame depth image corresponding to a current frame color image;
if the target characteristic information exists in the current frame color image according to the first plane information, whether target historical characteristic information matched with the target characteristic information exists is determined;
if the target historical characteristic information matched with the target characteristic information exists, calculating a rotation component in the current pose of the electronic equipment according to the target historical characteristic information and the target characteristic information;
calculating a translation component in the current pose of the electronic equipment, and calculating the current pose of the electronic equipment according to the rotation component and the translation component.
2. The method according to claim 1, wherein the target historical feature information is obtained by transforming target feature information in a historical frame color image into a preset reference system according to a rotation component of a historical pose corresponding to the historical frame color image;
the calculating a rotation component in the current pose of the electronic device according to the target historical feature information and the target feature information comprises:
and calculating a rotation component in the current pose of the electronic equipment according to the transformation relation between the target historical characteristic information and the target characteristic information in the current frame color image.
3. The method of claim 1, wherein the confirming whether there is target historical feature information matching the target feature information comprises:
acquiring a plane included angle relation between the target characteristic information and historical characteristic information after the target characteristic information is converted into a preset reference system;
and if the plane included angle relation meets a preset plane included angle relation, determining that target historical characteristic information matched with the target characteristic information exists, and determining the historical characteristic information corresponding to the preset plane included angle relation as the target historical characteristic information.
4. The method of claim 1, wherein the calculating a translation component in the current pose of the electronic device comprises:
solving for a translation component in the current pose of the electronic device based on the rotation component in the current pose.
5. The method of claim 4, wherein solving for a translation component in the current pose of the electronic device based on the rotation component in the current pose comprises:
acquiring the current frame color image and a reference frame color image corresponding to the current frame color image, and constructing a reprojection error of characteristic points between the current frame color image and the reference frame color image;
solving a translation component in the current pose of the electronic device based on the rotation component in the current pose and the reprojection error.
6. The method of claim 5, wherein the constructing of the reprojection error of the feature points between the current frame color image and the reference frame color image comprises:
determining a target feature point in the feature points of the reference frame color image according to the feature points of the reference frame color image and second plane information of the reference depth image corresponding to the reference frame color image, wherein the target feature point is in a preset range of the same plane to be optimized of the reference depth image;
constructing a reprojection error of the feature points between the current frame color image and the reference frame color image based on the to-be-optimized plane parameters of the to-be-optimized plane and the to-be-optimized pose of the electronic equipment;
solving for a translation component in the current pose of the electronic device based on the rotation component in the current pose and the reprojection error, comprising:
and acquiring a target matching feature point corresponding to the target feature point in the current frame color image, and solving the minimum value of the re-projection error based on the target matching feature point and the rotation component to obtain a translation component in the current pose of the electronic equipment.
7. The method according to claim 1, wherein if there is no target historical feature information matching the target feature information, obtaining the current frame color image and a reference frame color image corresponding to the current frame color image, and constructing a reprojection error of feature points between the current frame color image and the reference frame color image;
solving a rotation component and a translation component in the current pose of the electronic device based on the reprojection error.
8. The method of claim 7, wherein constructing the reprojection error of the feature points between the current frame color image and the reference frame color image comprises:
determining a target feature point in the feature points of the reference frame color image according to the feature points of the reference frame color image and second plane information of the reference depth image corresponding to the reference frame color image, wherein the target feature point is in a preset range of the same plane to be optimized of the reference depth image;
constructing a reprojection error of feature points between the current frame color image and the reference frame color image based on the to-be-optimized plane parameters of the to-be-optimized plane and the to-be-optimized pose of the electronic equipment;
the solving for rotation and translation components in the current pose of the electronic device based on the reprojection error includes:
and acquiring a target matching feature point corresponding to the target feature point in the current frame color image, and solving the minimum value of the re-projection error based on the target matching feature point to obtain a rotation component and a translation component in the current pose of the electronic equipment.
9. The method of claim 7, further comprising, after solving for a rotational component and a translational component in the current pose of the electronic device based on the reprojection error:
and obtaining historical characteristic information matched with the target characteristic information according to the target characteristic information of the current frame color image, and storing the historical characteristic information.
10. A pose calculation apparatus applied to an electronic device, the apparatus comprising:
the plane information acquisition module is used for acquiring first plane information in the current frame depth image corresponding to the current frame color image;
a historical characteristic information determining module, configured to determine whether target historical characteristic information matching the target characteristic information exists or not if it is determined that the target characteristic information exists in the current frame color image according to the first plane information;
the rotating component calculating module is used for calculating a rotating component in the current pose of the electronic equipment according to the target historical characteristic information and the target characteristic information if the target historical characteristic information matched with the target characteristic information exists;
and the current pose calculation module is used for calculating a translation component in the current pose of the electronic equipment and calculating the current pose of the electronic equipment according to the rotation component and the translation component.
11. A computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the steps of the method according to any of claims 1 to 9.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method according to any one of claims 1 to 9 when executing the program.
CN202210619402.4A 2022-05-31 2022-05-31 Pose calculation method and device, storage medium and electronic equipment Pending CN114998433A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210619402.4A CN114998433A (en) 2022-05-31 2022-05-31 Pose calculation method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210619402.4A CN114998433A (en) 2022-05-31 2022-05-31 Pose calculation method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN114998433A true CN114998433A (en) 2022-09-02

Family

ID=83031746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210619402.4A Pending CN114998433A (en) 2022-05-31 2022-05-31 Pose calculation method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114998433A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115965944A (en) * 2023-03-09 2023-04-14 安徽蔚来智驾科技有限公司 Target information detection method, device, driving device, and medium
CN115965944B (en) * 2023-03-09 2023-05-09 安徽蔚来智驾科技有限公司 Target information detection method, device, driving device and medium

Similar Documents

Publication Publication Date Title
US11270460B2 (en) Method and apparatus for determining pose of image capturing device, and storage medium
CN106846497B (en) Method and device for presenting three-dimensional map applied to terminal
EP4027299A2 (en) Method and apparatus for generating depth map, and storage medium
CN113077548B (en) Collision detection method, device, equipment and storage medium for object
US20230249076A1 (en) Collision data processing method and apparatus, storage medium, program product, and electronic device
CN116051729B (en) Three-dimensional content generation method and device and electronic equipment
CN115578433B (en) Image processing method, device, electronic equipment and storage medium
CN114792355B (en) Virtual image generation method and device, electronic equipment and storage medium
US20230401799A1 (en) Augmented reality method and related device
CN115239888B (en) Method, device, electronic equipment and medium for reconstructing three-dimensional face image
CN113870439A (en) Method, apparatus, device and storage medium for processing image
CN111868786A (en) Cross-equipment monitoring computer vision system
CN112733641A (en) Object size measuring method, device, equipment and storage medium
CN115797565A (en) Three-dimensional reconstruction model training method, three-dimensional reconstruction device and electronic equipment
JP7262530B2 (en) Location information generation method, related device and computer program product
KR102488517B1 (en) A method, a device, an electronic equipment and a storage medium for changing hairstyle
CN114627244A (en) Three-dimensional reconstruction method and device, electronic equipment and computer readable medium
CN114998433A (en) Pose calculation method and device, storage medium and electronic equipment
CN115578515B (en) Training method of three-dimensional reconstruction model, three-dimensional scene rendering method and device
EP4086853A2 (en) Method and apparatus for generating object model, electronic device and storage medium
CN114674328B (en) Map generation method, map generation device, electronic device, storage medium, and vehicle
CN111292365B (en) Method, apparatus, electronic device and computer readable medium for generating depth map
CN114820908B (en) Virtual image generation method and device, electronic equipment and storage medium
CN115578432B (en) Image processing method, device, electronic equipment and storage medium
CN116524165B (en) Migration method, migration device, migration equipment and migration storage medium for three-dimensional expression model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination