WO2024001526A1 - Image processing method, apparatus and electronic device - Google Patents
Image processing method, apparatus and electronic device
- Publication number
- WO2024001526A1 (PCT/CN2023/092813)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- electronic device
- relative position
- coordinates
- feature points
- Prior art date
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 32
- 238000000034 method Methods 0.000 claims abstract description 61
- 239000011159 matrix material Substances 0.000 claims description 83
- 238000013519 translation Methods 0.000 claims description 52
- 238000012545 processing Methods 0.000 claims description 25
- 238000004590 computer program Methods 0.000 claims description 21
- 230000003287 optical effect Effects 0.000 claims description 17
- 238000001914 filtration Methods 0.000 claims description 12
- 238000010586 diagram Methods 0.000 description 19
- 230000006870 function Effects 0.000 description 9
- 230000000007 visual effect Effects 0.000 description 6
- 238000004422 calculation algorithm Methods 0.000 description 4
- 238000004891 communication Methods 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 4
- 230000000694 effects Effects 0.000 description 2
- 239000013307 optical fiber Substances 0.000 description 2
- 230000000644 propagated effect Effects 0.000 description 2
- 239000004065 semiconductor Substances 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 230000003190 augmentative effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000012423 maintenance Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Definitions
- the present disclosure relates to the field of computer vision technology, and in particular, to an image processing method, device and electronic equipment.
- the six-degree-of-freedom pose of the camera is estimated through the visual odometry system.
- the visual odometry system can obtain the posture of the camera by analyzing the coordinates.
- the MonoSLAM algorithm is used to analyze the coordinates of the feature points in the image captured by the camera, and then obtain the posture of the camera.
- the visual odometry system must analyze the coordinates of the feature points every time, making pose estimation more complex and time-consuming, which in turn results in lower efficiency of camera pose estimation.
- the present disclosure provides an image processing method, device and electronic equipment to solve the technical problem of low efficiency in camera pose determination in the prior art.
- the present disclosure provides an image processing method, which method includes:
- a first relative position is determined, where the first relative position is the relative position between the pose when the electronic device captures the first image and the pose when the electronic device captures the second image.
- the posture when the electronic device captures the first image is determined.
- the present disclosure provides an image processing device, including a first acquisition module, a first determination module, a second acquisition module, a second determination module and a third determination module, wherein:
- the first acquisition module is configured to acquire a first image obtained by photographing a first object by an electronic device, where the first image includes feature points;
- the second acquisition module is configured to acquire the spatial coordinates, relative to the electronic device, of the part of the first object corresponding to the feature points when the electronic device captures a second image, where the second image is the previous frame of the first image;
- the second determination module is configured to determine a first relative position according to the first image coordinates and the spatial coordinates, where the first relative position is the relative position between the pose when the electronic device captures the first image and the pose when the electronic device captured the second image;
- the third determination module is configured to determine the posture of the electronic device when capturing the first image according to the first relative position.
- embodiments of the present disclosure provide an electronic device, including: a processor and a memory;
- the memory stores computer execution instructions
- the processor executes the computer-execution instructions stored in the memory, so that the processor performs the image processing method of the above first aspect and its various possible implementations.
- embodiments of the present disclosure provide a computer-readable storage medium.
- Computer-executable instructions are stored in the computer-readable storage medium.
- when the processor executes the computer-executable instructions, the image processing methods of the above first aspect and its various possible implementations are realized.
- embodiments of the present disclosure provide a computer program product, including a computer program.
- when the computer program is executed by a processor, it implements the image processing methods of the above first aspect and its various possible implementations.
- FIG. 3 is a schematic flowchart of another image processing method provided by an embodiment of the present disclosure.
- Figure 5 is a schematic diagram of a process for determining first image coordinates provided by an embodiment of the present disclosure
- FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- An electronic device is a device with wireless sending and receiving functions. Electronic devices can be deployed on land, including indoors or outdoors, handheld, wearable or vehicle-mounted; they can also be deployed on water (such as on ships).
- the electronic device may be a mobile phone, a tablet computer (Pad), a computer with wireless transceiver functions, a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, vehicle-mounted electronic equipment, a wireless terminal in self-driving, a wireless device in remote medical, a wireless device in a smart grid, a wireless device in transportation safety, a wireless device in a smart city, a wireless device in a smart home, a wearable electronic device, etc.
- the electronic equipment involved in the embodiments of this application can also be called a terminal, user equipment (UE), access electronic equipment, vehicle-mounted terminal, industrial control terminal, UE unit, UE station, mobile station, remote station, remote electronic equipment, mobile equipment, UE electronic equipment, wireless communication equipment, UE agent or UE device, etc.
- Electronic equipment can also be stationary or mobile.
- the electronic device obtains the first relative position by updating the second relative position. Since the complexity of the relative-position update is low, the electronic device can quickly determine the relative position between the pose when capturing the first image and the pose when capturing the second image, which allows the electronic device to quickly obtain the current pose, thereby improving the efficiency of camera pose estimation.
- Figure 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure. See Figure 1, including: electronic device and first object.
- the electronic device photographs the first object
- the electronic device obtains the first image.
- the first image includes the first object and the feature point A extracted by the electronic device in the first image.
- when the electronic device acquires the first image, its pose relative to the pose when the previous image was captured is a translation of 1 meter to the left and a rotation of 10 degrees.
- the electronic device determines the coordinates of the feature point A.
- the first image coordinate of the feature point A in the first image is (X, Y)
- the spatial coordinates of the feature point A relative to the electronic device when the electronic device captured the previous frame of the first image are (x, y, z).
- the electronic device determines the pose when it captures the first image based on the first image coordinates, the spatial coordinates and the relative position. Since the complexity of updating the relative position in Kalman filtering is low, the electronic device can quickly determine the relative position between the pose when the first image is captured and the pose when the second image is captured, so that the electronic device quickly obtains the current pose, thereby improving the efficiency of camera pose estimation.
- the present disclosure provides an image processing method, apparatus and electronic device: a first image obtained by the electronic device photographing a first object is acquired, wherein the first image includes feature points; the first image coordinates of the feature points in the first image are determined; the spatial coordinates, relative to the electronic device, of the part of the first object corresponding to the feature points when the electronic device captured the second image are acquired, where the second image is the previous frame image of the first image; and the first relative position is determined according to the first image coordinates and the spatial coordinates.
- the first relative position is the relative position between the pose when the electronic device captures the first image and the pose when the electronic device captures the second image. According to the first relative position, the pose of the electronic device when capturing the first image is determined.
- the electronic device can accurately determine the error state during the position-determination process through the first image coordinates and the spatial coordinates, and through the error state determine the relative position between the pose when shooting the first image and the pose when shooting the second image. Since the complexity of acquiring this relative position is low, the electronic device can quickly determine the relative position between the current pose and the pose of the previous frame image, and from that relative position quickly obtain the current pose, thereby improving the efficiency of camera pose estimation.
- FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present application. See Figure 2, the method can include:
- the execution subject of the embodiment of the present disclosure may be an electronic device, or may be an image processing device provided in the electronic device.
- the image processing device can be implemented by software, and the image processing device can also be implemented by a combination of software and hardware.
- the first object may be a photographed object of the electronic device.
- the first object can be a movable object such as a user or an airplane, or a stationary object such as a table or a chair (when the camera moves, the stationary object can also become a movable object in the video).
- the first image includes the first object and feature points.
- the feature points are used to mark the location of the first object in the first image.
- the first object is a table.
- the feature points in the image may be at the corners, legs, etc. of the table.
- the electronic device can set feature points in the first image. For example, when the electronic device captures the first image, it can add multiple feature points to the first image.
- the electronic device can obtain the feature points in an image by tracking the feature points in the previous frame image. For example, the electronic device can add multiple feature points to the first frame image (before this, the electronic device has not selected feature points in the initial image).
- when the electronic device obtains the feature points in the second frame image, it can perform optical flow tracking on the feature points in the first frame image, and thereby obtain the feature points in the second frame image.
- for example, the electronic device adds a feature point to the table legs in the first frame image. In the second frame image, the position of the electronic device has changed and the position of the table legs in the image also changes, but the feature point still indicates the table legs in frame 2.
- the first image coordinates are the 2D coordinates of the feature points in the first image.
- the first image is a two-dimensional image captured by an electronic device. A coordinate system is established with a vertex of the first image as the coordinate origin, and then the first image coordinates of each feature point can be expressed according to the coordinate system.
- the first image coordinates can be determined according to the following feasible implementation: obtain the third image coordinates of the feature points in the second image.
- the second image is the previous frame image of the first image.
- if the second image is the initial image, the electronic device can add at least one feature point to the initial image and determine the coordinates of the feature points as the third image coordinates; if the second image is not the initial image captured by the electronic device, the electronic device can track the feature points in the previous frame of the second image and obtain the third image coordinates of the feature points in the second image.
- optical flow tracking or feature matching is performed based on the third image coordinates to obtain the first image coordinates of the feature points in the first image. For example, after the electronic device obtains the third image coordinates of the feature points in the second image, it can track those feature points into the first image through optical flow tracking, and thereby obtain the first image coordinates of the feature points in the first image, which improves the accuracy of image-coordinate acquisition.
- the second image is the previous frame image of the first image.
- the spatial coordinates may be the 3D coordinates of the feature points in the first image in the camera coordinate system.
- the first object is a table, and if the feature point corresponds to the corner of the table, the spatial coordinates can be the 3D coordinates of the corner of the table in the camera coordinate system when the camera captures the second image.
- if the second image is the initial image, the electronic device can extract multiple feature points in the second image, obtain the 2D coordinates of the feature points, and obtain the direction vector of each feature point through its 2D coordinates (the depth can be set to unit 1); if the second image is not the initial image of the first object captured by the electronic device, the electronic device can obtain the spatial coordinates of the feature points through the previous frame image.
- the first relative position is the relative position between the posture when the electronic device captures the first image and the posture when the electronic device captures the second image.
- the first relative position is the relative translation and relative rotation of the electronic device when the current image is captured compared to when the electronic device captured the previous frame image.
- the electronic device may determine the first relative position according to the following possible implementation: obtain the second relative position between the pose when the electronic device captured the second image and the pose when it captured the previous frame of the second image, and determine the first relative position according to the first image coordinates, the spatial coordinates and the second relative position.
- the second relative position includes a first relative translation and a first relative rotation.
- the first relative translation is the difference between the position when the electronic device captures the second image and the position when the electronic device captures the previous frame of the second image. For example, the electronic device captures a second image at position A, and captures the previous frame of the second image at position B. If the distance between position A and position B is 1 meter, the first relative translation is determined to be 1 meter.
- the first relative rotation is the difference between the rotation angle when the electronic device captures the second image and the rotation angle when the electronic device captures the previous frame of the second image. For example, if the electronic device captures the second image at a rotation angle of 30 degrees, and the electronic device captures the previous frame of the second image at a rotation angle of 60 degrees, then the first relative rotation is determined to be 30 degrees.
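The relative translation and relative rotation in the examples above can be computed directly from two poses. The sketch below uses a 2D simplification (position plus heading angle) purely for illustration; the pose representation is an assumption, not the patent's exact parameterization.

```python
import math

def relative_pose(pos_a, yaw_a, pos_b, yaw_b):
    """Relative translation (meters) and relative rotation (degrees)
    between two poses. pos_* are (x, y) positions, yaw_* are heading
    angles in degrees. A 2D illustrative simplification.
    """
    translation = math.dist(pos_a, pos_b)       # distance between positions
    rotation = abs(yaw_a - yaw_b) % 360.0       # difference of rotation angles
    return translation, rotation

# The examples from the text: positions A and B are 1 meter apart,
# rotation angles are 30 and 60 degrees.
t, r = relative_pose((0.0, 0.0), 30.0, (1.0, 0.0), 60.0)
# t == 1.0 (first relative translation), r == 30.0 (first relative rotation)
```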
- if the second image is the first frame image of the first object captured by the electronic device, then the second image has no previous frame image and is the initial image; therefore, the first relative translation and the first relative rotation corresponding to the second image are both 0.
- the relative translation and relative rotation of the previous frame image are both 0.
- the relative translation and relative rotation corresponding to the second frame of image (second image) can be obtained in Kalman filtering.
- starting from the initial image, the electronic device keeps updating the relative translation and relative rotation corresponding to each image through the Kalman filter. Therefore, the electronic device can quickly obtain the relative translation and relative rotation corresponding to the previous frame image, thereby improving the accuracy of the pose estimation of the electronic device.
- the first relative position includes a relative translation of the target and a relative rotation of the target.
- the pose of the electronic device when shooting the first image is determined as follows: according to the relative translation of the target, the global translation of the electronic device is determined;
- according to the relative rotation of the target, the global rotation of the electronic device is determined, and according to the global translation and global rotation, the pose of the electronic device when capturing the first image is determined.
- the global translation is the difference between the current position and the initial position of the electronic device.
- the global rotation is the difference between the current rotation angle of the electronic device and the initial rotation angle. By accumulating the relative translation of the target and the relative rotation of the target, the global translation and global rotation of the electronic device can be obtained, and then the pose of the electronic device when shooting the first image. For example, if the global translation is 1 meter and the global rotation is 30 degrees, the pose of the electronic device when capturing the first image is translated by 1 meter and rotated by 30 degrees compared to the initial pose.
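Accumulating relative motions into a global pose can be sketched as follows, again in a 2D simplification. The convention of applying the relative translation along the current heading is an illustrative assumption, not the patent's exact formula.

```python
import math

def compose(global_pose, rel_trans, rel_rot_deg):
    """Fold one relative motion into the global pose (2D sketch).

    global_pose is (x, y, yaw_deg). The relative translation is applied
    along the current heading, then the relative rotation is added.
    """
    x, y, yaw = global_pose
    x += rel_trans * math.cos(math.radians(yaw))
    y += rel_trans * math.sin(math.radians(yaw))
    yaw = (yaw + rel_rot_deg) % 360.0
    return (x, y, yaw)

pose = (0.0, 0.0, 0.0)           # initial pose
pose = compose(pose, 1.0, 30.0)  # global translation 1 m, global rotation 30 deg
# pose == (1.0, 0.0, 30.0): translated 1 meter and rotated 30 degrees
# compared to the initial pose, as in the example above.
```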
- the coordinate system of the feature points also needs to be converted. For example, after determining the position of the camera in this frame of image, it is also necessary to convert the camera coordinate system of the feature point in the previous frame of image to the camera coordinate system of this frame.
- Embodiments of the present disclosure provide an image processing method: acquire a first image obtained by photographing a first object with an electronic device, wherein the first image includes feature points; determine the first image coordinates of the feature points in the first image; acquire the spatial coordinates, relative to the electronic device, of the part of the first object corresponding to the feature points when the electronic device captured the second image; acquire the second relative position between the pose when the electronic device captured the second image and the pose when it captured the previous frame of the second image; determine the first relative position based on the first image coordinates, the spatial coordinates and the second relative position; and determine the pose when the electronic device captures the first image based on the first relative position.
- in this way, the electronic device can determine the error in the Kalman filter through the first image coordinates and the spatial coordinates, and then update the second relative position through the error to obtain the first relative position. Since the complexity of the error-state and relative-position update in the Kalman filter is low, the electronic device can quickly determine the current relative translation and relative rotation, and then obtain its current pose in a shorter time, improving the efficiency of pose estimation of the electronic device.
- FIG. 3 is a schematic flowchart of another image processing method provided by an embodiment of the present disclosure. Please refer to Figure 3.
- the method flow includes:
- step S301 may refer to step S201, which will not be described again in this embodiment of the present disclosure.
- the noise points are mismatched feature points in the first image.
- if feature points in the second image fail to be tracked or are matched incorrectly in the first image, those feature points are determined to be noise points in the first image.
- the noise points can be obtained through the following feasible implementation methods: determining whether the electronic device includes a gyroscope. If the electronic device includes a gyroscope, the rotation angle of the electronic device is obtained through the gyroscope, and the noise point is determined using a preset algorithm (such as a two-point RANSAC algorithm). If the electronic device does not include a gyroscope, the electronic device cannot obtain the rotation angle, and the electronic device determines the noise point through the five-point method (such as the five-point RANSAC algorithm).
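The RANSAC idea behind the noise-point detection can be sketched with a deliberately simplified model: assuming the rotation has already been compensated (e.g. by the gyroscope), inlier matches share one common image-space translation. This stands in for the two-point RANSAC named above; it is not the real two-point or five-point algorithm.

```python
import random

def ransac_translation_inliers(prev_pts, curr_pts, thresh=2.0, iters=100):
    """Return the indices of inlier matches; the rest are noise points.

    Simplified RANSAC: hypothesize an image-space translation from one
    sampled correspondence, then count matches consistent with it.
    """
    best_inliers = []
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(iters):
        i = rng.randrange(len(prev_pts))
        # Hypothesize the translation from a single correspondence.
        dx = curr_pts[i][0] - prev_pts[i][0]
        dy = curr_pts[i][1] - prev_pts[i][1]
        inliers = [
            j for j, (p, c) in enumerate(zip(prev_pts, curr_pts))
            if abs(c[0] - p[0] - dx) + abs(c[1] - p[1] - dy) < thresh
        ]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

prev = [(10, 10), (20, 15), (30, 40), (50, 50)]
curr = [(13, 11), (23, 16), (90, 5), (53, 51)]  # third match is a mismatch
# ransac_translation_inliers(prev, curr) keeps indices [0, 1, 3];
# index 2 is flagged as a noise point and would be deleted.
```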
- the electronic device can delete the noise points in the first image.
- the first image includes 100 feature points, and if 30 of the feature points are noise points, the electronic device deletes the 30 noise points, leaving 70 feature points in the first image.
- the electronic device can also obtain the number of feature points in the first image and, when the number of feature points is less than or equal to the second threshold, add a preset number of feature points to the first image. For example, if the electronic device deletes the noise points in the first image and 30 feature points remain, then since the number of feature points is less than the second threshold (too few feature points), the electronic device can add 100 feature points to the first image to improve the accuracy of pose estimation.
- FIG. 4 is a schematic diagram of a noise point processing process provided by an embodiment of the present disclosure. See Figure 4, including the first image.
- the first image includes feature point A, feature point B, feature point C, feature point D, feature point E, feature point F and feature point G.
- Noise points among feature points are determined in the first image.
- the noise points include feature point A, feature point E and feature point G.
- the remaining feature points in the first image are feature point B, feature point C, feature point D and feature point F. Since the number of feature points in the first image is less than the second threshold, new feature points are added in the first image, where the new feature points include feature point H, feature point I, and feature point J.
- FIG. 5 is a schematic diagram of a process for determining first image coordinates according to an embodiment of the present disclosure. See Figure 5, including a first image and a second image.
- the second image is the previous frame image of the first image.
- the second image includes feature point A, and the image coordinates of feature point A in the second image are (X, Y).
- optical flow tracking is performed on feature point A in the second image.
- the image coordinates of feature point A in the first image are obtained as (x, y).
- the second image is the previous frame image of the first image.
- the electronic device obtains the direction vector of a feature point from its 2D image coordinates.
- the depth can be set to unit 1, so that the spatial coordinates of the feature point in the initial image are (x, y, 1), where x is the abscissa and y is the ordinate of the feature point in the initial image.
- if the image is not the initial image, the spatial coordinates of the feature points in the previous frame image can be used to obtain the spatial coordinates of the feature points in this frame image.
- after the electronic device starts to move, it keeps updating the spatial coordinates of the feature points corresponding to each image, starting from the initial image. In this way, the spatial coordinates of the feature points in this frame image can be quickly obtained through Kalman filtering, thereby improving the efficiency of the electronic device's pose estimation.
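The unit-depth initialization described above can be sketched as back-projecting a pixel through the camera intrinsics. The intrinsic matrix `K` is an assumption for illustration (the text itself uses raw pixel coordinates with depth 1); this is a sketch, not the patent's exact procedure.

```python
import numpy as np

def init_spatial_coordinate(u, v, K):
    """Back-project pixel (u, v) to a unit-depth point in camera coordinates.

    K is the 3x3 camera intrinsic matrix (assumed known from calibration).
    With the depth fixed to 1, the result is the feature point's direction
    vector, matching the unit-depth initialization described above.
    """
    pixel = np.array([u, v, 1.0])
    ray = np.linalg.inv(K) @ pixel   # normalized image coordinates
    return ray / ray[2]              # enforce depth exactly 1

# Hypothetical intrinsics: focal length 500, principal point (320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
p = init_spatial_coordinate(320.0, 240.0, K)
# The principal-point pixel back-projects to (0, 0, 1).
```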
- step S305 may refer to step S204, which will not be described again in this embodiment of the disclosure.
- S306. Determine the first relative position according to the first image coordinates, spatial coordinates and the second relative position.
- the first relative position is the relative position between the posture when the electronic device captures the first image and the posture when the electronic device captures the second image.
- the first relative position can be determined according to the following feasible implementation: project the spatial coordinates corresponding to the feature points into the first image to obtain the second image coordinates of the feature points in the first image.
- the spatial coordinates of the feature points (the 3D coordinates of the feature points relative to the electronic device when it captured the previous frame image, which are updated continuously in the Kalman filter) can be projected into this frame image to obtain the image coordinates (2D coordinates) of the projected feature points in this frame image.
- that is, the electronic device obtains the second image coordinates of the feature points in the first image by projecting their spatial coordinates, and obtains the first image coordinates of the feature points in the first image by tracking the feature points in the previous frame image.
- in this way, the error of the Kalman filter can be determined based on the first image coordinates from optical flow tracking and the second image coordinates obtained by projection, and the state maintained in the Kalman filter (relative translation, relative rotation, and the spatial coordinates of the feature points) is updated.
- the first relative position can be obtained according to the following feasible implementation: determine the first difference between the first image coordinates and the second image coordinates. For example, the projected coordinates (second image coordinates) can be subtracted from the actual coordinates (first image coordinates) obtained by optical flow tracking to obtain the first difference value.
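The first difference is a reprojection residual: project the feature point's spatial coordinates with a pinhole model and subtract the result from the tracked coordinates. The intrinsic matrix `K` is a hypothetical example value; this is a sketch of the computation, not the patent's exact formulation.

```python
import numpy as np

def reprojection_residual(point_cam, tracked_uv, K):
    """First difference: tracked (first) image coordinates minus the
    projected (second) image coordinates of a feature point.

    point_cam: feature point's 3D position in the camera frame.
    tracked_uv: optical-flow tracking result.  K: intrinsic matrix.
    """
    proj = K @ point_cam               # pinhole projection
    projected_uv = proj[:2] / proj[2]  # second image coordinates
    return np.asarray(tracked_uv) - projected_uv

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
r = reprojection_residual(np.array([0.1, 0.0, 2.0]), (346.0, 240.0), K)
# projected u = 500 * 0.1 / 2 + 320 = 345, so the residual is (1.0, 0.0)
```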
- according to the first difference value and the second relative position, the first relative position is obtained.
- for example, the first difference value can be used as the state-update value of the extended Kalman filter (EKF), and the state maintained by the Kalman filter (relative translation, relative rotation, and the spatial coordinates of the feature points) is updated through the first difference value to obtain the first relative position.
- by default, the system's motion model is assumed to be stationary; therefore, it is sufficient to update the predicted noise term in the covariance matrix, and there is no need to update the state maintained by the Kalman filter in the prediction step.
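The correction step that applies the first difference follows the standard textbook EKF measurement update. The sketch below shows those generic equations on a toy state; the dimensions and values are illustrative assumptions, not the patent's actual state layout.

```python
import numpy as np

def ekf_update(x, P, residual, H, R):
    """Standard EKF measurement update.

    x: state vector (e.g. relative translation, relative rotation,
    feature spatial coordinates); P: state covariance; residual: first
    difference between tracked and projected coordinates; H: measurement
    Jacobian; R: measurement noise covariance.
    """
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ residual              # corrected state
    P = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x, P

# Toy 1-state, 1-measurement example.
x, P = ekf_update(np.array([0.0]), np.array([[1.0]]),
                  np.array([1.0]), np.array([[1.0]]), np.array([[1.0]]))
# The gain is 0.5, so x becomes [0.5] and P shrinks to [[0.5]].
```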
- the first relative position is obtained according to the first difference value and the second relative position, specifically: obtaining the first matrix corresponding to the second relative position.
- the first matrix includes a coordinate matrix, a rotation matrix and a translation matrix.
- the first matrix may be a Jacobian matrix in Kalman filtering, and each state maintained by Kalman filtering corresponds to a Jacobian matrix.
- the relative translation maintained by Kalman filter corresponds to a Jacobian matrix
- the relative rotation corresponds to a Jacobian matrix
- the spatial coordinates of feature points correspond to a Jacobian matrix.
- the first matrix is processed to obtain the second matrix.
- if the first difference is less than or equal to the first threshold, the coordinate matrix is set to 0.
- if the first difference is less than or equal to the first threshold, it means that the feature points of the previous frame image have only a small offset relative to the feature points of the current frame image, and the device is currently in a stationary state. Therefore, the coordinate matrix (the part of the Jacobian for the 3D coordinates of the feature points) does not need to be updated, and the coordinate matrix is set to 0.
- the electronic device includes a gyroscope
- set the rotation matrix to 0.
- the electronic device can obtain the rotation angle through the gyroscope, so it does not need to obtain the relative rotation through the Kalman filter; the rotation matrix in the Jacobian matrix corresponding to each state maintained by the Kalman filter is therefore set to 0. In this way, when the electronic device has a gyroscope, the complexity of maintaining the Kalman-filter state can be reduced, improving the efficiency of pose estimation for the electronic device.
- the first difference value is processed through the basic equations of the EKF, and the state and covariance matrix of the Kalman filter are then updated.
- the state of the Kalman filter maintained by this disclosure is relative position (relative translation and relative rotation)
- the complexity of the EKF operation can be effectively reduced, and the pose of the electronic device when capturing the first image can be obtained quickly from the relative position, thereby improving the efficiency of pose estimation for the electronic device.
- the translation matrix and the rotation matrix can be cleared in the Jacobian matrix of the Kalman filter.
- the covariance matrix C in the Kalman filter can also be updated, following the Kalman-filter state-update process, with the corresponding Jacobian matrix J (for example, C ← J·C·Jᵀ).
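As a rough illustration of this covariance update, the sketch below (a minimal numpy version, with an assumed 9-dimensional state layout of relative translation, relative rotation and feature spatial coordinates — the layout is an assumption for illustration, not the disclosure's exact formulation) propagates C through a block Jacobian and zeroes the rotation block when a gyroscope supplies the rotation directly:

```python
import numpy as np

def propagate_covariance(C, J, has_gyroscope=False):
    """Propagate the filter covariance as C <- J C J^T.

    J is assembled from per-state Jacobian blocks (relative translation,
    relative rotation, feature spatial coordinates). When the device has
    a gyroscope, the rotation is read from the sensor instead of being
    estimated, so the rotation block of J is zeroed out, simplifying the
    state maintenance described above.
    """
    J = J.copy()
    if has_gyroscope:
        J[:, 3:6] = 0.0  # columns of the relative-rotation block (assumed layout)
    return J @ C @ J.T
```

With an identity Jacobian the covariance is unchanged; zeroing the rotation block removes that block's contribution to the propagated uncertainty.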
- Embodiments of the present disclosure provide an image processing method: acquire a first image obtained by an electronic device photographing a first object, the first image including feature points; determine noise points of the first image among the feature points and delete them from the first image; determine first image coordinates of the feature points in the first image; acquire the spatial coordinates, relative to the electronic device, of the part of the first object corresponding to the feature points when the electronic device captured a second image; acquire a second relative position between the pose of the electronic device when capturing the second image and its pose when capturing the frame preceding the second image; and determine a first relative position based on the first image coordinates, the spatial coordinates and the second relative position.
- the pose of the electronic device when capturing the first image is determined according to the first relative position.
- when the electronic device obtains the feature points in the first image, it can eliminate the noise points among them and, when the number of feature points is low, replenish the feature points in the first image, which improves the accuracy of pose estimation; and since in the Kalman filter the error state is a relative-position update with low complexity, the electronic device can quickly determine the current relative translation and relative rotation and obtain its current pose in a shorter time (the computational complexity of this disclosure is lower than that of computing the pose from spatial coordinates with the five-point method), improving the efficiency of pose estimation for the electronic device.
- FIG. 6 is a schematic process diagram of an image processing method provided by an embodiment of the present disclosure. Referring to FIG. 6, it includes an electronic device and a first object.
- the electronic device photographs the first object, the electronic device obtains the first image.
- the first image includes the first object and feature points A, B, C, D, E, F and G extracted from the first image by the electronic device.
- the electronic device determines noise points among feature points in the first image. Among them, the noise points include feature point A, feature point E and feature point G.
- the electronic device deletes the noise points in the first image, and the remaining feature points in the first image are feature point B, feature point C, feature point D, and feature point F. Since the number of feature points in the first image is less than the second threshold, new feature points are added in the first image, where the new feature points include feature point H, feature point I, and feature point J.
- the electronic device acquires the previous frame of the first image
- the relative position of the electronic device between capturing the previous two frames is a translation of 1 meter to the left and a rotation of 10 degrees.
- the electronic device determines the coordinates of each feature point.
- the first image coordinate of feature point B in the first image is (X, Y)
- the spatial coordinates of feature point B relative to the electronic device when the electronic device captured the frame preceding the first image are (x, y, z).
- the first image coordinate of feature point C in the first image is (M, N)
- the spatial coordinates of feature point C relative to the electronic device when the electronic device captured the frame preceding the first image are (m, n, l), etc.
- the electronic device determines, based on the relative position and the first image coordinates and spatial coordinates of each feature point, the relative position between its pose when capturing the first image and its pose when capturing the previous frame, and determines from that relative position the pose when the first image was captured.
- the electronic device obtains the feature points in the first image, it can eliminate the noise points in the feature points, and when the number of feature points is low, it can replenish the feature points in the first image, which can improve the accuracy of the feature points.
- the electronic device can quickly determine the first relative position when the first image is captured, and from that relative position can quickly obtain its current pose, thereby improving the efficiency of camera pose estimation.
- FIG. 7 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
- the image processing device 70 includes a first acquisition module 71, a first determination module 72, a second acquisition module 73, a second determination module 74 and a third determination module 75, wherein:
- the first acquisition module 71 is configured to acquire a first image obtained by photographing a first object by an electronic device, where the first image includes feature points;
- the first determination module 72 is configured to determine the first image coordinates of the feature points in the first image
- the second acquisition module 73 is configured to acquire the spatial coordinates, relative to the electronic device, of the part of the first object corresponding to the feature points when the electronic device captures a second image, the second image being the frame preceding the first image;
- the third determination module 75 is configured to determine the posture of the electronic device when capturing the first image according to the first relative position.
- the second determination module 74 is specifically used to:
- the first relative position is determined based on the first image coordinates, the spatial coordinates and the second relative position.
- the second determination module 74 is specifically used to:
- the first relative position is obtained according to the first image coordinates, the second image coordinates and the second relative position corresponding to the feature point.
- the second determination module 74 is specifically used to:
- the first relative position is obtained based on the first difference value and the second relative position.
- the second determination module 74 is specifically used to:
- the first matrix includes a coordinate matrix, a rotation matrix, and a translation matrix
- the second matrix is processed through Kalman filtering to obtain the first relative position.
- the second determination module 74 is specifically used to:
- the rotation matrix is set to 0.
- the second determination module 74 is specifically used to:
- the posture when the electronic device captures the first image is determined.
- the first determining module 72 is specifically used to:
- Optical flow tracking or feature matching is performed on the third image coordinates to obtain the first image coordinates of the feature points in the first image.
- the image processing device provided in this embodiment can be used to execute the technical solutions of the above method embodiments. Its implementation principles and technical effects are similar, and will not be described again in this embodiment.
- FIG. 8 is a schematic structural diagram of another image processing device provided by an embodiment of the present disclosure.
- the image processing device 70 also includes a third acquisition module 76, and the third acquisition module 76 is used for:
- the noise points are deleted from the first image.
- the third acquisition module 76 is also used to:
- the image processing device provided in this embodiment can be used to execute the technical solutions of the above method embodiments. Its implementation principles and technical effects are similar, and will not be described again in this embodiment.
- FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. Please refer to FIG. 9 , which shows a schematic structural diagram of an electronic device 900 suitable for implementing an embodiment of the present disclosure.
- the electronic device 900 may be a terminal device or a server.
- terminal devices may include, but are not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs) and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), and fixed terminals such as digital TVs and desktop computers.
- the electronic device shown in FIG. 9 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
- the electronic device 900 may include a processing device (such as a central processing unit or a graphics processor) 901, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data required for the operation of the electronic device 900.
- the processing device 901, ROM 902 and RAM 903 are connected to each other via a bus 904.
- An input/output (I/O for short) interface 905 is also connected to bus 904.
- the following devices can be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer and gyroscope; output devices 907 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; storage devices 908 including, for example, a magnetic tape and a hard disk; and a communication device 909.
- the communication device 909 may allow the electronic device 900 to communicate wirelessly or wiredly with other devices to exchange data.
- FIG. 9 illustrates an electronic device 900 having various means, it should be understood that implementation or availability of all illustrated means is not required. More or fewer means may alternatively be implemented or provided.
- the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
- the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof.
- computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
- a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device .
- Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to: wire, optical cable, RF (radio frequency), etc., or any suitable combination of the above.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
- the computer-readable medium carries one or more programs.
- when the one or more programs are executed by the electronic device, the electronic device performs the method shown in the above embodiments.
- computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved.
- the units involved in the embodiments of the present disclosure can be implemented in software or hardware.
- the name of the unit does not constitute a limitation on the unit itself under certain circumstances.
- the first acquisition unit can also be described as "a unit that acquires at least two Internet Protocol addresses".
- exemplary types of hardware logic components include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses or devices, or any suitable combination of the foregoing.
- machine-readable storage media would include one or more wire-based electrical connections, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a first relative position is determined, where the first relative position is the relative position between the pose when the electronic device captures the first image and the pose when the electronic device captures the second image.
- the posture when the electronic device captures the first image is determined.
- the first relative position is determined based on the first image coordinates, the spatial coordinates and the second relative position.
- determining the first relative position according to the first image coordinates, the spatial coordinates and the second relative position includes:
- the first relative position is obtained according to the first image coordinates, the second image coordinates and the second relative position corresponding to the feature point.
- obtaining the first relative position based on the first image coordinates, the second image coordinates and the second relative position corresponding to the feature point includes:
- the first relative position is obtained based on the first difference value and the second relative position.
- the first relative position is obtained based on the first difference and the second relative position, including:
- the first matrix includes a coordinate matrix, a rotation matrix, and a translation matrix
- the second matrix is processed through Kalman filtering to obtain the first relative position.
- updating the coordinate matrix and rotation matrix according to the first difference value includes:
- the rotation matrix is set to 0.
- the first relative position includes a target relative translation and a target relative rotation; determining, according to the first relative position, the pose when the electronic device captures the first image includes:
- the posture when the electronic device captures the first image is determined.
- determining the first image coordinates of the feature point in the first image includes:
- Optical flow tracking or feature matching is performed on the third image coordinates to obtain the first image coordinates of the feature points in the first image.
- before determining the first image coordinates of the feature points in the first image, the method further includes:
- the method further includes:
- one or more embodiments of the present disclosure provide an image processing device, which includes a first acquisition module, a first determination module, a second acquisition module, a second determination module and a third determination module, in:
- the first acquisition module is configured to acquire a first image obtained by photographing a first object by an electronic device, where the first image includes feature points;
- the first determination module is configured to determine the first image coordinates of the feature points in the first image
- the second acquisition module is configured to acquire the spatial coordinates, relative to the electronic device, of the part of the first object corresponding to the feature points when the electronic device captures a second image, the second image being the frame preceding the first image;
- the second determination module is configured to determine a first relative position according to the first image coordinates and the spatial coordinates, where the first relative position is the relative position between the pose when the electronic device captures the first image and the pose when the electronic device captures the second image;
- the third determination module is configured to determine the pose of the electronic device when capturing the first image according to the first relative position.
- the second determination module is specifically used to:
- the first relative position is determined based on the first image coordinates, the spatial coordinates and the second relative position.
- the second determination module is specifically used to:
- the first relative position is obtained according to the first image coordinates, the second image coordinates and the second relative position corresponding to the feature point.
- the second determination module is specifically used to:
- the first relative position is obtained based on the first difference value and the second relative position.
- the second determination module is specifically used to:
- the first matrix includes a coordinate matrix, a rotation matrix, and a translation matrix
- the second matrix is processed through Kalman filtering to obtain the first relative position.
- the second determination module is specifically used to:
- the rotation matrix is set to 0.
- the second determination module is specifically used to:
- the posture when the electronic device captures the first image is determined.
- the first determination module is specifically used to:
- Optical flow tracking or feature matching is performed on the third image coordinates to obtain the first image coordinates of the feature points in the first image.
- the above-mentioned image processing device further includes a third acquisition module, the third acquisition module is used for:
- embodiments of the present disclosure provide an electronic device, including: a processor and a memory;
- the memory stores computer execution instructions
- the processor executes the computer-executable instructions stored in the memory, so that the processor performs the image processing method of the above first aspect and its various possible implementations.
- embodiments of the present disclosure provide a computer-readable storage medium.
- Computer-executable instructions are stored in the computer-readable storage medium.
- when the processor executes the computer-executable instructions, the image processing method of the above first aspect and its various possible implementations is implemented.
- embodiments of the present disclosure provide a computer program product, including a computer program.
- when the computer program is executed by a processor, it implements the image processing method of the above first aspect and its various possible implementations.
- embodiments of the present disclosure provide a computer program that, when executed by a processor, implements the above first aspect and various image processing methods that may be involved in the first aspect.
Abstract
An image processing method and apparatus, and an electronic device. The method includes: acquiring a first image obtained by an electronic device photographing a first object, the first image including feature points (S201); determining first image coordinates of the feature points in the first image (S202); acquiring the spatial coordinates, relative to the electronic device, of the part of the first object corresponding to the feature points when the electronic device captured a second image (S203), the second image being the frame preceding the first image; determining a first relative position according to the first image coordinates and the spatial coordinates (S204), the first relative position being the relative position between the pose of the electronic device when capturing the first image and its pose when capturing the second image; and determining, according to the first relative position, the pose of the electronic device when capturing the first image (S205). This improves the efficiency of pose estimation for the electronic device.
Description
Cross-reference to related applications
This application claims priority to Chinese patent application No. 202210754289.0, filed with the Chinese Patent Office on June 28, 2022 and entitled "Image processing method, apparatus and electronic device", the entire contents of which are incorporated herein by reference.
The present disclosure relates to the field of computer vision technology, and in particular to an image processing method, an image processing apparatus and an electronic device.
In the field of computer vision, estimating the camera pose is particularly important (for example, in applications such as VR brushes), and the six-degree-of-freedom pose of the camera is estimated by a visual odometry system.
At present, a visual odometry system can obtain the camera pose by solving for coordinates. For example, the MonoSLAM algorithm solves for the coordinates of feature points in the images captured by the camera to obtain the camera pose. However, whenever the camera moves, the visual odometry system must solve for the feature-point coordinates again, which makes pose estimation complex and time-consuming, and thus makes camera pose estimation inefficient.
Summary of the invention
The present disclosure provides an image processing method, apparatus and electronic device, to solve the technical problem in the prior art that camera pose determination is inefficient.
In a first aspect, the present disclosure provides an image processing method, including:
acquiring a first image obtained by an electronic device photographing a first object, the first image including feature points;
determining first image coordinates of the feature points in the first image;
acquiring the spatial coordinates, relative to the electronic device, of the part of the first object corresponding to the feature points when the electronic device captured a second image, the second image being the frame preceding the first image;
determining a first relative position according to the first image coordinates and the spatial coordinates, the first relative position being the relative position between the pose of the electronic device when capturing the first image and the pose of the electronic device when capturing the second image; and
determining, according to the first relative position, the pose of the electronic device when capturing the first image.
In a second aspect, the present disclosure provides an image processing apparatus including a first acquisition module, a first determination module, a second acquisition module, a second determination module and a third determination module, wherein:
the first acquisition module is configured to acquire a first image obtained by an electronic device photographing a first object, the first image including feature points;
the first determination module is configured to determine first image coordinates of the feature points in the first image;
the second acquisition module is configured to acquire the spatial coordinates, relative to the electronic device, of the part of the first object corresponding to the feature points when the electronic device captured a second image, the second image being the frame preceding the first image;
the second determination module is configured to determine a first relative position according to the first image coordinates and the spatial coordinates, the first relative position being the relative position between the pose of the electronic device when capturing the first image and the pose of the electronic device when capturing the second image; and
the third determination module is configured to determine, according to the first relative position, the pose of the electronic device when capturing the first image.
In a third aspect, embodiments of the present disclosure provide an electronic device, including a processor and a memory;
the memory stores computer-executable instructions; and
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the image processing method of the above first aspect and its various possible implementations.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the image processing method of the above first aspect and its various possible implementations.
In a fifth aspect, embodiments of the present disclosure provide a computer program product including a computer program which, when executed by a processor, implements the image processing method of the above first aspect and its various possible implementations.
In a sixth aspect, embodiments of the present disclosure provide a computer program which, when executed by a processor, implements the image processing method of the above first aspect and its various possible implementations.
FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of this application;
FIG. 3 is a schematic flowchart of another image processing method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a noise-point handling process provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a process of determining first image coordinates provided by an embodiment of the present disclosure;
FIG. 6 is a schematic process diagram of an image processing method provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present disclosure; and
FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Exemplary embodiments are described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
It should be noted that, as used herein, the terms "comprise", "include" or any variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article or apparatus that includes the element.
For ease of understanding, the concepts involved in the embodiments of this application are first explained.
Electronic device: a device with wireless transceiver capability. An electronic device may be deployed on land, including indoors or outdoors, handheld, wearable or vehicle-mounted, or on water (e.g., on a ship). The electronic device may be a mobile phone, a tablet computer (Pad), a computer with wireless transceiver capability, a virtual reality (VR) electronic device, an augmented reality (AR) electronic device, a wireless terminal in industrial control, a vehicle-mounted electronic device, a wireless terminal in self-driving, a wireless electronic device in remote medical, a wireless electronic device in a smart grid, a wireless electronic device in transportation safety, a wireless electronic device in a smart city, a wireless electronic device in a smart home, a wearable electronic device, etc. The electronic device involved in the embodiments of this application may also be called a terminal, user equipment (UE), an access electronic device, a vehicle-mounted terminal, an industrial control terminal, a UE unit, a UE station, a mobile station, a remote station, a remote electronic device, a mobile device, a UE electronic device, a wireless communication device, a UE agent or a UE apparatus. The electronic device may be fixed or mobile.
In the related art, estimating the camera pose is particularly important; for example, camera pose estimation can be applied to technical fields such as machine manufacturing, VR brushes and robot control. At present, the six-degree-of-freedom pose of a camera can be estimated by a visual odometry system. For example, the visual odometry system solves for the coordinates of feature points in the images captured by the camera to obtain the camera pose. However, each time the camera moves, the visual odometry system must obtain the camera pose by solving for the feature-point coordinates, which makes pose estimation complex and time-consuming, and thus makes camera pose estimation inefficient.
To solve the technical problem of inefficient camera pose estimation in the related art, embodiments of the present disclosure provide an image processing method: acquire a first image obtained by an electronic device photographing a first object, the first image including feature points; determine first image coordinates of the feature points in the first image; acquire the spatial coordinates, relative to the electronic device, of the part of the first object corresponding to the feature points when the electronic device captured a second image, the second image being the frame preceding the first image; acquire a second relative position between the pose of the electronic device when capturing the second image and its pose when capturing the frame preceding the second image; determine, according to the first image coordinates, the spatial coordinates and the second relative position, a first relative position between the pose of the electronic device when capturing the first image and its pose when capturing the second image; and determine, according to the first relative position, the pose of the electronic device when capturing the first image. In this method, the electronic device obtains the first relative position by updating the second relative position; since updating a relative position has low complexity, the electronic device can quickly determine the relative position between the pose when capturing the first image and the pose when capturing the second image, quickly obtain its current pose, and thus improve the efficiency of camera pose estimation.
The application scenario of the embodiments of the present disclosure is described below with reference to FIG. 1.
FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure. Referring to FIG. 1, it includes an electronic device and a first object. When the electronic device photographs the first object, the electronic device obtains a first image. The first image includes the first object and a feature point A extracted by the electronic device from the first image. When the electronic device acquires the first image, its position relative to when it captured the previous frame is a translation of 1 meter to the left and a rotation of 10 degrees.
Referring to FIG. 1, the electronic device determines the coordinates of feature point A. The first image coordinates of feature point A in the first image are (X, Y), and the spatial coordinates of feature point A relative to the electronic device when the electronic device captured the frame preceding the first image are (x, y, z). The electronic device determines its pose when capturing the first image according to the first image coordinates, the spatial coordinates and the relative position. Since updating a relative position in Kalman filtering has low complexity, the electronic device can quickly determine the relative position between the pose when capturing the first image and the pose when capturing the second image, so that the electronic device quickly obtains its current pose, thereby improving the efficiency of camera pose estimation.
The technical solution of this application, and how it solves the above technical problem, are described in detail below with specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of this application are described below with reference to the accompanying drawings.
The present disclosure provides an image processing method, apparatus and electronic device: acquire a first image obtained by an electronic device photographing a first object, the first image including feature points; determine first image coordinates of the feature points in the first image; acquire the spatial coordinates, relative to the electronic device, of the part of the first object corresponding to the feature points when the electronic device captured a second image, the second image being the frame preceding the first image; determine a first relative position according to the first image coordinates and the spatial coordinates, the first relative position being the relative position between the pose of the electronic device when capturing the first image and its pose when capturing the second image; and determine, according to the first relative position, the pose of the electronic device when capturing the first image. In this method, the electronic device can accurately determine, from the first image coordinates and the spatial coordinates, the error state in the pose-determination process, and determine from the error state the relative position between its pose when capturing the first image and its pose when capturing the second image. Since obtaining a relative position has low complexity, the electronic device can quickly determine the relative position between the current pose and the pose when the previous frame was captured, and can quickly obtain the current pose from the relative position, thereby improving the efficiency of camera pose estimation.
FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of this application. Referring to FIG. 2, the method may include:
S201: Acquire a first image obtained by an electronic device photographing a first object.
The execution subject of the embodiments of the present disclosure may be an electronic device, or an image processing apparatus provided in the electronic device. The image processing apparatus may be implemented in software, or in a combination of software and hardware.
Optionally, the first object may be the subject photographed by the electronic device. For example, the first object may be a movable object such as a user or an aircraft, or a stationary object such as a table or a chair (when the lens moves, a stationary object can also become a movable object in the video).
Optionally, the first image includes the first object and feature points. The feature points are used to mark the position of the first object in the first image. For example, if the first object is a table, when the electronic device captures an image of the table, the feature points in the image may lie at positions such as the table corners and table legs. Optionally, there may be one or more feature points in the first image, which is not limited in the embodiments of the present disclosure.
Optionally, the electronic device may set feature points in the first image. For example, when the electronic device captures the first image, it may add multiple feature points to the first image. Optionally, the electronic device may obtain the feature points from the previous frame by tracking. For example, the electronic device may add multiple feature points in the first frame (no feature points have yet been selected in the initial image); when the electronic device obtains the feature points of the second frame, it may perform optical-flow tracking on the feature points of the first frame to obtain the feature points in the second frame. For example, the electronic device adds a feature point at a table leg in the first frame; in the second frame the position of the electronic device changes and the position of the table leg in the image changes as well, but the feature point still indicates the table leg in the second frame.
S202: Determine first image coordinates of the feature points in the first image.
Optionally, the first image coordinates are the 2D coordinates of a feature point in the first image. For example, the first image is a two-dimensional image captured by the electronic device; a coordinate system is established with one vertex of the first image as the origin, and the first image coordinates of each feature point can then be expressed in this coordinate system.
Optionally, the first image coordinates may be determined according to the following feasible implementation: obtain third image coordinates of the feature points in the second image, the second image being the frame preceding the first image. For example, if the second image is the initial image captured by the electronic device, the electronic device may add at least one feature point to the initial image and take the coordinates of that feature point as the third image coordinates; if the second image is not the initial image captured by the electronic device, the electronic device may track the feature points from the frame preceding the second image and obtain their third image coordinates in the second image.
Optical-flow tracking or feature matching is then performed from the third image coordinates to obtain the first image coordinates of the feature points in the first image. For example, after obtaining the third image coordinates of the feature points in the second image, the electronic device may track those feature points into the first image by optical flow, thereby obtaining the first image coordinates of the feature points in the first image and improving the accuracy of image-coordinate acquisition.
S203: Acquire the spatial coordinates, relative to the electronic device, of the part of the first object corresponding to the feature points when the electronic device captured the second image.
Optionally, the second image is the frame preceding the first image. Optionally, the spatial coordinates may be the 3D coordinates, in the camera coordinate system, of the feature points in the first image. For example, if the first object is a table and a feature point corresponds to a table corner, the spatial coordinates may be the 3D coordinates of the table corner in the camera coordinate system when the camera captured the second image. Optionally, if the second image is the initial image of the first object captured by the electronic device, the electronic device may extract multiple feature points in the second image, obtain their 2D coordinates, and derive a direction vector for each feature point from the 2D coordinates (the depth may be set to unit 1); if the second image is not the initial image of the first object captured by the electronic device, the electronic device may obtain the spatial coordinates of the feature points from the previous image.
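The unit-depth initialization described above can be sketched as follows; the intrinsic matrix K and its values are illustrative assumptions (a calibration input), not taken from the disclosure:

```python
import numpy as np

def pixel_to_direction(u, v, K):
    """Back-project a pixel to a direction vector with the depth fixed to 1.

    K is the camera intrinsic matrix (assumed known from calibration).
    The returned (x, y, 1) vector serves as the initial spatial coordinate
    of the feature point in the camera frame, as used for the first frame.
    """
    uv1 = np.array([u, v, 1.0])
    xyz = np.linalg.inv(K) @ uv1  # normalized camera coordinates
    return xyz / xyz[2]           # scale so the depth component is 1
```

For instance, with focal length 100 and principal point (50, 50), the pixel (150, 50) back-projects to the direction (1, 0, 1).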
S204: Determine a first relative position according to the first image coordinates and the spatial coordinates.
Optionally, the first relative position is the relative position between the pose of the electronic device when capturing the first image and its pose when capturing the second image. For example, the first relative position is the relative translation and relative rotation of the electronic device when capturing the current image compared with when it captured the previous frame.
The electronic device may determine the first relative position according to the following feasible implementation: acquire a second relative position between the pose of the electronic device when capturing the second image and its pose when capturing the frame preceding the second image, and determine the first relative position according to the first image coordinates, the spatial coordinates and the second relative position.
Optionally, the second relative position includes a first relative translation and a first relative rotation. The first relative translation is the difference between the position of the electronic device when capturing the second image and its position when capturing the frame preceding the second image. For example, if the electronic device captures the second image at position A and captures the frame preceding the second image at position B, and the distance between position A and position B is 1 meter, the first relative translation is determined to be 1 meter.
The first relative rotation is the difference between the rotation angle of the electronic device when capturing the second image and its rotation angle when capturing the frame preceding the second image. For example, if the electronic device captures the second image at a rotation angle of 30 degrees and captures the frame preceding the second image at a rotation angle of 60 degrees, the first relative rotation is determined to be 30 degrees.
Optionally, when determining the second relative position of the second image: if the second image is the first frame of the first object captured by the electronic device, there is no preceding frame and the second image is the initial image, so the first relative translation and first relative rotation corresponding to the second image are both 0. When the second image is the second frame, the relative translation and relative rotation of the preceding frame are both 0, and the relative translation and relative rotation corresponding to the second frame (the second image) can be obtained in the Kalman filter from those of the preceding frame. As the electronic device moves, it keeps updating, via the Kalman filter and starting from the initial image, the relative translation and relative rotation corresponding to each image; it can therefore quickly obtain the relative translation and relative rotation corresponding to the previous frame, thereby improving the accuracy of pose estimation for the electronic device.
S205: Determine, according to the first relative position, the pose of the electronic device when capturing the first image.
Optionally, the first relative position includes a target relative translation and a target relative rotation. Determining, according to the first relative position, the pose of the electronic device when capturing the first image specifically includes: determining the global translation of the electronic device according to the target relative translation, determining the global rotation of the electronic device according to the target relative rotation, and determining, according to the global translation and the global rotation, the pose of the electronic device when capturing the first image.
Optionally, the global translation is the difference between the current position of the electronic device and its initial position, and the global rotation is the difference between the current rotation angle of the electronic device and its initial rotation angle. For example, by resolving the target relative translation and the target relative rotation, the global translation and global rotation of the electronic device can be obtained, and hence the pose of the electronic device when capturing the first image. For example, if the global translation is 1 meter and the global rotation is 30 degrees, it is determined that, when capturing the first image, the electronic device has translated 1 meter and rotated 30 degrees relative to the initial position.
Optionally, after the pose of the electronic device is determined, the coordinate system of the feature points also needs to be converted. For example, after the camera's position for the current frame is determined, the feature points need to be converted from the camera coordinate system of the previous frame to the camera coordinate system of the current frame.
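The accumulation of per-frame relative motions into a global pose might look like the following minimal sketch; rotations about a single axis are used purely for illustration, and the composition convention (translation expressed in the global frame) is an assumption, not the disclosure's exact formulation:

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix for a rotation of `deg` degrees about the z axis."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def accumulate_pose(rel_motions):
    """Chain per-frame (relative rotation, relative translation) pairs
    into a global rotation and global translation w.r.t. the first frame."""
    R_g = np.eye(3)
    t_g = np.zeros(3)
    for R_rel, t_rel in rel_motions:
        t_g = t_g + R_g @ t_rel  # express the step in the global frame
        R_g = R_g @ R_rel
    return R_g, t_g
```

For example, two successive steps of "rotate 90 degrees, then move 1 meter forward" leave the device at (1, 1, 0), facing backwards.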
Embodiments of the present disclosure provide an image processing method: acquire a first image obtained by an electronic device photographing a first object, the first image including feature points; determine first image coordinates of the feature points in the first image; acquire the spatial coordinates, relative to the electronic device, of the part of the first object corresponding to the feature points when the electronic device captured a second image; acquire a second relative position between the pose of the electronic device when capturing the second image and its pose when capturing the frame preceding the second image; determine a first relative position according to the first image coordinates, the spatial coordinates and the second relative position; and determine, according to the first relative position, the pose of the electronic device when capturing the first image. In this method, the electronic device can determine the error in the Kalman filter from the first image coordinates and the spatial coordinates, and use this error to update the second relative position to obtain the first relative position. Since in the Kalman filter the error state is a relative position whose update has low complexity, the electronic device can quickly determine the current relative translation and relative rotation, and thus obtain its current pose in a shorter time, improving the efficiency of pose estimation for the electronic device.
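A minimal sketch of this error-driven update, assuming a generic EKF measurement model (the state layout, measurement Jacobian H and noise covariance R are illustrative assumptions, not the disclosure's exact formulation):

```python
import numpy as np

def ekf_update(state, cov, observed_uv, projected_uv, H, meas_noise):
    """One EKF measurement update driven by the reprojection residual.

    state:        filter state (relative translation, relative rotation,
                  feature spatial coordinates), flattened into a vector
    cov:          state covariance matrix
    observed_uv:  feature coordinates found by optical-flow tracking
    projected_uv: feature coordinates predicted by projecting the
                  spatial coordinates into the current image
    H:            measurement Jacobian linking state to image coordinates
    meas_noise:   measurement-noise covariance R
    """
    residual = observed_uv - projected_uv         # the "first difference"
    S = H @ cov @ H.T + meas_noise                # innovation covariance
    K = cov @ H.T @ np.linalg.inv(S)              # Kalman gain
    new_state = state + K @ residual              # corrected state
    new_cov = (np.eye(len(state)) - K @ H) @ cov  # corrected covariance
    return new_state, new_cov
```

Because only this low-dimensional correction is applied per frame, the update is cheap compared with re-solving the pose from feature coordinates each time.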
On the basis of the embodiment shown in FIG. 2, the process of the image processing method is further described below with reference to FIG. 3.
FIG. 3 is a schematic flowchart of another image processing method provided by an embodiment of the present disclosure. Referring to FIG. 3, the method flow includes:
S301: Acquire a first image obtained by an electronic device photographing a first object, the first image including feature points.
It should be noted that, for the execution of step S301, reference may be made to step S201, and the details are not repeated here in the embodiments of the present disclosure.
S302、在第一图像中的特征点中确定第一图像的噪声点,并在第一图像中删除噪声点。
可选的,噪声点为第一图像中匹配错误的特征点。例如,在进行光流追踪或特征匹配时,若第二图像中的多个特征点追踪失败或匹配错误,则将多个特征点确定为第一图像中的噪声点。
可选的,可以通过如下可行的实现方式,获取噪声点:判断电子设备中是否包括陀螺仪。若电子设备中包括陀螺仪,则通过陀螺仪获取电子设备的旋转角度,并结合预设算法(如,两点RANSAC算法)确定噪声点。若电子设备中不包括陀螺仪,则电子设备无法获取旋转角度,电子设备通过五点法(如,五点RANSAC算法)确定噪声点。
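下面给出一个利用已知旋转筛除噪声点的简化Python草图。需要说明的是,这并非两点RANSAC或五点RANSAC的完整实现,仅示意"利用陀螺仪给出的旋转预测特征点位置、剔除残差过大的匹配"这一思路,阈值与函数名均为本文自拟的假设:

```python
import numpy as np

def reject_noise_points(prev_dirs, curr_dirs, R, threshold=0.05):
    """示意性的噪声点筛除:用帧间旋转R将上一帧的归一化方向向量
    旋转到本帧,与追踪结果比较,残差过大的点视为匹配错误的噪声点。
    返回保留点的布尔掩码。"""
    prev_dirs = np.asarray(prev_dirs, dtype=float)
    curr_dirs = np.asarray(curr_dirs, dtype=float)
    predicted = prev_dirs @ R.T
    residuals = np.linalg.norm(predicted - curr_dirs, axis=1)
    return residuals < threshold

R = np.eye(3)  # 假设帧间无旋转
prev_pts = [[0.1, 0.2, 1.0], [0.3, 0.1, 1.0]]
curr_pts = [[0.1, 0.2, 1.0], [0.9, 0.9, 1.0]]  # 第二个点追踪失败
mask = reject_noise_points(prev_pts, curr_pts, R)
# mask -> [True, False]
```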
可选的,在电子设备获取第一图像中的噪声点之后,电子设备可以将第一图像中的噪声点删除。例如,第一图像中包括100个特征点,若其中30个特征点为噪声点,则电子设备将该30个噪声点删除,第一图像中剩余70个特征点。
可选的,电子设备在第一图像中删除噪声点之后,电子设备还可以获取第一图像中特征点的数量,在特征点的数量小于或等于第二阈值时,在第一图像中添加预设数量的特征点。例如,若电子设备将第一图像中的噪声点删除之后,第一图像中剩余30个特征点,由于特征点的数量小于第二阈值(特征点数量较少),电子设备可以在第一图像中添加100个特征点,以提高位姿估计的准确度。
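特征点补充的流程可以用如下Python草图示意(用随机采样代替真实的角点检测,第二阈值、预设数量等参数均为示例假设):

```python
import numpy as np

def replenish_features(features, second_threshold=50, preset_count=100,
                       img_w=640, img_h=480, seed=0):
    """当特征点数量小于等于第二阈值时,向图像中补充预设数量的
    新特征点;此处用随机采样代替真实的角点检测,仅作流程示意。"""
    features = list(features)
    if len(features) <= second_threshold:
        rng = np.random.default_rng(seed)
        new_pts = rng.uniform([0, 0], [img_w, img_h], size=(preset_count, 2))
        features.extend(new_pts.tolist())
    return features

remaining = [[10.0, 20.0]] * 30       # 删除噪声点后剩余30个特征点
out = replenish_features(remaining)   # 数量不足,补充100个
# len(out) -> 130
```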
下面,结合图4,对本公开实施例中噪声点的处理过程进行说明。
图4为本公开实施例提供的一种噪声点的处理过程示意图。请参见图4,包括第一图像。其中,第一图像中包括特征点A、特征点B、特征点C、特征点D、特征点E、特征点F和特征点G。在第一图像中确定特征点中的噪声点。其中,噪声点包括特征点A、特征点E和特征点G。将噪声点删除,第一图像中剩余的特征点为特征点B、特征点C、特征点D和特征点F。由于第一图像中的特征点数量小于第二阈值,因此,在第一图像中添加新的特征点,其中,新的特征点包括特征点H、特征点I和特征点J。
S303、确定特征点在第一图像中的第一图像坐标。
下面,结合图5,对确定第一图像坐标的过程进行说明。
图5为本公开实施例提供的一种确定第一图像坐标的过程示意图。请参见图5,包括第一图像和第二图像。其中,第二图像为第一图像的上一帧图像。其中,第二图像中包括特征点A,特征点A在第二图像中的图像坐标为已知的(X,Y),在第一图像中,对第二图像中的特征点A进行光流追踪,得到特征点A在第一图像中的图像坐标为(x,y)。
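光流追踪可以有多种实现。下面用一个极简的块匹配搜索作为示意(以SSD最小为匹配准则,代替真实的金字塔LK光流,图像与参数均为演示假设),展示"在本帧中找到上一帧特征点的新位置"这一过程:

```python
import numpy as np

def track_feature(prev_img, curr_img, pt, patch=3, search=5):
    """简化的块匹配追踪:以上一帧特征点周围的图像块为模板,
    在本帧的邻域内搜索SSD最小的位置,返回新的(x, y)坐标。"""
    x, y = pt
    tpl = prev_img[y - patch:y + patch + 1, x - patch:x + patch + 1]
    best, best_cost = (x, y), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            nx, ny = x + dx, y + dy
            cand = curr_img[ny - patch:ny + patch + 1, nx - patch:nx + patch + 1]
            if cand.shape != tpl.shape:
                continue
            cost = np.sum((cand.astype(float) - tpl.astype(float)) ** 2)
            if cost < best_cost:
                best, best_cost = (nx, ny), cost
    return best

prev_img = np.zeros((40, 40)); prev_img[10, 10] = 255.0
curr_img = np.zeros((40, 40)); curr_img[12, 13] = 255.0  # 特征点移动了(3, 2)
# track_feature(prev_img, curr_img, (10, 10)) -> (13, 12)
```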
S304、获取电子设备在拍摄第二图像时,特征点对应的第一对象的部分相对于电子设备的空间坐标。
可选的,第二图像为第一图像的上一帧图像。在确定特征点对应的第一对象的部分相对于电子设备的空间坐标的过程中,若特征点在电子设备获取的初始图像(如,第1帧图像)中,则电子设备通过特征点的2D坐标,得到特征点的方向向量,深度可以置为单位1,这样得到初始图像中特征点的空间坐标为(x,y,1),其中,x为特征点在初始图像中的横坐标,y为特征点在初始图像中的纵坐标。若特征点不在电子设备获取的初始图像中(如,特征点在第2帧、第3帧图像中),则在卡尔曼滤波的计算过程中,可以根据特征点在上一帧图像的空间坐标,得到特征点在本帧图像的空间坐标。在电子设备开始移动的过程中,电子设备从初始图像开始,一直在更新每张图像对应的特征点的空间坐标,这样,可以通过卡尔曼滤波,快速地得到特征点在本帧图像中的空间坐标,进而提高电子设备位姿估计的效率。
S305、获取电子设备拍摄第二图像时的位姿与电子设备拍摄第二图像的上一帧图像时的位姿之间的第二相对位置。
需要说明的是,步骤S305的执行过程可以参照步骤S204,本公开实施例对此不再进行赘述。
S306、根据第一图像坐标、空间坐标和第二相对位置,确定第一相对位置。
第一相对位置为电子设备拍摄第一图像时的位姿与电子设备拍摄第二图像时的位姿之间的相对位置。可选的,可以根据如下可行的实现方式,确定第一相对位置:将特征点对应的空间坐标投影至第一图像中,得到特征点在第一图像中的第二图像坐标。例如,在电子设备获取到特征点的空间坐标(电子设备拍摄上一帧图像时,特征点相对于电子设备的3D坐标,该坐标可以在卡尔曼滤波中一直更新)时,可以将特征点的空间坐标,投影至本帧图像中,得到特征点在本帧图像中投影的图像坐标(2D坐标)。
根据特征点对应的第一图像坐标、第二图像坐标和第二相对位置,得到第一相对位置。例如,电子设备通过对特征点的空间坐标进行投影,得到特征点在第一图像中的第二图像坐标,电子设备通过对上一帧图像中特征点的追踪,进而得到特征点在第一图像中的第一图像坐标,进而可以根据光流追踪的第一图像坐标和投影得到的第二图像坐标,确定卡尔曼滤波的误差,并通过该误差对卡尔曼滤波中维护的状态(相对平移、相对旋转、以及特征点的空间坐标)进行更新。
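"将空间坐标投影得到第二图像坐标,再与光流追踪得到的第一图像坐标作差"的过程,可以用如下Python片段示意(针孔投影模型,内参与相对位姿的数值均为演示假设):

```python
import numpy as np

def project_to_image(point_cam, fx, fy, cx, cy):
    """将相机坐标系下的3D空间坐标投影到图像平面,得到2D图像坐标。"""
    x, y, z = point_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])

def reprojection_error(space_pt_prev, R, t, tracked_uv, fx, fy, cx, cy):
    """先用帧间相对位姿(R, t)将上一帧的空间坐标变换到本帧,
    再投影得到第二图像坐标,并与追踪得到的第一图像坐标作差。"""
    p_curr = R @ np.asarray(space_pt_prev, dtype=float) + t
    projected = project_to_image(p_curr, fx, fy, cx, cy)
    return projected - np.asarray(tracked_uv, dtype=float)

err = reprojection_error([0.0, 0.0, 2.0], np.eye(3), np.zeros(3),
                         [320.0, 240.0], 500.0, 500.0, 320.0, 240.0)
# 相对位姿与追踪结果一致时,差值为[0, 0]
```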
可选的,可以根据如下可行的实现方式,得到第一相对位置:确定第一图像坐标和第二图像坐标之间的第一差值。例如,可以将投影坐标(第二图像坐标)减去光流跟踪得到的实际坐标(第一图像坐标),得到第一差值。
根据第一差值、第二相对位置,得到第一相对位置。例如,第一差值可以作为扩展卡尔曼滤波器(Extended Kalman Filter,EKF)状态更新值,通过第一差值对卡尔曼滤波维护的状态(相对平移、相对旋转和特征点的空间坐标)进行更新,得到第一相对位置。可选的,在EKF的预测步骤中,默认系统的运动方程为静止,因此,在协方差矩阵中更新预测的噪声项即可,无需对卡尔曼滤波维护的状态进行更新。
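以第一差值作为EKF的观测残差进行状态更新的过程,可以用如下最简化的Python草图示意(状态维度、观测矩阵H与观测噪声R均为演示用的假设,并非本公开限定的具体形式):

```python
import numpy as np

def ekf_update(state, P, H, innovation, R_noise):
    """最小化的EKF更新步骤:state为维护的状态(相对平移、相对旋转、
    特征点的空间坐标等),innovation为投影坐标与追踪坐标之间的第一差值。"""
    S = H @ P @ H.T + R_noise
    K = P @ H.T @ np.linalg.inv(S)        # 卡尔曼增益
    new_state = state + K @ innovation     # 用第一差值修正状态
    new_P = (np.eye(len(state)) - K @ H) @ P
    return new_state, new_P

state = np.zeros(3)                        # 假设状态仅含相对平移
P = np.eye(3)
H = np.eye(3)
innovation = np.array([0.1, -0.2, 0.0])
new_state, new_P = ekf_update(state, P, H, innovation, 1e-6 * np.eye(3))
# 观测噪声很小时,new_state近似等于innovation
```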
可选的,根据第一差值、第二相对位置,得到第一相对位置,具体为:获取第二相对位置对应的第一矩阵。其中,第一矩阵包括坐标矩阵、旋转矩阵和平移矩阵。例如,第一矩阵可以为卡尔曼滤波中的雅克比矩阵,卡尔曼滤波维护的每个状态都对应一个雅克比矩阵。例如,卡尔曼滤波维护的相对平移对应一个雅克比矩阵,相对旋转对应一个雅克比矩阵,特征点的空间坐标对应一个雅克比矩阵。
根据第一差值、坐标矩阵、旋转矩阵、平移矩阵,得到第二矩阵。可选的,若第一差值小于或等于第一阈值,则将坐标矩阵置0。例如,若第一差值小于或等于第一阈值,则说明上一帧图像的特征点相对于本帧图像的该特征点的偏移较小,当前为静止状态,因此,无需对雅克比矩阵中的坐标矩阵(特征点的3D坐标部分)进行更新,将坐标矩阵置0。
可选的,若电子设备包括陀螺仪,则将旋转矩阵置0。例如,若电子设备中包括陀螺仪,则电子设备可以通过陀螺仪获取旋转角度,因此,电子设备无需通过卡尔曼滤波获取相对旋转,进而将卡尔曼滤波维护的每个状态对应的雅克比矩阵中的旋转矩阵置0,这样在电子设备有陀螺仪时,可以降低卡尔曼滤波状态维护的复杂度,进而提高电子设备位姿估计的效率。
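上述两条置0策略可以用如下Python片段示意(矩阵分块的形状与阈值仅为演示假设):

```python
import numpy as np

def build_jacobian(J_coord, J_rot, J_trans, first_diff, first_threshold,
                   has_gyro):
    """按上文描述的策略对雅克比矩阵分块置0:
    第一差值小于等于第一阈值时坐标矩阵置0(视为静止);
    设备带陀螺仪时旋转矩阵置0(旋转由陀螺仪提供)。"""
    if np.linalg.norm(first_diff) <= first_threshold:
        J_coord = np.zeros_like(J_coord)
    if has_gyro:
        J_rot = np.zeros_like(J_rot)
    return np.hstack([J_coord, J_rot, J_trans])

J_coord = np.ones((2, 3)); J_rot = np.ones((2, 3)); J_trans = np.ones((2, 3))
J = build_jacobian(J_coord, J_rot, J_trans,
                   first_diff=np.array([0.01, 0.0]),
                   first_threshold=0.5, has_gyro=True)
# 坐标与旋转部分被置0,仅平移部分保留
```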
可选的,通过EKF的基本公式对第一差值进行处理,进而更新卡尔曼滤波的状态和协方差矩阵。由于本公开维护的卡尔曼滤波的状态为相对位置(相对平移和相对旋转),因此,在通过EKF对第一差值进行处理的过程中,可以有效地降低EKF运算的复杂度,并且通过相对位置可以快速地得到电子设备拍摄第一图像时的位姿,进而提高电子设备的位姿估计的效率。
S307、根据第一相对位置,确定电子设备拍摄第一图像时的位姿。
可选的,在通过第一相对位置得到电子设备的全局平移和全局旋转之后,可以在卡尔曼滤波的雅克比矩阵中清零平移矩阵和旋转矩阵。可选的,卡尔曼滤波中的协方差矩阵C也可以根据卡尔曼滤波的状态更新过程,得到对应的雅克比矩阵J,并据此更新(如,C←J*C*J^T)。
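协方差矩阵按雅克比矩阵传播的计算可以示意如下(一个2×2的最小数值示例,矩阵数值为演示假设):

```python
import numpy as np

def propagate_covariance(C, J):
    """按状态更新过程对应的雅克比矩阵J更新协方差矩阵:C ← J C Jᵀ。"""
    return J @ C @ J.T

C = np.eye(2)
J = np.array([[2.0, 0.0], [0.0, 1.0]])
C_new = propagate_covariance(C, J)
# C_new -> [[4, 0], [0, 1]]
```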
本公开实施例提供一种图像处理方法,获取电子设备拍摄第一对象得到的第一图像,第一图像中包括特征点,在第一图像中的特征点中确定第一图像的噪声点,并在第一图像中删除噪声点,确定特征点在第一图像中的第一图像坐标,获取电子设备在拍摄第二图像时,特征点对应的第一对象的部分相对于电子设备的空间坐标,获取电子设备拍摄第二图像时的位姿与电子设备拍摄第二图像的上一帧图像时的位姿之间的第二相对位置,根据第一图像坐标、空间坐标和第二相对位置,确定第一相对位置,并根据第一相对位置,确定电子设备拍摄第一图像时的位姿。在上述方法中,电子设备在得到第一图像中的特征点时,可以对特征点中的噪声点进行剔除,并在特征点的数量较低时,重新向第一图像中补充特征点,这样可以提高位姿估计的准确度,并且,由于在卡尔曼滤波中,误差状态为相对位置的更新复杂度较低,因此,电子设备可以快速的确定当前的相对平移和相对旋转,而通过相对平移和相对旋转可以在较短的时间内得到电子设备当前的位姿(本公开的计算复杂度小于通过五点法的空间坐标计算位姿的计算复杂度),提高电子设备位姿估计的效率。
在上述任意一个实施例的基础上,下面,结合图6,对上述图像处理方法的过程进行说明。
图6为本公开实施例提供的一种图像处理方法的过程示意图。请参见图6,包括:电子设备和第一对象。在电子设备拍摄第一对象时,电子设备得到第一图像。其中,第一图像中包括第一对象和电子设备在第一图像中提取的特征点A、特征点B、特征点C、特征点D、特征点E、特征点F和特征点G。电子设备在第一图像中确定特征点中的噪声点。其中,噪声点包括特征点A、特征点E和特征点G。
请参见图6,电子设备将第一图像中的噪声点删除,第一图像中剩余的特征点为特征点B、特征点C、特征点D和特征点F。由于第一图像中的特征点数量小于第二阈值,因此,在第一图像中添加新的特征点,其中,新的特征点包括特征点H、特征点I和特征点J。
请参见图6,电子设备获取拍摄第一图像的上一帧图像时,电子设备相比于拍摄上两帧图像时的相对位置为向左平移1米,旋转10度。电子设备确定每个特征点的坐标。其中,特征点B在第一图像中的第一图像坐标为(X,Y),特征点B在电子设备拍摄第一图像的上一帧图像时,相对于电子设备的空间坐标为(x,y,z);特征点C在第一图像中的第一图像坐标为(M,N),特征点C在电子设备拍摄第一图像的上一帧图像时,相对于电子设备的空间坐标为(m,n,l)等。
请参见图6,电子设备根据相对位置、每个特征点的第一图像坐标和空间坐标,确定电子设备拍摄第一图像时与拍摄上一帧图像的相对位置,并根据该相对位置确定电子设备拍摄第一图像时的位姿。这样,电子设备在得到第一图像中的特征点时,可以对特征点中的噪声点进行剔除,并在特征点的数量较低时,重新向第一图像中补充特征点,这样可以提高位姿估计的准确度,并且,由于相对位置在卡尔曼滤波中的更新的复杂度较低,因此,电子设备可以快速的确定拍摄第一图像时电子设备的第二相对位置,并且通过相对位置可以快速得到电子设备的当前位姿,进而提高估计相机位姿的效率。
图7为本公开实施例提供的一种图像处理装置的结构示意图。请参见图7,该图像处理装置70包括第一获取模块71、第一确定模块72、第二获取模块73、第二确定模块74和第三确定模块75,其中:
所述第一获取模块71用于,获取电子设备拍摄第一对象得到的第一图像,所述第一图像中包括特征点;
所述第一确定模块72用于,确定所述特征点在所述第一图像中的第一图像坐标;
所述第二获取模块73用于,获取所述电子设备在拍摄第二图像时,所述特征点对应的第一对象的部分相对于所述电子设备的空间坐标,所述第二图像为所述第一图像的上一帧图像;
所述第二确定模块74用于,根据所述第一图像坐标和所述空间坐标,确定第一相对位置,所述第一相对位置为所述电子设备拍摄所述第一图像时的位姿与所述电子设备拍摄所述第二图像时的位姿之间的相对位置;
所述第三确定模块75用于,根据所述第一相对位置,确定所述电子设备拍摄所述第一图像时的位姿。
在一种可能的实施方式中,所述第二确定模块74具体用于:
获取所述电子设备拍摄所述第二图像时的位姿与所述电子设备拍摄所述第二图像的上一帧图像时的位姿之间的第二相对位置;
根据所述第一图像坐标、所述空间坐标和所述第二相对位置,确定所述第一相对位置。
在一种可能的实施方式中,所述第二确定模块74具体用于:
将所述特征点对应的所述空间坐标投影至所述第一图像中,得到所述特征点在所述第一图像中的第二图像坐标;
根据所述特征点对应的第一图像坐标、所述第二图像坐标和所述第二相对位置,得到所述第一相对位置。
在一种可能的实施方式中,所述第二确定模块74具体用于:
确定所述第一图像坐标和所述第二图像坐标之间的第一差值;
根据所述第一差值、所述第二相对位置,得到所述第一相对位置。
在一种可能的实施方式中,所述第二确定模块74具体用于:
获取所述第二相对位置对应的第一矩阵,所述第一矩阵包括坐标矩阵、旋转矩阵和平移矩阵;
根据所述第一差值、所述坐标矩阵、所述旋转矩阵和所述平移矩阵,得到第二矩阵;
通过卡尔曼滤波对所述第二矩阵进行处理,得到所述第一相对位置。
在一种可能的实施方式中,所述第二确定模块74具体用于:
若所述第一差值小于或等于第一阈值,则将所述坐标矩阵置0;
若所述电子设备包括陀螺仪,则将所述旋转矩阵置0。
在一种可能的实施方式中,所述第三确定模块75具体用于:
根据所述目标相对平移,确定所述电子设备的全局平移;
根据所述目标相对旋转,确定所述电子设备的全局旋转;
根据所述全局平移和所述全局旋转,确定所述电子设备拍摄所述第一图像时的位姿。
在一种可能的实施方式中,所述第一确定模块72具体用于:
获取所述第二图像中的特征点的第三图像坐标;
对所述第三图像坐标进行光流追踪或特征匹配,得到所述特征点在所述第一图像中的第一图像坐标。
本实施例提供的图像处理装置,可用于执行上述方法实施例的技术方案,其实现原理和技术效果类似,本实施例此处不再赘述。
图8为本公开实施例提供的另一种图像处理装置的结构示意图。在图7所示的实施例的
基础上,请参见图8,该图像处理装置70还包括第三获取模块76,所述第三获取模块76用于:
在所述第一图像中的特征点中确定第一图像的噪声点;
在所述第一图像中删除所述噪声点。
在一种可能的实施方式中,所述第三获取模块76还用于:
获取所述第一图像中所述特征点的数量;
在所述特征点的数量小于或等于第二阈值时,在所述第一图像中添加预设数量的特征点。
本实施例提供的图像处理装置,可用于执行上述方法实施例的技术方案,其实现原理和技术效果类似,本实施例此处不再赘述。
图9为本公开实施例提供的一种电子设备的结构示意图。请参见图9,其示出了适于用来实现本公开实施例的电子设备900的结构示意图,该电子设备900可以为终端设备或服务器。其中,终端设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、个人数字助理(Personal Digital Assistant,简称PDA)、平板电脑(Portable Android Device,简称PAD)、便携式多媒体播放器(Portable Media Player,简称PMP)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。图9示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图9所示,电子设备900可以包括处理装置(例如中央处理器、图形处理器等)901,其可以根据存储在只读存储器(Read Only Memory,简称ROM)902中的程序或者从存储装置908加载到随机访问存储器(Random Access Memory,简称RAM)903中的程序而执行各种适当的动作和处理。在RAM 903中,还存储有电子设备900操作所需的各种程序和数据。处理装置901、ROM 902以及RAM 903通过总线904彼此相连。输入/输出(Input/Output,简称I/O)接口905也连接至总线904。
通常,以下装置可以连接至I/O接口905:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置906;包括例如液晶显示器(Liquid Crystal Display,简称LCD)、扬声器、振动器等的输出装置907;包括例如磁带、硬盘等的存储装置908;以及通信装置909。通信装置909可以允许电子设备900与其他设备进行无线或有线通信以交换数据。虽然图9示出了具有各种装置的电子设备900,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置909从网络上被下载和安装,或者从存储装置908被安装,或者从ROM 902被安装。在该计算机程序被处理装置901执行时,执行本公开实施例的方法中限定的上述功能。
需要说明的是,本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储
器(Erasable Programmable Read-Only Memory,简称EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(Compact Disc Read-Only Memory,简称CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(射频)等等,或者上述的任意合适的组合。
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备执行上述实施例所示的方法。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(Local Area Network,简称LAN)或广域网(Wide Area Network,简称WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,单元的名称在某种情况下并不构成对该单元本身的限定,例如,第一获取单元还可以被描述为“获取至少两个网际协议地址的单元”。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(Field Programmable Gate Array,简称FPGA)、专用集成电路(Application Specific Integrated Circuit,简称ASIC)、专用标准产品(Application Specific Standard Parts,简称ASSP)、片上系统(System on Chip,简称SOC)、复杂可编程逻辑设备(Complex Programmable Logic Device,简称CPLD)等
等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
第一方面,本公开一个或多个实施例,提供一种图像处理方法,该方法包括:
获取电子设备拍摄第一对象得到的第一图像,所述第一图像中包括特征点;
确定所述特征点在所述第一图像中的第一图像坐标;
获取所述电子设备在拍摄第二图像时,所述特征点对应的第一对象的部分相对于所述电子设备的空间坐标,所述第二图像为所述第一图像的上一帧图像;
根据所述第一图像坐标和所述空间坐标,确定第一相对位置,所述第一相对位置为所述电子设备拍摄所述第一图像时的位姿与所述电子设备拍摄所述第二图像时的位姿之间的相对位置;
根据所述第一相对位置,确定所述电子设备拍摄所述第一图像时的位姿。
根据本公开一个或多个实施例,根据所述第一图像坐标和所述空间坐标,确定第一相对位置,包括:
获取所述电子设备拍摄所述第二图像时的位姿与所述电子设备拍摄所述第二图像的上一帧图像时的位姿之间的第二相对位置;
根据所述第一图像坐标、所述空间坐标和所述第二相对位置,确定所述第一相对位置。
根据本公开一个或多个实施例,根据所述第一图像坐标、所述空间坐标和所述第二相对位置,确定所述第一相对位置,包括:
将所述特征点对应的所述空间坐标投影至所述第一图像中,得到所述特征点在所述第一图像中的第二图像坐标;
根据所述特征点对应的第一图像坐标、所述第二图像坐标和所述第二相对位置,得到所述第一相对位置。
根据本公开一个或多个实施例,根据所述特征点对应的第一图像坐标、所述第二图像坐标和所述第二相对位置,得到所述第一相对位置,包括:
确定所述第一图像坐标和所述第二图像坐标之间的第一差值;
根据所述第一差值、所述第二相对位置,得到所述第一相对位置。
根据本公开一个或多个实施例,根据所述第一差值、所述第二相对位置,得到所述第一相对位置,包括:
获取所述第二相对位置对应的第一矩阵,所述第一矩阵包括坐标矩阵、旋转矩阵和平移矩阵;
根据所述第一差值、所述坐标矩阵、所述旋转矩阵和所述平移矩阵,得到第二矩阵;
通过卡尔曼滤波对所述第二矩阵进行处理,得到所述第一相对位置。
根据本公开一个或多个实施例,根据所述第一差值、所述坐标矩阵、所述旋转矩阵和所述平移矩阵,得到第二矩阵,包括:
若所述第一差值小于或等于第一阈值,则将所述坐标矩阵置0;
若所述电子设备包括陀螺仪,则将所述旋转矩阵置0。
根据本公开一个或多个实施例,所述第一相对位置包括目标相对平移和目标相对旋转;根据所述第一相对位置,确定所述电子设备拍摄所述第一图像时的位姿,包括:
根据所述目标相对平移,确定所述电子设备的全局平移;
根据所述目标相对旋转,确定所述电子设备的全局旋转;
根据所述全局平移和所述全局旋转,确定所述电子设备拍摄所述第一图像时的位姿。
根据本公开一个或多个实施例,确定所述特征点在所述第一图像中的第一图像坐标,包括:
获取所述第二图像中的特征点的第三图像坐标;
对所述第三图像坐标进行光流追踪或特征匹配,得到所述特征点在所述第一图像中的第一图像坐标。
根据本公开一个或多个实施例,确定所述特征点在所述第一图像中的第一图像坐标之前,所述方法还包括:
在所述第一图像中的特征点中确定第一图像的噪声点;
在所述第一图像中删除所述噪声点。
根据本公开一个或多个实施例,在所述第一图像中删除所述噪声点之后,所述方法还包括:
获取所述第一图像中所述特征点的数量;
在所述特征点的数量小于或等于第二阈值时,在所述第一图像中添加预设数量的特征点。
第二方面,本公开一个或多个实施例,提供一种图像处理装置,该图像处理装置包括第一获取模块、第一确定模块、第二获取模块、第二确定模块和第三确定模块,其中:
所述第一获取模块用于,获取电子设备拍摄第一对象得到的第一图像,所述第一图像中包括特征点;
所述第一确定模块用于,确定所述特征点在所述第一图像中的第一图像坐标;
所述第二获取模块用于,获取所述电子设备在拍摄第二图像时,所述特征点对应的第一对象的部分相对于所述电子设备的空间坐标,所述第二图像为所述第一图像的上一帧图像;
所述第二确定模块用于,根据所述第一图像坐标和所述空间坐标,确定第一相对位置,所述第一相对位置为所述电子设备拍摄所述第一图像时的位姿与所述电子设备拍摄所述第二图像时的位姿之间的相对位置;
所述第三确定模块用于,根据所述第一相对位置,确定所述电子设备拍摄所述第一图像时的位姿。
根据本公开一个或多个实施例,所述第二确定模块具体用于:
获取所述电子设备拍摄所述第二图像时的位姿与所述电子设备拍摄所述第二图像的上一帧图像时的位姿之间的第二相对位置;
根据所述第一图像坐标、所述空间坐标和所述第二相对位置,确定所述第一相对位置。
根据本公开一个或多个实施例,所述第二确定模块具体用于:
将所述特征点对应的所述空间坐标投影至所述第一图像中,得到所述特征点在所述第一图像中的第二图像坐标;
根据所述特征点对应的第一图像坐标、所述第二图像坐标和所述第二相对位置,得到所述第一相对位置。
根据本公开一个或多个实施例,所述第二确定模块具体用于:
确定所述第一图像坐标和所述第二图像坐标之间的第一差值;
根据所述第一差值、所述第二相对位置,得到所述第一相对位置。
根据本公开一个或多个实施例,所述第二确定模块具体用于:
获取所述第二相对位置对应的第一矩阵,所述第一矩阵包括坐标矩阵、旋转矩阵和平移矩阵;
根据所述第一差值、所述坐标矩阵、所述旋转矩阵和所述平移矩阵,得到第二矩阵;
通过卡尔曼滤波对所述第二矩阵进行处理,得到所述第一相对位置。
根据本公开一个或多个实施例,所述第二确定模块具体用于:
若所述第一差值小于或等于第一阈值,则将所述坐标矩阵置0;
若所述电子设备包括陀螺仪,则将所述旋转矩阵置0。
根据本公开一个或多个实施例,所述第三确定模块具体用于:
根据所述目标相对平移,确定所述电子设备的全局平移;
根据所述目标相对旋转,确定所述电子设备的全局旋转;
根据所述全局平移和所述全局旋转,确定所述电子设备拍摄所述第一图像时的位姿。
根据本公开一个或多个实施例,所述第一确定模块具体用于:
获取所述第二图像中的特征点的第三图像坐标;
对所述第三图像坐标进行光流追踪或特征匹配,得到所述特征点在所述第一图像中的第一图像坐标。
根据本公开一个或多个实施例,上述图像处理装置还包括第三获取模块,所述第三获取模块用于:
在所述第一图像中的特征点中确定第一图像的噪声点;
在所述第一图像中删除所述噪声点。
根据本公开一个或多个实施例,所述第三获取模块还用于:
获取所述第一图像中所述特征点的数量;
在所述特征点的数量小于或等于第二阈值时,在所述第一图像中添加预设数量的特征点。
第三方面,本公开实施例提供一种电子设备,包括:处理器和存储器;
所述存储器存储计算机执行指令;
所述处理器执行所述存储器存储的计算机执行指令,使得所述处理器执行如上第一方面以及第一方面各种可能涉及的所述图像处理方法。
第四方面,本公开实施例提供一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机执行指令,当处理器执行所述计算机执行指令时,实现如上第一方面以及第一方面各种可能涉及的所述图像处理方法。
第五方面,本公开实施例提供一种计算机程序产品,包括计算机程序,所述计算机程序被处理器执行时实现如上第一方面以及第一方面各种可能涉及的所述图像处理方法。
第六方面,本公开实施例提供一种计算机程序,所述计算机程序被处理器执行时实现如上第一方面以及第一方面各种可能涉及的所述图像处理方法。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。
Claims (15)
- 一种图像处理方法,包括:获取电子设备拍摄第一对象得到的第一图像,所述第一图像中包括特征点;确定所述特征点在所述第一图像中的第一图像坐标;获取所述电子设备在拍摄第二图像时,所述特征点对应的所述第一对象的部分相对于所述电子设备的空间坐标,所述第二图像为所述第一图像的上一帧图像;根据所述第一图像坐标和所述空间坐标,确定第一相对位置,所述第一相对位置为所述电子设备拍摄所述第一图像时的位姿与所述电子设备拍摄所述第二图像时的位姿之间的相对位置;根据所述第一相对位置,确定所述电子设备拍摄所述第一图像时的位姿。
- 根据权利要求1所述的方法,其中,根据所述第一图像坐标和所述空间坐标,确定第一相对位置,包括:获取所述电子设备拍摄所述第二图像时的位姿与所述电子设备拍摄所述第二图像的上一帧图像时的位姿之间的第二相对位置;根据所述第一图像坐标、所述空间坐标和所述第二相对位置,确定所述第一相对位置。
- 根据权利要求2所述的方法,其中,根据所述第一图像坐标、所述空间坐标和所述第二相对位置,确定所述第一相对位置,包括:将所述特征点对应的所述空间坐标投影至所述第一图像中,得到所述特征点在所述第一图像中的第二图像坐标;根据所述特征点对应的所述第一图像坐标、所述第二图像坐标和所述第二相对位置,得到所述第一相对位置。
- 根据权利要求3所述的方法,其中,根据所述特征点对应的所述第一图像坐标、所述第二图像坐标和所述第二相对位置,得到所述第一相对位置,包括:确定所述第一图像坐标和所述第二图像坐标之间的第一差值;根据所述第一差值、所述第二相对位置,得到所述第一相对位置。
- 根据权利要求4所述的方法,其中,根据所述第一差值、所述第二相对位置,得到所述第一相对位置,包括:获取所述第二相对位置对应的第一矩阵,所述第一矩阵包括坐标矩阵、旋转矩阵和平移矩阵;根据所述第一差值、所述坐标矩阵、所述旋转矩阵和所述平移矩阵,得到第二矩阵;通过卡尔曼滤波对所述第二矩阵进行处理,得到所述第一相对位置。
- 根据权利要求5所述的方法,其中,根据所述第一差值、所述坐标矩阵、所述旋转矩阵和所述平移矩阵,得到第二矩阵,包括:若所述第一差值小于或等于第一阈值,则将所述坐标矩阵置0;若所述电子设备包括陀螺仪,则将所述旋转矩阵置0。
- 根据权利要求1至6中任一项所述的方法,其中,所述第一相对位置包括目标相对平移和目标相对旋转;根据所述第一相对位置,确定所述电子设备拍摄所述第一图像时的位姿,包括:根据所述目标相对平移,确定所述电子设备的全局平移;根据所述目标相对旋转,确定所述电子设备的全局旋转;根据所述全局平移和所述全局旋转,确定所述电子设备拍摄所述第一图像时的位姿。
- 根据权利要求1至7中任一项所述的方法,其中,确定所述特征点在所述第一图像中的第一图像坐标,包括:获取所述第二图像中的特征点的第三图像坐标;对所述第三图像坐标进行光流追踪或特征匹配,得到所述特征点在所述第一图像中的第一图像坐标。
- 根据权利要求1至8中任一项所述的方法,其中,确定所述特征点在所述第一图像中的第一图像坐标之前,所述方法还包括:在所述第一图像中的特征点中确定第一图像的噪声点;在所述第一图像中删除所述噪声点。
- 根据权利要求9所述的方法,其中,在所述第一图像中删除所述噪声点之后,所述方法还包括:获取所述第一图像中所述特征点的数量;在所述特征点的数量小于或等于第二阈值时,在所述第一图像中添加预设数量的特征点。
- 一种图像处理装置,包括第一获取模块、第一确定模块、第二获取模块、第二确定模块和第三确定模块,其中:所述第一获取模块用于,获取电子设备拍摄第一对象得到的第一图像,所述第一图像中包括特征点;所述第一确定模块用于,确定所述特征点在所述第一图像中的第一图像坐标;所述第二获取模块用于,获取所述电子设备在拍摄第二图像时,所述特征点对应的第一对象的部分相对于所述电子设备的空间坐标,所述第二图像为所述第一图像的上一帧图像;所述第二确定模块用于,根据所述第一图像坐标和所述空间坐标,确定第一相对位置,所述第一相对位置为所述电子设备拍摄所述第一图像时的位姿与所述电子设备拍摄所述第二图像时的位姿之间的相对位置;所述第三确定模块用于,根据所述第一相对位置,确定所述电子设备拍摄所述第一图像时的位姿。
- 一种电子设备,包括:处理器和存储器;所述存储器存储计算机执行指令;所述处理器执行所述存储器存储的计算机执行指令,使得所述处理器执行如权利要求1至10中任一项所述的图像处理方法。
- 一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机执行指令,当处理器执行所述计算机执行指令时,实现如权利要求1至10中任一项所述的图像处理方法。
- 一种计算机程序产品,包括计算机程序,其中,所述计算机程序被处理器执行时实现如权利要求1至10中任一项所述的图像处理方法。
- 一种计算机程序,所述计算机程序被处理器执行时实现如权利要求1至10中任一项所述的图像处理方法。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210754289.0A CN115937305A (zh) | 2022-06-28 | 2022-06-28 | 图像处理方法、装置及电子设备 |
CN202210754289.0 | 2022-06-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024001526A1 true WO2024001526A1 (zh) | 2024-01-04 |
Family
ID=86549478
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/092813 WO2024001526A1 (zh) | 2022-06-28 | 2023-05-08 | 图像处理方法、装置及电子设备 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115937305A (zh) |
WO (1) | WO2024001526A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118540575A (zh) * | 2024-07-24 | 2024-08-23 | 荣耀终端有限公司 | 同机位拍摄方法、电子设备、存储介质和程序产品 |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115937305A (zh) * | 2022-06-28 | 2023-04-07 | 北京字跳网络技术有限公司 | 图像处理方法、装置及电子设备 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180108149A1 (en) * | 2016-10-19 | 2018-04-19 | Seiko Epson Corporation | Computer program, object tracking method, and object tracking device |
CN110555883A (zh) * | 2018-04-27 | 2019-12-10 | 腾讯科技(深圳)有限公司 | 相机姿态追踪过程的重定位方法、装置及存储介质 |
US20200167955A1 (en) * | 2017-07-31 | 2020-05-28 | Tencent Technology (Shenzhen) Company Limited | Method for augmented reality display, method for determining pose information, and apparatuses |
CN111415387A (zh) * | 2019-01-04 | 2020-07-14 | 南京人工智能高等研究院有限公司 | 相机位姿确定方法、装置、电子设备及存储介质 |
CN111754579A (zh) * | 2019-03-28 | 2020-10-09 | 杭州海康威视数字技术股份有限公司 | 多目相机外参确定方法及装置 |
CN113989377A (zh) * | 2021-09-23 | 2022-01-28 | 深圳市联洲国际技术有限公司 | 一种相机的外参标定方法、装置、存储介质及终端设备 |
CN114119885A (zh) * | 2020-08-11 | 2022-03-01 | 中国电信股份有限公司 | 图像特征点匹配方法、装置及系统、地图构建方法及系统 |
CN115937305A (zh) * | 2022-06-28 | 2023-04-07 | 北京字跳网络技术有限公司 | 图像处理方法、装置及电子设备 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109215077B (zh) * | 2017-07-07 | 2022-12-06 | 腾讯科技(深圳)有限公司 | 一种相机姿态信息确定的方法及相关装置 |
CN108537845B (zh) * | 2018-04-27 | 2023-01-03 | 腾讯科技(深圳)有限公司 | 位姿确定方法、装置及存储介质 |
US10782137B2 (en) * | 2019-01-28 | 2020-09-22 | Qfeeltech (Beijing) Co., Ltd. | Methods, apparatus, and systems for localization and mapping |
CN113034594A (zh) * | 2021-03-16 | 2021-06-25 | 浙江商汤科技开发有限公司 | 位姿优化方法、装置、电子设备及存储介质 |
- 2022-06-28: CN application CN202210754289.0A filed; published as CN115937305A (pending)
- 2023-05-08: PCT application PCT/CN2023/092813 filed; published as WO2024001526A1
Also Published As
Publication number | Publication date |
---|---|
CN115937305A (zh) | 2023-04-07 |
Legal Events
- Code 121 | Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 23829712; Country of ref document: EP; Kind code of ref document: A1)