WO2021017358A1 - Pose determination method and apparatus, electronic device, and storage medium - Google Patents
Pose determination method and apparatus, electronic device, and storage medium
- Publication number: WO2021017358A1 (application PCT/CN2019/123646)
- Authority: WIPO (PCT)
- Prior art keywords: image, pose, key point, processed, acquisition device
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Definitions
- the present disclosure relates to the field of computer technology, and in particular to a method and device for determining a pose, electronic equipment, and storage medium.
- Camera calibration is a basic problem of visual positioning: calculating the geographic location of a target and obtaining the visible area of a camera both require the camera to be calibrated. In the related art, commonly used calibration algorithms only consider the situation where the camera position is fixed; however, surveillance cameras currently deployed in cities include many rotatable cameras.
- the present disclosure proposes a pose determination method and device, electronic equipment and storage medium.
- a pose determination method including:
- determining, according to the correspondence between the first key point and the second key point and the reference pose corresponding to the reference image, the target pose of the image acquisition device when acquiring the image to be processed.
- a reference image matching the image to be processed can be selected, and the pose corresponding to the image to be processed is determined according to the pose of the reference image; in this way, the corresponding pose can be calibrated when the image acquisition device rotates or is displaced, so the method can quickly adapt to new monitoring scenarios.
- the obtaining a reference image matching the image to be processed includes:
- performing feature extraction processing on the image to be processed and at least one first image, respectively, to obtain first feature information of the image to be processed and second feature information of each first image, wherein the at least one first image is sequentially acquired by the image acquisition device during rotation;
- determining the reference image from the first images according to the similarity between the first feature information and each piece of second feature information.
- the method further includes:
- the geographic plane is a plane where the geographic location coordinates of the target point are located;
- the determining the second homography matrix between the imaging plane and the geographic plane when the image acquiring device acquires the second image, and determining the internal parameter matrix of the image acquiring device include:
- determining the second homography matrix between the imaging plane and the geographic plane when the image acquisition device acquires the second image according to the image position coordinates and the geographic location coordinates of target points, wherein the target points are a plurality of non-collinear points in the second image; and performing decomposition processing on the second homography matrix to determine the internal parameter matrix of the image acquisition device.
- determining the reference pose corresponding to the second image according to the internal parameter matrix and the second homography matrix includes:
- determining the reference pose corresponding to the at least one first image according to the reference pose corresponding to the second image includes:
- wherein the current first image is an image with a known reference pose among the plurality of first images, the current first image includes the second image, and the next first image is an image in the at least one first image that is adjacent to the current first image;
- the reference pose of each first image can be obtained, and the reference poses of all the first images are iteratively determined according to the reference pose of the first first image, so that each first image does not need to be calibrated with a complex calibration method, which improves processing efficiency.
- determining the third homography matrix between the current first image and the next first image according to the correspondence between the third key point and the fourth key point includes:
- determining the third homography matrix between the current first image and the next first image according to the third position coordinates of the third key point in the current first image and the fourth position coordinates of the fourth key point in the next first image.
- determining the reference pose corresponding to the next first image according to the third homography matrix and the reference pose corresponding to the current first image includes:
- determining the target pose of the image acquisition device when acquiring the image to be processed includes:
- determining the target pose of the image acquisition device when acquiring the image to be processed according to the first position coordinates of the first key point in the image to be processed, the second position coordinates of the second key point in the reference image, and the reference pose corresponding to the reference image.
- the reference pose of each first image can be obtained, and the reference poses of all the first images are iteratively determined according to the reference pose of the first first image, so that each first image does not need to be calibrated with a complex calibration method, which improves processing efficiency.
- determining the target pose of the image acquisition device when acquiring the image to be processed according to the first position coordinates, the second position coordinates, and the corresponding reference pose includes:
- the target pose is determined according to the reference pose corresponding to the reference image and the first pose change.
- the reference pose corresponding to the reference image includes the rotation matrix and the displacement vector when the image acquisition device acquires the reference image;
- the target pose corresponding to the image to be processed includes the rotation matrix and the displacement vector when the image acquisition device acquires the image to be processed.
- the feature extraction processing and the key point extraction processing are implemented by a convolutional neural network
- the method further includes:
- performing key point extraction processing on the feature map to obtain the key points of the sample image includes:
- a pose determination device including:
- the acquisition module is configured to acquire a reference image matching the image to be processed, wherein the image to be processed and the reference image are acquired by an image acquisition device, the reference image has a corresponding reference pose, and the reference pose is used to indicate the pose of the image acquisition device when acquiring the reference image;
- the first extraction module is configured to perform key point extraction processing on the image to be processed and the reference image, respectively, to obtain the first key point in the image to be processed and the second key point in the reference image corresponding to the first key point;
- the first determining module is configured to determine, according to the correspondence between the first key point and the second key point and the reference pose corresponding to the reference image, the target pose of the image acquisition device when acquiring the image to be processed.
- the acquisition module is further configured to:
- perform feature extraction processing on the image to be processed and at least one first image, respectively, to obtain first feature information of the image to be processed and second feature information of each first image, wherein the at least one first image is sequentially acquired by the image acquisition device during rotation;
- determine the reference image from the first images according to the similarity between the first feature information and each piece of second feature information.
- the device further includes:
- the second determination module is used to determine the second homography matrix between the imaging plane and the geographic plane when the image acquisition device acquires the second image, and determine the internal parameter matrix of the image acquisition device, where
- the second image is any one of the multiple first images
- the geographic plane is a plane where the geographic location coordinates of the target point are located;
- a third determining module configured to determine a reference pose corresponding to the second image according to the internal parameter matrix and the second homography matrix
- the fourth determining module is configured to determine the reference pose corresponding to the at least one first image according to the reference pose corresponding to the second image.
- the second determining module is further configured to:
- determine the second homography matrix between the imaging plane and the geographic plane when the image acquisition device acquires the second image according to the image position coordinates and the geographic location coordinates of target points, wherein the target points are a plurality of non-collinear points in the second image; and perform decomposition processing on the second homography matrix to determine the internal parameter matrix of the image acquisition device.
- the third determining module is further configured to:
- the fourth determining module is further configured to:
- wherein the current first image is an image with a known reference pose among the plurality of first images, the current first image includes the second image, and the next first image is an image in the at least one first image that is adjacent to the current first image;
- the fourth determining module is further configured to:
- determine the third homography matrix between the current first image and the next first image according to the third position coordinates of the third key point in the current first image and the fourth position coordinates of the fourth key point in the next first image.
- the fourth determining module is further configured to:
- the first determining module is further configured to:
- determine the target pose of the image acquisition device when acquiring the image to be processed according to the first position coordinates of the first key point in the image to be processed, the second position coordinates of the second key point in the reference image, and the reference pose corresponding to the reference image.
- the first determining module is further configured to:
- the target pose is determined according to the reference pose corresponding to the reference image and the first pose change.
- the reference pose corresponding to the reference image includes the rotation matrix and the displacement vector when the image acquisition device acquires the reference image;
- the target pose corresponding to the image to be processed includes the rotation matrix and the displacement vector when the image acquisition device acquires the image to be processed.
- the feature extraction processing and the key point extraction processing are implemented by a convolutional neural network
- the device further includes:
- the first convolution module is configured to perform convolution processing on the sample image through the convolution layer of the convolutional neural network to obtain a feature map of the sample image;
- the second convolution module is configured to perform convolution processing on the feature map to obtain feature information of the sample image respectively;
- the second extraction module is configured to perform key point extraction processing on the feature map to obtain key points of the sample image
- the training module is used to train the convolutional neural network according to the feature information and key points of the sample image.
- the second extraction module is further configured to:
- an electronic device including:
- a memory for storing processor executable instructions
- the processor is configured to execute the above-mentioned pose determination method.
- a computer-readable storage medium having computer program instructions stored thereon, and when the computer program instructions are executed by a processor, the foregoing pose determination method is implemented.
- a computer program including computer readable code, wherein when the computer readable code is run in an electronic device, a processor in the electronic device executes the above-mentioned pose determination method.
- Fig. 1 shows a flowchart of a pose determination method according to an embodiment of the present disclosure
- Fig. 2 shows a flowchart of a pose determination method according to an embodiment of the present disclosure
- Fig. 3 shows a schematic diagram of a target point according to an embodiment of the present disclosure
- Fig. 4 shows a flowchart of a pose determination method according to an embodiment of the present disclosure
- FIG. 5 shows a schematic diagram of neural network training according to an embodiment of the present disclosure
- Fig. 6 shows an application schematic diagram of a pose determination method according to an embodiment of the present disclosure
- Figure 7 shows a block diagram of a pose determination device according to an embodiment of the present disclosure
- FIG. 8 shows a block diagram of an electronic device according to an embodiment of the present disclosure
- FIG. 9 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- Fig. 1 shows a flowchart of a pose determination method according to an embodiment of the present disclosure. As shown in Fig. 1, the method includes:
- In step S11, a reference image matching the image to be processed is acquired, wherein the image to be processed and the reference image are acquired by an image acquisition device, the reference image has a corresponding reference pose, and the reference pose is used to indicate the pose of the image acquisition device when acquiring the reference image;
- In step S12, key point extraction processing is performed on the image to be processed and the reference image, respectively, to obtain the first key point in the image to be processed and the second key point in the reference image corresponding to the first key point;
- In step S13, the target pose of the image acquisition device when acquiring the image to be processed is determined according to the correspondence between the first key point and the second key point and the reference pose corresponding to the reference image.
- a reference image matching the image to be processed can be selected, and the pose corresponding to the image to be processed is determined according to the pose of the reference image; in this way, the corresponding pose can be calibrated when the image acquisition device rotates or is displaced, so the method can quickly adapt to new monitoring scenarios.
- the pose determination method can be used to determine the pose of an image acquisition device such as a camera, video camera, or monitor; for example, it can be used to determine the pose of a camera in a surveillance system, an access control system, etc.
- the pose of the image acquisition device after a pose transformation can be efficiently determined.
- the present disclosure does not limit the application field of the pose determination method.
- the method may be executed by a terminal device, which may be User Equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
- the method can be implemented by a processor calling computer-readable instructions stored in a memory.
- the method is executed by a server.
- a plurality of first images may be acquired by the image acquisition device located at a preset position, and a reference image matching the image to be processed may be selected from the plurality of first images.
- the image acquisition device may be a rotatable camera, for example, a spherical camera used for monitoring, etc.
- the image acquisition device may rotate in the pitch direction and/or the yaw direction. During the rotation, the image acquisition device may acquire one Or multiple first images.
- a reference image may also be obtained by the image obtaining device, which is not limited herein.
- For example, if the image acquisition device can be rotated 180° in the pitch direction and 360° in the yaw direction, the image acquisition device can acquire multiple first images during the rotation process, for example, one first image every preset angle.
- Alternatively, the angle by which the image acquisition device can be rotated in the pitch direction and/or yaw direction may be a preset number of degrees, for example, only 10°, 20°, 30°, etc., and the image acquisition device may acquire one or more first images during the rotation, for example, one first image every preset angle interval.
- For example, if the image acquisition device can only rotate 20° in the yaw direction and a first image is acquired every 5°, the image acquisition device acquires a first image when rotated to 0°, 5°, 10°, 15°, and 20°, for a total of 5 first images.
- the image acquisition device can only be rotated by 10° in the yaw direction, and the image acquisition device can acquire a first image when it is rotated to 5°, that is, only acquire a reference image.
- the reference pose corresponding to each first image includes the rotation matrix and displacement vector when the image acquisition device acquires that first image;
- the target pose corresponding to the image to be processed includes the rotation matrix and displacement vector when the image acquisition device acquires the image to be processed.
- the reference image is an image among the first images that matches the image to be processed;
- the reference pose corresponding to the reference image includes the rotation matrix and displacement vector when the image acquisition device acquires the reference image, and the target pose corresponding to the image to be processed includes the rotation matrix and displacement vector when the image acquisition device acquires the image to be processed.
- Fig. 2 shows a flowchart of a method for determining a pose according to an embodiment of the present disclosure. As shown in Fig. 2, the method further includes:
- step S14 the second homography matrix between the imaging plane and the geographic plane when the image acquisition device is acquiring the second image is determined, and the internal parameter matrix of the image acquisition device is determined.
- the second image is any one of the multiple first images
- the geographic plane is a plane where the geographic location coordinates of the target point are located;
- step S15 a reference pose corresponding to the second image is determined according to the internal parameter matrix and the second homography matrix
- step S16 the reference pose corresponding to the at least one first image is determined according to the reference pose corresponding to the second image.
- the image acquisition device may be rotated in the pitch direction and/or the yaw direction, and the first image may be sequentially acquired during the rotation.
- For example, the image acquisition device can be set to a certain angle in the pitch direction (for example, 1°, 5°, 10°, etc.) and rotated one full circle in the yaw direction, acquiring one first image every certain angle (for example, 1°, 5°, 10°, etc.) during the rotation; the image acquisition device can then be adjusted to another angle in the pitch direction and rotated another full circle in the yaw direction, again acquiring one first image every certain angle during the rotation.
- the image acquisition device may acquire the first image in sequence when the rotatable angle in the pitch direction and/or the yaw direction is a preset degree.
- Any one of the first images acquired in the above process can be determined as the second image. When the reference poses of the first images are determined in sequence, the selected second image is used as the first image to be processed in the process of determining the reference poses of the first images, and after the reference pose of the second image is determined, the reference poses of the other first images are determined according to the reference pose of the second image.
- For example, the first first image may be determined as the second image, and the second image may be calibrated (that is, the pose of the image acquisition device when the second image was acquired is determined) to obtain the reference pose of the second image; the reference poses of the other first images are then sequentially determined based on the reference pose of the second image.
- multiple non-collinear target points can be selected in the second image, the image position coordinates of the target points in the second image can be marked, and the geographic location coordinates of the target points can be obtained, for example, the latitude and longitude coordinates of the target points at their actual geographic locations.
- FIG. 3 shows a schematic diagram of a target point according to an embodiment of the present disclosure.
- the right side of FIG. 3 is a second image acquired by the image acquisition device, and 4 target points are selected in the second image (that is, points 0, 1, 2, and 3); for example, 4 vertices of a certain stadium are selected as target points.
- the image position coordinates of the 4 target points in the second image can be obtained, for example, (x1, y1), (x2, y2), (x3, y3), (x4, y4).
- the geographic location coordinates of the four target points may be determined, for example, the latitude and longitude coordinates.
- the left side of Fig. 3 is a real map of the stadium, for example, a map taken by a satellite; the longitude and latitude coordinates of the four target points can be obtained from the map, for example, (x1', y1'), (x2', y2'), (x3', y3'), (x4', y4').
- determining the second homography matrix between the imaging plane and the geographic plane when the image acquisition device acquires the second image, and determining the internal parameter matrix of the image acquisition device, includes: determining the second homography matrix between the imaging plane and the geographic plane when the image acquisition device acquires the second image according to the image position coordinates and geographic location coordinates of the target points; and performing decomposition processing on the second homography matrix to determine the internal parameter matrix of the image acquisition device.
- the second homography matrix between the imaging plane of the image acquisition device and the geographic plane is determined according to the image position coordinates and geographic location coordinates of the target points.
- For example, a system of equations relating the two sets of coordinates can be established, and the second homography matrix can be solved from this system of equations.
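- As an illustrative sketch only (not part of the original disclosure), the homography estimation described above can be reproduced with OpenCV; the coordinate values below are hypothetical placeholders, and four non-collinear correspondences are the minimum needed to solve the equation system.

```python
import numpy as np
import cv2

# Hypothetical image position coordinates (pixels) of four non-collinear target points in the second image.
image_pts = np.array([[412, 230], [980, 241], [955, 610], [430, 598]], dtype=np.float64)

# Hypothetical geographic location coordinates of the same target points
# (e.g., a local metric grid or projected longitude/latitude on the geographic plane).
geo_pts = np.array([[10.0, 80.0], [95.0, 81.0], [93.0, 12.0], [11.0, 10.0]], dtype=np.float64)

# Solve the equation system relating the two coordinate sets;
# H_second maps the geographic plane to the imaging plane.
H_second, _ = cv2.findHomography(geo_pts, image_pts, method=0)
print(H_second)
```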
- In an example, the second homography matrix can be decomposed. According to the imaging principle, the relationship among the second homography matrix, the internal parameter matrix of the image acquisition device, and the external parameter matrix corresponding to the second image can be expressed as the following formula (1):
- σH = K[R T]   (1)
- where H is the second homography matrix, σ is the eigenvalue of H, K is the internal parameter matrix of the image acquisition device, [R T] is the external parameter matrix corresponding to the second image, R is the rotation matrix of the second image, and T is the displacement vector of the second image.
- Written in terms of column vectors, formula (1) can be expressed as the following formula (2):
- σ[h1 h2 h3] = K[r1 r2 t]   (2)
- where h1, h2, and h3 are respectively the column vectors of H, r1 and r2 are column vectors of R, and t is the column vector of T.
- Since r1 and r2 are orthogonal unit vectors, the constraint equations (3) can be obtained from formula (2):
- h1^T K^(-T) K^(-1) h2 = 0,  h1^T K^(-T) K^(-1) h1 = h2^T K^(-T) K^(-1) h2   (3)
- where K^(-T) is the transpose of the inverse matrix of K and K^(-1) is the inverse matrix of K.
- A system of linear equations (4) can be obtained from equations (3), and singular value decomposition can be performed on the equation system (4) to obtain the internal parameter matrix of the image acquisition device, for example, the least squares solution of the internal parameter matrix.
- In step S15, the reference pose of the second image can be determined according to the internal parameter matrix and the second homography matrix. Step S15 can include: determining the external parameter matrix corresponding to the second image according to the internal parameter matrix of the image acquisition device and the second homography matrix; and determining the reference pose corresponding to the second image according to the external parameter matrix corresponding to the second image.
- the external parameter matrix corresponding to the second image can be determined according to formula (1) or (2).
- both sides of formula (1) can be multiplied by K^(-1) and divided by σ to obtain the external parameter matrix [R T].
- the rotation matrix R and the displacement vector T in the external parameter matrix constitute the reference pose corresponding to the second image.
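- A minimal NumPy sketch of this decomposition, assuming the internal parameter matrix K and the second homography matrix H are already available (the scale factor below plays the role of dividing by σ):

```python
import numpy as np

def extrinsics_from_homography(H, K):
    """Recover the rotation matrix R and displacement vector T from a plane-induced
    homography H and the internal parameter matrix K (sketch, valid up to sign/scale)."""
    A = np.linalg.inv(K) @ H                 # A is proportional to [r1 r2 t]
    scale = 1.0 / np.linalg.norm(A[:, 0])    # choose the scale so that r1 is a unit vector
    r1, r2, t = scale * A[:, 0], scale * A[:, 1], scale * A[:, 2]
    r3 = np.cross(r1, r2)                    # complete the rotation matrix
    R = np.column_stack([r1, r2, r3])
    return R, t
```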
- the reference pose corresponding to each first image may be sequentially determined according to the reference pose of the second image.
- the second image is the first image to be processed in the process of determining the reference poses of the multiple first images, and the reference poses of the subsequent first images can be determined in turn according to the reference pose of the second image.
- Step S16 may include: performing key point extraction processing on the current first image and the next first image, respectively, to obtain the third key point in the current first image and the fourth key point in the next first image corresponding to the third key point,
- wherein the current first image is an image with a known reference pose among the multiple first images, the current first image includes the second image, and the next first image is an image in the at least one first image that is adjacent to the current first image; determining the third homography matrix between the current first image and the next first image according to the correspondence between the third key point and the fourth key point; and determining the reference pose corresponding to the next first image according to the third homography matrix and the reference pose corresponding to the current first image.
- Key points can be extracted from the current first image and the next first image by using a deep learning neural network, such as a convolutional neural network, to obtain the third key point in the current first image and the fourth key point in the next first image corresponding to the third key point; alternatively, the third key point in the current first image and the corresponding fourth key point in the next first image can be obtained according to the brightness and chromaticity of the pixels in the current first image and the next first image.
- the third key point and the fourth key point may represent the same set of points, but the positions of these points in the current first image and in the next first image may be different.
- the key point may be a point that can represent the contour and shape of the target object in the image.
- For example, the first first image (that is, the second image) and the second first image can be input into the convolutional neural network for key point extraction processing, and a plurality of third key points and fourth key points are obtained in the second image and in the second first image, respectively.
- the second image is an image of a certain stadium taken by the image acquisition device
- the third key point is multiple vertices of the stadium
- the vertices of the stadium included in the second first image may be used as the fourth key point.
- the third position coordinates of the third key point in the second image and the fourth position coordinates of the fourth key point in the second first image can be acquired.
- the current first image may also be any first image
- the next first image is an image adjacent to the current first image
- the present disclosure does not limit the current first image
- the image acquisition device rotates by a certain angle between acquiring the current first image and acquiring the next first image, that is, the pose of the image acquisition device changes. The third homography matrix between the current first image and the next first image can be determined according to the correspondence between the third key point and the fourth key point, and the reference pose of the next first image can then be determined according to the reference pose of the current first image and the third homography matrix.
- determining the third homography matrix between the current first image and the next first image according to the correspondence between the third key point and the fourth key point includes: determining the third homography matrix between the current first image and the next first image according to the third position coordinates of the third key point in the current first image and the fourth position coordinates of the fourth key point in the next first image.
- the third homography matrix between the current first image and the next first image may be determined according to the third position coordinates and the fourth position coordinates.
- a third homography matrix between the second image and the next first image can be determined.
- determining the reference pose corresponding to the next first image according to the third homography matrix and the reference pose corresponding to the current first image includes: decomposing the third homography matrix to determine the second pose change amount of the image acquisition device between acquiring the current first image and acquiring the next first image; and determining the reference pose corresponding to the next first image according to the reference pose corresponding to the current first image and the second pose change amount.
- the third homography matrix can be decomposed; for example, the third homography matrix can be decomposed into column vectors, a system of linear equations can be determined according to the column vectors of the third homography matrix, and the second pose change amount between the current first image and the next first image, for example, the change amount of the attitude angle, can be solved from this system of linear equations.
- the amount of change in the attitude angle of the image acquisition device between the second image being captured and the next first image may be determined.
- the reference pose corresponding to the next first image may be determined according to the reference pose corresponding to the current first image and the amount of change in the second pose.
- the attitude angle corresponding to the next first image can be determined by the reference pose and the amount of attitude angle change of the current first image, so as to obtain the reference pose corresponding to the next first image.
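- If, as in the rotating-camera setting described here, the image acquisition device only rotates (no translation) between the two images, the inter-image homography reduces to H = K·R_delta·K^(-1) up to scale, so the rotation change can be recovered directly. The following is a hedged sketch under that assumption; the yaw formula depends on the chosen axis convention.

```python
import numpy as np

def rotation_change_from_homography(H_third, K):
    """Sketch: recover the rotation change R_delta between the current and next first image
    from the third homography matrix, assuming a purely rotating camera."""
    M = np.linalg.inv(K) @ H_third @ K
    M /= np.cbrt(np.linalg.det(M))        # remove the unknown scale (a rotation has determinant 1)
    U, _, Vt = np.linalg.svd(M)           # project onto the nearest rotation matrix
    R_delta = U @ Vt
    if np.linalg.det(R_delta) < 0:
        R_delta = -R_delta
    yaw_change = np.degrees(np.arctan2(R_delta[0, 2], R_delta[2, 2]))  # attitude angle change
    return R_delta, yaw_change
```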
- the reference pose corresponding to the second first image may be determined according to the reference pose of the second image and the amount of change in the pose angle between the second image and the second first image.
- Further, the third homography matrix between the second first image and the third first image can be determined based on their key points in the above manner, and the reference pose of the third first image can be determined based on that third homography matrix and the reference pose of the second first image; the reference pose of the fourth first image is then obtained based on the reference pose of the third first image, and so on, until the reference poses of all the first images are obtained.
- That is, iterating in order from the first first image to the last first image yields the reference poses of all the first images.
- the second image may be any one of the first images.
- the reference poses of the two first images adjacent to the second image may be obtained first, and then, according to the reference poses of those two adjacent first images, the reference poses of the two first images adjacent to them are obtained, and so on, until the reference poses of all the first images are obtained.
- For example, the number of first images can be 10, and the second image is the fifth first image.
- the reference poses of the fourth first image and the sixth first image can be obtained according to the reference pose of the second image. Further, the reference poses of the third first image and the seventh first image can continue to be obtained...until the reference poses of all the first images are obtained.
- the reference pose of each first image can be obtained, and the reference poses of all the first images are iteratively determined according to the reference pose of the first first image, so that each first image does not need to be calibrated with a complex calibration method, which improves processing efficiency.
- the target pose of any image to be processed acquired by the image acquisition device may be determined, that is, the rotation matrix and displacement vector corresponding to the image to be processed may be acquired.
- the image acquisition device may acquire any image to be processed whose corresponding pose is unknown, that is, the pose of the image acquisition device when the image to be processed was taken is unknown; a reference image matching the image to be processed can be determined from the first images, and the pose corresponding to the image to be processed can be determined according to the pose corresponding to the reference image.
- Step S11 may include: performing feature extraction processing on the image to be processed and at least one first image, respectively, to obtain first feature information of the image to be processed and second feature information of each of the first images; The similarity between the first feature information and each of the second feature information determines the reference image from each of the first images.
- the image to be processed and each first image may be separately subjected to feature extraction processing through a convolutional neural network.
- the convolutional neural network may extract feature information of each image.
- That is, the first feature information of the image to be processed and the second feature information of each first image are obtained; the first feature information and the second feature information may include feature maps, feature vectors, etc.
- the present disclosure does not limit the feature information.
- the first feature information of the image to be processed and the second feature information of each of the first images can also be determined by parameters such as the chromaticity and brightness of the pixels of each first image and the image to be processed. The disclosure does not restrict the way of feature extraction processing.
- the similarity (for example, cosine similarity) between the first feature information and each piece of second feature information can be determined separately, for example, when the first feature information and the second feature information are both feature vectors.
- the cosine similarity between the first feature information and each piece of second feature information can be determined, and the first image whose second feature information has the largest cosine similarity with the first feature information is determined as the reference image, from which the reference pose of the reference image is obtained.
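- A minimal sketch of this selection step, assuming the first and second feature information are plain feature vectors:

```python
import numpy as np

def select_reference_image(first_feature, second_features):
    """Return the index of the first image whose second feature information has the largest
    cosine similarity with the first feature information of the image to be processed."""
    q = first_feature / np.linalg.norm(first_feature)
    similarities = [float(np.dot(f, q) / np.linalg.norm(f)) for f in second_features]
    return int(np.argmax(similarities)), similarities
```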
- the image to be processed and the reference image may be separately processed for key point extraction.
- the first key point in the image to be processed may be extracted through the convolutional neural network, and the second key point corresponding to the first key point may be obtained in the reference image.
- Alternatively, the first key point and the second key point can be determined from parameters such as the brightness and chromaticity of the pixels of the image to be processed and the reference image; the present disclosure does not restrict this.
- the target pose corresponding to the image to be processed may be determined according to the correspondence between the first key point and the second key point, and the reference pose corresponding to the reference image.
- Step S13 may include: determining the target pose of the image acquisition device when acquiring the image to be processed according to the first position coordinates of the first key point in the image to be processed, the second position coordinates of the second key point in the reference image, and the reference pose corresponding to the reference image. That is, the target pose corresponding to the image to be processed can be determined according to the position coordinates of the first key point, the position coordinates of the second key point, and the reference pose.
- determining the target pose of the image acquisition device when acquiring the image to be processed may include: determining the first homography matrix between the reference image and the image to be processed according to the first position coordinates and the second position coordinates; decomposing the first homography matrix to determine the first pose change amount of the image acquisition device between acquiring the reference image and acquiring the image to be processed; and determining the target pose according to the reference pose corresponding to the reference image and the first pose change amount.
- the first homography matrix between the reference image and the image to be processed may be determined according to the first position coordinates and the second position coordinates.
- the first homography matrix between the reference image and the image to be processed can be determined according to the correspondence between the first position coordinates and the second position coordinates of the first key point.
- the first homography matrix can be decomposed; for example, the first homography matrix can be decomposed into column vectors, a system of linear equations can be determined according to the column vectors of the first homography matrix, and the first pose change amount between the reference image and the image to be processed, for example, the change amount of the attitude angle, can be solved from this system of linear equations.
- the amount of change in the attitude angle of the image acquisition device between the shooting of the reference image and the image to be processed may be determined.
- the target pose corresponding to the image to be processed can be determined according to the reference pose corresponding to the reference image and the first pose change.
- the pose angle corresponding to the image to be processed can be determined by the reference pose and the amount of change in the pose angle of the reference image, so as to obtain the target pose corresponding to the image to be processed.
- the target pose of the image to be processed can be determined by the reference pose of the reference image matched with the image to be processed and the first homography matrix, without the need to calibrate the image to be processed, which improves processing efficiency.
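- For illustration only, a hedged end-to-end sketch of this step using OpenCV; the matched key point arrays, the internal parameter matrix K, and the reference pose (R_ref, T_ref) are assumed to be available from the earlier steps, and the choice among the candidate decompositions is application-specific.

```python
import numpy as np
import cv2

def target_pose(kp_reference, kp_to_process, K, R_ref, T_ref):
    """Sketch: estimate the target pose of the image to be processed from matched key points,
    the internal parameter matrix K, and the reference pose (R_ref, T_ref)."""
    # First homography matrix between the reference image and the image to be processed.
    H_first, _ = cv2.findHomography(kp_reference, kp_to_process, cv2.RANSAC, 3.0)
    # First pose change amount (the decomposition yields several candidate solutions).
    _, rotations, translations, _ = cv2.decomposeHomographyMat(H_first, K)
    R_delta, t_delta = rotations[0], translations[0].reshape(3)
    # Compose the target pose from the reference pose and the pose change.
    R_target = R_delta @ R_ref
    T_target = R_delta @ T_ref + t_delta
    return R_target, T_target
```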
- the feature extraction processing and the key point extraction processing are implemented by a convolutional neural network, and before the convolutional neural network is used for feature extraction processing and key point extraction processing, multi-task training is performed on the convolutional neural network, that is, the ability of the convolutional neural network to perform feature extraction processing and key point extraction processing is trained.
- FIG. 4 shows a flowchart of a method for determining a pose according to an embodiment of the present disclosure. As shown in FIG. 4, the method further includes:
- step S21 convolution processing is performed on the sample image through the convolution layer of the convolutional neural network to obtain a feature map of the sample image;
- step S22 perform convolution processing on the feature map to obtain feature information of the sample image respectively;
- step S23 perform key point extraction processing on the feature map to obtain key points of the sample image
- step S24 the convolutional neural network is trained according to the feature information and key points of the sample image.
- Fig. 5 shows a schematic diagram of neural network training according to an embodiment of the present disclosure. As shown in Figure 5, sample images can be used to train the convolutional neural network for feature extraction processing capabilities.
- the sample image may be convolved through the convolution layer of the convolutional neural network to obtain a feature map of the sample image.
- image pairs composed of sample images can be used to train the convolutional neural network.
- the similarity of the two sample images in each image pair can be labeled (for example, completely different images can be labeled as 0 and completely consistent images as 1), the feature maps of the two sample images in the sample image pair are extracted through the convolutional layer of the convolutional neural network, and in step S22, convolution processing is performed on the feature maps to obtain feature information (for example, feature vectors) of the two sample images of the sample image pair.
- a sample image with key point annotation information may be used to train the convolutional neural network to perform key point extraction processing.
- Step S23 may include: processing the feature map through the region candidate network of the convolutional neural network to obtain the region of interest; and performing processing on the region of interest through the region of interest pooling layer of the convolutional neural network Pooling, and convolution processing is performed through a convolution layer, and key points of the sample image are determined in the region of interest.
- the convolutional neural network may include a candidate region network (Region Proposal Network, RPN) and a region of interest (Region of Interest, ROI) pooling layer.
- the feature map can be processed by the region candidate network to obtain the region of interest, the region of interest in the sample image can be pooled by the region of interest pooling layer, and further, convolution processing can be performed by a 1×1 convolutional layer to determine the locations of key points (for example, position coordinates) in the region of interest.
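- A hedged PyTorch sketch of such a key point branch, assuming the regions of interest have already been produced by the region proposal network (layer sizes are illustrative assumptions):

```python
import torch.nn as nn
from torchvision.ops import roi_pool

class KeypointHead(nn.Module):
    """Pool each region of interest from the feature map, then apply a 1x1 convolution
    to produce per-location key point scores (sketch; sizes are assumptions)."""
    def __init__(self, in_channels=256, roi_size=14):
        super().__init__()
        self.roi_size = roi_size
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)  # the 1x1 convolutional layer

    def forward(self, feature_map, rois):
        # rois: (N, 5) tensor of [batch_index, x1, y1, x2, y2] boxes from the region candidate network.
        pooled = roi_pool(feature_map, rois,
                          output_size=(self.roi_size, self.roi_size), spatial_scale=1.0)
        return self.score(pooled)  # (N, 1, roi_size, roi_size) key point score maps
```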
- step S24 the convolutional neural network is trained according to the feature information and key points of the sample image.
- the cosine similarity between the feature information of the two sample images of the sample image pair can be determined.
- the cosine similarity output by the convolutional neural network may have errors, and the first loss function of the convolutional neural network in terms of feature extraction processing capability can be determined according to the difference between the cosine similarity output by the convolutional neural network and the labeled similarity of the two sample images.
- Similarly, the position coordinates of the key points output by the convolutional neural network may have errors, and the second loss function of the convolutional neural network in terms of key point extraction processing capability can be determined according to the error between the position coordinates of the key points output by the convolutional neural network and the key point annotation information.
- the overall loss function of the convolutional neural network can be determined according to the first loss function of the convolutional neural network in terms of feature extraction processing capability and the second loss function in terms of key point extraction processing capability, for example, by a weighted summation of the first loss function and the second loss function; the present disclosure does not limit the manner of determining the loss function of the convolutional neural network.
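- A minimal sketch of one such weighted multi-task objective (the MSE and smooth-L1 choices and the weights are assumptions, not specified by the disclosure):

```python
import torch.nn.functional as F

def multitask_loss(feat_a, feat_b, similarity_label, keypoint_pred, keypoint_label,
                   w_feature=1.0, w_keypoint=1.0):
    """Combine the first loss (feature extraction branch) and the second loss
    (key point extraction branch) by weighted summation."""
    cosine = F.cosine_similarity(feat_a, feat_b, dim=-1)
    first_loss = F.mse_loss(cosine, similarity_label)              # vs. labeled pair similarity
    second_loss = F.smooth_l1_loss(keypoint_pred, keypoint_label)  # vs. key point annotations
    return w_feature * first_loss + w_keypoint * second_loss
```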
- the network parameters of the convolutional neural network can be adjusted according to the loss function. For example, the network parameters of the convolutional neural network can be adjusted by a gradient descent method. The above processing can be performed iteratively until the training conditions are met.
- the processing of adjusting network parameters can be performed iteratively for a predetermined number of times.
- For example, the training condition is satisfied when the network parameters have been adjusted a predetermined number of times, or when the loss function of the network converges within a preset interval or is less than a preset threshold.
- When the convolutional neural network meets the training condition, the training of the convolutional neural network is completed.
- the convolutional neural network can be used in key point extraction processing and feature extraction processing.
- the convolutional neural network can convolve the input image to obtain the feature map of the input image, and perform convolution processing on the feature map to obtain the feature information of the input image .
- the region of interest of the feature map can also be obtained through the region candidate network, and the region of interest can be pooled by the region of interest pooling layer, and then the key points can be obtained in the region of interest.
- the region candidate network and the region of interest pooling layer can obtain the region of interest of the image input to the convolutional neural network during the training process or during key point extraction processing, and determine the key points within the region of interest, which improves the accuracy of key point determination and improves processing efficiency.
- multiple first images can be obtained during the rotation process, and the reference poses of all the first images can be iteratively determined according to the reference pose of the second image, without the need to calibrate each first image, which improves processing efficiency.
- a reference image matching the image to be processed can be selected from the first images, and the pose of the image to be processed can be determined according to the reference pose of the reference image and the first homography matrix, so that the corresponding pose can be obtained when the image acquisition device rotates or is displaced.
- the convolutional neural network can obtain the region of interest of the input image, and determine the key points in the region of interest, improve the accuracy of key point determination, and improve processing efficiency.
- Fig. 6 shows an application schematic diagram of a pose determination method according to an embodiment of the present disclosure.
- the image to be processed may be an image currently acquired by the image acquisition device, and the current pose of the image acquisition device can be determined according to the image to be processed.
- the image acquisition device may be rotated in the pitch direction and/or the yaw direction in advance, and a plurality of first images may be acquired during the rotation. The first of the multiple first images (the second image) can be calibrated by selecting multiple non-collinear target points in the second image and determining the second homography matrix according to the correspondence between the image position coordinates of the target points in the second image and the geographic location coordinates of the target points.
- the second homography matrix can be decomposed, and the least square solution of the internal parameter matrix of the image acquisition device can be obtained according to formula (4).
- the reference pose corresponding to the second image is determined by formula (1) or (2).
- key point extraction processing can be performed on the second image and the second first image through the convolutional neural network to obtain the third key point in the second image and the fourth key point in the second first image; the third homography matrix between the second image and the second first image is obtained according to the third key point and the fourth key point, and the reference pose of the second first image can be obtained from the reference pose corresponding to the second image and the third homography matrix.
- Further, the reference pose of the third first image can be obtained from the reference pose of the second first image and the third homography matrix between the second first image and the third first image.
- the above processing can be performed iteratively to determine the reference poses of all the first images.
- feature extraction processing can be performed separately on the image to be processed and each first image through a convolutional neural network to obtain the first feature information of the image to be processed and the second feature information of each first image; the cosine similarity between the first feature information and each piece of second feature information is then determined, and the first image whose second feature information has the largest cosine similarity with the first feature information is determined as the reference image matching the image to be processed.
- a convolutional neural network may be used to perform key point extraction processing on the image to be processed and the reference image, respectively, to obtain the first key point in the image to be processed and the second key point in the reference image corresponding to the first key point, and the first homography matrix between the reference image and the image to be processed is determined according to the first key point and the second key point.
- the target pose of the image to be processed can then be determined according to the reference pose of the reference image and the first homography matrix, that is, the pose of the image acquisition device when the image to be processed is captured (that is, the current pose).
- the pose determination method can determine the pose of the image acquisition device at any moment, and can also predict the visible area of the image acquisition device based on the pose. Further, the pose determination method can provide a basis for predicting the position of any point on the plane relative to the image acquisition device and the motion speed of a target object on the plane.
- the present disclosure also provides a pose determination device, electronic equipment, computer-readable storage medium, and a program, all of which can be used to implement any pose determination method provided in this disclosure.
- It can be understood that the order in which the steps are written does not imply a strict execution order and does not constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
- Fig. 7 shows a block diagram of a pose determination device according to an embodiment of the present disclosure. As shown in Figure 7, the device includes:
- the acquiring module 11 is configured to acquire a reference image matching the image to be processed, wherein the image to be processed and the reference image are acquired by an image acquisition device, the reference image has a corresponding reference pose, and the reference pose is used to indicate the pose of the image acquisition device when acquiring the reference image;
- the first extraction module 12 is configured to perform key point extraction processing on the image to be processed and the reference image, respectively, to obtain the first key point in the image to be processed and the second key point in the reference image corresponding to the first key point;
- the first determining module 13 is configured to determine, according to the correspondence between the first key point and the second key point and the reference pose corresponding to the reference image, the target pose of the image acquisition device when acquiring the image to be processed.
- the acquisition module is further configured to:
- perform feature extraction processing on the image to be processed and at least one first image, respectively, to obtain first feature information of the image to be processed and second feature information of each first image, wherein the at least one first image is sequentially acquired by the image acquisition device during rotation;
- determine the reference image from the first images according to the similarity between the first feature information and each piece of second feature information.
- the device further includes:
- the second determination module is used to determine the second homography matrix between the imaging plane and the geographic plane when the image acquisition device acquires the second image, and determine the internal parameter matrix of the image acquisition device, where
- the second image is any one of the multiple first images
- the geographic plane is a plane where the geographic location coordinates of the target point are located;
- a third determining module configured to determine a reference pose corresponding to the second image according to the internal parameter matrix and the second homography matrix
- the fourth determining module is configured to determine the reference pose corresponding to the at least one first image according to the reference pose corresponding to the second image.
- the second determining module is further configured to:
- determine the second homography matrix between the imaging plane and the geographic plane when the image acquisition device acquires the second image according to the image position coordinates and the geographic location coordinates of target points, wherein the target points are a plurality of non-collinear points in the second image; and perform decomposition processing on the second homography matrix to determine the internal parameter matrix of the image acquisition device.
- the third determining module is further configured to:
- the fourth determining module is further configured to:
- wherein the current first image is an image with a known reference pose among the plurality of first images, the current first image includes the second image, and the next first image is an image in the at least one first image that is adjacent to the current first image;
- the fourth determining module is further configured to: determine the third homography matrix between the current first image and the next first image according to the third position coordinates of the third key point in the current first image and the fourth position coordinates of the fourth key point in the next first image (a fitting sketch is given below);
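Purely as an illustration of homography fitting from matched key point coordinates (the disclosure does not prescribe a solver), a robust RANSAC fit with OpenCV could look like this:

```python
import numpy as np
import cv2

def homography_between_frames(third_pts: np.ndarray, fourth_pts: np.ndarray) -> np.ndarray:
    # third_pts:  Nx2 third key point coordinates in the current first image
    # fourth_pts: Nx2 corresponding fourth key point coordinates in the next first image
    H, mask = cv2.findHomography(third_pts.astype(np.float32),
                                 fourth_pts.astype(np.float32),
                                 method=cv2.RANSAC, ransacReprojThreshold=3.0)
    return H
```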
- the fourth determining module is further configured to: decompose the third homography matrix to determine a second pose change of the image acquisition device between acquiring the current first image and the next first image, and determine the reference pose corresponding to the next first image according to the reference pose corresponding to the current first image and the second pose change;
- the first determining module is further configured to: determine the target pose of the image acquisition device when acquiring the image to be processed according to the first position coordinates of the first key point in the image to be processed, the second position coordinates of the second key point in the reference image, and the reference pose corresponding to the reference image;
- the first determining module is further configured to: determine a first homography matrix between the reference image and the image to be processed according to the first position coordinates and the second position coordinates; decompose the first homography matrix to determine a first pose change of the image acquisition device between acquiring the image to be processed and the reference image; and determine the target pose according to the reference pose corresponding to the reference image and the first pose change (a composition sketch follows the pose definition below).
- the reference pose corresponding to the reference image includes the rotation matrix and displacement vector of the image acquisition device when acquiring the reference image, and the target pose corresponding to the image to be processed includes the rotation matrix and displacement vector of the image acquisition device when acquiring the image to be processed.
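A hedged sketch of composing the reference pose with a pose change extracted from the homography between the reference image and the image to be processed. It uses OpenCV's homography decomposition and a world-to-camera composition convention; selecting the physically valid candidate among the returned solutions, and the direction of the homography, are application details the disclosure leaves open.

```python
import numpy as np
import cv2

def compose_pose(H: np.ndarray, K: np.ndarray, R_ref: np.ndarray, t_ref: np.ndarray):
    """Combine the reference pose (R_ref, t_ref) with the pose change obtained by
    decomposing the homography H (reference image -> image to be processed),
    given the internal parameter matrix K."""
    _, rotations, translations, _ = cv2.decomposeHomographyMat(H, K)
    R_rel, t_rel = rotations[0], translations[0].ravel()  # candidate selection omitted
    R_target = R_rel @ R_ref                               # world-to-camera convention
    t_target = R_rel @ t_ref + t_rel
    return R_target, t_target
```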
- the feature extraction processing and the key point extraction processing are implemented by a convolutional neural network
- the device further includes:
- the first convolution module is configured to perform convolution processing on the sample image through the convolution layer of the convolutional neural network to obtain a feature map of the sample image;
- the second convolution module is configured to perform convolution processing on the feature map to obtain feature information of the sample image;
- the second extraction module is configured to perform key point extraction processing on the feature map to obtain key points of the sample image;
- the training module is used to train the convolutional neural network according to the feature information and key points of the sample image (a minimal training sketch is given below).
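The following PyTorch sketch shows one way such a two-head network could be trained: a shared backbone produces a feature map, one head yields global feature information for image matching, and the other a key point heatmap. Layer sizes, the placeholder labels and the losses are assumptions for illustration, not the architecture specified in the disclosure.

```python
import torch
import torch.nn as nn

class PoseFeatureNet(nn.Module):
    """Illustrative two-head network: feature information + key point heatmap."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.feature_head = nn.Sequential(nn.Conv2d(64, 128, 1), nn.AdaptiveAvgPool2d(1))
        self.keypoint_head = nn.Conv2d(64, 1, 1)  # per-pixel key point score map

    def forward(self, x):
        fmap = self.backbone(x)
        feat = self.feature_head(fmap).flatten(1)   # feature information
        heatmap = self.keypoint_head(fmap)          # key point heatmap
        return feat, heatmap

# One hypothetical training step on a small batch of sample images.
model = PoseFeatureNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.randn(2, 3, 128, 128)
target_heatmap = torch.rand(2, 1, 32, 32)           # placeholder key point labels
feat, heatmap = model(images)
loss = nn.functional.mse_loss(heatmap, target_heatmap) + 1e-4 * feat.pow(2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```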
- the second extraction module is further configured to: process the feature map through a region proposal network of the convolutional neural network to obtain a region of interest; and pool the region of interest through a region-of-interest pooling layer of the convolutional neural network and perform convolution processing through a convolution layer, to determine the key points of the sample image within the region of interest (a region-of-interest sketch follows below).
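A minimal sketch of the region-of-interest step, assuming the regions would normally come from a region proposal network (here a fixed box stands in for its output) and using torchvision's ROI pooling operator; the shapes and the final argmax readout are illustrative only.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align

feature_map = torch.randn(1, 64, 32, 32)               # backbone output (batch, C, H, W)
rois = torch.tensor([[0, 4.0, 4.0, 20.0, 20.0]])        # (batch_index, x1, y1, x2, y2)
pooled = roi_align(feature_map, rois, output_size=(7, 7), spatial_scale=1.0)
keypoint_conv = nn.Conv2d(64, 1, kernel_size=1)          # key point score map inside the ROI
roi_heatmap = keypoint_conv(pooled)
keypoint_index = roi_heatmap.flatten(1).argmax(dim=1)    # coarse key point location per ROI
```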
- the functions or modules contained in the device provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments; for brevity, details are not repeated here, and reference may be made to the description of the above method embodiments.
- the embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions implement the above-mentioned method when executed by a processor.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the above method.
- the electronic device can be provided as a terminal, server or other form of device.
- Fig. 8 is a block diagram showing an electronic device 800 according to an exemplary embodiment.
- the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
- the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
- the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
- the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
- the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
- the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of these data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
- the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
- the power supply component 806 provides power for various components of the electronic device 800.
- the power supply component 806 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 800.
- the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
- the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 810 is configured to output and/or input audio signals.
- the audio component 810 includes a microphone (MIC).
- the microphone is configured to receive external audio signals.
- the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
- the audio component 810 further includes a speaker for outputting audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
- the peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
- the sensor component 814 includes one or more sensors for providing the electronic device 800 with state assessments of various aspects.
- the sensor component 814 can detect the on/off status of the electronic device 800 and the relative positioning of components, for example, the display and keypad of the electronic device 800.
- the sensor component 814 can also detect a position change of the electronic device 800 or of a component of the electronic device 800, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and the temperature change of the electronic device 800.
- the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
- the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
- the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
- the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
- the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
- the electronic device 800 can be implemented by one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components to implement the above methods.
- a non-volatile computer-readable storage medium such as a memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
- the embodiments of the present disclosure also provide a computer program product, including computer readable code, and when the computer readable code runs on the device, the processor in the device executes instructions for implementing the method provided in any of the above embodiments.
- the computer program product can be specifically implemented by hardware, software or a combination thereof.
- the computer program product is specifically embodied as a computer storage medium.
- the computer program product is specifically embodied as a software product, such as a software development kit (SDK) or the like.
- Fig. 9 is a block diagram showing an electronic device 1900 according to an exemplary embodiment.
- the electronic device 1900 may be provided as a server.
- the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and a memory resource represented by the memory 1932, for storing instructions executable by the processing component 1922, such as application programs.
- the application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
- the processing component 1922 is configured to execute instructions to perform the above-described methods.
- the electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input output (I/O) interface 1958 .
- the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
- a non-volatile computer-readable storage medium such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the foregoing method.
- the present disclosure may be a system, method, and/or computer program product.
- the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present disclosure.
- the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- Computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disk (DVD), memory sticks, floppy disks, mechanical encoding devices such as a punch card with instructions stored thereon, and any suitable combination of the foregoing.
- the computer-readable storage medium used here is not to be interpreted as a transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
- the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device .
- the computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- Computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can be customized by using the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to realize various aspects of the present disclosure.
- These computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing device, thereby producing a machine, so that when these instructions are executed by the processor of the computer or other programmable data processing device, a device that implements the functions/actions specified in one or more blocks in the flowchart and/or block diagram is produced. These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions make computers, programmable data processing apparatuses, and/or other devices work in a specific manner, so that the computer-readable medium storing the instructions includes an article of manufacture that includes instructions for implementing various aspects of the functions/actions specified in one or more blocks in the flowchart and/or block diagram.
- each block in the flowchart or block diagram may represent a module, program segment, or part of an instruction, which contains one or more executable instructions for implementing the specified logical function.
- In some alternative implementations, the functions marked in the blocks may also occur in an order different from that marked in the drawings; for example, two consecutive blocks can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
- each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (29)
- A pose determination method, comprising: acquiring a reference image matching an image to be processed, wherein the image to be processed and the reference image are acquired by an image acquisition device, the reference image has a corresponding reference pose, and the reference pose is used to indicate the pose of the image acquisition device when acquiring the reference image; performing key point extraction processing on the image to be processed and the reference image respectively, to obtain a first key point in the image to be processed and a second key point corresponding to the first key point in the reference image; and determining, according to the correspondence between the first key point and the second key point and the reference pose corresponding to the reference image, the target pose of the image acquisition device when acquiring the image to be processed.
- The method according to claim 1, wherein acquiring the reference image matching the image to be processed comprises: performing feature extraction processing on the image to be processed and at least one first image respectively, to obtain first feature information of the image to be processed and second feature information of each first image, the at least one first image being acquired sequentially by the image acquisition device during rotation; and determining the reference image from the first images according to the similarity between the first feature information and each piece of second feature information.
- The method according to claim 2, further comprising: determining a second homography matrix between the imaging plane and the geographic plane when the image acquisition device acquires a second image, and determining an internal parameter matrix of the image acquisition device, wherein the second image is any one of the plurality of first images and the geographic plane is the plane in which the geographic location coordinates of target points lie; determining a reference pose corresponding to the second image according to the internal parameter matrix and the second homography matrix; and determining the reference pose corresponding to the at least one first image according to the reference pose corresponding to the second image.
- The method according to claim 3, wherein determining the second homography matrix between the imaging plane and the geographic plane when the image acquisition device acquires the second image, and determining the internal parameter matrix of the image acquisition device, comprise: determining, according to the image position coordinates and geographic location coordinates of target points in the second image, the second homography matrix between the imaging plane and the geographic plane when the image acquisition device acquires the second image, wherein the target points are a plurality of non-collinear points in the second image; and decomposing the second homography matrix to determine the internal parameter matrix of the image acquisition device.
- The method according to claim 4, wherein determining the reference pose corresponding to the second image according to the internal parameter matrix and the second homography matrix comprises: determining an external parameter matrix corresponding to the second image according to the internal parameter matrix of the image acquisition device and the second homography matrix; and determining the reference pose corresponding to the second image according to the external parameter matrix corresponding to the second image.
- The method according to claim 3, wherein determining the reference pose corresponding to the at least one first image according to the reference pose corresponding to the second image comprises: performing key point extraction processing on a current first image and a next first image respectively, to obtain a third key point in the current first image and a fourth key point corresponding to the third key point in the next first image, the current first image being an image with a known reference pose among the plurality of first images and including the second image, and the next first image being an image adjacent to the current first image among the at least one first image; determining a third homography matrix between the current first image and the next first image according to the correspondence between the third key point and the fourth key point; and determining the reference pose corresponding to the next first image according to the third homography matrix and the reference pose corresponding to the current first image.
- The method according to claim 6, wherein determining the third homography matrix between the current first image and the next first image according to the correspondence between the third key point and the fourth key point comprises: determining the third homography matrix between the current first image and the next first image according to third position coordinates of the third key point in the current first image and fourth position coordinates of the fourth key point in the next first image.
- The method according to claim 6, wherein determining the reference pose corresponding to the next first image according to the third homography matrix and the reference pose corresponding to the current first image comprises: decomposing the third homography matrix to determine a second pose change of the image acquisition device between acquiring the current first image and acquiring the next first image; and determining the reference pose corresponding to the next first image according to the reference pose corresponding to the current first image and the second pose change.
- The method according to claim 1, wherein determining the target pose of the image acquisition device when acquiring the image to be processed according to the correspondence between the first key point and the second key point and the reference pose corresponding to the reference image comprises: determining the target pose of the image acquisition device when acquiring the image to be processed according to first position coordinates of the first key point in the image to be processed, second position coordinates of the second key point in the reference image, and the reference pose corresponding to the reference image.
- The method according to claim 9, wherein determining the target pose of the image acquisition device when acquiring the image to be processed according to the first position coordinates of the first key point in the image to be processed, the second position coordinates of the second key point in the reference image, and the reference pose corresponding to the reference image comprises: determining a first homography matrix between the reference image and the image to be processed according to the first position coordinates and the second position coordinates; decomposing the first homography matrix to determine a first pose change of the image acquisition device between acquiring the image to be processed and acquiring the reference image; and determining the target pose according to the reference pose corresponding to the reference image and the first pose change.
- The method according to any one of claims 1-10, wherein the reference pose corresponding to the reference image comprises the rotation matrix and displacement vector of the image acquisition device when acquiring the reference image, and the target pose corresponding to the image to be processed comprises the rotation matrix and displacement vector of the image acquisition device when acquiring the image to be processed.
- The method according to any one of claims 1-10, wherein the feature extraction processing and the key point extraction processing are implemented by a convolutional neural network, and the method further comprises: performing convolution processing on a sample image through a convolution layer of the convolutional neural network to obtain a feature map of the sample image; performing convolution processing on the feature map to obtain feature information of the sample image; performing key point extraction processing on the feature map to obtain key points of the sample image; and training the convolutional neural network according to the feature information and key points of the sample image.
- The method according to claim 12, wherein performing key point extraction processing on the feature map to obtain the key points of the sample image comprises: processing the feature map through a region proposal network of the convolutional neural network to obtain a region of interest; and pooling the region of interest through a region-of-interest pooling layer of the convolutional neural network and performing convolution processing through a convolution layer, to determine the key points of the sample image within the region of interest.
- A pose determination device, comprising: an acquiring module configured to acquire a reference image matching an image to be processed, wherein the image to be processed and the reference image are acquired by an image acquisition device, the reference image has a corresponding reference pose, and the reference pose is used to indicate the pose of the image acquisition device when acquiring the reference image; a first extraction module configured to perform key point extraction processing on the image to be processed and the reference image respectively, to obtain a first key point in the image to be processed and a second key point corresponding to the first key point in the reference image; and a first determining module configured to determine, according to the correspondence between the first key point and the second key point and the reference pose corresponding to the reference image, the target pose of the image acquisition device when acquiring the image to be processed.
- The device according to claim 14, wherein the acquiring module is further configured to: perform feature extraction processing on the image to be processed and at least one first image respectively, to obtain first feature information of the image to be processed and second feature information of each first image, the at least one first image being acquired sequentially by the image acquisition device during rotation; and determine the reference image from the first images according to the similarity between the first feature information and each piece of second feature information.
- The device according to claim 15, further comprising: a second determining module configured to determine a second homography matrix between the imaging plane and the geographic plane when the image acquisition device acquires a second image, and to determine an internal parameter matrix of the image acquisition device, wherein the second image is any one of the plurality of first images and the geographic plane is the plane in which the geographic location coordinates of target points lie; a third determining module configured to determine a reference pose corresponding to the second image according to the internal parameter matrix and the second homography matrix; and a fourth determining module configured to determine the reference pose corresponding to the at least one first image according to the reference pose corresponding to the second image.
- The device according to claim 16, wherein the second determining module is further configured to: determine, according to the image position coordinates and geographic location coordinates of target points in the second image, the second homography matrix between the imaging plane and the geographic plane when the image acquisition device acquires the second image, wherein the target points are a plurality of non-collinear points in the second image; and decompose the second homography matrix to determine the internal parameter matrix of the image acquisition device.
- The device according to claim 17, wherein the third determining module is further configured to: determine an external parameter matrix corresponding to the second image according to the internal parameter matrix of the image acquisition device and the second homography matrix; and determine the reference pose corresponding to the second image according to the external parameter matrix corresponding to the second image.
- The device according to claim 16, wherein the fourth determining module is further configured to: perform key point extraction processing on a current first image and a next first image respectively, to obtain a third key point in the current first image and a fourth key point corresponding to the third key point in the next first image, the current first image being an image with a known reference pose among the plurality of first images and including the second image, and the next first image being an image adjacent to the current first image among the at least one first image; determine a third homography matrix between the current first image and the next first image according to the correspondence between the third key point and the fourth key point; and determine the reference pose corresponding to the next first image according to the third homography matrix and the reference pose corresponding to the current first image.
- The device according to claim 19, wherein the fourth determining module is further configured to: determine the third homography matrix between the current first image and the next first image according to third position coordinates of the third key point in the current first image and fourth position coordinates of the fourth key point in the next first image.
- The device according to claim 19, wherein the fourth determining module is further configured to: decompose the third homography matrix to determine a second pose change of the image acquisition device between acquiring the current first image and acquiring the next first image; and determine the reference pose corresponding to the next first image according to the reference pose corresponding to the current first image and the second pose change.
- The device according to claim 14, wherein the first determining module is further configured to: determine the target pose of the image acquisition device when acquiring the image to be processed according to first position coordinates of the first key point in the image to be processed, second position coordinates of the second key point in the reference image, and the reference pose corresponding to the reference image.
- The device according to claim 22, wherein the first determining module is further configured to: determine a first homography matrix between the reference image and the image to be processed according to the first position coordinates and the second position coordinates; decompose the first homography matrix to determine a first pose change of the image acquisition device between acquiring the image to be processed and acquiring the reference image; and determine the target pose according to the reference pose corresponding to the reference image and the first pose change.
- The device according to any one of claims 14-23, wherein the reference pose corresponding to the reference image comprises the rotation matrix and displacement vector of the image acquisition device when acquiring the reference image, and the target pose corresponding to the image to be processed comprises the rotation matrix and displacement vector of the image acquisition device when acquiring the image to be processed.
- The device according to any one of claims 14-23, wherein the feature extraction processing and the key point extraction processing are implemented by a convolutional neural network, and the device further comprises: a first convolution module configured to perform convolution processing on a sample image through a convolution layer of the convolutional neural network to obtain a feature map of the sample image; a second convolution module configured to perform convolution processing on the feature map to obtain feature information of the sample image; a second extraction module configured to perform key point extraction processing on the feature map to obtain key points of the sample image; and a training module configured to train the convolutional neural network according to the feature information and key points of the sample image.
- The device according to claim 25, wherein the second extraction module is further configured to: process the feature map through a region proposal network of the convolutional neural network to obtain a region of interest; and pool the region of interest through a region-of-interest pooling layer of the convolutional neural network and perform convolution processing through a convolution layer, to determine the key points of the sample image within the region of interest.
- An electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to perform the method according to any one of claims 1 to 13.
- A computer-readable storage medium on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 13.
- A computer program, comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes the method according to any one of claims 1 to 13.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021578183A JP2022540072A (ja) | 2019-07-31 | 2019-12-06 | 位置姿勢決定方法及び装置、電子機器並びに記憶媒体 |
US17/563,744 US20220122292A1 (en) | 2019-07-31 | 2021-12-28 | Pose determination method and device, electronic device and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910701860.0A CN110473259A (zh) | 2019-07-31 | 2019-07-31 | 位姿确定方法及装置、电子设备和存储介质 |
CN201910701860.0 | 2019-07-31 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/563,744 Continuation US20220122292A1 (en) | 2019-07-31 | 2021-12-28 | Pose determination method and device, electronic device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021017358A1 true WO2021017358A1 (zh) | 2021-02-04 |
Family
ID=68509631
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/123646 WO2021017358A1 (zh) | 2019-07-31 | 2019-12-06 | 位姿确定方法及装置、电子设备和存储介质 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20220122292A1 (zh) |
JP (1) | JP2022540072A (zh) |
CN (1) | CN110473259A (zh) |
TW (1) | TWI753348B (zh) |
WO (1) | WO2021017358A1 (zh) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110473259A (zh) * | 2019-07-31 | 2019-11-19 | 深圳市商汤科技有限公司 | 位姿确定方法及装置、电子设备和存储介质 |
CN111028283B (zh) * | 2019-12-11 | 2024-01-12 | 北京迈格威科技有限公司 | 图像检测方法、装置、设备及可读存储介质 |
CN115039015A (zh) * | 2020-02-19 | 2022-09-09 | Oppo广东移动通信有限公司 | 位姿跟踪方法、可穿戴设备、移动设备以及存储介质 |
CN111523485A (zh) * | 2020-04-24 | 2020-08-11 | 浙江商汤科技开发有限公司 | 位姿识别方法及装置、电子设备和存储介质 |
CN111552757B (zh) * | 2020-04-30 | 2022-04-01 | 上海商汤临港智能科技有限公司 | 生成电子地图的方法、装置、设备及存储介质 |
CN111709428B (zh) * | 2020-05-29 | 2023-09-15 | 北京百度网讯科技有限公司 | 图像中关键点位置的识别方法、装置、电子设备及介质 |
CN111882605A (zh) * | 2020-06-30 | 2020-11-03 | 浙江大华技术股份有限公司 | 监控设备图像坐标转换方法、装置和计算机设备 |
CN112328715B (zh) * | 2020-10-16 | 2022-06-03 | 浙江商汤科技开发有限公司 | 视觉定位方法及相关模型的训练方法及相关装置、设备 |
CN114640785A (zh) * | 2020-12-16 | 2022-06-17 | 华为技术有限公司 | 站点模型更新方法及系统 |
CN113240739B (zh) * | 2021-04-29 | 2023-08-11 | 三一重机有限公司 | 一种挖掘机、属具的位姿检测方法、装置及存储介质 |
CN113674352A (zh) * | 2021-07-28 | 2021-11-19 | 浙江大华技术股份有限公司 | 开关状态检测方法、电子装置和存储介质 |
CN115359132B (zh) * | 2022-10-21 | 2023-03-24 | 小米汽车科技有限公司 | 用于车辆的相机标定方法、装置、电子设备及存储介质 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108062776A (zh) * | 2018-01-03 | 2018-05-22 | 百度在线网络技术(北京)有限公司 | 相机姿态跟踪方法和装置 |
CN108734736A (zh) * | 2018-05-22 | 2018-11-02 | 腾讯科技(深圳)有限公司 | 相机姿态追踪方法、装置、设备及存储介质 |
CN109697734A (zh) * | 2018-12-25 | 2019-04-30 | 浙江商汤科技开发有限公司 | 位姿估计方法及装置、电子设备和存储介质 |
CN109829947A (zh) * | 2019-02-25 | 2019-05-31 | 北京旷视科技有限公司 | 位姿确定方法、托盘装载方法、装置、介质及电子设备 |
US20190197713A1 (en) * | 2017-12-27 | 2019-06-27 | Interdigital Ce Patent Holdings | Method and apparatus for depth-map estimation |
CN109948624A (zh) * | 2019-02-18 | 2019-06-28 | 北京旷视科技有限公司 | 特征提取的方法、装置、电子设备和计算机存储介质 |
CN110473259A (zh) * | 2019-07-31 | 2019-11-19 | 深圳市商汤科技有限公司 | 位姿确定方法及装置、电子设备和存储介质 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7844106B2 (en) * | 2007-04-23 | 2010-11-30 | Mitsubishi Electric Research Laboratories, Inc | Method and system for determining poses of objects from range images using adaptive sampling of pose spaces |
EP2884459A4 (en) * | 2012-08-10 | 2016-03-30 | Konica Minolta Inc | PICTURE PROCESSING DEVICE, PICTURE PROCESSING METHOD AND PICTURE PROCESSING PROGRAM |
US20190122035A1 (en) * | 2016-03-28 | 2019-04-25 | Beijing Sensetime Technology Development Co., Ltd | Method and system for pose estimation |
CN108230437B (zh) * | 2017-12-15 | 2021-11-09 | 深圳市商汤科技有限公司 | 场景重建方法和装置、电子设备、程序和介质 |
JP6943183B2 (ja) * | 2018-01-05 | 2021-09-29 | オムロン株式会社 | 位置特定装置、位置特定方法、位置特定プログラムおよびカメラ装置 |
CN108364302B (zh) * | 2018-01-31 | 2020-09-22 | 华南理工大学 | 一种无标记的增强现实多目标注册跟踪方法 |
CN109344882B (zh) * | 2018-09-12 | 2021-05-25 | 浙江科技学院 | 基于卷积神经网络的机器人控制目标位姿识别方法 |
CN109671119A (zh) * | 2018-11-07 | 2019-04-23 | 中国科学院光电研究院 | 一种基于slam的室内定位方法及装置 |
CN109949361A (zh) * | 2018-12-16 | 2019-06-28 | 内蒙古工业大学 | 一种基于单目视觉定位的旋翼无人机姿态估计方法 |
-
2019
- 2019-07-31 CN CN201910701860.0A patent/CN110473259A/zh active Pending
- 2019-12-06 WO PCT/CN2019/123646 patent/WO2021017358A1/zh active Application Filing
- 2019-12-06 JP JP2021578183A patent/JP2022540072A/ja active Pending
-
2020
- 2020-01-06 TW TW109100345A patent/TWI753348B/zh not_active IP Right Cessation
-
2021
- 2021-12-28 US US17/563,744 patent/US20220122292A1/en not_active Abandoned
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190197713A1 (en) * | 2017-12-27 | 2019-06-27 | Interdigital Ce Patent Holdings | Method and apparatus for depth-map estimation |
CN108062776A (zh) * | 2018-01-03 | 2018-05-22 | 百度在线网络技术(北京)有限公司 | 相机姿态跟踪方法和装置 |
CN108734736A (zh) * | 2018-05-22 | 2018-11-02 | 腾讯科技(深圳)有限公司 | 相机姿态追踪方法、装置、设备及存储介质 |
CN109697734A (zh) * | 2018-12-25 | 2019-04-30 | 浙江商汤科技开发有限公司 | 位姿估计方法及装置、电子设备和存储介质 |
CN109948624A (zh) * | 2019-02-18 | 2019-06-28 | 北京旷视科技有限公司 | 特征提取的方法、装置、电子设备和计算机存储介质 |
CN109829947A (zh) * | 2019-02-25 | 2019-05-31 | 北京旷视科技有限公司 | 位姿确定方法、托盘装载方法、装置、介质及电子设备 |
CN110473259A (zh) * | 2019-07-31 | 2019-11-19 | 深圳市商汤科技有限公司 | 位姿确定方法及装置、电子设备和存储介质 |
Also Published As
Publication number | Publication date |
---|---|
TW202107339A (zh) | 2021-02-16 |
TWI753348B (zh) | 2022-01-21 |
US20220122292A1 (en) | 2022-04-21 |
JP2022540072A (ja) | 2022-09-14 |
CN110473259A (zh) | 2019-11-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021017358A1 (zh) | 位姿确定方法及装置、电子设备和存储介质 | |
WO2021051857A1 (zh) | 目标对象匹配方法及装置、电子设备和存储介质 | |
CN109522910B (zh) | 关键点检测方法及装置、电子设备和存储介质 | |
WO2021008023A1 (zh) | 图像处理方法及装置、电子设备和存储介质 | |
US20210097715A1 (en) | Image generation method and device, electronic device and storage medium | |
TWI706379B (zh) | 圖像處理方法及裝置、電子設備和儲存介質 | |
WO2020135529A1 (zh) | 位姿估计方法及装置、电子设备和存储介质 | |
TWI773945B (zh) | 錨點確定方法、電子設備和儲存介質 | |
CN110503689B (zh) | 位姿预测方法、模型训练方法及装置 | |
TW202036464A (zh) | 文本識別方法及裝置、電子設備和儲存介質 | |
US9959484B2 (en) | Method and apparatus for generating image filter | |
TWI724712B (zh) | 圖像處理方法、電子設備和儲存介質 | |
WO2020181728A1 (zh) | 图像处理方法及装置、电子设备和存储介质 | |
WO2021012564A1 (zh) | 视频处理方法及装置、电子设备和存储介质 | |
CN111462238B (zh) | 姿态估计优化方法、装置及存储介质 | |
WO2023103377A1 (zh) | 标定方法及装置、电子设备、存储介质及计算机程序产品 | |
TW202141428A (zh) | 場景深度和相機運動預測方法、電子設備和電腦可讀儲存介質 | |
CN110532956B (zh) | 图像处理方法及装置、电子设备和存储介质 | |
TWI778313B (zh) | 圖像處理方法、電子設備和儲存介質 | |
WO2022179013A1 (zh) | 目标定位方法、装置、电子设备、存储介质及程序 | |
WO2022247091A1 (zh) | 人群定位方法及装置、电子设备和存储介质 | |
CN113139484B (zh) | 人群定位方法及装置、电子设备和存储介质 | |
WO2022141969A1 (zh) | 图像分割方法及装置、电子设备、存储介质和程序 | |
CN113538310A (zh) | 图像处理方法及装置、电子设备和存储介质 | |
CN111311588B (zh) | 重定位方法及装置、电子设备和存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19939113 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021578183 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19939113 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09.08.2022) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19939113 Country of ref document: EP Kind code of ref document: A1 |