WO2023015903A1 - Method, device, electronic device and storage medium for three-dimensional pose adjustment - Google Patents

Method, device, electronic device and storage medium for three-dimensional pose adjustment

Info

Publication number
WO2023015903A1
WO2023015903A1 · PCT/CN2022/083749 · CN2022083749W
Authority
WO
WIPO (PCT)
Prior art keywords
key point
target
key
information
feature information
Prior art date
Application number
PCT/CN2022/083749
Other languages
English (en)
French (fr)
Inventor
吴思泽
金晟
刘文韬
钱晨
Original Assignee
上海商汤智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023015903A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30244: Camera pose

Definitions

  • Embodiments of the present disclosure relate to the technical field of computer vision, and in particular to a method, device, electronic device, and storage medium for three-dimensional pose adjustment.
  • Three-dimensional (3D) human body pose estimation refers to estimating the pose of a human target from an image, video, or point cloud, and is often used in various industrial fields such as human body reconstruction, human-computer interaction, behavior recognition, and game modeling.
  • The related art provides a 3D human body pose estimation scheme that is based on 3D space voxelization for multi-view feature extraction and detects key points through a convolutional neural network (CNN).
  • Spatial voxelization divides the 3D space equidistantly into grids of equal size, and the voxelized multi-view image features can be used as the input of a 3D convolution.
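  • As a minimal illustration of such voxelization (not the disclosure's exact procedure), the sketch below divides 3D space into an equidistant grid and shows the quantization error incurred by snapping a point to its voxel center; all function names and the step size are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch of equidistant spatial voxelization (hypothetical names).
def voxelize(points, origin, step):
    """Map (N, 3) points to integer indices of an equal-size voxel grid."""
    return np.floor((points - origin) / step).astype(int)

def voxel_center(indices, origin, step):
    """Center coordinate of each voxel; snapping a point to it loses up to
    step / 2 per axis -- the quantization error mentioned in the text."""
    return origin + (indices + 0.5) * step

points = np.array([[0.26, 0.74, 1.31]])
origin = np.zeros(3)
idx = voxelize(points, origin, step=0.5)       # -> [[0, 1, 2]]
center = voxel_center(idx, origin, step=0.5)   # -> [[0.25, 0.75, 1.25]]
err = np.abs(points - center).max()            # bounded by step / 2 = 0.25
```

A larger step size directly enlarges this error bound, which is why coarser voxelization degrades the precision of the estimated pose.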
  • The spatial voxelization in the above 3D human pose estimation scheme introduces quantization errors. Moreover, if a larger step size is selected for voxelization, the quantization error increases further, so that the determined 3D pose has low precision and accuracy.
  • Embodiments of the present disclosure at least provide a method, device, electronic device, and storage medium for 3D pose adjustment, so as to improve the precision and accuracy of 3D pose estimation.
  • an embodiment of the present disclosure provides a method for adjusting a three-dimensional pose, the method being executed by an electronic device and including: acquiring the three-dimensional coordinates to be adjusted of multiple key points of a target object in a target voxel space;
  • determining, based on the three-dimensional coordinates to be adjusted, key point feature information obtained by projecting the multiple key points into multiple target images; the multiple target images are target images obtained by shooting the target object under multiple viewing angles;
  • determining the three-dimensional pose information of the target object based on the pre-built key point connection relationship information corresponding to the target object and the key point feature information corresponding to the multiple key points under the multiple viewing angles.
  • In this way, the embodiment of the present disclosure can determine the connection relationships of multiple key points under different viewing angles by using the key point feature information of the multiple key points under those viewing angles, and these connection relationships help determine more accurate key point feature information. In addition, the pre-built key point connection relationship information can constrain the connection relationships between key points, so that the determined key point feature information is more accurate, which in turn improves the precision and accuracy of the determined 3D pose information.
  • the determining, based on the three-dimensional coordinates to be adjusted, the key point feature information obtained by projecting the multiple key points in multiple target images respectively includes:
  • In this way, the key point feature information matching a key point can be determined based on the correspondence between the two-dimensional projection point information of the key point in the multiple target images and the image features, and the operation is simple.
  • the two-dimensional projection point information includes image position information of the two-dimensional projection point; the extracting, based on the two-dimensional projection point information of the key point in the multiple target images, of the key point feature information matching the key point from the image features respectively corresponding to the multiple target images includes:
  • determining the extracted image feature corresponding to the image position information as the key point feature information matching the key point.
  • the determining of the three-dimensional pose information of the target object includes:
  • the other key points are key points associated with the key point;
  • the three-dimensional pose information of the target object is determined based on the updated key point feature information corresponding to the plurality of key points and the pre-built key point connection relationship information corresponding to the target object.
  • In this way, the key point feature information of each key point under different viewing angles, together with the key point feature information of other key points associated with that key point, can be used to update the key point's feature information. The updated feature information thus incorporates, to a certain extent, the features of other key points within a view as well as the features of the same key point across different views, making the key point features more accurate and therefore the determined 3D pose information more accurate.
  • the updated key point feature information of the key point under different viewing angles is determined, including:
  • the key point feature information of the key point under different viewing angles is first updated to obtain the first-updated key point feature information;
  • a second update is performed on the key point feature information of the key point under the target viewing angle to obtain the second-updated key point feature information;
  • the other key point belongs to the same target viewing angle as the key point, and has a second connection relationship with the key point;
  • the determining of the three-dimensional pose information of the target object includes:
  • the key point feature information of the key point under different viewing angles is fused to obtain the fused key point feature information corresponding to the key point;
  • the three-dimensional pose information of the target object is determined based on the pre-built key point connection relationship information corresponding to the target object and the fused key point feature information respectively corresponding to the multiple key points.
  • the determined fused key point feature information can take into account the features of different viewing angles, further improving the accuracy of the three-dimensional pose information.
  • the key point feature information includes key point feature values in multiple dimensions; the fusing of the key point feature information of the key point under different viewing angles to obtain the fused key point feature information corresponding to the key point includes:
  • the fused key point feature information corresponding to the key point is determined.
  • the determining the fused key point feature value corresponding to the dimension based on the determined feature values of the multiple key points includes at least one of the following:
  • the determining of the 3D pose information of the target object based on the pre-built key point connection relationship information corresponding to the target object and the fused key point feature information respectively corresponding to the multiple key points includes:
  • In this way, the fused key point feature information respectively corresponding to the multiple key points can be updated to obtain updated fused key point feature information; that is, the fused key point feature information can be calibrated by using the pre-built third connection relationship, so that the determined three-dimensional pose is more accurate.
  • each of the multiple key points of the target object is used as a first key point, and each key point having the third connection relationship with it is used as a second key point;
  • the second key point is a human skeleton point
  • the first key point includes at least one of the following: human skeleton points, human body marker points.
  • the determining the 3D pose information of the target object based on the updated fused key point feature information includes:
  • inputting the updated fused key point feature information into a pre-trained target pose recognition network, and outputting pose deviation information;
  • the pose deviation information is used to represent the deviation between the current pose of the target object and the pose to be adjusted;
  • the acquiring the three-dimensional coordinates to be adjusted of the multiple key points of the target object in the target voxel space includes at least one of the following:
  • Obtaining depth information respectively returned by multiple detection rays emitted by a radio device, and determining, based on the depth information, the three-dimensional coordinates to be adjusted of multiple key points of the target object in the target voxel space.
  • each of the target images acquired for determining the three-dimensional coordinates to be adjusted is used as a first target image, and each of the multiple target images used for the key point projection is used as a second target image;
  • at least part of the first target images are the same as at least part of the second target images; or,
  • the first target images have no image in common with the second target images.
  • the embodiment of the present disclosure also provides a three-dimensional pose adjustment device, the device including:
  • the obtaining part is configured to obtain the three-dimensional coordinates to be adjusted, in a target voxel space, of multiple key points of a target object;
  • the determining part is configured to determine, based on the three-dimensional coordinates to be adjusted, key point feature information obtained by projecting the multiple key points into multiple target images; the multiple target images are target images obtained by shooting the target object under multiple viewing angles;
  • the adjusting part is configured to determine the three-dimensional pose information of the target object based on the pre-built key point connection relationship information corresponding to the target object and the key point feature information corresponding to the multiple key points under the multiple viewing angles.
  • an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory, and a bus; the memory stores machine-readable instructions executable by the processor; when the electronic device is running, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the method for adjusting a three-dimensional pose described in the first aspect or any of its implementation modes are executed.
  • the embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the method for three-dimensional pose adjustment described in the first aspect or any of its implementation modes are executed.
  • the embodiment of the present disclosure also provides a computer program product; the computer program product includes a computer program or instructions, and when the computer program or instructions are run on a computer, the computer is caused to execute the method described in the first aspect.
  • FIG. 1 shows a flowchart of a method for adjusting a three-dimensional pose provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of the application of a method for adjusting a three-dimensional posture provided by an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of a three-dimensional posture adjustment device provided by an embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
  • embodiments of the present disclosure provide a method, device, electronic device and storage medium for 3D pose adjustment, so as to improve the precision and accuracy of 3D pose estimation.
  • the execution subject of the method for adjusting a three-dimensional pose provided by an embodiment of the present disclosure is generally an electronic device with certain computing capability.
  • the electronic device includes, for example, a terminal device, a server, or other processing device; the terminal device may be user equipment (User Equipment, UE), a mobile device, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, an in-vehicle device, a wearable device, etc.
  • the method for adjusting the three-dimensional pose may be implemented by a processor invoking computer-readable instructions stored in a memory.
  • FIG. 1 is a flow chart of a method for three-dimensional posture adjustment provided by an embodiment of the present disclosure
  • the method includes steps S101 to S103, wherein:
  • S101 Acquire the three-dimensional coordinates to be adjusted of multiple key points of a target object in a target voxel space;
  • S102 Based on the three-dimensional coordinates to be adjusted, determine key point feature information obtained by projecting the multiple key points into multiple target images; the multiple target images are target images obtained by shooting the target object under multiple viewing angles;
  • S103 Based on the pre-built key point connection relationship information corresponding to the target object, and the key point feature information of the target image corresponding to the multiple key points at multiple viewing angles, determine the three-dimensional pose information of the target object.
  • The 3D pose adjustment method in the embodiments of the present disclosure can be applied to any application scenario that requires 3D pose adjustment; for example, in the field of automatic driving, adjusting the 3D pose of pedestrians in front of the self-driving vehicle, or, in the field of intelligent security, adjusting the 3D pose of road vehicles. The embodiment of the present disclosure does not specifically limit the application scenario.
  • Considering that the accuracy of the 3D pose determined by combining voxelization and a CNN network in the related art is often limited by the quantization error of voxelization, and that even a 3D pose determined by other vehicle-mounted equipment, such as radio equipment, may have low precision and accuracy due to various adverse factors, the embodiments of the present disclosure provide a scheme for adjusting the 3D pose by combining the pre-built key point connection relationship information with the key point feature information of multiple key points under different viewing angles.
  • This improves the precision and accuracy of the 3D pose, so that the scheme can be better applied to various practical scenarios.
  • the three-dimensional coordinates to be adjusted in the embodiments of the present disclosure may be the initial three-dimensional coordinates of multiple key points of the same target object.
  • the above-mentioned three-dimensional coordinates to be adjusted may be determined from multiple target images based on voxelization and CNN network detection, or obtained from multiple target images based on epipolar distance calculation followed by 3D reconstruction, or calculated based on the depth information detected by synchronously working radio devices, or determined by other methods, which are not specifically limited in this embodiment of the present disclosure.
  • the target image selected in the case of determining the three-dimensional coordinates to be adjusted and the target image for subsequent key point projection may be images obtained by photographing the same target object.
  • the images may be completely the same, may be partly the same, or may be completely different images.
  • Each target image selected for determining the three-dimensional coordinates to be adjusted is used as a first target image, and each target image used for key point projection is used as a second target image. At least part of the first target images may be the same as at least part of the second target images; here "part" may mean some or all of the images, and "the same" may refer to overlapping images with the same shooting viewing angle. Alternatively, the first target images and the second target images may have no image in common; that is, although both are images taken of the target object in a certain posture, the shooting viewing angles adopted are different.
  • the multiple key points of the target object may correspond to the key nodes of the target object.
  • The key points here may be the human skeleton points corresponding to the human skeleton, or they may be marker points identified on the human body.
  • the method for adjusting the three-dimensional posture provided by the embodiment of the present disclosure can first determine the two-dimensional projection point information of multiple key points in multiple target images, and then based on the two-dimensional projection point information , to determine the key point feature information of multiple key points under different viewing angles.
  • the multiple target images used for two-dimensional projection may be obtained from shooting the same target object under multiple viewing angles, that is, one viewing angle may correspond to one target image.
  • the above multiple target images can be obtained by synchronous shooting of the same target object by multiple cameras installed in the vehicle.
  • The multiple cameras here can be selected according to different user needs; for example, three cameras installed at the two sides and the center position can correspondingly capture three target images of the pedestrian in front.
  • The two-dimensional projection point information can be determined based on the transformation relationship between the three-dimensional coordinate system in which the three-dimensional coordinates to be adjusted are located and the two-dimensional coordinate system in which the target image is located; that is, the key points can be projected onto the target image by using the transformation relationship, so as to determine the image position and other information of each key point's two-dimensional projection point on the target image.
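  • Such a transformation-based projection can be sketched with a standard pinhole camera model; the intrinsic matrix `K`, the extrinsic `[R|t]`, and all numeric values below are illustrative assumptions rather than parameters from the disclosure:

```python
import numpy as np

def project_keypoints(points_3d, intrinsics, extrinsics):
    """Project (N, 3) world-space key points to 2D pixel positions.
    extrinsics: 3x4 [R|t] world-to-camera transform; intrinsics: 3x3 K."""
    n = points_3d.shape[0]
    pts_h = np.hstack([points_3d, np.ones((n, 1))])   # homogeneous coords
    cam = pts_h @ extrinsics.T                        # camera coordinates
    pix = cam @ intrinsics.T                          # apply K
    return pix[:, :2] / pix[:, 2:3]                   # perspective divide

# Illustrative camera: focal length 100 px, principal point (64, 64).
K = np.array([[100.0, 0.0, 64.0], [0.0, 100.0, 64.0], [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])         # identity camera pose
uv = project_keypoints(np.array([[0.0, 0.0, 2.0]]), K, Rt)  # -> [[64., 64.]]
```

A point on the optical axis projects to the principal point, as the example shows; one such projection is computed per viewing angle, using that view's own camera parameters.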
  • the key point feature information of multiple key points under different viewing angles can be determined.
  • The key point feature information determined here can fuse features from different viewing angles. This is mainly because, for the same target object, there is a certain connection relationship between the corresponding key points under different viewing angles, which can be used to update the key point features; in addition, under the same viewing angle, there is also a certain connection relationship between the key points, which can likewise be used to update the key point features, so that the determined key point feature information better fits the actual pose of the target object.
  • The pre-constructed key point connection relationship information can correspond to a target object with a certain pose; combined with this information, the key point feature information of multiple key points under different viewing angles can be constrained, so that the determined 3D pose is more accurate.
  • The 3D pose information determined based on the key point connection relationships and the key point feature information may be an adjusted combination of 3D coordinates obtained by adjusting the to-be-adjusted 3D coordinates of each of the multiple key points of the target object; that is, the adjusted 3D coordinates of the multiple key points can represent the 3D pose of the target object.
  • the above-mentioned process of determining the feature information of key points includes the following steps:
  • Step 1 Based on the three-dimensional coordinates to be adjusted, determine the two-dimensional projection point information of the multiple key points in the multiple target images, and extract the image features respectively corresponding to the multiple target images;
  • Step 2 Based on the two-dimensional projection point information of each key point in the multiple target images, extract the key point feature information matching the key point from the image features respectively corresponding to the multiple target images;
  • Step 3 Determine the extracted key point feature information matching the key point as the key point feature information obtained by projection into the multiple target images.
  • In order to extract the key point feature information matching each key point, in the 3D pose adjustment method provided by the embodiment of the present disclosure, for each target image, the image feature corresponding to the image position information of the key point's two-dimensional projection points is extracted from the image features corresponding to that target image, and the extracted image feature is used as the key point feature information matching the key point.
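  • A minimal sketch of this extraction step, assuming a dense (H, W, C) feature map per target image and nearest-neighbor lookup at the projection points (the disclosure does not fix the sampling scheme; bilinear sampling would also work):

```python
import numpy as np

def sample_keypoint_features(feature_map, points_2d):
    """feature_map: (H, W, C) image features; points_2d: (N, 2) pixel
    positions of 2D projection points. Returns (N, C) key point features
    by nearest-neighbor lookup, with indices clipped to the image bounds."""
    h, w, _ = feature_map.shape
    xs = np.clip(np.rint(points_2d[:, 0]).astype(int), 0, w - 1)
    ys = np.clip(np.rint(points_2d[:, 1]).astype(int), 0, h - 1)
    return feature_map[ys, xs]

# Toy 4x4 feature map with 2 channels; one projection point at (1.2, 2.8).
fmap = np.arange(4 * 4 * 2, dtype=float).reshape(4, 4, 2)
feats = sample_keypoint_features(fmap, np.array([[1.2, 2.8]]))  # -> [[26., 27.]]
```

Running this per target image yields, for each key point, one feature vector per viewing angle, which is exactly the input of the later update and fusion steps.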
  • the image features corresponding to the target image can be obtained based on image processing, extracted by a trained feature extraction network, or determined by other methods capable of extracting information representing the target object, the scene where the target object is located, and the like, which are not specifically limited in the embodiments of the present disclosure.
  • In some embodiments, the key point feature information of each key point can first be updated based on the key point connection relationships, and then the three-dimensional pose information of the target object can be determined based on the updated key point feature information and the pre-built key point connection relationship information corresponding to the target object.
  • the three-dimensional pose information of the target object can be determined by the following steps:
  • Step 1 For each of the multiple key points, based on the key point feature information of the key point under different viewing angles and the key point feature information of other key points associated with the key point, determine the updated key point feature information of the key point under different viewing angles;
  • Step 2 Determine the three-dimensional pose information of the target object based on the updated key point feature information corresponding to the multiple key points and the pre-built key point connection relationship information corresponding to the target object.
  • The connection relationships here mainly correspond to the connection relationships between key points under the same view; in terms of the key point feature information of a key point under different viewing angles, what can be determined is the connection relationship between the two-dimensional projection points determined for the same key point under different views.
  • Step 1 Based on the key point feature information of the key point under different viewing angles and the first connection relationship between the two-dimensional projection points of the key point under different viewing angles, perform a first update on the key point feature information of the key point under different viewing angles to obtain the first-updated key point feature information; and, based on the key point feature information of other key points, perform a second update on the key point feature information of the key point under the target viewing angle to obtain the second-updated key point feature information;
  • Step 2 Based on the first-updated key point feature information and the second-updated key point feature information, determine the updated key point feature information of the key point under the target viewing angle.
  • Here, the first connection relationship between the two-dimensional projection points of a key point under different viewing angles is predetermined; based on the first connection relationship, the key point feature information of the key point under the other viewing angles can be used to update the key point feature information of the key point under a given viewing angle. That is, the first-updated key point feature information integrates the features of the same key point in other views.
  • In addition, the key point feature information of a key point can be updated based on the key point feature information of other key points that belong to the same target viewing angle and have a second connection relationship with the key point, where the second connection relationship can also be predetermined; the resulting second-updated key point feature information fuses the features of other key points in the same view.
  • Combining the first updated key point feature information and the second updated key point feature information can make the updated key point feature information of the determined key point in any viewing angle more accurate.
  • During updating, the first update can be performed first and then the second update on its basis; or the second update can be performed first and then the first update on its basis; or the first update and the second update can be performed at the same time and their results combined to update the key point feature information. No specific limitation is made here.
  • A graph neural network (Graph Neural Network, GNN) can be used to update the above key point feature information.
  • a graph model can be constructed based on the above-mentioned first connection relationship, second connection relationship, and key point feature information, and the key point feature information of key points can be continuously updated by performing convolution operations on the graph model.
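  • One convolution (message-passing) step on such a graph model can be sketched as follows; the adjacency matrix encodes whichever connection relationship (first or second) is being used, and the weight matrix stands in for a learned parameter, so all values are illustrative assumptions:

```python
import numpy as np

def graph_update(features, adjacency, weight):
    """features: (N, D) key point features (graph nodes); adjacency: (N, N)
    0/1 matrix of connection relationships; weight: (D, D) shared linear map.
    Returns features after one degree-normalized aggregation plus ReLU."""
    a = adjacency + np.eye(adjacency.shape[0])   # add self-loops
    a = a / a.sum(axis=1, keepdims=True)         # row-normalize by degree
    return np.maximum(a @ features @ weight, 0.0)

# Two connected nodes with identity weight: each averages with its neighbor.
x = np.array([[2.0, 0.0], [0.0, 2.0]])
adj = np.array([[0.0, 1.0], [1.0, 0.0]])
out = graph_update(x, adj, np.eye(2))            # -> [[1., 1.], [1., 1.]]
```

Stacking several such steps propagates features along both the cross-view and in-view connection relationships, which is the "continuous update" described above.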
  • the 3D pose adjustment method provided by the embodiments of the present disclosure can firstly fuse the key point feature information, and then determine the 3D pose information of the target object in combination with the pre-built key point connection relationship information, so as to improve the accuracy of the 3D pose information.
  • the three-dimensional pose information of the target object can be determined through the following steps:
  • Step 1 For each of the multiple key points, the key point feature information of the key point under different perspectives is fused to obtain the fused key point feature information corresponding to the key point;
  • Step 2 Determine the three-dimensional pose information of the target object based on the pre-built key point connection relationship information corresponding to the target object and the fused key point feature information corresponding to the multiple key points respectively.
  • For each key point, the key point feature information under different viewing angles can be fused, so that the obtained fused key point feature information takes into account, to a certain extent, the pose of the target object under each viewing angle.
  • In some embodiments, the fusion can include the following steps:
  • Step 1 For each of the multiple dimensions, determine the multiple key point feature values corresponding to the key points in different perspectives and dimensions, and determine the fused key point features corresponding to the dimensions based on the determined multiple key point feature values value;
  • Step 2 Determine the fused key point feature information corresponding to the key point based on the fused key point feature values corresponding to the multiple dimensions.
  • For example, for each dimension, the key point feature value with the largest value can be selected from the multiple key point feature values corresponding to that dimension of a key point under different viewing angles, and determined as the fused key point feature value corresponding to the dimension. In this way, the fused key point feature values highlight the most salient feature in each dimension.
  • Besides taking the maximum, this embodiment of the disclosure can also calculate, for each dimension, the average of the corresponding multiple key point feature values and use it as the fused key point feature value for that dimension, so that the key point features of multiple viewing angles are fused in each dimension.
  • A weighted summation with weight values corresponding to the multiple key point feature values can also be used to determine the fused key point feature values, thereby realizing feature fusion for the key points. The relevant weight values may be determined manually, or determined through a pre-trained weight matching network; no specific limitation is made here.
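  • The three fusion options above (per-dimension maximum, average, and weighted summation) can be sketched as follows; the normalization of the weights is an illustrative assumption:

```python
import numpy as np

def fuse_views(view_features, mode="max", weights=None):
    """view_features: (V, D) -- one D-dimensional feature per viewing angle
    for a single key point. Returns the fused (D,) feature."""
    if mode == "max":                 # keep the largest value per dimension
        return view_features.max(axis=0)
    if mode == "mean":                # average across viewing angles
        return view_features.mean(axis=0)
    if mode == "weighted":            # normalized weighted sum per dimension
        w = np.asarray(weights, dtype=float)
        return (w[:, None] * view_features).sum(axis=0) / w.sum()
    raise ValueError(f"unknown mode: {mode}")

views = np.array([[1.0, 5.0], [3.0, 2.0]])       # 2 views, 2 dimensions
fuse_views(views, "max")                         # -> [3., 5.]
fuse_views(views, "mean")                        # -> [2., 3.5]
fuse_views(views, "weighted", weights=[1, 3])    # -> [2.5, 2.75]
```

The maximum emphasizes the most confident view per dimension, while the mean and weighted variants blend all views; the weights could come from a weight matching network as described above.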
  • In the process of determining the three-dimensional pose information of the target object, the above-mentioned fused key point feature information can also be updated based on the pre-built key point connection relationship information corresponding to the target object, so as to further improve the accuracy of the determined pose.
  • the update of the feature information of the above-mentioned fused key points can be realized through the following steps:
  • Step 1: Based on the third connection relationship between the key points included in the pre-built key point connection relationship information corresponding to the target object, update the fused key point feature information corresponding to the multiple key points to obtain updated fused key point feature information;
  • Step 2: Determine the 3D pose information of the target object based on the updated fused key point feature information.
  • Here, the pre-built key point connection relationship information can include the third connection relationship between the key points, where the third connection relationship can be the connection relationship formed by connecting the human skeleton points in sequence according to the human skeleton structure. To a certain extent, the fused key point feature information corresponding to each key point can thus be calibrated more accurately, so that the determined 3D pose of the target object is also more accurate.
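One simple way to realize such a calibration step is to blend each key point's fused feature with the mean feature of its skeleton neighbors. The sketch below assumes a toy 5-joint skeleton and a mixing factor `alpha`; both are illustrative assumptions, not the patent's network.

```python
import numpy as np

# Hypothetical 5-joint skeleton (pelvis-spine chain plus two arms).
# EDGES stands in for the pre-built "third connection relationship";
# the joint indices are illustrative only.
EDGES = [(0, 1), (1, 2), (1, 3), (1, 4)]

def update_with_skeleton(fused, edges, alpha=0.5):
    """Blend each joint's fused feature with the mean of its skeleton neighbors.

    fused: (num_joints, feat_dim) fused key point feature vectors.
    alpha: how much of the original feature to keep.
    """
    fused = np.asarray(fused, dtype=float)
    adj = np.zeros((len(fused), len(fused)))
    for i, j in edges:            # symmetric adjacency from the skeleton
        adj[i, j] = adj[j, i] = 1.0
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0           # avoid division by zero for isolated joints
    neighbor_mean = adj @ fused / deg
    return alpha * fused + (1.0 - alpha) * neighbor_mean
```

Because the adjacency is fixed by the skeleton structure, joints whose features disagree with their neighbors are pulled toward anatomically plausible values.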
  • the three-dimensional pose information of the target object can be determined according to the following steps:
  • Step 1: Input the updated fused key point feature information into the pre-trained target pose recognition network and output pose deviation information; the pose deviation information is used to indicate the deviation between the current pose of the target object and the pose to be adjusted;
  • Step 2: Based on the pose deviation information and the to-be-adjusted three-dimensional coordinates of the multiple key points of the target object in the target voxel space, determine the adjusted three-dimensional coordinates of the multiple key points of the target object in the target voxel space, and determine the 3D pose information of the target object based on the adjusted three-dimensional coordinates.
  • Here, what the target pose recognition network determines may be the pose deviation information.
  • The pose deviation information corresponds to the deviation between the current pose and the pose to be adjusted. Based on the pose deviation information and the to-be-adjusted three-dimensional coordinates, the adjusted three-dimensional coordinates of the target object in the target voxel space can be determined, so that the three-dimensional pose information of the target object can be determined.
  • The above pose to be adjusted can be obtained by combining the to-be-adjusted three-dimensional coordinates of the multiple key points of the target object.
  • In this way, the target pose recognition network can output a coordinate deviation for each key point of the target object, and by summing the coordinate deviation and the corresponding to-be-adjusted three-dimensional coordinates, the adjusted three-dimensional coordinates of each key point can be determined.
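The summation step above amounts to a per-key-point additive correction. A minimal sketch, assuming the network's output is already available as a (num_key_points, 3) array of offsets:

```python
import numpy as np

def apply_pose_offsets(coords_to_adjust, offsets):
    """Add the per-key-point coordinate deviations predicted by the pose
    recognition network to the to-be-adjusted 3D coordinates.

    coords_to_adjust, offsets: (num_key_points, 3) arrays in the target
    voxel space; the result is the adjusted 3D coordinates.
    """
    return np.asarray(coords_to_adjust, dtype=float) + np.asarray(offsets, dtype=float)

coarse = [[0.0, 1.0, 2.0], [1.0, 1.0, 1.0]]   # to-be-adjusted coordinates
delta = [[0.1, -0.2, 0.0], [-0.5, 0.0, 0.3]]  # predicted deviations
adjusted = apply_pose_offsets(coarse, delta)
```

The adjusted coordinates of all key points, taken together, characterize the refined three-dimensional pose.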
  • For the target object in the pose to be adjusted, the to-be-adjusted three-dimensional coordinates of multiple key points in the target voxel space can be determined based on its pose to be adjusted, and the to-be-adjusted three-dimensional coordinates can be projected onto the target images captured at three viewing angles to construct a graph model G={V,E}.
  • The node V corresponds to the image feature at the image position of the two-dimensional projection point of the key point in the target image.
  • The edge E corresponds to the relationship between nodes, which can be the connection of the same key point across viewing angles or the connection of different key points within a single viewing angle.
  • After the graph model is constructed, the feature information of the key points under different viewing angles can be updated.
  • A graph neural network (GNN) can be used to update the features.
  • In addition, the fusion of multi-view features can be completed based on max pooling.
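The graph G={V,E} and one feature-update pass can be sketched as below. This is a minimal mean-aggregation message-passing step standing in for the GNN; the node numbering scheme, edge construction, and mixing factor are assumptions for illustration, not the patent's actual network.

```python
import numpy as np

def build_edges(num_joints, num_views, skeleton_edges):
    """Build the edge list for G = {V, E}. Node id = view * num_joints + joint.

    Edges are (a) the same key point across different viewing angles and
    (b) skeleton-connected key points within a single viewing angle.
    """
    edges = []
    for j in range(num_joints):                     # cross-view edges
        for v1 in range(num_views):
            for v2 in range(v1 + 1, num_views):
                edges.append((v1 * num_joints + j, v2 * num_joints + j))
    for v in range(num_views):                      # intra-view edges
        for a, b in skeleton_edges:
            edges.append((v * num_joints + a, v * num_joints + b))
    return edges

def gnn_step(node_feats, edges, alpha=0.5):
    """One mean-aggregation message-passing update over G."""
    x = np.asarray(node_feats, dtype=float)
    agg = np.zeros_like(x)
    deg = np.zeros(len(x))
    for i, j in edges:              # accumulate neighbor features both ways
        agg[i] += x[j]; agg[j] += x[i]
        deg[i] += 1; deg[j] += 1
    deg[deg == 0] = 1
    return alpha * x + (1 - alpha) * agg / deg[:, None]
```

After one or more such updates, max pooling the per-view node features of each key point (as in the fusion example earlier in this section) yields the fused key point features.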
  • The fused key point feature information obtained by fusion can be updated by using the pre-built key point connection relationship information corresponding to the target object, and the updated fused key point feature information is input to a regression network to predict a correction value for the pose estimate to be adjusted; the correction value and the pose to be adjusted are summed to determine the adjusted three-dimensional pose information.
  • Those skilled in the art can understand that, in the above method, the writing order of the steps does not imply a strict execution order and does not constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
  • Based on the same inventive concept, the embodiment of the present disclosure also provides a three-dimensional pose adjustment apparatus corresponding to the three-dimensional pose adjustment method. Since the problem-solving principle of the apparatus in the embodiment of the disclosure is similar to that of the above-mentioned three-dimensional pose adjustment method of the embodiment of the disclosure, the implementation of the apparatus can refer to the implementation of the method.
  • Referring to FIG. 3, it is a schematic diagram of a three-dimensional pose adjustment apparatus provided by an embodiment of the present disclosure.
  • the device includes: an acquisition part 301, a determination part 302, and an adjustment part 303; wherein,
  • the obtaining part 301 is configured to obtain the three-dimensional coordinates to be adjusted in the target voxel space of a plurality of key points of the target object;
  • the determining part 302 is configured to determine, based on the three-dimensional coordinates to be adjusted, key point feature information obtained by projecting multiple key points in multiple target images; the multiple target images are target images obtained by shooting the target object under multiple viewing angles;
  • the adjustment part 303 is configured to determine the 3D pose information of the target object based on the pre-built key point connection relationship information corresponding to the target object and the key point feature information of the target image corresponding to the multiple key points at multiple viewing angles.
  • The embodiment of the present disclosure can determine the connection relationship of multiple key points under different viewing angles by using the key point feature information of the multiple key points under different viewing angles.
  • Such a connection relationship will help determine more accurate key point feature information.
  • In addition, combining the pre-built key point connection relationship information can constrain the connection relationship between key points, so that the determined key point feature information is more accurate, which further improves the precision and accuracy of the determined 3D pose information.
  • the determining part 302 is configured to determine key point feature information obtained by projecting multiple key points in multiple target images based on the three-dimensional coordinates to be adjusted according to the following steps:
  • the extracted key point feature information matched with the key point is determined as the key point feature information projected into the multiple target images.
  • the two-dimensional projection point information includes image position information of the two-dimensional projection point; the determining part 302 is configured to follow the steps below based on the two-dimensional projection point information of the key point in multiple target images, Extract the key point feature information matching the key point from the image features corresponding to multiple target images:
  • the extracted image feature corresponding to the image position information is determined as the key point feature information matched with the key point.
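Extracting the image feature at a projection position can be sketched as a simple lookup into a feature map. The shapes and the nearest-neighbor choice below are assumptions for illustration (bilinear sampling is a common alternative); the patent does not specify the sampling scheme.

```python
import numpy as np

def sample_feature(feature_map, xy):
    """Pick the feature vector at a key point's 2D projection position.

    feature_map: (H, W, C) image features of one target image.
    xy:          (x, y) projection position in pixel coordinates; here we
                 use nearest-neighbor lookup, clamped to the image bounds.
    """
    h, w, _ = feature_map.shape
    x = int(round(min(max(xy[0], 0), w - 1)))   # clamp x to [0, W-1]
    y = int(round(min(max(xy[1], 0), h - 1)))   # clamp y to [0, H-1]
    return feature_map[y, x]
```

Running this for every key point in every target image yields the per-view key point feature information that is later updated and fused.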
  • In a possible implementation, the adjustment part 303 is configured to determine the 3D pose information of the target object based on the pre-built key point connection relationship information corresponding to the target object and the key point feature information of the target images corresponding to the multiple key points at multiple viewing angles, according to the following steps:
  • the three-dimensional pose information of the target object is determined based on the updated key point feature information corresponding to the multiple key points and the pre-built key point connection relationship information corresponding to the target object.
  • In a possible implementation, the adjustment part 303 is configured to determine the updated key point feature information of the key point under different viewing angles based on the key point feature information of the key point under different viewing angles and the key point feature information of other key points associated with the key point, according to the following steps:
  • based on the key point feature information of the key point under different viewing angles and the first connection relationship between the two-dimensional projection points of the key point under different viewing angles, the key point feature information of the key point under different viewing angles is first updated to obtain first updated key point feature information;
  • based on the key point feature information of the key point under the target viewing angle and the key point feature information of other key points that belong to the target viewing angle and have a second connection relationship with the key point, a second update is performed on the key point feature information of the key point under the target viewing angle to obtain second updated key point feature information;
  • based on the first updated key point feature information and the second updated key point feature information, the updated key point feature information of the key point under the target viewing angle is determined.
  • In a possible implementation, the adjustment part 303 is configured to determine the 3D pose information of the target object based on the pre-built key point connection relationship information corresponding to the target object and the key point feature information of the target images corresponding to the multiple key points at multiple viewing angles, according to the following steps:
  • the key point feature information of the key point under different perspectives is fused to obtain the fused key point feature information corresponding to the key point;
  • the three-dimensional pose information of the target object is determined based on the pre-built key point connection relationship information corresponding to the target object and the fused key point feature information corresponding to the multiple key points respectively.
  • the key point feature information includes key point feature values in multiple dimensions; the adjustment part 303 is configured to fuse the key point feature information of key points under different perspectives according to the following steps to obtain the key point The fusion key point feature information corresponding to the point:
  • for each dimension of the plurality of dimensions, determine a plurality of key point feature values corresponding to the key point in that dimension under different viewing angles, and determine a fused key point feature value corresponding to the dimension based on the determined plurality of key point feature values;
  • based on the fused key point feature values corresponding to the plurality of dimensions, the fused key point feature information corresponding to the key point is determined.
  • the adjustment part 303 is configured to determine the fused key point feature value corresponding to the dimension based on the determined multiple key point feature values in the following manner:
  • the average value of multiple key point feature values is used as the fused key point feature value corresponding to the dimension
  • In a possible implementation, the adjustment part 303 is configured to determine the three-dimensional pose information of the target object based on the pre-built key point connection relationship information corresponding to the target object and the fused key point feature information corresponding to the multiple key points, according to the following steps:
  • based on the third connection relationship between the key points included in the pre-built key point connection relationship information corresponding to the target object, the fused key point feature information corresponding to the multiple key points is updated to obtain updated fused key point feature information;
  • based on the updated fused key point feature information, the 3D pose information of the target object is determined.
  • In a possible implementation, each of the multiple key points of the target object serves as a first key point, and each of the key points having the third connection relationship serves as a second key point;
  • the second key point is a human skeleton point;
  • the first key point includes at least one of the following: a human skeleton point, a human body marker point.
  • In a possible implementation, the adjustment part 303 is configured to determine the three-dimensional pose information of the target object based on the updated fused key point feature information according to the following steps:
  • inputting the updated fused key point feature information into the pre-trained target pose recognition network and outputting pose deviation information;
  • the pose deviation information is used to indicate the deviation between the current pose of the target object and the pose to be adjusted;
  • the acquisition part 301 is configured to acquire the three-dimensional coordinates to be adjusted in the target voxel space of multiple key points of the target object in the following manner:
  • Depth information respectively returned by multiple detection rays emitted by the radio device is obtained, and three-dimensional coordinates to be adjusted of multiple key points of the target object in the target voxel space are determined based on the depth information.
  • In a possible implementation, each of the acquired multiple target images serves as a first target image, and each of the multiple target images used for key point projection serves as a second target image;
  • at least part of the images in the first target images are the same as at least part of the images in the second target images; or,
  • the first target images and the second target images have no images in common.
  • FIG. 4 is a schematic structural diagram of the electronic device provided by the embodiment of the present disclosure, including: a processor 401 , a memory 402 , and a bus 403 .
  • the memory 402 stores machine-readable instructions executable by the processor 401 (for example, execution instructions corresponding to the acquisition part 301, the determination part 302, and the adjustment part 303 in the apparatus in FIG. 3 ), and when the electronic device is running, the processor 401 communicates with the memory 402 through the bus 403, and when the machine-readable instructions are executed by the processor 401, the following processes are performed:
  • the multiple target images are target images obtained by shooting the target object from multiple viewing angles;
  • the three-dimensional pose information of the target object is determined based on the pre-built key point connection relationship information corresponding to the target object, and the key point feature information of the target image corresponding to the multiple key points at multiple viewing angles.
  • An embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the method for adjusting the three-dimensional posture described in the above-mentioned method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • An embodiment of the present disclosure also provides a computer program product; the computer program product includes a computer program or instructions, and when the computer program or instructions are run on a computer, the computer is caused to execute the method described in the above method embodiments.
  • For the steps of the three-dimensional pose adjustment method, refer to the above method embodiments.
  • the above-mentioned computer program product may be realized by hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium, and in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • If the functions are realized in the form of software functional units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor.
  • Based on this understanding, the technical solution of the embodiments of the present disclosure, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to cause an electronic device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned computer-readable storage medium may be a tangible device capable of retaining and storing instructions used by an instruction execution device, and may be a volatile storage medium or a nonvolatile storage medium.
  • a computer readable storage medium may be, for example but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the above.
  • A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random-access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, mechanically encoded devices such as punched cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the foregoing.
  • computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., pulses of light through fiber optic cables), or transmitted electrical signals.
  • The embodiment of the present disclosure acquires the to-be-adjusted three-dimensional coordinates of multiple key points of the target object in the target voxel space; based on the to-be-adjusted three-dimensional coordinates, determines the key point feature information obtained by projecting the multiple key points into multiple target images, the multiple target images being target images obtained by shooting the target object under multiple viewing angles; and determines the three-dimensional pose information of the target object based on the pre-built key point connection relationship information corresponding to the target object and the key point feature information of the multiple key points in the target images corresponding to the multiple viewing angles.
  • The embodiment of the present disclosure can determine the connection relationship of multiple key points under different viewing angles by using the key point feature information of the multiple key points under different viewing angles; such a connection relationship helps determine more accurate key point feature information.
  • In addition, combining the pre-built key point connection relationship information can constrain the connection relationship between key points, so that the determined key point feature information is more accurate, which further improves the precision and accuracy of the determined 3D pose information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present disclosure provides a three-dimensional pose adjustment method and apparatus, an electronic device, and a storage medium. The method includes: acquiring to-be-adjusted three-dimensional coordinates of multiple key points of a target object in a target voxel space; based on the to-be-adjusted three-dimensional coordinates, determining key point feature information obtained by projecting the multiple key points into multiple target images; and determining three-dimensional pose information of the target object based on pre-built key point connection relationship information corresponding to the target object and the key point feature information of the multiple key points in the target images corresponding to multiple viewing angles.

Description

Three-dimensional pose adjustment method and apparatus, electronic device, and storage medium
Cross-reference to related applications
The present disclosure is filed based on, and claims priority to, Chinese patent application No. 202110929425.0, filed on August 13, 2021 and entitled "Three-dimensional pose adjustment method and apparatus, electronic device, and storage medium", the entire contents of which are incorporated into the present disclosure by reference.
Technical field
Embodiments of the present disclosure relate to the technical field of computer vision, and relate to a three-dimensional pose adjustment method and apparatus, an electronic device, and a storage medium.
Background
Three-dimensional (3D) human pose estimation refers to estimating the pose of a human target from images, videos, or point clouds, and is commonly used in various industrial fields such as human body reconstruction, human-computer interaction, action recognition, and game modeling.
The related art provides a 3D human pose estimation scheme that performs multi-view feature extraction based on 3D space voxelization and detects key points through convolutional neural networks (CNN). Space voxelization divides the 3D space equidistantly into grids of equal size, and the voxelized multi-view image features can serve as the input of 3D convolution.
The space voxelization in the above 3D human pose estimation scheme introduces quantization error. In a large 3D scene, usually only a large step size can be chosen for voxelization, which further increases the quantization error and results in low precision and accuracy of the determined three-dimensional pose.
Summary
Embodiments of the present disclosure provide at least a three-dimensional pose adjustment method and apparatus, an electronic device, and a storage medium, so as to improve the precision and accuracy of three-dimensional pose estimation.
In a first aspect, an embodiment of the present disclosure provides a three-dimensional pose adjustment method, the method being executed by an electronic device and including:
acquiring to-be-adjusted three-dimensional coordinates of multiple key points of a target object in a target voxel space;
based on the to-be-adjusted three-dimensional coordinates, determining key point feature information obtained by projecting the multiple key points into multiple target images, the multiple target images being target images obtained by shooting the target object under multiple viewing angles;
determining three-dimensional pose information of the target object based on pre-built key point connection relationship information corresponding to the target object and the key point feature information of the multiple key points in the target images corresponding to the multiple viewing angles.
With the above three-dimensional pose adjustment method, when the to-be-adjusted three-dimensional coordinates of multiple key points of the target object in the target voxel space are acquired, the key point feature information obtained by projecting the multiple key points into multiple target images can be determined based on the to-be-adjusted three-dimensional coordinates, and finally the three-dimensional pose information of the target object is determined based on the pre-built key point connection relationship information corresponding to the target object and the key point feature information of the multiple key points in the target images corresponding to the multiple viewing angles. It can be seen that the embodiment of the present disclosure can determine the connection relationship of multiple key points under different viewing angles by using the key point feature information of the multiple key points under different viewing angles; such a connection relationship helps determine more accurate key point feature information. In addition, combining the pre-built key point connection relationship information can constrain the connection relationship between key points, so that the determined key point feature information is more accurate, which further improves the precision and accuracy of the determined three-dimensional pose information.
In a possible implementation, determining, based on the to-be-adjusted three-dimensional coordinates, the key point feature information obtained by projecting the multiple key points into multiple target images includes:
based on the to-be-adjusted three-dimensional coordinates, determining two-dimensional projection point information of the multiple key points in the multiple target images, and extracting image features corresponding to the multiple target images respectively;
based on the two-dimensional projection point information of the key point in the multiple target images, extracting key point feature information matching the key point from the image features corresponding to the multiple target images respectively;
determining the extracted key point feature information matching the key point as the key point feature information obtained by projection into the multiple target images.
Here, the key point feature information matching a key point can be determined based on the correspondence between the two-dimensional projection point information of the key point in the multiple target images and the image features, which is simple to operate.
In a possible implementation, the two-dimensional projection point information includes image position information of the two-dimensional projection point; and extracting, based on the two-dimensional projection point information of the key point in the multiple target images, the key point feature information matching the key point from the image features corresponding to the multiple target images respectively includes:
for each of the multiple target images, based on the image position information of the two-dimensional projection points of the key point in the multiple target images, extracting an image feature corresponding to the image position information from the image features corresponding to the target image;
determining the extracted image feature corresponding to the image position information as the key point feature information matching the key point.
In a possible implementation, determining the three-dimensional pose information of the target object based on the pre-built key point connection relationship information corresponding to the target object and the key point feature information of the multiple key points in the target images corresponding to the multiple viewing angles includes:
for each of the multiple key points, determining updated key point feature information of the key point under different viewing angles based on the key point feature information of the key point under different viewing angles and the key point feature information of other key points, the other key points being key points associated with the key point;
determining the three-dimensional pose information of the target object based on the updated key point feature information corresponding to the multiple key points and the pre-built key point connection relationship information corresponding to the target object.
Here, the key point feature information of a key point can be updated by using the key point feature information of the key point under different viewing angles and the key point feature information of other key points associated with the key point. The updated key point feature information includes, to a certain extent, the features of other key points within one view as well as the features of the key point across different views, making the features of the key point more accurate and thus making the determined three-dimensional pose information more accurate.
In a possible implementation, determining the updated key point feature information of the key point under different viewing angles based on the key point feature information of the key point under different viewing angles and the key point feature information of other key points includes:
taking each of the multiple viewing angles as a target viewing angle and performing the following steps respectively:
performing a first update on the key point feature information of the key point under different viewing angles based on the key point feature information of the key point under different viewing angles and a first connection relationship between the two-dimensional projection points of the key point under different viewing angles, to obtain first updated key point feature information;
performing a second update on the key point feature information of the key point under the target viewing angle based on the key point feature information of the key point under the target viewing angle and the key point feature information of other key points, to obtain second updated key point feature information, the other key points belonging to the same target viewing angle as the key point and having a second connection relationship with the key point;
determining the updated key point feature information of the key point under the target viewing angle based on the first updated key point feature information and the second updated key point feature information.
In a possible implementation, determining the three-dimensional pose information of the target object based on the pre-built key point connection relationship information corresponding to the target object and the key point feature information of the multiple key points in the target images corresponding to the multiple viewing angles includes:
for each of the multiple key points, fusing the key point feature information of the key point under different viewing angles to obtain fused key point feature information corresponding to the key point;
determining the three-dimensional pose information of the target object based on the pre-built key point connection relationship information corresponding to the target object and the fused key point feature information corresponding to the multiple key points respectively.
Here, through the fusion of the key point feature information under different viewing angles, the determined fused key point feature information can take the features of different viewing angles into account, further improving the accuracy of the three-dimensional pose information.
In a possible implementation, the key point feature information includes key point feature values in multiple dimensions; and fusing the key point feature information of the key point under different viewing angles to obtain the fused key point feature information corresponding to the key point includes:
for each of the multiple dimensions, determining multiple key point feature values corresponding to the key point in the dimension under different viewing angles, and determining a fused key point feature value corresponding to the dimension based on the determined multiple key point feature values;
determining the fused key point feature information corresponding to the key point based on the fused key point feature values corresponding to the multiple dimensions respectively.
In a possible implementation, determining the fused key point feature value corresponding to the dimension based on the determined multiple key point feature values includes at least one of the following:
selecting the key point feature value with the largest value from the multiple key point feature values as the fused key point feature value corresponding to the dimension;
using the average value of the multiple key point feature values as the fused key point feature value corresponding to the dimension;
acquiring weight values corresponding to the multiple key point feature values respectively, and determining the fused key point feature value corresponding to the dimension based on a weighted summation of the multiple key point feature values and the corresponding weight values.
In a possible implementation, determining the three-dimensional pose information of the target object based on the pre-built key point connection relationship information corresponding to the target object and the fused key point feature information corresponding to the multiple key points respectively includes:
updating the fused key point feature information corresponding to the multiple key points respectively based on a third connection relationship between the key points included in the pre-built key point connection relationship information corresponding to the target object, to obtain updated fused key point feature information;
determining the three-dimensional pose information of the target object based on the updated fused key point feature information.
Here, the fused key point feature information corresponding to the multiple key points can be updated based on the third connection relationship between the key points included in the pre-built key point connection relationship information to obtain the updated fused key point feature information; that is, the pre-built third connection relationship can calibrate the fused key point feature information, making the determined three-dimensional pose more accurate.
In a possible implementation, each of the multiple key points of the target object serves as a first key point, and each of the key points having the third connection relationship serves as a second key point;
the second key point is a human skeleton point;
the first key point includes at least one of the following: a human skeleton point, a human body marker point.
In a possible implementation, determining the three-dimensional pose information of the target object based on the updated fused key point feature information includes:
inputting the updated fused key point feature information into a pre-trained target pose recognition network and outputting pose deviation information, the pose deviation information being used to indicate the deviation between the current pose of the target object and a pose to be adjusted;
determining adjusted three-dimensional coordinates of the multiple key points of the target object in the target voxel space based on the pose deviation information and the to-be-adjusted three-dimensional coordinates of the multiple key points of the target object in the target voxel space, and determining the three-dimensional pose information of the target object based on the adjusted three-dimensional coordinates.
In a possible implementation, acquiring the to-be-adjusted three-dimensional coordinates of the multiple key points of the target object in the target voxel space includes at least one of the following:
acquiring multiple target images obtained by shooting the target object under multiple viewing angles, and determining, based on the multiple target images, the to-be-adjusted three-dimensional coordinates of the multiple key points of the target object in the target voxel space;
acquiring depth information respectively returned by multiple detection rays emitted by a radio device, and determining, based on the depth information, the to-be-adjusted three-dimensional coordinates of the multiple key points of the target object in the target voxel space.
In a possible implementation, each of the acquired multiple target images serves as a first target image, and each of the multiple target images used for the key point projection serves as a second target image;
at least part of the images in the first target images are the same as at least part of the images in the second target images; or,
the first target images and the second target images have no images in common.
In a second aspect, an embodiment of the present disclosure further provides a three-dimensional pose adjustment apparatus, the apparatus including:
an acquisition part configured to acquire to-be-adjusted three-dimensional coordinates of multiple key points of a target object in a target voxel space;
a determination part configured to determine, based on the to-be-adjusted three-dimensional coordinates, key point feature information obtained by projecting the multiple key points into multiple target images, the multiple target images being target images obtained by shooting the target object under multiple viewing angles;
an adjustment part configured to determine three-dimensional pose information of the target object based on pre-built key point connection relationship information corresponding to the target object and the key point feature information of the multiple key points in the target images corresponding to the multiple viewing angles.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor; when the electronic device is running, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the three-dimensional pose adjustment method according to the first aspect and any of its various implementations are executed.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the three-dimensional pose adjustment method according to the first aspect and any of its various implementations are executed.
In a fifth aspect, an embodiment of the present disclosure further provides a computer program product, the computer program product including a computer program or instructions; when the computer program or instructions are run on a computer, the computer is caused to execute the steps of the three-dimensional pose adjustment method according to the first aspect and any of its various implementations.
For the description of the effects of the above three-dimensional pose adjustment apparatus, electronic device, and computer-readable storage medium, refer to the description of the above three-dimensional pose adjustment method.
To make the above objects, features, and advantages of the embodiments of the present disclosure more obvious and understandable, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly introduced below. The drawings here are incorporated into and constitute a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the embodiments of the present disclosure. It should be understood that the following drawings only show some embodiments of the present disclosure and therefore should not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a flowchart of a three-dimensional pose adjustment method provided by an embodiment of the present disclosure;
FIG. 2 shows an application schematic diagram of a three-dimensional pose adjustment method provided by an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a three-dimensional pose adjustment apparatus provided by an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some of the embodiments of the present disclosure rather than all of them. The components of the embodiments of the present disclosure generally described and shown in the drawings here may be arranged and designed in various different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the drawings is not intended to limit the scope of the claimed disclosure but merely represents selected embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings.
The term "and/or" herein merely describes an association relationship and indicates that three relationships may exist; for example, A and/or B may indicate three cases: A exists alone, both A and B exist, and B exists alone. In addition, the term "at least one" herein indicates any one of multiple items or any combination of at least two of multiple items; for example, including at least one of A, B, and C may indicate including any one or more elements selected from the set consisting of A, B, and C.
Research has found that the related art provides a 3D human pose estimation scheme that performs multi-view feature extraction based on 3D space voxelization and detects key points through a CNN. Space voxelization divides the 3D space equidistantly into grids of equal size, and the voxelized multi-view image features can serve as the input of 3D convolution.
The space voxelization in the above 3D human pose estimation scheme introduces quantization error. In a large 3D scene, usually only a large step size can be chosen for voxelization, which further increases the quantization error and results in low precision and accuracy of the determined three-dimensional pose.
Based on the above research, embodiments of the present disclosure provide a three-dimensional pose adjustment method and apparatus, an electronic device, and a storage medium, so as to improve the precision and accuracy of three-dimensional pose estimation.
To facilitate understanding of this embodiment, a three-dimensional pose adjustment method disclosed in an embodiment of the present disclosure is first introduced in detail. The execution subject of the three-dimensional pose adjustment method provided by the embodiments of the present disclosure is generally an electronic device with certain computing capability, and the electronic device includes, for example, a terminal device, a server, or another processing device; the terminal device may be user equipment (UE), a mobile device, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, an in-vehicle device, a wearable device, or the like. In some possible implementations, the three-dimensional pose adjustment method can be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to FIG. 1, which is a flowchart of the three-dimensional pose adjustment method provided by an embodiment of the present disclosure, the method includes steps S101 to S103, wherein:
S101: acquiring to-be-adjusted three-dimensional coordinates of multiple key points of a target object in a target voxel space;
S102: based on the to-be-adjusted three-dimensional coordinates, determining key point feature information obtained by projecting the multiple key points into multiple target images, the multiple target images being target images obtained by shooting the target object under multiple viewing angles;
S103: determining three-dimensional pose information of the target object based on pre-built key point connection relationship information corresponding to the target object and the key point feature information of the multiple key points in the target images corresponding to the multiple viewing angles.
To facilitate understanding of the three-dimensional pose adjustment method provided by the embodiments of the present disclosure, the application scenarios of the method are first briefly introduced. The three-dimensional pose adjustment method in the embodiments of the present disclosure can be applied to any application scenario requiring three-dimensional pose adjustment, for example, adjusting the three-dimensional pose of a pedestrian in front of an autonomous vehicle in the autonomous driving field, or adjusting the three-dimensional pose of road vehicles in the intelligent security field; the embodiments of the present disclosure impose no specific limitation on this. The autonomous driving field is mostly used as an example below.
It is considered that, in the related art, the precision of the three-dimensional pose determined by combining voxelization and a CNN network is often limited by the quantization error of voxelization; in addition, even the precision of a three-dimensional pose determined by other in-vehicle devices, such as radio devices, may be affected by various adverse factors, resulting in low precision and accuracy of the determined three-dimensional pose information.
It is precisely to solve the above problems that the embodiments of the present disclosure provide a solution for three-dimensional pose adjustment that combines pre-built key point connection relationship information with the key point feature information of multiple key points under different viewing angles, so as to improve the precision and accuracy of the three-dimensional pose and thus better serve various practical scenarios.
The to-be-adjusted three-dimensional coordinates in the embodiments of the present disclosure may be initial three-dimensional coordinates of multiple key points of the same target object. In some embodiments, the above to-be-adjusted three-dimensional coordinates may be determined based on voxelization of multiple target images and CNN network detection, or may be obtained by performing epipolar distance calculation based on multiple target images followed by 3D reconstruction, or may be calculated based on depth information detected by synchronously operating radio devices; in addition, they may also be determined by other methods, and the embodiments of the present disclosure impose no specific limitation on this.
It should be noted that the target images selected here for determining the to-be-adjusted three-dimensional coordinates and the target images subsequently used for key point projection may be images captured of the same target object. In some embodiments, they may be completely identical images, partially identical images, or completely different images. Taking each target image selected for determining the to-be-adjusted three-dimensional coordinates as a first target image and each target image used for key point projection as a second target image, at least part of the images in the first target images are the same as at least part of the images in the second target images; here, part of the images or all of the images may be the same, and partially identical images may mean that overlapping images exist and the overlapping images have the same number and shooting viewing angles; or, the first target images and the second target images have no images in common; that is, although both the first target images and the second target images are images captured of the target object in a certain pose, the viewing angles used when shooting the target object are different.
In the embodiments of the present disclosure, the multiple key points of the target object may correspond to key nodes of the target object. Taking a human body as the target object as an example, the key points here may be human skeleton points corresponding to the human skeleton, or may be human body marker points from which the human body can be identified.
When the to-be-adjusted three-dimensional coordinates are acquired, the three-dimensional pose adjustment method provided by the embodiments of the present disclosure may first determine two-dimensional projection point information of the multiple key points in the multiple target images, and determine, based on the two-dimensional projection point information, the key point feature information of the multiple key points under different viewing angles.
The multiple target images used here for two-dimensional projection may be captured of the same target object under multiple viewing angles; that is, one viewing angle may correspond to one target image. In the autonomous driving field, the above multiple target images may be obtained by synchronous shooting of the same target object by multiple cameras mounted on a vehicle, and the multiple cameras here may be selected according to different user requirements; for example, they may be three target images of a pedestrian ahead captured by three cameras mounted at the two sides and the center of the vehicle front.
The two-dimensional projection point information may be determined based on the conversion relationship between the three-dimensional coordinate system in which the to-be-adjusted three-dimensional coordinates are located and the two-dimensional coordinate system in which the target image is located; that is, using the conversion relationship, a key point can be projected onto the target image, so that information such as the image position of the two-dimensional projection point of the key point on the target image can be determined.
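Such a conversion relationship is commonly expressed as a camera projection matrix. A minimal sketch of projecting a 3D key point onto one target image is given below; the 3x4 matrix and the example numbers are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def project_point(P, xyz):
    """Project a 3D key point into a target image using a 3x4 camera
    projection matrix P (one form of the conversion relationship between
    the 3D coordinate system and the image coordinate system).
    """
    X = np.append(np.asarray(xyz, dtype=float), 1.0)  # homogeneous coordinates
    u, v, s = P @ X
    return np.array([u / s, v / s])                   # pixel position (x, y)

# illustrative pinhole camera: focal length 500, principal point (320, 240)
P = np.array([[500.0, 0.0, 320.0, 0.0],
              [0.0, 500.0, 240.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
print(project_point(P, (1.0, 2.0, 5.0)))  # [420. 440.]
```

Repeating this projection for every key point and every camera yields the two-dimensional projection point information used for feature extraction.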
Based on the two-dimensional projection point information of the multiple key points in the multiple target images, the key point feature information of the multiple key points under different viewing angles can be determined. The key point feature information determined here may be feature information fusing different viewing angles, mainly considering that, for the same target object, corresponding key points under different viewing angles have a certain connection relationship, whereby the key point features can be updated. In addition, under the same viewing angle, corresponding key points also have a certain connection relationship, and the key point features can also be updated accordingly, so that the determined key point feature information better fits the actual pose of the target object.
Here, the pre-built key point connection relationship information may correspond to a target object in a certain pose. Combining the key point connection relationship information can constrain the key point feature information of the multiple key points under different viewing angles, which can further make the determined three-dimensional pose more accurate.
The three-dimensional pose information determined based on the key point connection relationship and the key point feature information may be obtained by combining the adjusted three-dimensional coordinates obtained by adjusting the to-be-adjusted three-dimensional coordinates of each of the multiple key points of the target object; that is, the adjusted three-dimensional coordinates of the multiple key points can characterize the three-dimensional pose of the target object.
Considering the key role that determining the key point feature information plays in three-dimensional pose adjustment, the process of determining the key point feature information is described in detail next.
The above process of determining the key point feature information includes the following steps:
Step 1: based on the to-be-adjusted three-dimensional coordinates, determining two-dimensional projection point information of the multiple key points in the multiple target images, and extracting image features corresponding to the multiple target images respectively;
Step 2: based on the two-dimensional projection point information of the key point in the multiple target images, extracting key point feature information matching the key point from the image features corresponding to the multiple target images respectively;
Step 3: determining the extracted key point feature information matching the key point as the key point feature information obtained by projection into the multiple target images.
To extract the key point feature information matching a key point, in the three-dimensional pose adjustment method provided by the embodiments of the present disclosure, for each target image, based on the image position information of the two-dimensional projection points of the key point in the multiple target images, an image feature corresponding to the image position information can be extracted from the image features corresponding to that target image, and the extracted image feature serves as the key point feature information matching the key point.
The image features corresponding to the target image may be obtained based on image processing, or may be extracted by a trained feature extraction network, or may be determined by other methods capable of extracting various information characterizing the target object and the scene in which the target object is located; the embodiments of the present disclosure impose no specific limitation on this.
To determine more accurate three-dimensional pose information of the target object, the key point feature information of the key points can first be updated based on the key point connection relationship, and then the three-dimensional pose information of the target object is determined based on the updated key point feature information and the pre-built key point connection relationship information corresponding to the target object. In some embodiments, the three-dimensional pose information of the target object can be determined through the following steps:
Step 1: for each of the multiple key points, determining updated key point feature information of the key point under different viewing angles based on the key point feature information of the key point under different viewing angles and the key point feature information of other key points associated with the key point;
Step 2: determining the three-dimensional pose information of the target object based on the updated key point feature information corresponding to the multiple key points and the pre-built key point connection relationship information corresponding to the target object.
Here, for each key point, the other key points associated with the key point may be key points having a connection relationship with the key point; the connection relationship here mainly corresponds to the connection relationship between key points within the same view, while for the key point feature information of the key point under different viewing angles, what can be determined is the connection relationship between the two-dimensional projection points determined for the same key point under different views. Taking each of the multiple viewing angles as a target viewing angle, in some embodiments, the key point feature information of the key point under each viewing angle can be updated through the following steps:
Step 1: performing a first update on the key point feature information of the key point under different viewing angles based on the key point feature information of the key point under different viewing angles and a first connection relationship between the two-dimensional projection points of the key point under different viewing angles, to obtain first updated key point feature information; and performing a second update on the key point feature information of the key point under the target viewing angle based on the key point feature information of the key point under the target viewing angle and the key point feature information of other key points that belong to the same target viewing angle as the key point and have a second connection relationship with the key point, to obtain second updated key point feature information;
Step 2: determining the updated key point feature information of the key point under the target viewing angle based on the first updated key point feature information and the second updated key point feature information.
The first connection relationship between the two-dimensional projection points of a key point under different viewing angles is predetermined. Based on the first connection relationship, the key point feature information of the key point under one viewing angle can be updated using the key point feature information of the key point under the various viewing angles; that is, the first updated key point feature information fuses the key point features of the same key point under other views.
In addition, the key point feature information of the key point can be updated based on the key point feature information of other key points that belong to the same target viewing angle and have the second connection relationship with the key point; the second connection relationship here may also be predetermined. The second updated key point feature information determined in this way fuses the key point features of other key points in the same view.
Combining the first updated key point feature information and the second updated key point feature information can make the determined updated key point feature information of the key point under any viewing angle more accurate.
It should be noted that, in the process of updating the key point feature information by combining the first updated key point feature information and the second updated key point feature information, the first update may be performed first and the second update may then be performed on the basis of the first update; or the second update may be performed first and the first update may then be performed on the basis of the second update; or the first update and the second update may be performed simultaneously, and the results of the first update and the second update are then fused to update the key point feature information; no specific limitation is made here.
In practical applications, a graph neural network (GNN) can be used to implement the above update of the key point feature information. Here, before the feature update, a graph model can be constructed based on the above first connection relationship, second connection relationship, and key point feature information, and the key point feature information of the key points is continuously updated by performing convolution operations on the graph model.
In the three-dimensional pose adjustment method provided by the embodiments of the present disclosure, the key point feature information can first be fused, and the three-dimensional pose information of the target object is then determined in combination with the pre-built key point connection relationship information, so as to improve the accuracy of the three-dimensional pose information. In some embodiments, the three-dimensional pose information of the target object can be determined through the following steps:
Step 1: for each of the multiple key points, fusing the key point feature information of the key point under different viewing angles to obtain fused key point feature information corresponding to the key point;
Step 2: determining the three-dimensional pose information of the target object based on the pre-built key point connection relationship information corresponding to the target object and the fused key point feature information corresponding to the multiple key points respectively.
Here, the key point feature information under different viewing angles can be fused for each key point, so that the obtained fused key point feature information can, to a certain extent, take the pose of the target object under each viewing angle into account. In some embodiments, the following steps may be included:
Step 1: for each of the multiple dimensions, determining multiple key point feature values corresponding to the key point in the dimension under different viewing angles, and determining a fused key point feature value corresponding to the dimension based on the determined multiple key point feature values;
Step 2: determining the fused key point feature information corresponding to the key point based on the fused key point feature values corresponding to the multiple dimensions respectively.
Here, for each dimension of the key point feature information, the key point feature value with the largest value can be selected from the multiple key point feature values corresponding to the key point in the dimension under different viewing angles and determined as the fused key point feature value corresponding to the dimension, so as to highlight the feature of each dimension with the greatest possibility. In addition, for each dimension, the embodiments of the present disclosure may also calculate the average of the corresponding multiple key point feature values and use it as the corresponding fused key point feature value; that is, the key point features of the multiple viewing angles are fused in each dimension.
In addition, a weighted summation can also be performed here in combination with the weight values corresponding to the multiple key point feature values respectively to determine the fused key point feature value, thereby realizing feature fusion for the key point. In practical applications, the above weight values may be determined manually or through a pre-trained weight matching network; no specific limitation is made here.
In the process of determining the three-dimensional pose information of the target object, the embodiments of the present disclosure may also update the above fused key point feature information based on the pre-built key point connection relationship information corresponding to the target object, further improving the accuracy of the determined pose. In some embodiments, the update of the above fused key point feature information can be realized through the following steps:
Step 1: updating the fused key point feature information corresponding to the multiple key points respectively based on the third connection relationship between the key points included in the pre-built key point connection relationship information corresponding to the target object, to obtain updated fused key point feature information;
Step 2: determining the three-dimensional pose information of the target object based on the updated fused key point feature information.
Here, the pre-built key point connection relationship information may include the third connection relationship between the key points, and the third connection relationship here may be the connection relationship formed by connecting the human skeleton points in sequence according to the human skeleton structure; to a certain extent, the fused key point feature information corresponding to each key point can thus be calibrated more accurately, so that the determined three-dimensional pose of the target object is also more accurate.
In the embodiments of the present disclosure, the 3D pose information of the target object can be determined through the following steps:
Step 1: input the updated fused keypoint feature information into a pre-trained target pose recognition network, which outputs pose deviation information; the pose deviation information is used to represent the deviation between the target object's current pose and the pose to be adjusted;
Step 2: determine the adjusted 3D coordinates of the target object's keypoints in the target voxel space based on the pose deviation information and the to-be-adjusted 3D coordinates of those keypoints in the target voxel space, and determine the 3D pose information of the target object based on the adjusted 3D coordinates.
Here, what the target pose recognition network determines is the pose deviation information, which corresponds to the deviation between the current pose and the pose to be adjusted. From the pose deviation information and the to-be-adjusted 3D coordinates, the adjusted 3D coordinates of the target object in the target voxel space can be determined, and from these the 3D pose information of the target object.
The pose to be adjusted can be obtained by combining the to-be-adjusted 3D coordinates of the target object's keypoints; accordingly, the target pose recognition network can output a coordinate offset for each keypoint, and summing each offset with the corresponding to-be-adjusted 3D coordinates yields the adjusted 3D coordinates of each keypoint.
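The per-keypoint summation described in the last paragraph amounts to a single element-wise addition; a minimal sketch, with the function name assumed for illustration:

```python
import numpy as np

def apply_pose_offsets(coords, offsets):
    """Add the network's per-keypoint coordinate offsets (K x 3) to the
    to-be-adjusted 3D coordinates to obtain the adjusted coordinates."""
    coords = np.asarray(coords, dtype=float)
    offsets = np.asarray(offsets, dtype=float)
    assert coords.shape == offsets.shape  # one offset per keypoint coordinate
    return coords + offsets
```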
To aid further understanding of the 3D pose adjustment method provided by the embodiments of the present disclosure, an explanation is given below with reference to FIG. 2.
As shown in FIG. 2, for a target object in the pose to be adjusted, the to-be-adjusted 3D coordinates of its keypoints in the target voxel space can be determined from that pose. Projecting the to-be-adjusted 3D coordinates onto the target images captured by cameras at three views (camera #1 to camera #3) yields the graph model G = {V, E} shown in the figure.
Here, a node in V corresponds to the image feature at the image position of a keypoint's 2D projection point in a target image, and an edge in E corresponds to a relationship between nodes, which may be a cross-view connection of the same keypoint or an intra-view connection of different keypoints.
Once the graph model is built, the keypoint feature information can be updated across the different views; a GNN can be used to perform the feature update. In addition, multi-view feature fusion can be completed with max pooling.
The fused keypoint feature information can be updated with the pre-built keypoint connection relationship information corresponding to the target object, and the updated fused keypoint feature information is then fed into a regression network that predicts a correction to the estimate of the pose to be adjusted; summing this correction with the pose to be adjusted yields the adjusted 3D pose information.
Those skilled in the art will understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, the embodiments of the present disclosure also provide a 3D pose adjustment apparatus corresponding to the 3D pose adjustment method. Since the principle by which the apparatus in the embodiments of the present disclosure solves the problem is similar to that of the above 3D pose adjustment method, the implementation of the apparatus may refer to the implementation of the method.
Referring to FIG. 3, a schematic diagram of a 3D pose adjustment apparatus provided by an embodiment of the present disclosure, the apparatus includes an acquisition part 301, a determination part 302 and an adjustment part 303, where:
the acquisition part 301 is configured to acquire to-be-adjusted 3D coordinates of multiple keypoints of a target object in a target voxel space;
the determination part 302 is configured to determine, based on the to-be-adjusted 3D coordinates, keypoint feature information obtained by projecting the keypoints onto multiple target images respectively; the multiple target images are target images obtained by photographing the target object from multiple views;
the adjustment part 303 is configured to determine 3D pose information of the target object based on pre-built keypoint connection relationship information corresponding to the target object and the keypoint feature information of the keypoints in the target images corresponding to the multiple views.
With the above 3D pose adjustment apparatus, once the to-be-adjusted 3D coordinates of the target object's keypoints in the target voxel space are acquired, the keypoint feature information obtained by projecting the keypoints onto the target images can be determined from those coordinates, and finally the 3D pose information of the target object can be determined from the pre-built keypoint connection relationship information corresponding to the target object and the keypoint feature information of the keypoints in the target images corresponding to the views. Thus, the embodiments of the present disclosure can use the keypoint feature information of the keypoints under different views to determine the keypoints' connection relationships across views, which helps determine more accurate keypoint feature information; moreover, the pre-built keypoint connection relationship information constrains the connections between keypoints, making the determined keypoint feature information more accurate and further improving the precision and accuracy of the determined 3D pose information.
In a possible implementation, the determination part 302 is configured to determine, based on the to-be-adjusted 3D coordinates, the keypoint feature information obtained by projecting the keypoints onto the target images through the following steps:
determining, based on the to-be-adjusted 3D coordinates, 2D projection point information of the keypoints in the target images, and extracting image features corresponding to the target images;
extracting, based on a keypoint's 2D projection point information in the target images, keypoint feature information matching the keypoint from the image features corresponding to the target images;
determining the extracted keypoint feature information matching the keypoint as the keypoint feature information obtained by projection onto the target images.
In a possible implementation, the 2D projection point information includes image position information of the 2D projection points; the determination part 302 is configured to extract, based on the keypoint's 2D projection point information in the target images, the keypoint feature information matching the keypoint from the image features corresponding to the target images through the following steps:
for each of the target images, extracting, based on the image position information of the keypoint's 2D projection point in that target image, the image feature corresponding to the image position information from the image features corresponding to that target image;
determining the extracted image feature corresponding to the image position information as the keypoint feature information matching the keypoint.
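The projection-and-sampling procedure restated above can be sketched with a pinhole projection and nearest-pixel lookup; the 3x4 projection-matrix shape and the nearest-neighbour sampling are simplifying assumptions (a real system might use bilinear sampling on the feature map):

```python
import numpy as np

def sample_keypoint_feature(feat_map, P, point_3d):
    """Project a 3D keypoint with a 3x4 projection matrix P and read the
    feature vector at the nearest pixel of an H x W x D feature map."""
    p = P @ np.append(point_3d, 1.0)             # homogeneous projection
    u, v = p[0] / p[2], p[1] / p[2]              # 2D image position
    x = int(round(np.clip(u, 0, feat_map.shape[1] - 1)))
    y = int(round(np.clip(v, 0, feat_map.shape[0] - 1)))
    return feat_map[y, x]                        # feature matching the keypoint
```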
In a possible implementation, the adjustment part 303 is configured to determine the 3D pose information of the target object, based on the pre-built keypoint connection relationship information corresponding to the target object and the keypoint feature information of the keypoints in the target images corresponding to the views, through the following steps:
for each of the keypoints, determining updated keypoint feature information of the keypoint under the different views based on the keypoint's feature information under the different views and the keypoint feature information of other keypoints associated with the keypoint;
determining the 3D pose information of the target object based on the updated keypoint feature information corresponding to the keypoints and the pre-built keypoint connection relationship information corresponding to the target object.
In a possible implementation, the adjustment part 303 is configured to determine the updated keypoint feature information of the keypoint under the different views, based on the keypoint's feature information under the different views and the feature information of the associated keypoints, through the following steps:
taking each of the multiple views as a target view and performing the following steps respectively:
performing a first update on the keypoint's feature information under the different views based on that feature information and the first connection relationship among the keypoint's 2D projection points under the different views, obtaining first-updated keypoint feature information; and,
performing a second update on the keypoint's feature information under the target view based on that feature information and the keypoint feature information of other keypoints that belong to the same target view as the keypoint and have a second connection relationship with it, obtaining second-updated keypoint feature information;
determining the updated keypoint feature information of the keypoint under the target view based on the first-updated keypoint feature information and the second-updated keypoint feature information.
In a possible implementation, the adjustment part 303 is configured to determine the 3D pose information of the target object, based on the pre-built keypoint connection relationship information corresponding to the target object and the keypoint feature information of the keypoints in the target images corresponding to the views, through the following steps:
for each of the keypoints, fusing the keypoint's feature information under the different views to obtain fused keypoint feature information corresponding to the keypoint;
determining the 3D pose information of the target object based on the pre-built keypoint connection relationship information corresponding to the target object and the fused keypoint feature information corresponding to the keypoints.
In a possible implementation, the keypoint feature information includes keypoint feature values of multiple dimensions; the adjustment part 303 is configured to fuse the keypoint's feature information under the different views to obtain the fused keypoint feature information corresponding to the keypoint through the following steps:
for each of the multiple dimensions, determining the multiple keypoint feature values of the keypoint corresponding to the dimension under the different views, and determining the fused keypoint feature value corresponding to the dimension based on the determined feature values;
determining the fused keypoint feature information corresponding to the keypoint based on the fused keypoint feature values corresponding to the multiple dimensions.
In a possible implementation, the adjustment part 303 is configured to determine the fused keypoint feature value corresponding to the dimension, based on the determined keypoint feature values, in at least one of the following ways:
selecting the largest of the keypoint feature values as the fused keypoint feature value corresponding to the dimension;
taking the average of the keypoint feature values as the fused keypoint feature value corresponding to the dimension;
acquiring weight values corresponding to the keypoint feature values respectively, and determining the fused keypoint feature value corresponding to the dimension based on a weighted sum of the keypoint feature values and their corresponding weight values.
In a possible implementation, the adjustment part 303 is configured to determine the 3D pose information of the target object, based on the pre-built keypoint connection relationship information corresponding to the target object and the fused keypoint feature information corresponding to the keypoints, through the following steps:
updating the fused keypoint feature information corresponding to the keypoints based on the third connection relationship among the keypoints included in the pre-built keypoint connection relationship information corresponding to the target object, obtaining updated fused keypoint feature information;
determining the 3D pose information of the target object based on the updated fused keypoint feature information.
In a possible implementation, each of the multiple keypoints of the target object serves as a first keypoint, and each of the keypoints having the third connection relationship serves as a second keypoint;
the second keypoint is a human skeletal point;
the first keypoint includes at least one of a human skeletal point and a human marker point.
In a possible implementation, the adjustment part 303 is configured to determine the 3D pose information of the target object, based on the updated fused keypoint feature information, through the following steps:
inputting the updated fused keypoint feature information into a pre-trained target pose recognition network, which outputs pose deviation information; the pose deviation information is used to represent the deviation between the target object's current pose and the pose to be adjusted;
determining the adjusted 3D coordinates of the target object's keypoints in the target voxel space based on the pose deviation information and the to-be-adjusted 3D coordinates of those keypoints in the target voxel space, and determining the 3D pose information of the target object based on the adjusted 3D coordinates.
In a possible implementation, the acquisition part 301 is configured to acquire the to-be-adjusted 3D coordinates of the target object's keypoints in the target voxel space in at least one of the following ways:
acquiring multiple target images obtained by photographing the target object from multiple views, and determining the to-be-adjusted 3D coordinates of the target object's keypoints in the target voxel space based on the multiple target images;
acquiring depth information respectively returned by multiple probe rays emitted by a radio device, and determining the to-be-adjusted 3D coordinates of the target object's keypoints in the target voxel space based on the depth information.
In a possible implementation, each of the acquired target images serves as a first target image, and each of the target images used for keypoint projection serves as a second target image;
at least some of the first target images are the same as at least some of the second target images; or,
no image is shared between the first target images and the second target images.
For descriptions of the processing flow of each part of the apparatus and of the interactions between the parts, reference may be made to the relevant descriptions in the above method embodiments; details are not repeated here.
An embodiment of the present disclosure further provides an electronic device. As shown in FIG. 4, a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure, the device includes a processor 401, a memory 402 and a bus 403. The memory 402 stores machine-readable instructions executable by the processor 401 (for example, execution instructions corresponding to the acquisition part 301, the determination part 302 and the adjustment part 303 of the apparatus in FIG. 3). When the electronic device runs, the processor 401 and the memory 402 communicate over the bus 403, and the machine-readable instructions, when executed by the processor 401, perform the following processing:
acquiring to-be-adjusted 3D coordinates of multiple keypoints of a target object in a target voxel space;
determining, based on the to-be-adjusted 3D coordinates, keypoint feature information obtained by projecting the keypoints onto multiple target images respectively; the multiple target images are target images obtained by photographing the target object from multiple views;
determining 3D pose information of the target object based on pre-built keypoint connection relationship information corresponding to the target object and the keypoint feature information of the keypoints in the target images corresponding to the multiple views.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when run by a processor, the computer program performs the steps of the 3D pose adjustment method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program product including a computer program or instructions that, when run on a computer, cause the computer to perform the steps of the 3D pose adjustment method described in the above method embodiments; reference may be made to the above method embodiments.
The computer program product may be implemented in hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (SDK).
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the working processes of the systems and apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments. In the several embodiments provided by the present disclosure, it should be understood that the disclosed systems, apparatus and methods may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation; as another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present disclosure, in essence, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present disclosure.
The aforementioned computer-readable storage medium may be a tangible device that can hold and store instructions used by an instruction-executing device, and may be a volatile or non-volatile storage medium. The computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic or semiconductor storage device, or any suitable combination of the above. More specific examples (a non-exhaustive list) include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random-access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove on which instructions are stored, and any suitable combination of the above. The computer-readable storage medium used here is not to be construed as a transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
Finally, it should be noted that the embodiments described above are merely specific implementations of the embodiments of the present disclosure, used to illustrate their technical solutions rather than to limit them, and the protection scope of the embodiments of the present disclosure is not limited thereto. Although the embodiments of the present disclosure have been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person familiar with the technical field may, within the technical scope disclosed by the embodiments of the present disclosure, still modify the technical solutions recorded in the foregoing embodiments, readily conceive of variations, or make equivalent replacements of some of their technical features; and such modifications, variations or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered by the protection scope of the embodiments of the present disclosure. Therefore, the protection scope of the embodiments of the present disclosure shall be subject to the protection scope of the claims.
Industrial Applicability
The embodiments of the present disclosure acquire to-be-adjusted 3D coordinates of multiple keypoints of a target object in a target voxel space; determine, based on the to-be-adjusted 3D coordinates, keypoint feature information obtained by projecting the keypoints onto multiple target images respectively, the multiple target images being target images obtained by photographing the target object from multiple views; and determine 3D pose information of the target object based on pre-built keypoint connection relationship information corresponding to the target object and the keypoint feature information of the keypoints in the target images corresponding to the views. In this way, the embodiments of the present disclosure can use the keypoint feature information of the keypoints under different views to determine the keypoints' connection relationships across views, which helps determine more accurate keypoint feature information; in addition, the pre-built keypoint connection relationship information constrains the connections between keypoints, making the determined keypoint feature information more accurate and further improving the precision and accuracy of the determined 3D pose information.

Claims (17)

  1. A 3D pose adjustment method, the method comprising:
    acquiring to-be-adjusted 3D coordinates of multiple keypoints of a target object in a target voxel space;
    determining, based on the to-be-adjusted 3D coordinates, keypoint feature information obtained by projecting the multiple keypoints onto multiple target images respectively; the multiple target images being target images obtained by photographing the target object from multiple views;
    determining 3D pose information of the target object based on pre-built keypoint connection relationship information corresponding to the target object and the keypoint feature information of the multiple keypoints in the target images corresponding to the multiple views.
  2. The method according to claim 1, wherein determining, based on the to-be-adjusted 3D coordinates, the keypoint feature information obtained by projecting the multiple keypoints onto the multiple target images comprises:
    determining, based on the to-be-adjusted 3D coordinates, 2D projection point information of the multiple keypoints in the multiple target images, and extracting image features corresponding to the multiple target images;
    extracting, based on a keypoint's 2D projection point information in the multiple target images, keypoint feature information matching the keypoint from the image features corresponding to the multiple target images;
    determining the extracted keypoint feature information matching the keypoint as the keypoint feature information obtained by projection onto the multiple target images.
  3. The method according to claim 2, wherein the 2D projection point information comprises image position information of the 2D projection points; extracting, based on the keypoint's 2D projection point information in the multiple target images, the keypoint feature information matching the keypoint from the image features corresponding to the multiple target images comprises:
    for each of the multiple target images, extracting, based on the image position information of the keypoint's 2D projection point in that target image, the image feature corresponding to the image position information from the image features corresponding to that target image;
    determining the extracted image feature corresponding to the image position information as the keypoint feature information matching the keypoint.
  4. The method according to any one of claims 1 to 3, wherein determining the 3D pose information of the target object based on the pre-built keypoint connection relationship information corresponding to the target object and the keypoint feature information of the multiple keypoints in the target images corresponding to the multiple views comprises:
    for each of the multiple keypoints, determining updated keypoint feature information of the keypoint under different views based on the keypoint's feature information under the different views and keypoint feature information of other keypoints, the other keypoints being keypoints associated with the keypoint;
    determining the 3D pose information of the target object based on the updated keypoint feature information corresponding to the multiple keypoints and the pre-built keypoint connection relationship information corresponding to the target object.
  5. The method according to claim 4, wherein determining the updated keypoint feature information of the keypoint under the different views, based on the keypoint's feature information under the different views and the keypoint feature information of the other keypoints, comprises:
    taking each of the multiple views as a target view and performing the following steps respectively:
    performing a first update on the keypoint's feature information under the different views based on that feature information and a first connection relationship among the keypoint's 2D projection points under the different views, obtaining first-updated keypoint feature information;
    performing a second update on the keypoint's feature information under the target view based on that feature information and the keypoint feature information of other keypoints, obtaining second-updated keypoint feature information; the other keypoints belonging to the same target view as the keypoint and having a second connection relationship with the keypoint;
    determining the updated keypoint feature information of the keypoint under the target view based on the first-updated keypoint feature information and the second-updated keypoint feature information.
  6. The method according to any one of claims 1 to 3, wherein determining the 3D pose information of the target object based on the pre-built keypoint connection relationship information corresponding to the target object and the keypoint feature information of the multiple keypoints in the target images corresponding to the multiple views comprises:
    for each of the multiple keypoints, fusing the keypoint's feature information under different views to obtain fused keypoint feature information corresponding to the keypoint;
    determining the 3D pose information of the target object based on the pre-built keypoint connection relationship information corresponding to the target object and the fused keypoint feature information corresponding to the multiple keypoints.
  7. The method according to claim 6, wherein the keypoint feature information comprises keypoint feature values of multiple dimensions; fusing the keypoint's feature information under the different views to obtain the fused keypoint feature information corresponding to the keypoint comprises:
    for each of the multiple dimensions, determining multiple keypoint feature values of the keypoint corresponding to the dimension under the different views, and determining a fused keypoint feature value corresponding to the dimension based on the determined multiple keypoint feature values;
    determining the fused keypoint feature information corresponding to the keypoint based on the fused keypoint feature values corresponding to the multiple dimensions.
  8. The method according to claim 7, wherein determining the fused keypoint feature value corresponding to the dimension based on the determined multiple keypoint feature values comprises at least one of:
    selecting the largest of the multiple keypoint feature values as the fused keypoint feature value corresponding to the dimension;
    taking the average of the multiple keypoint feature values as the fused keypoint feature value corresponding to the dimension;
    acquiring weight values corresponding to the multiple keypoint feature values respectively, and determining the fused keypoint feature value corresponding to the dimension based on a weighted sum of the multiple keypoint feature values and their corresponding weight values.
  9. The method according to any one of claims 6 to 8, wherein determining the 3D pose information of the target object based on the pre-built keypoint connection relationship information corresponding to the target object and the fused keypoint feature information corresponding to the multiple keypoints comprises:
    updating the fused keypoint feature information corresponding to the multiple keypoints based on a third connection relationship among keypoints included in the pre-built keypoint connection relationship information corresponding to the target object, obtaining updated fused keypoint feature information;
    determining the 3D pose information of the target object based on the updated fused keypoint feature information.
  10. The method according to claim 9, wherein each of the multiple keypoints of the target object serves as a first keypoint, and each of the keypoints having the third connection relationship serves as a second keypoint;
    the second keypoint is a human skeletal point;
    the first keypoint comprises at least one of: a human skeletal point, a human marker point.
  11. The method according to claim 9 or 10, wherein determining the 3D pose information of the target object based on the updated fused keypoint feature information comprises:
    inputting the updated fused keypoint feature information into a pre-trained target pose recognition network, which outputs pose deviation information; the pose deviation information being used to represent the deviation between the target object's current pose and a pose to be adjusted;
    determining adjusted 3D coordinates of the multiple keypoints of the target object in the target voxel space based on the pose deviation information and the to-be-adjusted 3D coordinates of the multiple keypoints of the target object in the target voxel space, and determining the 3D pose information of the target object based on the adjusted 3D coordinates.
  12. The method according to any one of claims 1 to 11, wherein acquiring the to-be-adjusted 3D coordinates of the multiple keypoints of the target object in the target voxel space comprises at least one of:
    acquiring multiple target images obtained by photographing the target object from multiple views, and determining the to-be-adjusted 3D coordinates of the multiple keypoints of the target object in the target voxel space based on the multiple target images;
    acquiring depth information respectively returned by multiple probe rays emitted by a radio device, and determining the to-be-adjusted 3D coordinates of the multiple keypoints of the target object in the target voxel space based on the depth information.
  13. The method according to claim 12, wherein each of the acquired multiple target images serves as a first target image, and each of the multiple target images used for the keypoint projection serves as a second target image;
    at least some of the first target images are the same as at least some of the second target images; or,
    no image is shared between the first target images and the second target images.
  14. A 3D pose adjustment apparatus, the apparatus comprising:
    an acquisition part configured to acquire to-be-adjusted 3D coordinates of multiple keypoints of a target object in a target voxel space;
    a determination part configured to determine, based on the to-be-adjusted 3D coordinates, keypoint feature information obtained by projecting the multiple keypoints onto multiple target images respectively; the multiple target images being target images obtained by photographing the target object from multiple views;
    an adjustment part configured to determine 3D pose information of the target object based on pre-built keypoint connection relationship information corresponding to the target object and the keypoint feature information of the multiple keypoints in the target images corresponding to the multiple views.
  15. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate over the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the 3D pose adjustment method according to any one of claims 1 to 13.
  16. A computer-readable storage medium on which a computer program is stored, the computer program, when run by a processor, performing the steps of the 3D pose adjustment method according to any one of claims 1 to 13.
  17. A computer program product comprising a computer program or instructions that, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 13.
PCT/CN2022/083749 2021-08-13 2022-03-29 三维姿态调整的方法、装置、电子设备及存储介质 WO2023015903A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110929425.0 2021-08-13
CN202110929425.0A CN113610966A (zh) 2021-08-13 2021-08-13 三维姿态调整的方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023015903A1 true WO2023015903A1 (zh) 2023-02-16

Family

ID=78340658

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/083749 WO2023015903A1 (zh) 2021-08-13 2022-03-29 三维姿态调整的方法、装置、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN113610966A (zh)
WO (1) WO2023015903A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610966A (zh) * 2021-08-13 2021-11-05 北京市商汤科技开发有限公司 三维姿态调整的方法、装置、电子设备及存储介质
CN114494334B (zh) * 2022-01-28 2023-02-03 北京百度网讯科技有限公司 调整三维姿态的方法、装置、电子设备及存储介质
CN115620094B (zh) * 2022-12-19 2023-03-21 南昌虚拟现实研究院股份有限公司 关键点的标注方法、装置、电子设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10657659B1 (en) * 2017-10-10 2020-05-19 Slightech, Inc. Visual simultaneous localization and mapping system
CN111582207A (zh) * 2020-05-13 2020-08-25 北京市商汤科技开发有限公司 图像处理方法、装置、电子设备及存储介质
CN112528831A (zh) * 2020-12-07 2021-03-19 深圳市优必选科技股份有限公司 多目标姿态估计方法、多目标姿态估计装置及终端设备
CN112767489A (zh) * 2021-01-29 2021-05-07 北京达佳互联信息技术有限公司 一种三维位姿确定方法、装置、电子设备及存储介质
CN112836618A (zh) * 2021-01-28 2021-05-25 清华大学深圳国际研究生院 一种三维人体姿态估计方法及计算机可读存储介质
US20210232858A1 (en) * 2020-01-23 2021-07-29 Seiko Epson Corporation Methods and systems for training an object detection algorithm using synthetic images
CN113610966A (zh) * 2021-08-13 2021-11-05 北京市商汤科技开发有限公司 三维姿态调整的方法、装置、电子设备及存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109840500B (zh) * 2019-01-31 2021-07-02 深圳市商汤科技有限公司 一种三维人体姿态信息检测方法及装置


Also Published As

Publication number Publication date
CN113610966A (zh) 2021-11-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22854908

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE