CN113900516A - Data processing method and device, electronic equipment and storage medium

Data processing method and device, electronic equipment and storage medium

Info

Publication number
CN113900516A
Authority
CN
China
Prior art keywords
information
movable part
target object
image data
orientation information
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202111137432.3A
Other languages
Chinese (zh)
Inventor
曹健
卓力安
张邦
潘攀
Current Assignee (the listed assignee may be inaccurate)
Alibaba Damo Institute Hangzhou Technology Co Ltd
Original Assignee
Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority date (assumed; not a legal conclusion)
Application filed by Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority to CN202111137432.3A
Publication of CN113900516A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide a data processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring measurement data from an inertial sensor on a movable part of a target object, and determining first orientation information of the movable part from the measurement data; acquiring monocular image data of the target object, and determining second orientation information of the movable part from the monocular image data; and fusing the first orientation information and the second orientation information of each movable part according to a trained fusion model to determine posture information of the target object. By combining sensor measurement with monocular image analysis, the embodiments improve the accuracy of the posture information.

Description

Data processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method, a data processing apparatus, an electronic device, and a storage medium.
Background
In some scenarios, the posture of a target object needs to be captured. For example, in film and television shooting, the posture of a person in motion can be captured and each part of a virtual object adjusted according to that posture, so that the virtual object moves along with the person during production.
Currently, multiple Inertial Measurement Units (IMUs, also called inertial sensors) are typically worn on the components of a target object; the data collected by the IMUs is analyzed to obtain the orientation of each component, and the posture of the target object is then determined from those orientations.
However, posture analysis performed in this way is not sufficiently accurate.
Disclosure of Invention
Embodiments of the present application provide a data processing method for improving the accuracy of posture information.
Correspondingly, embodiments of the present application also provide a data processing apparatus, an electronic device, and a storage medium to implement and apply the method.
In order to solve the above problem, an embodiment of the present application discloses a data processing method, including: acquiring measurement data from an inertial sensor on a movable part of a target object, and determining first orientation information of the movable part from the measurement data; acquiring monocular image data of the target object, and determining second orientation information of the movable part from the monocular image data; and fusing the first orientation information and the second orientation information of each movable part according to a trained fusion model to determine posture information of the target object.
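For illustration only, the following Python sketch walks through the three claimed steps with trivial stand-in computations; every name, type, and the fixed 0.5 fusion weight are assumptions of this example, not details taken from the application:

```python
# Hypothetical sketch of the claimed three-step method; the stand-in
# computations are assumptions, not taken from the patent itself.
from dataclasses import dataclass
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class PartOrientation:
    direction: Vec3   # which way the movable part points
    position: Vec3    # where its reference node sits

def first_orientation(measurements: Dict[str, Vec3]) -> Dict[str, PartOrientation]:
    # Stand-in: treat each sensor reading directly as the part's direction.
    return {p: PartOrientation(m, (0.0, 0.0, 0.0)) for p, m in measurements.items()}

def second_orientation(frame) -> Dict[str, PartOrientation]:
    # Stand-in: a real system would run keypoint and depth estimation here.
    return {"forearm": PartOrientation((1.0, 0.0, 0.0), (0.0, 0.0, 0.0))}

def fuse(first, second, weight_imu=0.5):
    # Trained-model stand-in: per-part weighted blend of the two directions.
    fused = {}
    for part, a in first.items():
        b = second.get(part, a)
        d = tuple(weight_imu * x + (1 - weight_imu) * y
                  for x, y in zip(a.direction, b.direction))
        fused[part] = PartOrientation(d, a.position)
    return fused

pose = fuse(first_orientation({"forearm": (0.9, 0.1, 0.0)}), second_orientation(None))
print(pose["forearm"].direction)   # (0.95, 0.05, 0.0)
```

The detailed description below replaces each of these stand-ins with a more concrete sketch.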
Optionally, the method further includes: adjusting each movable part of a virtual object according to the posture information of the target object, and combining the result with background data of the image data to form virtual image data; and publishing the virtual image data for display in a display page.
Optionally, the determining first orientation information of the movable part of the target object according to the measurement data includes: first orientation information of the movable part of the target object is determined according to the wearing position of the inertial sensor and the measurement data.
Optionally, the determining second orientation information of the movable part of the target object according to the monocular image data includes: determining the position information of a target node of the movable part according to the monocular image data; determining depth information corresponding to the position information in the monocular image data to form coordinate information of a target node of the movable part; and determining second orientation information of the movable part according to the coordinate information of each node of the movable part.
Optionally, the determining the position information of the target node of the movable part according to the monocular image data includes at least one of the following steps: acquiring related monocular images preceding and/or following the monocular image data, and determining the position information of the target node in the monocular image data from the related position information of that node in the related monocular images; determining the position of a target identifier in the monocular image data to determine the position information of the target node of the movable part; and determining the position information of a second node associated with a first node of the movable part from the position information of the first node in the monocular image data.
Optionally, the determining, according to the coordinate information of each node of the movable part, second orientation information of the movable part includes: verifying the coordinate information against the component parameters of the movable part to form a verification result; and determining the second orientation information of the movable part from the coordinate information whose verification result is normal.
Optionally, the determining, according to the coordinate information of each node of the movable component, second orientation information of the movable component, further includes: providing a coordinate adjustment page to show coordinate information with an abnormal verification result; acquiring coordinate adjustment information in response to the triggering of a coordinate adjustment control of the coordinate adjustment page, and determining the adjusted coordinate information according to the coordinate adjustment information; and determining second orientation information of the movable part according to the adjusted coordinate information.
Optionally, the method further includes: providing a posture display page to display the posture information; acquiring posture adjustment information for the posture information upon triggering of a posture adjustment control in the posture display page; and adjusting the posture information of the target object according to the posture adjustment information, and adjusting the fusion model.
Optionally, the method further includes: determining an amount of difference between the first orientation information and the second orientation information of the movable part; providing an orientation adjustment page to show orientation information with a difference amount larger than a preset difference threshold, wherein the orientation information comprises at least one of first orientation information and second orientation information; and responding to the trigger of the orientation adjustment control in the orientation adjustment page, acquiring orientation adjustment information, and adjusting the node of the movable part according to the orientation adjustment information.
Optionally, the method further includes: determining the adjustment amount of a second node associated with the first node according to the adjustment of the first node of the movable part, and displaying the adjustment amount in an orientation adjustment page; and acquiring adjustment confirmation information according to the triggering of the confirmation control in the orientation adjustment page so as to confirm the adjustment of the second node.
Optionally, the method further comprises at least one of the following steps: when the attitude of the target object is determined to be a preset attitude according to the attitude information of the target object, adjusting parameters of the inertial sensor so as to calibrate the inertial sensor; and acquiring a calibration instruction, and adjusting parameters of the inertial sensor according to the specified preset posture so as to calibrate the inertial sensor.
Optionally, the method further includes the step of calibrating the inertial sensor: determining a confidence level of the second orientation information; and, when the confidence level of the second orientation information meets a confidence threshold, calibrating the inertial sensor according to the difference between the first orientation information and the second orientation information.
Optionally, the method further includes a training step for the fusion model: determining first orientation information of a movable part of the target object from the measurement data of the inertial sensor; determining second orientation information of the movable part from the monocular image data of the target object; inputting the first orientation information and the second orientation information into the fusion model to determine a posture analysis result; and adjusting the fusion model according to a posture labeling result and the posture analysis result to obtain the trained fusion model.
Optionally, the method further includes at least one of the following steps: acquiring a plurality of image data from a plurality of camera assemblies with different orientations, analyzing the movable parts of the target object from the plurality of image data, and determining a posture labeling result; and providing an annotation page, and determining a posture labeling result from the labeling information of the movable parts of the target object.
Optionally, the method further includes: performing data perturbation on the monocular image data to obtain perturbed monocular image data, thereby expanding the amount of image data, wherein the data perturbation includes: random cropping, hole cutting, occlusion, flipping, rotation, and image tone adjustment.
In order to solve the above problem, an embodiment of the present application discloses a data processing method, including: determining first orientation information of a movable part of a target object from measurement data of an inertial sensor; determining second orientation information of the movable part from monocular image data of the target object; inputting the first orientation information and the second orientation information into a fusion model to determine a posture analysis result; and adjusting the fusion model according to a posture labeling result and the posture analysis result to obtain a trained fusion model.
In order to solve the above problem, an embodiment of the present application discloses a data processing method, including: acquiring, from live broadcast data, measurement data of an inertial sensor on a movable part of a target object, and determining first orientation information of the movable part from the measurement data; acquiring monocular image data of the target object from the live broadcast data, and determining second orientation information of the movable part from the monocular image data; fusing the first orientation information and the second orientation information of each movable part according to a trained fusion model to determine posture information of the target object; adjusting each movable part of a virtual object according to the posture information and combining the result with background data of the image data to obtain fused image data; and determining live display data from the fused image data and publishing it for display in a display page.
Optionally, the method further includes: providing a virtual object selection page, wherein a plurality of virtual objects are displayed on the virtual object selection page; acquiring an object selection instruction of the virtual object according to the triggering of the object selection control in the virtual object selection page; and determining the selected virtual object according to the object selection instruction so as to fuse the virtual object into the live display data.
In order to solve the above problem, an embodiment of the present application discloses a data processing method, including: providing a resource data display page to display the resource data; acquiring measurement data of an inertial sensor on a movable part of a target object, and determining first orientation information of the movable part of the target object according to the measurement data; acquiring monocular image data of the target object, and determining second orientation information of a movable part of the target object according to the monocular image data; according to the trained fusion model, fusing the first orientation information and the second orientation information of each movable part to determine the posture information of the target object; and determining a resource control instruction for the resource data according to the posture information of the target object so as to control the resource data and update the resource data displayed in the resource data display page.
In order to solve the above problem, an embodiment of the present application discloses a data processing apparatus, including: a first orientation acquisition module, configured to acquire measurement data of an inertial sensor on a movable part of a target object and determine first orientation information of the movable part from the measurement data; a second orientation acquisition module, configured to acquire monocular image data of the target object and determine second orientation information of the movable part from the monocular image data; and a posture information acquisition module, configured to fuse the first orientation information and the second orientation information of each movable part according to a trained fusion model and determine posture information of the target object.
In order to solve the above problem, an embodiment of the present application discloses a data processing apparatus, including: a measurement data acquisition module, configured to determine first orientation information of a movable part of a target object from measurement data of an inertial sensor; a monocular image acquisition module, configured to determine second orientation information of the movable part from monocular image data of the target object; an analysis result acquisition module, configured to input the first orientation information and the second orientation information into a fusion model and determine a posture analysis result; and a fusion model training module, configured to adjust the fusion model according to a posture labeling result and the posture analysis result to obtain a trained fusion model.
In order to solve the above problem, an embodiment of the present application discloses a data processing apparatus, including: a first orientation determining module, configured to acquire, from live broadcast data, measurement data of an inertial sensor on a movable part of a target object and determine first orientation information of the movable part from the measurement data; a second orientation determining module, configured to acquire monocular image data of the target object from the live broadcast data and determine second orientation information of the movable part from the monocular image data; a posture information determining module, configured to fuse the first orientation information and the second orientation information of each movable part according to a trained fusion model and determine posture information of the target object; a fused image determining module, configured to adjust each movable part of a virtual object according to the posture information and combine the result with background data of the image data to obtain fused image data; and a display data determining module, configured to determine live display data from the fused image data and publish it for display in a display page.
In order to solve the above problem, an embodiment of the present application discloses a data processing apparatus, including: a resource data display module, configured to provide a resource data display page to display resource data; a first orientation acquisition module, configured to acquire measurement data of an inertial sensor on a movable part of a target object and determine first orientation information of the movable part from the measurement data; a second orientation acquisition module, configured to acquire monocular image data of the target object and determine second orientation information of the movable part from the monocular image data; a posture information acquisition module, configured to fuse the first orientation information and the second orientation information of each movable part according to a trained fusion model and determine posture information of the target object; and a resource data control module, configured to determine a resource control instruction from the posture information of the target object, so as to control the resource data and update the resource data displayed in the resource data display page.
In order to solve the above problem, an embodiment of the present application discloses an electronic device, including: a processor; and a memory having executable code stored thereon which, when executed, causes the processor to perform the method of any one of the above embodiments.
In order to solve the above problem, embodiments of the present application disclose one or more machine-readable media having executable code stored thereon which, when executed, causes a processor to perform the method of any one of the above embodiments.
Compared with the prior art, the embodiments of the present application have the following advantages:
in the embodiments of the present application, an inertial sensor can be arranged on each movable part of the target object that is to be detected, so that first orientation information of the movable part is determined from the measurement data of the inertial sensor; monocular image data captured by a camera assembly can be analyzed to determine second orientation information of the movable part; and the first orientation information and the second orientation information are then fused according to a trained fusion model to obtain the posture information of the target object. Because sensor measurement and monocular image analysis are combined, the accuracy of the posture information is improved.
Drawings
FIG. 1 is a schematic flow chart diagram of a data processing method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of a data processing method according to another embodiment of the present application;
FIG. 3 is a schematic flow chart diagram of a data processing method according to yet another embodiment of the present application;
FIG. 4 is a schematic flow chart diagram of a data processing method according to yet another embodiment of the present application;
FIG. 5 is a schematic flow chart diagram of a data processing method according to yet another embodiment of the present application;
FIG. 6 is a schematic flow chart diagram of a data processing method according to yet another embodiment of the present application;
FIG. 7 is a block diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 8 is a schematic block diagram of a data processing apparatus according to another embodiment of the present application;
FIG. 9 is a schematic block diagram of a data processing apparatus according to yet another embodiment of the present application;
FIG. 10 is a schematic block diagram of a data processing apparatus according to yet another embodiment of the present application;
FIG. 11 is a schematic structural diagram of an exemplary apparatus according to an embodiment of the present application.
Detailed Description
To make the above objects, features, and advantages of the present application more comprehensible, the present application is described in further detail below with reference to the accompanying drawings and specific embodiments.
The embodiments of the present application can be applied to scenarios in which the posture of a target object is captured: first orientation information of each movable part of the target object can be acquired through inertial sensors, second orientation information of each movable part can be acquired from a monocular image captured by a camera, and the first orientation information and the second orientation information can then be fused to obtain more accurate posture information.
Specifically, as shown in fig. 1, in this embodiment a target object may be fitted in advance with a plurality of inertial sensors, also referred to as Inertial Measurement Units (IMUs), which detect and measure acceleration and rotational motion. The target object is composed of a plurality of movable parts, and the inertial sensors may be arranged on the respective movable parts (to be detected) to measure their movement. In addition, a camera may be set up in advance to capture the corresponding images, yielding monocular image data, that is, images captured by a single camera; this single camera is used to capture the images from which the posture (orientation) of the target object is analyzed.
In the embodiments of the present application, the measurement data of the target object acquired through the inertial sensor includes the motion direction and motion information of a movable part, where the motion information may include the amount of motion, the rotation direction, the amount of rotation, and the like. The first orientation information of the movable part can be determined from the wearing position of the inertial sensor, the measurement data, and the parameters of the movable part. Orientation information (whether first or second) may include both direction information and position information of the target object: when a movable part translates, its direction does not change, so the new orientation is represented by the change in its position information.
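As a hedged illustration of how such measurement data could be turned into first orientation information, the following sketch integrates angular rate and velocity for one movable part; the simple Euler integration is an assumption of this example, and production systems would use proper quaternion or Kalman filtering:

```python
# Sketch: raw IMU increments -> "first orientation" (direction plus position)
# of a single movable part. Euler integration is a simplifying assumption.
import numpy as np

def update_first_orientation(direction, position, gyro, velocity, dt):
    """direction: unit vector; gyro: rad/s angular velocity; velocity: m/s."""
    direction = direction + np.cross(gyro, direction) * dt   # rotate by w x d
    direction /= np.linalg.norm(direction)                   # keep unit length
    position = position + np.asarray(velocity) * dt          # translate the part
    return direction, position

d, p = np.array([1.0, 0.0, 0.0]), np.zeros(3)
for _ in range(100):   # 1 s of readings at 100 Hz: slow yaw plus forward drift
    d, p = update_first_orientation(d, p, [0.0, 0.0, 0.5], [0.1, 0.0, 0.0], 0.01)
print(d, p)
```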
In the embodiments of the present application, nodes corresponding to the movable parts can be preset, and each movable part can correspond to at least one node, so that the orientation information of the part can be determined by identifying the coordinates of its nodes in the image. Specifically, monocular image data of the target object may be captured by the camera, and the position information (understood as two-dimensional position information) of a target node of a movable part in that data is determined. An identifier may be set for a node of the movable part, for example a color distinct from the part (or a character identifier for each node), so that the position of the node can be recognized through the identifier. The position of a related node can also be analyzed from the associations between nodes. Because node positions are correlated across frames, the position of a node can further be analyzed through related monocular images (frames at least one before or after the image data). Image features of a monocular image can also be extracted by an encoder, and node positions determined by analyzing those features.
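A minimal sketch of the color-identifier variant, assuming the marker color and tolerance are known; a real system would more likely use a learned keypoint detector:

```python
# Illustrative sketch: locate a node as the centroid of pixels matching its
# marker colour. The marker colour and tolerance are assumptions.
import numpy as np

def locate_marker(image, marker_rgb, tol=30):
    """image: HxWx3 uint8 array. Returns (row, col) of the marker centroid, or None."""
    diff = np.abs(image.astype(int) - np.array(marker_rgb)).sum(axis=2)
    ys, xs = np.nonzero(diff < tol)
    if len(xs) == 0:
        return None                      # node occluded or out of frame
    return float(ys.mean()), float(xs.mean())

# Synthetic frame with a red marker patch at rows 40-44, cols 100-104.
frame = np.zeros((480, 640, 3), np.uint8)
frame[40:45, 100:105] = (255, 0, 0)
print(locate_marker(frame, (255, 0, 0)))   # ~ (42.0, 102.0)
```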
After the position information of a node is determined, the depth corresponding to that position may be recognized to obtain depth information, forming the coordinate information (orientation) of the target node of the movable part; this coordinate information can be understood as the node's three-dimensional coordinates. Once the coordinate information is determined, the second orientation information of each movable part may be determined from it. The coordinate information may also be verified against the component parameters of the movable part: for coordinate information that passes verification, the second orientation information is determined directly; for coordinate information that fails verification, the abnormal coordinates can be displayed so that the user can make corresponding adjustments, and the second orientation information is then determined from the adjusted coordinates.
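The following sketch shows one way to lift a detected node to three-dimensional coordinates using its depth value and then derive a part's second orientation from two node coordinates; the pinhole intrinsics are illustrative assumptions:

```python
# Sketch: back-project a pixel with its depth into camera coordinates, then
# take the unit vector between two end nodes as the part's second orientation.
import numpy as np

FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0   # assumed camera intrinsics

def to_3d(u, v, depth):
    """Back-project pixel (u, v) with metric depth into camera coordinates."""
    return np.array([(u - CX) * depth / FX, (v - CY) * depth / FY, depth])

def part_orientation(node_a, node_b):
    """Second orientation: unit vector from one end node of the part to the other."""
    d = node_b - node_a
    return d / np.linalg.norm(d)

elbow = to_3d(300, 220, 2.0)     # pixel position + estimated depth (m)
wrist = to_3d(360, 260, 2.1)
print(part_orientation(elbow, wrist))
```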
After the first orientation information and second orientation information of the movable parts are determined, the embodiments of the present application may fuse them for each movable part according to the trained fusion model to determine the posture information of the target object. The posture information can then be applied according to the scenario. For example, in film and television production, a target object can wear inertial sensors while monocular image data of it is captured by a camera to determine its posture information; the movable parts of a virtual object are then adjusted according to that posture information, so that the virtual object moves along with the target object. As another example, in a live broadcast scenario a person can be replaced by a virtual object: a live broadcast user wears a plurality of inertial sensors while a camera captures monocular images of the user, from which the first and second orientation information, and hence the posture information of the user, are determined; each movable part of the virtual object is then adjusted according to that posture information and combined with background data in the live broadcast data to form virtual image data shown to viewers.
After the posture information of the target object is determined, it can be displayed to the user, who can adjust it (for example, by adjusting the coordinates of the nodes of the movable parts) to make it more accurate. In addition, the fusion model can be adjusted according to these posture adjustments, improving its fusion accuracy. The user adjusts the posture information by adjusting the nodes of the movable parts, and the movable parts of the target object may be associated with one another.
The embodiments of the present application can also calibrate the inertial sensor. An inertial sensor produces errors during measurement, and these errors accumulate, reducing the accuracy of the measured data; calibration therefore improves that accuracy. In an optional embodiment, a preset posture may be configured in advance, and when the motion made by the target object conforms to the preset posture, the parameters of the inertial sensor are adjusted according to the parameters corresponding to that posture. On the one hand, a calibration instruction can be issued manually, and the inertial sensor adjusted according to the preset posture corresponding to that instruction. On the other hand, the posture information can be determined from the first and second orientation information, and the sensor parameters adjusted when the posture information conforms to the preset posture.
In another alternative embodiment, the second orientation information is determined from the monocular image data, and the confidence level of the second orientation information is high when each movable part of the target object in the monocular image data is unobstructed, so that the inertial sensor may be calibrated based on a difference between the first orientation information and the second orientation information when the confidence level of the second orientation information meets a confidence level threshold.
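A hedged sketch of this confidence-gated calibration; the correction gain and threshold are assumptions for illustration:

```python
# Sketch: when the image-based estimate is trustworthy, correct the IMU
# direction by (a fraction of) its difference from the image-based direction.
import numpy as np

def calibrate_imu(first_dir, second_dir, confidence, conf_threshold=0.8, gain=1.0):
    """Returns a corrected IMU direction; no-op when confidence is too low."""
    if confidence < conf_threshold:
        return first_dir                       # image estimate too uncertain
    corrected = first_dir + gain * (np.asarray(second_dir) - first_dir)
    return corrected / np.linalg.norm(corrected)

drifted = np.array([0.95, 0.31, 0.0]); drifted /= np.linalg.norm(drifted)
print(calibrate_imu(drifted, [1.0, 0.0, 0.0], confidence=0.9))
```

With a gain below 1.0 the correction blends the two estimates rather than replacing the drifted one outright.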
The fusion model in the embodiments of the present application fuses the first orientation information and the second orientation information to determine the posture of the target object. For example, weights corresponding to the first and second orientation information may be determined, and the fused orientation information computed from those weights, from which the posture information of the target object follows. Correspondingly, during training, the embodiments input the first and second orientation information into the fusion model to determine a posture analysis result (which can be understood as posture information), obtain the posture labeling result corresponding to those inputs, and adjust the fusion model according to the difference between the posture labeling result and the posture analysis result.
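One plausible reading of this weighting scheme is sketched below: a tiny "fusion model" maps input features (for example, image confidence and time since the last sensor calibration) to a per-part weight, and the fused direction is the weighted, renormalized sum. The two-feature logistic form is an assumption of the example:

```python
# Sketch: logistic per-part weighting as a stand-in for the fusion model.
import numpy as np

def fusion_weight(theta, features):
    """Tiny 'fusion model': logistic weight from assumed input features."""
    return 1.0 / (1.0 + np.exp(-np.dot(theta, features)))

def fuse_directions(first_dir, second_dir, w_imu):
    fused = w_imu * np.asarray(first_dir) + (1.0 - w_imu) * np.asarray(second_dir)
    return fused / np.linalg.norm(fused)

theta = np.array([-2.0, 1.5])                    # learned parameters (illustrative)
w = fusion_weight(theta, np.array([0.9, 0.2]))   # high image confidence -> low IMU weight
print(fuse_directions([1.0, 0.0, 0.0], [0.9, 0.44, 0.0], w))
```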
For the image data, the embodiments of the present application may apply data perturbation to the monocular image data to expand its volume; for example, the following perturbations may be applied: random cropping, hole cutting, occlusion, flipping, rotation, and image tone adjustment.
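A sketch of these perturbations on a numpy image, with illustrative parameters; note that a real pipeline must also apply the same geometric changes to the node labels, which this example omits:

```python
# Sketch of the listed perturbations; "hole cutting/occlusion" is a zeroed
# rectangle and "tone adjustment" a per-channel gain. Parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)

def perturb(image):
    """Perturb an HxWx3 uint8 image, keeping its shape unchanged."""
    img = image.copy()
    h, w = img.shape[:2]
    # Random crop, padded back so the output shape stays HxWx3.
    top, left = int(rng.integers(0, h // 10)), int(rng.integers(0, w // 10))
    img = np.pad(img[top:, left:], ((top, 0), (left, 0), (0, 0)), mode="edge")
    # Hole cutting / occlusion: zero out a random rectangle.
    y, x = int(rng.integers(0, h - 40)), int(rng.integers(0, w - 40))
    img[y:y + 40, x:x + 40] = 0
    # Flip and (180-degree) rotation, both shape-preserving.
    if rng.random() < 0.5:
        img = img[:, ::-1]
    if rng.random() < 0.5:
        img = img[::-1, ::-1]
    # Tone adjustment: small random gain per colour channel.
    gains = rng.uniform(0.8, 1.2, size=3)
    return np.clip(img * gains, 0, 255).astype(np.uint8)

print(perturb(np.full((480, 640, 3), 128, np.uint8)).shape)   # (480, 640, 3)
```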
On the one hand, the embodiments of the present application can provide an annotation page in which the image data, the first orientation information, and the second orientation information are displayed; the user can adjust the first or second orientation information to form a posture labeling result. The posture analysis result corresponding to the two orientations can also be displayed in the annotation page for the user to adjust into a posture labeling result. On the other hand, the orientation of each movable part of the target object can be analyzed from multi-view image data to form a posture labeling result. The multi-view image data is captured by a plurality of camera assemblies with different orientations, for example arranged around the target object; the analysis uses the distributed positions of the camera assemblies together with the captured images to determine the posture labeling result. Posture information determined from multi-view image data can thus serve directly as a labeling result, making good use of existing (multi-view) posture information. When the trained fusion model is applied, the posture of the target object is captured from monocular image data alone; compared with multi-view analysis, which requires the exact positions of all camera assemblies, the approach of these embodiments captures posture more simply and conveniently.
The embodiments of the present application can be applied to posture capture scenarios: inertial sensors can be worn on the movable parts of the target object so that first orientation information is determined from their measurement data; monocular image data can be captured by a camera to determine second orientation information; and the two can then be fused to obtain the posture of the target object. Since the embodiments optimize the analysis process of posture capture itself, they can be applied to any scenario that captures posture from monocular image data and sensor measurement data. For example, in film and television production, a target object (such as a person, an animal, or a mechanical device) can wear inertial sensors while a camera captures monocular image data of it to determine its posture information; the movable parts of a virtual object are then adjusted according to that posture information, so that the virtual object moves along with the target object.
As another example, in a live broadcast scenario a live broadcast user can be replaced by a virtual object: the user wears a plurality of inertial sensors while a camera captures monocular images of the user, from which the first and second orientation information, and hence the posture information, are determined; each movable part of the virtual object is then adjusted according to that posture information and combined with background data in the live broadcast data to form virtual image data (or live display data) shown to viewers.
As yet another example, the embodiments of the present application can be applied to scenarios that capture postures and use them for interaction, such as games: a user's posture can be captured, the resource control instruction (game control instruction) corresponding to that posture determined, the resource (game application) processed accordingly, and the corresponding resource data (game data) fed back to the user for interaction.
An embodiment of the present application provides a data processing method that can be applied to a server, which interacts with data acquisition devices (such as inertial sensors and cameras) to capture postures from the acquired data. It should be noted that, although the server is used here as the example, the method can also be applied to a terminal device equipped with a data acquisition module (such as an inertial sensor and a camera), a computation module, and a display module for feedback, so that data acquisition, computation, and feedback are all completed on the terminal device. Taking server-side data processing as the example, and as shown in fig. 2, the method includes:
step 202, measurement data of the inertial sensor on the movable part of the target object is acquired, and first orientation information of the movable part of the target object is determined according to the measurement data. In the embodiment of the application, the server and the inertial sensor can be directly connected for interaction; and the connection can be carried out through other transit equipment to carry out interaction. The target object is composed of a plurality of movable parts, each movable part can complete at least one of movement, rotation and the like, the inertial sensor can be worn on the movable part of the target object, and the server can acquire the measurement data of each inertial sensor and determine the orientation of the movable part by combining the wearing position of the inertial sensor. Specifically, as an alternative embodiment, the determining the first orientation information of the movable component of the target object according to the measurement data includes: first orientation information of the movable part of the target object is determined according to the wearing position of the inertial sensor and the measurement data. The inertial sensor can be worn on each movable part of the target object, and the embodiment of the application can record the wearing position of the inertial sensor and record measurement data such as the moving direction, the moving distance, the rotating direction and the rotating amount of each movable part so as to determine the first orientation information of the movable part. In addition, in an alternative embodiment, the wearing position input by the user into the server may not be accurate enough, so that the embodiment of the present application may correct the wearing position through the difference between the first orientation information and the second orientation information. If the wearing position mark is wrong, the measurement accuracy of the inertial sensor is high, and therefore, the difference between the first orientation information and the second orientation information can be caused continuously, and therefore, the wearing position can be calibrated according to the difference between the first orientation information and the second orientation information so as to improve the accuracy of posture capture. It should be noted that, in the embodiment of the present application, it is described that the inertial sensor is used to detect the movement, the rotation, and the like of the movable part, and the embodiment of the present application may also use other sensors for detecting the movement and the rotation to perform data detection, which may be configured according to the requirement, for example, for a human joint, the measurement data may be obtained by other motion capture sensors (such as an attitude sensor).
On the other hand, in step 204, the embodiment of the present application may acquire monocular image data of the target object, and determine second orientation information of the movable part of the target object according to the monocular image data. Monocular image data can be understood as an image shot by a single camera shooting assembly, and the position of the camera shooting assembly in the embodiment of the application can be fixed or can be moved, so that the monocular image data can be conveniently applied to more scenes.
The movable part can correspond to at least one node so as to be convenient for positioning the movable part in the monocular image data. Specifically, as an optional embodiment, the determining second orientation information of the movable part of the target object according to the monocular image data includes: determining the position information of a target node of the movable part according to the monocular image data; determining depth information corresponding to the position information in the monocular image data to form coordinate information of a target node of the movable part; and determining second orientation information of the movable part according to the coordinate information of each node of the movable part.
Specifically, as an optional embodiment, the determining the position information of the target node of the movable part according to the monocular image data includes at least one of the following steps: acquiring related monocular images preceding and/or following the monocular image data, and determining the position information of the target node from the related position information of that node in the related images; determining the position of a target identifier in the monocular image data to determine the position information of the target node; and determining the position information of a second node associated with a first node of the movable part from the position information of the first node. First, the position of a node can be obtained from the correlation of its positions across multiple frames. Second, an identifier may be set for the node, for example a color distinct from the movable part (or a character identifier), so that the node's position can be recognized through the identifier. Third, the positions of related nodes can be analyzed from the associations between nodes in the monocular image data.
After the two-dimensional position information of each node is determined, depth recognition may be performed on the monocular image data to determine depth information; the three-dimensional coordinates of the node are then constructed from the depth information and the position information, and the orientation of the movable part is determined from those three-dimensional coordinates. Specifically, as an optional embodiment, the determining second orientation information of the movable part according to the coordinate information of each node of the movable part includes: verifying the coordinate information against the component parameters of the movable part to form a verification result; and determining the second orientation information from the coordinate information whose verification result is normal. The component parameters of a movable part (such as length, width, height, maximum rotation, and maximum extension) are bounded, so if the coordinates of the nodes composing the part imply dimensions outside those bounds, the coordinates are likely erroneous.
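A minimal sketch of this verification, assuming illustrative length limits in metres for a hypothetical "forearm" part:

```python
# Sketch: reject reconstructed coordinates whose implied part length falls
# outside that part's physical limits. Limits here are assumptions.
import numpy as np

PART_LENGTH_LIMITS = {"forearm": (0.20, 0.35)}   # assumed min/max length (m)

def verify_part(name, node_a, node_b):
    """Returns True when the node pair is consistent with the part parameters."""
    length = np.linalg.norm(np.asarray(node_b) - np.asarray(node_a))
    lo, hi = PART_LENGTH_LIMITS[name]
    return lo <= length <= hi

print(verify_part("forearm", (0, 0, 2.0), (0.1, 0.25, 2.0)))   # ~0.27 m -> True
print(verify_part("forearm", (0, 0, 2.0), (0.8, 0.0, 2.0)))    # 0.8 m  -> False
```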
Coordinate information that fails verification (abnormal) may be shown to the user, who can adjust it to correct the coordinates and obtain a more accurate orientation of the movable part. Specifically, as an optional embodiment, the determining the second orientation information of the movable part according to the coordinate information of each node further includes: providing a coordinate adjustment page to show the coordinate information whose verification result is abnormal; acquiring coordinate adjustment information in response to triggering of a coordinate adjustment control on that page, and determining the adjusted coordinate information accordingly; and determining the second orientation information of the movable part from the adjusted coordinate information. The coordinate adjustment page shows the abnormal coordinate information and contains a coordinate adjustment control that the user can trigger to adjust, and thereby correct, the coordinate information, improving the accuracy of the second orientation information.
After the first orientation information and the second orientation information are determined, they may be fused for each movable part according to the trained fusion model in step 206 to determine the posture information of the target object. The posture information of the target object is composed of the orientation information of its movable parts. During capture of the monocular image data the camera may move or rotate, so the first and second orientation information may fail to correspond (they are not in the same coordinate system); the fusion model can therefore determine an adjustment amount for the first or second orientation information by comparing the posture composed of the first orientation information with the posture composed of the second, and adjust one of them so that a correspondence is established between the two coordinate systems. After the posture information is determined, it may be presented to the user for adjustment. Specifically, as an optional embodiment, the method further includes: providing a posture display page to display the posture information; acquiring posture adjustment information upon triggering of a posture adjustment control in the posture display page; and adjusting the posture information of the target object according to the posture adjustment information, and adjusting the fusion model, thereby further improving the model's accuracy.
In addition, after the first and second orientation information are determined, the magnitude of the difference between them may be computed, and information with an excessive difference presented to the user for adjustment. Specifically, as an optional embodiment, the method further includes: determining the amount of difference between the first orientation information and the second orientation information of a movable part; providing an orientation adjustment page to show the orientation information whose difference exceeds a preset difference threshold, the shown information including at least one of the first orientation information and the second orientation information; and, in response to triggering of an orientation adjustment control in the orientation adjustment page, acquiring orientation adjustment information and adjusting the nodes of the movable part accordingly. By letting the user adjust the first or second orientation information in this page, the accuracy of the orientation information, and hence of the posture information, is improved.
The movable parts of the target object may be associated with one another, so the embodiments of the present application can adjust multiple nodes jointly by exploiting the associations between the nodes of a movable part. Specifically, as an optional embodiment, the method further includes: determining the adjustment amount of a second node associated with a first node according to the adjustment of the first node of the movable part, and displaying that amount in the orientation adjustment page; and acquiring adjustment confirmation information upon triggering of a confirmation control in the orientation adjustment page to confirm the adjustment of the second node. The second node associated with the first node is thus adjusted along with the user's adjustment of the first node and shown in the orientation adjustment page, where a confirmation control lets the user confirm whether the adjustment of the second node is accurate (or adjust it further), making it convenient to adjust the orientation of the movable part.
After determining the pose information, the pose information may be correspondingly applied according to different scenarios, and specifically, as an optional embodiment, the method further includes: and adjusting each movable part of the virtual object according to the posture information of the target object, and combining background data of the image data to form virtual image data. The method and the device can be applied to capturing the gesture of the target object and adjusting the scene of the movable part of the virtual object according to the gesture, so that the target object is replaced by the virtual object to form virtual image data. For example, the method can be applied to live scenes and scenes for movie production, and can be specifically adjusted according to requirements.
During the measurement process of the inertial sensor, errors may be generated in each measurement, and the errors are accumulated continuously, which may result in a decrease in the accuracy of the data measured by the inertial sensor. Therefore, the embodiment of the application can calibrate the inertial sensor to improve the accuracy of the measured data. Specifically, as an optional embodiment, the method further includes at least one of the following steps: when the attitude of the target object is determined to be a preset attitude according to the attitude information of the target object, adjusting parameters of the inertial sensor so as to calibrate the inertial sensor; and acquiring a calibration instruction, and adjusting parameters of the inertial sensor according to the specified preset posture so as to calibrate the inertial sensor. On one hand, the preset posture can be preset, and when the action made by the target object accords with the preset posture, the parameters of the inertial sensor are adjusted according to the parameters corresponding to the preset posture.
On the other hand, in the embodiment of the present application, the inertial sensor may be calibrated using second orientation information of high reliability. Specifically, as an optional embodiment, the method further includes the following calibration steps: determining a confidence level of the second orientation information; and, when the confidence level of the second orientation information meets a confidence threshold, calibrating the inertial sensor according to the difference between the first orientation information and the second orientation information. The second orientation information is determined from the monocular image data, and its reliability is high when the movable parts of the target object are not occluded in that data; the embodiment of the application therefore analyzes the reliability of the orientation information of each movable part in the monocular image data and, when the confidence of the second orientation information meets the threshold, calibrates the inertial sensor according to the difference between the two orientation estimates, as in the sketch below.
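A minimal sketch of confidence-gated calibration; the confidence threshold, the low-pass gain, and the bias-based correction are assumptions for illustration.

```python
def calibrate_from_image(first_orient, second_orient, confidence,
                         bias, conf_threshold=0.9, gain=0.1):
    """Nudge the IMU bias toward the image-based estimate when it is reliable.
    The low-pass gain avoids overreacting to a single frame."""
    if confidence < conf_threshold:
        return bias  # occluded or ambiguous view; do not calibrate
    return bias + gain * (first_orient - second_orient)
```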
In the training process of the fusion model, the embodiment of the present application may train the fusion model with first orientation information and second orientation information configured with a posture labeling result. Specifically, as an optional embodiment, the method further includes the following training steps: determining first orientation information of a movable part of the target object according to the measurement data of the inertial sensor; determining second orientation information of the movable part according to the monocular image data of the target object; inputting the first orientation information and the second orientation information into the fusion model to determine a posture analysis result; and adjusting the fusion model according to the posture labeling result and the posture analysis result to obtain the trained fusion model. The embodiment of the application can also augment the monocular image data to increase the amount of training data and improve the accuracy of the fusion model. Specifically, as an optional embodiment, the method further includes: performing data perturbation on the monocular image data to obtain perturbed monocular image data, so as to expand the amount of image data, where the data perturbation includes: random cropping, cutout (hole digging), occlusion, flipping, rotation, and image tone adjustment. One possible augmentation pipeline is sketched below.
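One possible realization of the listed perturbations using torchvision; the exact parameters are illustrative, and RandomErasing stands in for the cutout/occlusion perturbation (it must run after ToTensor()).

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224),               # random cropping
    transforms.RandomHorizontalFlip(p=0.5),          # flipping
    transforms.RandomRotation(degrees=15),           # rotation
    transforms.ColorJitter(brightness=0.2, hue=0.1), # tone adjustment
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5),                 # cutout / occlusion
])
# Each training image yields a differently perturbed copy per epoch,
# multiplying the effective amount of monocular training data.
```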
In the embodiment of the present application, the posture labeling result may be obtained by manual annotation or by multi-view image analysis. Specifically, as an optional embodiment, the method further includes at least one of the following steps: acquiring a plurality of image data from a plurality of camera assemblies in different orientations, analyzing the movable part of the target object according to the plurality of image data, and determining the posture labeling result; and providing an annotation page, and determining the posture labeling result according to annotation information for the movable part of the target object. The annotation page can display the monocular image data so that the user can annotate the posture of the target object to form the posture labeling result. Alternatively, the coordinates of the camera assemblies and the multi-view image data (image data shot by camera assemblies in different orientations) can be analyzed to recover the orientation of each movable part of the target object and thereby determine the posture labeling result, for example by triangulation as sketched below.
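A sketch of deriving a 3D label from two calibrated views; the projection matrices P1 and P2 (3x4, from camera calibration) are assumed given, and a two-camera setup is an illustrative simplification of the multi-camera case.

```python
import numpy as np
import cv2

def triangulate_joint(P1, P2, uv1, uv2):
    """Triangulate one joint from its 2D detections in two views."""
    pts1 = np.asarray(uv1, dtype=np.float64).reshape(2, 1)
    pts2 = np.asarray(uv2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4x1 homogeneous point
    return (X_h[:3] / X_h[3]).ravel()                # 3D joint position

# Repeating this for every joint yields the 3D skeleton used as the
# posture labeling result for training the fusion model.
```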
In the embodiment of the application, an inertial sensor can be arranged on the movable part to be detected of the target object, so that first orientation information of the movable part is determined according to the measurement data of the inertial sensor; the monocular image data shot by the camera assembly can be analyzed to determine second orientation information of the movable part; and the first orientation information and the second orientation information are then fused according to the trained fusion model to obtain the posture information of the target object. By fusing the sensor-based measurement with the monocular image analysis, the accuracy of the posture information can be improved.
On the basis of the foregoing embodiments, an embodiment of the present application further provides a data processing method that can improve the accuracy of posture capture; specifically, as shown in fig. 3, the method includes:
Step 302, acquiring measurement data of an inertial sensor on a movable part of the target object.
Step 304, determining first orientation information of the movable part of the target object according to the wearing position of the inertial sensor and the measurement data.
Step 306, acquiring monocular image data of the target object.
Step 308, determining the position information of the target node of the movable part according to the monocular image data. As an alternative embodiment, this determining includes at least one of the following steps: acquiring related monocular images before and/or after the monocular image data, and determining the position information of the target node in the monocular image data according to the related position information of the target node of the movable part in the related monocular images (a temporal-smoothing sketch of this alternative follows); determining the position of a target identifier in the monocular image data to determine the position information of the target node of the movable part; and determining the position information of a second node associated with a first node according to the position information of the first node of the movable part in the monocular image data.
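A minimal sketch of the first alternative, using adjacent ("related") frames to stabilize a joint's 2D position; the blend weight and window are illustrative assumptions.

```python
import numpy as np

def smooth_node_position(detected_xy, previous_xy, next_xy=None, alpha=0.6):
    """Blend the current detection with positions of the same joint in
    neighboring frames; a joint cannot jump far between adjacent frames."""
    neighbors = [previous_xy] + ([next_xy] if next_xy is not None else [])
    prior = np.mean(neighbors, axis=0)
    return alpha * np.asarray(detected_xy) + (1 - alpha) * prior
```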
Step 310, determining depth information corresponding to the position information in the monocular image data to form coordinate information of the target node of the movable part.
Step 312, determining second orientation information of the movable part according to the coordinate information of each node of the movable part. As an alternative embodiment, this includes: checking the coordinate information against the component parameters of the movable part to form a verification result; and determining the second orientation information of the movable part according to the coordinate information whose verification result is normal. As a further alternative, a coordinate adjustment page may be provided to show coordinate information with an abnormal verification result; coordinate adjustment information is acquired in response to the triggering of a coordinate adjustment control of the coordinate adjustment page, the adjusted coordinate information is determined according to the coordinate adjustment information, and the second orientation information of the movable part is then determined according to the adjusted coordinate information. A sketch of the back-projection and verification appears below.
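A sketch of steps 310-312: each 2D joint is back-projected with its depth value into camera coordinates, and bone lengths are then verified against the component parameters; joints failing the check would be excluded or sent to the coordinate adjustment page. The intrinsics fx, fy, cx, cy and the length tolerance are assumed known.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """2D pixel + depth -> 3D point in camera coordinates."""
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def verify_bone(p_a, p_b, expected_len, tol=0.15):
    """Check a bone's length against the component parameter (in meters)."""
    length = np.linalg.norm(p_a - p_b)
    return abs(length - expected_len) / expected_len <= tol
```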
Step 314, fusing the first orientation information and the second orientation information of each movable part according to the trained fusion model, and determining the posture information of the target object (one plausible architecture for the fusion model is sketched below). As an optional embodiment, the method further comprises: providing a posture display page to display the posture information; acquiring posture adjustment information according to the triggering of a posture adjustment control in the posture display page; and adjusting the posture information of the target object according to the adjustment information, and adjusting the fusion model. As an optional embodiment, the method further comprises at least one of the following steps: when the posture of the target object is determined, according to the posture information of the target object, to be a preset posture, adjusting parameters of the inertial sensor so as to calibrate the inertial sensor; and acquiring a calibration instruction, and adjusting parameters of the inertial sensor according to a specified preset posture so as to calibrate the inertial sensor. As an optional embodiment, the method further comprises a training step of the fusion model: determining first orientation information of a movable part of the target object according to the measurement data of the inertial sensor; determining second orientation information of the movable part of the target object according to the monocular image data of the target object; inputting the first orientation information and the second orientation information into the fusion model, and determining a posture analysis result; and adjusting the fusion model according to the posture labeling result and the posture analysis result to obtain the trained fusion model.
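The patent does not fix the fusion model's architecture; one plausible form is a small MLP that maps the concatenated (flattened) first and second orientation vectors of all movable parts to fused per-joint orientations. Joint count and quaternion encoding are assumptions.

```python
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    def __init__(self, n_joints=17, orient_dim=4):  # e.g., quaternion per joint
        super().__init__()
        in_dim = 2 * n_joints * orient_dim          # first + second orientations
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_joints * orient_dim),  # fused orientation per joint
        )

    def forward(self, first_orient, second_orient):
        # Inputs: (batch, n_joints * orient_dim) each, already flattened.
        x = torch.cat([first_orient, second_orient], dim=-1)
        return self.net(x)
```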
Step 316, adjusting each movable part of the virtual object according to the posture information of the target object, and combining background data of the image data to form virtual image data, so as to display the virtual image data in the display page.
In the embodiment of the application, an inertial sensor may be arranged on the movable part to be detected of the target object so that measurement data is acquired through the inertial sensor, and the first orientation information of the movable part of the target object may then be determined by combining the measurement data with the wearing position of the inertial sensor. The monocular image data shot by the camera assembly can be acquired; the position information of the nodes of the movable part in the monocular image data is identified, the corresponding depth information is determined to form the coordinate information of the nodes, and the second orientation information of the movable part is determined from these coordinates. After the first orientation information and the second orientation information are determined, they may be fused to obtain the posture information of the target object, which can then be processed according to the application scenario; for example, each movable part of the virtual object may be adjusted according to the posture information of the target object and combined with background data of the image data to form virtual image data displayed in the display page.
On the basis of the foregoing embodiments, an embodiment of the present application further provides a data processing method that trains the fusion model using first orientation information and second orientation information labeled with a posture labeling result, so that posture analysis can be performed on the target object with the trained model; specifically, as shown in fig. 4, the method includes:
Step 402, determining first orientation information of a movable part of the target object according to the measurement data of the inertial sensor.
Step 404, determining second orientation information of the movable part of the target object according to the monocular image data of the target object.
Step 406, inputting the first orientation information and the second orientation information into the fusion model, and determining a posture analysis result.
Step 408, adjusting the fusion model according to the posture labeling result and the posture analysis result to obtain the trained fusion model.
The implementation of this embodiment is similar to that of the foregoing embodiments; refer to their detailed description, which is not repeated here.
In the embodiment of the application, the fusion model can be trained according to first orientation information and second orientation information labeled with a posture labeling result; the first orientation information is determined from the measurement data of the inertial sensor, and the second orientation information from the monocular image data shot by the camera assembly. Once determined, the two are input into the fusion model to obtain the posture analysis result of the target object (which may be understood as posture information); the adjustment amount of the fusion model is then determined from the difference between the posture labeling result and the posture analysis result, and the fusion model is adjusted accordingly to obtain the trained fusion model. A minimal training loop is sketched below.
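A minimal training loop for steps 402-408, assuming the FusionModel sketched earlier and a data loader yielding (first_orient, second_orient, pose_label) triples produced by manual annotation or multi-view analysis; the MSE loss is an illustrative choice.

```python
import torch

def train_fusion(model, loader, epochs=10, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()   # difference between analysis and label
    for _ in range(epochs):
        for first, second, label in loader:
            pred = model(first, second)   # posture analysis result
            loss = loss_fn(pred, label)   # vs. the posture labeling result
            opt.zero_grad()
            loss.backward()               # gradient = adjustment amount
            opt.step()                    # adjust the fusion model
    return model
```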
On the basis of the foregoing embodiments, the embodiment of the present application further provides a data processing method applicable to a live broadcast scene, in which the live broadcaster can be replaced with a virtual object to make the broadcast more engaging; specifically, as shown in fig. 5, the method includes:
Step 502, obtaining measurement data of an inertial sensor on a movable part of the target object from the live broadcast data, and determining first orientation information of the movable part of the target object according to the measurement data.
Step 504, obtaining monocular image data of the target object from the live broadcast data, and determining second orientation information of the movable part of the target object according to the monocular image data.
Step 506, fusing the first orientation information and the second orientation information of each movable part according to the trained fusion model, and determining the posture information of the target object.
Step 508, adjusting each movable part of the virtual object according to the posture information of the target object, and combining background data of the image data to obtain fused image data.
Step 510, determining live display data according to the fused image data, and issuing the live display data for display in a display page. As an optional embodiment, the method further comprises: providing a virtual object selection page on which a plurality of virtual objects are displayed; acquiring an object selection instruction for a virtual object according to the triggering of an object selection control in the virtual object selection page; and determining the selected virtual object according to the object selection instruction, so as to fuse that virtual object into the live display data. An object selection control can be configured for each displayed virtual object, and the user triggers the control to select the corresponding virtual object, so that the live user is replaced with the selected virtual object and the broadcast becomes more engaging.
The implementation of this embodiment is similar to that of the foregoing embodiments; refer to their detailed description, which is not repeated here.
The embodiment of the application can be applied to posture capture of a live broadcast user: the user's posture is captured, and the user is replaced with a virtual object to make the broadcast more engaging. Specifically, the live user can wear a plurality of inertial sensors while a camera shoots a monocular image of the user, so that the first orientation information and the second orientation information, and in turn the posture information, of the live user are determined. Each movable part of the virtual object is then adjusted according to the posture information of the live user and combined with the background data in the live data to form the fused image data. Finally, live display data is determined from the fused image data and delivered to the viewers of the broadcast. The compositing step is sketched below.
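A sketch of the final compositing step of this pipeline, assuming the posed avatar has been rendered as an RGBA frame; rendering and retargeting themselves are out of scope here.

```python
import numpy as np

def composite(avatar_rgba, background_rgb):
    """Alpha-blend the posed virtual object over the live background frame."""
    alpha = avatar_rgba[..., 3:4].astype(np.float32) / 255.0
    fg = avatar_rgba[..., :3].astype(np.float32)
    bg = background_rgb.astype(np.float32)
    return (alpha * fg + (1.0 - alpha) * bg).astype(np.uint8)

# The fused frame is then encoded into the live display data and pushed
# to viewers' display pages.
```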
On the basis of the foregoing embodiments, the present application further provides a data processing method applicable to scenes that capture a user's posture and interact according to it (e.g., a game scene), giving corresponding feedback based on the posture to make the interaction more engaging; specifically, as shown in fig. 6, the method includes:
Step 602, providing a resource data display page to display the resource data.
Step 604, acquiring measurement data of the inertial sensor on the movable part of the target object, and determining first orientation information of the movable part of the target object according to the measurement data.
Step 606, acquiring monocular image data of the target object, and determining second orientation information of the movable part of the target object according to the monocular image data.
Step 608, fusing the first orientation information and the second orientation information of each movable part according to the trained fusion model, and determining the posture information of the target object.
Step 610, determining a resource control instruction for the resource data according to the posture information of the target object, so as to control the resource data, and updating the resource data displayed in the resource data display page.
The implementation of this embodiment is similar to that of the foregoing embodiments; refer to their detailed description, which is not repeated here.
The embodiment of the application can be applied to interactive scenes such as games, teaching, and smart-home control. Taking a game scene as an example, the resource data (such as game data) can be displayed in the resource data display page, together with gesture prompt information for controlling the resource data, so that the user can make the corresponding posture. Measurement data of the movable parts (such as the arms and legs) of the target object (such as the user) is acquired, and the first orientation information of the movable parts is determined from it; the monocular image data of the target object is acquired, and the second orientation information of the movable parts is determined from it. The first orientation information and the second orientation information are then input into the trained fusion model to determine the posture information of the target object, a resource control instruction (e.g., a game control instruction) corresponding to the posture information is determined, and the resource (e.g., a game application) performs the corresponding processing according to the instruction and feeds the resulting resource data (game data) back to the user, completing the interaction. A sketch of such a posture-to-command mapping follows.
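A sketch of mapping the fused posture information to a resource control instruction; the joint-feature names, gesture rules, and commands are illustrative assumptions (y increases upward here).

```python
GESTURES = {"raise_right_arm": "jump", "squat": "crouch"}

def pose_to_command(pose):
    """Classify the posture into a gesture and return its control command."""
    if pose["right_wrist_y"] > pose["head_y"]:        # hand raised above head
        return GESTURES["raise_right_arm"]
    if pose["hip_y"] < 0.6 * pose["standing_hip_y"]:  # hips well below standing
        return GESTURES["squat"]
    return None  # no recognized gesture; resource data stays unchanged

print(pose_to_command({"right_wrist_y": 1.9, "head_y": 1.7,
                       "hip_y": 1.0, "standing_hip_y": 1.0}))  # 'jump'
```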
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts, but those skilled in the art will recognize that the embodiments of the application are not limited by the order of the acts described, as some steps may be performed in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts involved are not necessarily required by the embodiments of the application.
On the basis of the foregoing embodiment, this embodiment further provides a data processing apparatus, and with reference to fig. 7, the data processing apparatus may specifically include the following modules:
A first orientation obtaining module 702, configured to obtain measurement data of the inertial sensor on the movable part of the target object, and determine first orientation information of the movable part of the target object according to the measurement data.
The second orientation obtaining module 704 is configured to obtain monocular image data of the target object, and determine second orientation information of the movable component of the target object according to the monocular image data.
A posture information obtaining module 706, configured to fuse the first orientation information and the second orientation information of each movable part according to the trained fusion model, and determine the posture information of the target object.
In summary, in the embodiment of the present application, an inertial sensor may be arranged on the movable part to be detected of the target object, so as to determine the first orientation information of the movable part according to the measurement data of the inertial sensor; the monocular image data shot by the camera assembly can be analyzed to determine the second orientation information of the movable part; and the first orientation information and the second orientation information are then fused according to the trained fusion model to obtain the posture information of the target object. By fusing the sensor-based measurement with the monocular image analysis, the accuracy of the posture information can be improved.
On the basis of the foregoing embodiment, this embodiment further provides a data processing apparatus, which may specifically include the following modules:
and the measurement data acquisition processing module is used for acquiring the measurement data of the inertial sensor on the movable part of the target object.
And the first orientation acquisition processing module is used for determining first orientation information of the movable part of the target object according to the wearing position of the inertial sensor and the measurement data.
And the monocular image acquiring and processing module is used for acquiring the monocular image data of the target object.
And the position information acquisition processing module is used for determining the position information of the target node of the movable part according to the monocular image data.
And the coordinate information acquisition processing module is used for determining depth information corresponding to the position information in the monocular image data so as to form coordinate information of a target node of the movable part.
And the second orientation acquisition processing module is used for determining the second orientation information of the movable part according to the coordinate information of each node of the movable part.
And the orientation information fusion processing module is used for fusing the first orientation information and the second orientation information of each movable part according to the trained fusion model and determining the posture information of the target object.
And the virtual image acquisition processing module is used for adjusting each movable part of the virtual object according to the posture information of the target object and forming virtual image data by combining background data of the image data.
In the embodiment of the application, an inertial sensor may be arranged on the movable part to be detected of the target object to acquire measurement data, and the first orientation information of the movable part may then be determined by combining the measurement data with the wearing position of the inertial sensor. The monocular image data shot by the camera assembly is acquired; the position information of the nodes of the movable part is identified from it, the corresponding depth information is determined to form the coordinate information of the nodes, and the second orientation information of the movable part is determined. After the first orientation information and the second orientation information are determined, they may be fused to obtain the posture information of the target object, which can then be processed according to the application scenario; for example, each movable part of the virtual object may be adjusted according to the posture information of the target object and combined with background data of the image data to form virtual image data.
On the basis of the foregoing embodiment, this embodiment further provides a data processing apparatus, and with reference to fig. 8, the data processing apparatus may specifically include the following modules:
a measurement data obtaining module 802, configured to determine first orientation information of the movable component of the target object according to the measurement data of the inertial sensor.
And a monocular image obtaining module 804, configured to determine second orientation information of the movable component of the target object according to the monocular image data of the target object.
An analysis result obtaining module 806, configured to input the first orientation information and the second orientation information into the fusion model, and determine a posture analysis result.
A fusion model training module 808, configured to adjust the fusion model according to the posture labeling result and the posture analysis result to obtain the trained fusion model.
In summary, in the embodiment of the present application, the fusion model may be trained according to first orientation information and second orientation information labeled with a posture labeling result; the first orientation information may be determined from the measurement data of the inertial sensor, and the second orientation information from the monocular image data captured by the camera assembly. After the first orientation information and the second orientation information are determined, they may be input into the fusion model to determine the posture analysis result of the target object (which may be understood as posture information); the adjustment amount of the fusion model is then determined from the difference between the posture labeling result and the posture analysis result, and the fusion model is adjusted accordingly to obtain the trained fusion model.
On the basis of the foregoing embodiment, this embodiment further provides a data processing apparatus, and with reference to fig. 9, the data processing apparatus may specifically include the following modules:
a first orientation determining module 902, configured to obtain measurement data of the inertial sensor on the movable part of the target object from the live data, and determine first orientation information of the movable part of the target object according to the measurement data.
And a second orientation determining module 904, configured to obtain monocular image data of the target object from the live data, and determine second orientation information of the movable component of the target object according to the monocular image data.
A posture information determining module 906, configured to fuse the first orientation information and the second orientation information of each movable part according to the trained fusion model, and determine the posture information of the target object.
A fused image determining module 908, configured to adjust each movable part of the virtual object according to the posture information of the target object, and combine background data of the image data to obtain fused image data.
A display data determining module 910, configured to determine live display data according to the fused image data, and issue the live display data for display in a display page. As an optional embodiment, the apparatus further comprises: an object selection page providing and processing module, configured to provide a virtual object selection page on which a plurality of virtual objects are displayed; an object selection instruction acquisition processing module, configured to acquire an object selection instruction for a virtual object according to the triggering of an object selection control in the virtual object selection page; and a virtual object determination processing module, configured to determine the selected virtual object according to the object selection instruction, so as to fuse that virtual object into the live display data. An object selection control can be configured for each displayed virtual object, and the user triggers the control to select the corresponding virtual object, so that the live user is replaced with the selected virtual object and the broadcast becomes more engaging.
In summary, the embodiment of the application can be applied to posture capture of a live broadcast user: the user's posture is captured, and the user is replaced with a virtual object to make the broadcast more engaging. Specifically, the live user can wear a plurality of inertial sensors while a camera shoots a monocular image of the user, so that the first orientation information and the second orientation information, and in turn the posture information, of the live user are determined. Each movable part of the virtual object is then adjusted according to the posture information of the live user and combined with the background data in the live data to form the fused image data. Finally, live display data is determined from the fused image data and delivered to the viewers of the broadcast.
On the basis of the foregoing embodiment, this embodiment further provides a data processing apparatus, and with reference to fig. 10, the data processing apparatus may specifically include the following modules:
the resource data display module 1002 is configured to provide a resource data display page to display resource data.
A first orientation obtaining module 1004 is configured to obtain measurement data of the inertial sensor on the movable component of the target object, and determine first orientation information of the movable component of the target object according to the measurement data.
The second orientation obtaining module 1006 is configured to obtain monocular image data of the target object, and determine second orientation information of the movable component of the target object according to the monocular image data.
A posture information obtaining module 1008, configured to fuse the first orientation information and the second orientation information of each movable part according to the trained fusion model, and determine the posture information of the target object.
A resource data manipulation module 1010, configured to determine a resource control instruction for the resource data according to the posture information of the target object, so as to control the resource data, and update the resource data displayed in the resource data display page.
In summary, the embodiment of the present application may be applied to interactive scenes such as games, teaching, and smart-home control. Taking a game scene as an example, measurement data of the movable parts (such as the arms and legs) of the target object (such as the user) is acquired, and the first orientation information of the movable parts is determined from it; the monocular image data of the target object is acquired, and the second orientation information of the movable parts is determined from it. The first orientation information and the second orientation information are then input into the trained fusion model to determine the posture information of the target object, a resource control instruction (e.g., a game control instruction) corresponding to the posture information is determined, and the resource (e.g., a game application) performs the corresponding processing according to the instruction and feeds the resulting resource data (game data) back to the user, completing the interaction.
The present application further provides a non-transitory readable storage medium storing one or more modules (programs); when the one or more modules are applied to a device, the device is caused to execute the instructions of the method steps in this application.
Embodiments of the present application provide one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an electronic device to perform the methods as described in one or more of the above embodiments. In the embodiment of the application, the electronic device includes a server, a terminal device and other devices.
Embodiments of the present disclosure may be implemented, using any suitable hardware, firmware, software, or any combination thereof, in a desired configuration as an apparatus, which may comprise an electronic device such as a server (or server cluster) or a terminal. Fig. 11 schematically illustrates an example apparatus 1100 that may be used to implement various embodiments described herein.
For one embodiment, fig. 11 illustrates an example apparatus 1100 having one or more processors 1102, a control module (chipset) 1104 coupled to at least one of the processor(s) 1102, a memory 1106 coupled to the control module 1104, a non-volatile memory (NVM)/storage 1108 coupled to the control module 1104, one or more input/output devices 1110 coupled to the control module 1104, and a network interface 1112 coupled to the control module 1104.
The processor 1102 may include one or more single-core or multi-core processors, and the processor 1102 may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the apparatus 1100 can be used as a server, a terminal, or the like in the embodiments of the present application.
In some embodiments, the apparatus 1100 may include one or more computer-readable media (e.g., the memory 1106 or the NVM/storage 1108) having instructions 1114 and one or more processors 1102 in combination with the one or more computer-readable media configured to execute the instructions 1114 to implement modules to perform the actions described in this disclosure.
For one embodiment, control module 1104 may include any suitable interface controllers to provide any suitable interface to at least one of the processor(s) 1102 and/or to any suitable device or component in communication with control module 1104.
The control module 1104 may include a memory controller module to provide an interface to the memory 1106. The memory controller module may be a hardware module, a software module, and/or a firmware module.
The memory 1106 may be used, for example, to load and store data and/or instructions 1114 for the device 1100. For one embodiment, memory 1106 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the memory 1106 may comprise a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, control module 1104 may include one or more input/output controllers to provide an interface to NVM/storage 1108 and input/output device(s) 1110.
For example, NVM/storage 1108 may be used to store data and/or instructions 1114. NVM/storage 1108 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 1108 may include storage resources that are part of the device on which apparatus 1100 is installed, or it may be accessible by the device and need not be part of the device. For example, NVM/storage 1108 may be accessed over a network via input/output device(s) 1110.
Input/output device(s) 1110 may provide an interface for apparatus 1100 to communicate with any other suitable device; input/output devices 1110 may include communication components, audio components, sensor components, and so forth. Network interface 1112 may provide an interface for device 1100 to communicate over one or more networks; device 1100 may communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, for example by accessing a wireless network based on a communication standard such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof.
For one embodiment, at least one of the processor(s) 1102 may be packaged together with logic for one or more controller(s) (e.g., memory controller module) of the control module 1104. For one embodiment, at least one of the processor(s) 1102 may be packaged together with logic for one or more controller(s) of control module 1104 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 1102 may be integrated on the same die with logic for one or more controller(s) of the control module 1104. For one embodiment, at least one of the processor(s) 1102 may be integrated on the same die with logic for one or more controller(s) of control module 1104 to form a system on chip (SoC).
In various embodiments, the apparatus 1100 may be, but is not limited to: a server, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.), among other terminal devices. In various embodiments, the apparatus 1100 may have more or fewer components and/or different architectures. For example, in some embodiments, device 1100 includes one or more cameras, keyboards, Liquid Crystal Display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, Application Specific Integrated Circuits (ASICs), and speakers.
The detection device can adopt a main control chip as the processor or control module; sensor data, position information, and the like can be stored in the memory or NVM/storage device; the sensor group can serve as the input/output device; and the communication interface can include the network interface.
An embodiment of the present application further provides an electronic device, including: a processor; and a memory having executable code stored thereon that, when executed, causes the processor to perform a method as described in one or more of the embodiments of the application.
Embodiments of the present application also provide one or more machine-readable media having executable code stored thereon that, when executed, cause a processor to perform a method as described in one or more of the embodiments of the present application.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element preceded by the phrase "comprising a/an ..." does not exclude the presence of other like elements in the process, method, article, or terminal that comprises the element.
The foregoing detailed description has provided a data processing method, a data processing apparatus, an electronic device, and a storage medium, and the principles and embodiments of the present application are described herein using specific examples, which are only used to help understand the method and the core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (12)

1. A method of data processing, the method comprising:
acquiring measurement data of an inertial sensor on a movable part of a target object, and determining first orientation information of the movable part of the target object according to the measurement data;
acquiring monocular image data of the target object, and determining second orientation information of a movable part of the target object according to the monocular image data;
and according to the trained fusion model, fusing the first orientation information and the second orientation information of each movable part to determine the posture information of the target object.
2. The method of claim 1, further comprising:
adjusting each movable part of the virtual object according to the posture information of the target object, and combining background data of the image data to form virtual image data;
and issuing virtual image data to display in a display page.
3. The method of claim 1, wherein the determining second orientation information of the movable part of the target object according to the monocular image data comprises:
determining the position information of a target node of the movable part according to the monocular image data;
determining depth information corresponding to the position information in the monocular image data to form coordinate information of a target node of the movable part;
and determining second orientation information of the movable part according to the coordinate information of each node of the movable part.
4. The method of claim 3, wherein the determining the position information of the target node of the movable part according to the monocular image data comprises at least one of the following steps:
acquiring related monocular images before and/or after the monocular image data, and determining the position information of a target node in the monocular image data according to the related position information of the target node of the movable part in the related monocular images;
determining the position of the target identifier in the monocular image data to determine the position information of the target node of the movable part;
and determining the position information of a second node associated with the first node according to the position information of the first node of the movable part in the monocular image data.
5. The method of claim 3, wherein the determining second orientation information of the movable part according to the coordinate information of each node of the movable part comprises:
checking the coordinate information according to the component parameters of the movable part to form a verification result;
and determining second orientation information of the movable part according to the coordinate information of which the verification result is normal.
6. The method of claim 5, wherein the determining second orientation information of the movable part according to the coordinate information of each node of the movable part further comprises:
providing a coordinate adjustment page to show coordinate information with an abnormal verification result;
acquiring coordinate adjustment information in response to the triggering of a coordinate adjustment control of the coordinate adjustment page, and determining the adjusted coordinate information according to the coordinate adjustment information;
and determining second orientation information of the movable part according to the adjusted coordinate information.
7. The method of claim 1, further comprising:
providing a posture display page to display posture information;
acquiring attitude adjustment information of the attitude information according to triggering of an attitude adjustment control in the attitude display page;
and adjusting the posture information of the target object according to the adjustment information, and adjusting the fusion model.
8. The method of claim 1, further comprising:
determining an amount of difference between the first orientation information and the second orientation information of the movable part;
providing an orientation adjustment page to show orientation information with a difference amount larger than a preset difference threshold, wherein the orientation information comprises at least one of first orientation information and second orientation information;
and responding to the trigger of the orientation adjustment control in the orientation adjustment page, acquiring orientation adjustment information, and adjusting the node of the movable part according to the orientation adjustment information.
9. A method of data processing, the method comprising:
acquiring measurement data of an inertial sensor on a movable part of a target object from live broadcast data, and determining first orientation information of the movable part of the target object according to the measurement data;
acquiring monocular image data of a target object from the live broadcast data, and determining second orientation information of a movable part of the target object according to the monocular image data;
according to the trained fusion model, fusing the first orientation information and the second orientation information of each movable part to determine the posture information of the target object;
adjusting each movable part of the virtual object according to the posture information of the target object, and combining background data of the image data to obtain fused image data;
and determining live display data according to the fused image data, and issuing the live display data to display in a display page.
10. The method of claim 9, further comprising:
providing a virtual object selection page, wherein a plurality of virtual objects are displayed on the virtual object selection page;
acquiring an object selection instruction of the virtual object according to the triggering of the object selection control in the virtual object selection page;
and determining the selected virtual object according to the object selection instruction so as to fuse the virtual object into the live display data.
11. An electronic device, comprising: a processor; and
a memory having executable code stored thereon that, when executed, causes the processor to perform the method of any of claims 1-10.
12. One or more machine-readable media having executable code stored thereon that, when executed, causes a processor to perform the method of any of claims 1-10.
CN202111137432.3A 2021-09-27 2021-09-27 Data processing method and device, electronic equipment and storage medium Pending CN113900516A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111137432.3A CN113900516A (en) 2021-09-27 2021-09-27 Data processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111137432.3A CN113900516A (en) 2021-09-27 2021-09-27 Data processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113900516A true CN113900516A (en) 2022-01-07

Family

ID=79029698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111137432.3A Pending CN113900516A (en) 2021-09-27 2021-09-27 Data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113900516A (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130028469A1 (en) * 2011-07-27 2013-01-31 Samsung Electronics Co., Ltd Method and apparatus for estimating three-dimensional position and orientation through sensor fusion
US20130222565A1 (en) * 2012-02-28 2013-08-29 The Johns Hopkins University System and Method for Sensor Fusion of Single Range Camera Data and Inertial Measurement for Motion Capture
CN104106262A (en) * 2012-02-08 2014-10-15 微软公司 Head pose tracking using a depth camera
CN104658012A (en) * 2015-03-05 2015-05-27 第二炮兵工程设计研究院 Motion capture method based on inertia and optical measurement fusion
CN104834917A (en) * 2015-05-20 2015-08-12 北京诺亦腾科技有限公司 Mixed motion capturing system and mixed motion capturing method
WO2016187757A1 (en) * 2015-05-23 2016-12-01 SZ DJI Technology Co., Ltd. Sensor fusion using inertial and image sensors
WO2018214778A1 (en) * 2017-05-25 2018-11-29 阿里巴巴集团控股有限公司 Method and device for presenting virtual object
US20190212359A1 (en) * 2018-01-11 2019-07-11 Finch Technologies Ltd. Correction of Accumulated Errors in Inertial Measurement Units Attached to a User
US20200033937A1 (en) * 2018-07-25 2020-01-30 Finch Technologies Ltd. Calibration of Measurement Units in Alignment with a Skeleton Model to Control a Computer System
CN111091587A (en) * 2019-11-25 2020-05-01 武汉大学 Low-cost motion capture method based on visual markers
CN111126272A (en) * 2019-12-24 2020-05-08 腾讯科技(深圳)有限公司 Posture acquisition method, and training method and device of key point coordinate positioning model
CN111681281A (en) * 2020-04-16 2020-09-18 北京诺亦腾科技有限公司 Calibration method and device for limb motion capture, electronic equipment and storage medium
CN111694429A (en) * 2020-06-08 2020-09-22 北京百度网讯科技有限公司 Virtual object driving method and device, electronic equipment and readable storage
US10919152B1 (en) * 2017-05-30 2021-02-16 Nimble Robotics, Inc. Teleoperating of robots with tasks by mapping to human operator pose
US20210089116A1 (en) * 2019-09-19 2021-03-25 Finch Technologies Ltd. Orientation Determination based on Both Images and Inertial Measurement Units
US20210089162A1 (en) * 2019-09-19 2021-03-25 Finch Technologies Ltd. Calibration of inertial measurement units in alignment with a skeleton model to control a computer system based on determination of orientation of an inertial measurement unit from an image of a portion of a user
CN113298858A (en) * 2021-05-21 2021-08-24 广州虎牙科技有限公司 Method, device, terminal and storage medium for generating action of virtual image
CN113318430A (en) * 2021-05-28 2021-08-31 网易(杭州)网络有限公司 Virtual character posture adjusting method and device, processor and electronic device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WANG Liling; LIANG Liang; MA Dong; WANG Hongrui; LIU Xiuling: "Autonomous localization of a biped robot based on multi-sensor information fusion", Journal of Chinese Inertial Technology, no. 05 *
BAI Xiumei; XU Shimin: "Discussion on the application of virtual anchors in the production of emergency meteorological film and television programs", Heilongjiang Meteorology, no. 02 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination