CN111368667A - Data acquisition method, electronic equipment and storage medium

Data acquisition method, electronic equipment and storage medium

Info

Publication number
CN111368667A
CN111368667A
Authority
CN
China
Prior art keywords: target, character, target person, scene, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010116897.XA
Other languages
Chinese (zh)
Other versions
CN111368667B (en)
Inventor
徐泽元 (Xu Zeyuan)
付强 (Fu Qiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Beijing Technologies Co Ltd
Original Assignee
Cloudminds Beijing Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Beijing Technologies Co Ltd
Priority to CN202010116897.XA
Publication of CN111368667A
Application granted
Publication of CN111368667B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the invention relate to the technical field of computer vision, and disclose a data acquisition method, an electronic device and a storage medium. The data acquisition method comprises the following steps: selecting a target character from a plurality of stored character models and/or selecting a target scene from a plurality of stored scene models; controlling the target character to move in the target scene; and acquiring motion data of the target character at different acquisition moments, wherein each item of motion data comprises an image of the target character and the position information of each joint of the target character in the image. With this implementation, data for human body posture estimation can be generated quickly, the efficiency of data acquisition is improved, and the cost of data acquisition is reduced.

Description

Data acquisition method, electronic equipment and storage medium
Technical Field
The embodiments of the invention relate to the technical field of computer vision, and in particular to a data acquisition method, an electronic device and a storage medium.
Background
Human body posture estimation refers to locating the positions of the body parts of a human body in an image or video. It is used in many fields, for example human-computer interaction, action recognition and person comparison. In human-computer interaction, game characters can be controlled through human postures; in film and game production, the animation of each character can be produced based on human body posture estimation. Human body posture estimation can also be used for intelligent security and monitoring, for example by detecting human postures in order to analyse human behaviour.
The inventors found that the related art has at least the following problems: at present, models for human body posture estimation are trained by deep learning, and their accuracy depends on a large amount of data containing human postures. The human posture data used for training is generally collected manually, which incurs high labour cost, is inefficient, and covers only a few applicable scenes.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a data acquisition method, an electronic device and a storage medium that can quickly generate data for human body posture estimation, improving the efficiency of data acquisition and reducing its cost.
In order to solve the above technical problem, an embodiment of the present invention provides a data acquisition method, including: selecting a target character from a plurality of stored character models and/or selecting a target scene from a plurality of stored scene models; controlling the target character to move in the target scene; and acquiring motion data of the target character at different acquisition moments, wherein each item of motion data comprises an image of the target character and the position information of each joint of the target character in the image.
An embodiment of the present invention also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the data acquisition method.
An embodiment of the invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above data acquisition method.
Compared with the prior art, the embodiments of the invention store a plurality of character models and a plurality of scene models, so that motion data of different target characters in different scenes can be obtained by selecting a target character and/or a target scene. Because the motion data of the target character in the target scene does not have to be collected manually on site, manual operation and labour cost are reduced. At the same time, the position information of each joint of the target character in the image is acquired directly, so the images of the target character do not have to be annotated manually, which further reduces labour cost, allows a large amount of motion data to be obtained, and improves the efficiency of data acquisition.
In addition, acquiring the motion data of the target character at different acquisition moments includes performing the following processing for each acquisition moment: controlling a virtual camera to photograph the target character in the target scene to obtain an image of the target character; and determining the position of each joint of the target character in the image according to the position information of the virtual camera, the position information of the target character in the target scene and the position information of each joint of the target character in the target scene. Photographing the target character with a virtual camera allows its image to be acquired quickly; furthermore, because the position information of the virtual camera, of the target character in the target scene and of each joint in the target scene is already known, the position of each joint in the image can be determined quickly and accurately without manual annotation, which reduces labour cost and improves annotation accuracy.
In addition, controlling the target character to move in the target scene includes: selecting a character animation for the target character, the character animation indicating the movement of each joint of the target character within a preset time length; and controlling the target character to move as indicated by the character animation. Because a character animation is a preset action, selecting one for the target character quickly makes the character perform a known human posture, which improves control efficiency.
In addition, controlling the target character to move in the target scene may include randomly controlling the positions of the joints of the target character. Random control of the joint positions makes it possible to capture previously unknown postures of the target character.
In addition, acquiring the motion data of the target character at different acquisition moments further includes acquiring a label of the target scene in which the target character is located; the scene label enriches the content of the motion data.
In addition, after the motion data of the target character at different acquisition moments has been acquired, the method further includes: judging whether motion data of the target character has been acquired under all scene models; if so, returning to the step of reselecting a character model from the plurality of stored character models as the target character, controlling the newly determined target character to move in the target scene, and acquiring the motion data of the newly determined target character in the target scene, until all character models have been traversed; otherwise, returning to the step of reselecting a scene model from the plurality of stored scene models as the target scene, controlling the target character to move in the newly determined target scene, and acquiring the motion data of the target character in the newly determined target scene. Traversal in this way covers every combination of character model and scene model in the material library and obtains as many different kinds of motion data as possible.
In addition, acquiring the motion data of the target character at different acquisition moments further includes obtaining one or any combination of the following: a character tag of the target character, the position information of the target character in the target scene, the rotation information of each joint of the target character, the end pose of each joint of the target character, or the parameter information of the virtual camera. This further enriches the content of the motion data and improves the quality of the collected motion data.
In addition, before the target character is controlled to move in the target scene, the method further includes: setting a skeleton motion range of the target character that matches the skeleton motion range of a real human; and setting a collision condition of the target character, namely that the limbs of the target character may come into contact with each other but the interiors of the limbs may not overlap. Setting the skeleton motion range makes the motion of the target character conform to the motion of a real human skeleton, while setting the collision condition prevents the target character from performing impossible motions, for example placing its hand inside its abdomen.
In addition, the image of the target character includes a two-dimensional image of the target character and a three-dimensional image of the target character, so that the images can meet different usage requirements.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals denote similar elements; unless otherwise specified, the figures are not to scale.
Fig. 1 is a flowchart of the data acquisition method according to a first embodiment of the present invention;
Fig. 2 is a schematic diagram of the joints of a target character according to the first embodiment of the present invention;
Fig. 3 is a schematic diagram of acquiring motion data according to the first embodiment of the present invention;
Fig. 4 is a flowchart of the data acquisition method according to a second embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in order to provide a better understanding of the present application; however, the technical solution claimed in the present application can also be implemented without these technical details, or with various changes and modifications based on the following embodiments.
There are various ways of estimating human body posture. In one, the posture is obtained through a wearable measuring device; the device is very expensive and the measuring process is complex. In another, the position of each joint of the human body is estimated with a 3D camera; the 3D camera is also costly, and the estimated joint positions are not accurate. A third approach captures images of human postures with an ordinary camera and estimates the posture by deep learning. This approach is cheap, but its accuracy depends on accurately annotated posture images: in general, the position of each joint of the human body in every captured image has to be marked manually in order to train the posture estimation model, and the manual annotation, together with the large amount of manpower needed to capture images of varied human postures, drives the cost up.
A first embodiment of the present invention relates to a data acquisition method. The method is applied to an electronic device, which may be a server or a terminal. The specific flow of the data acquisition method is shown in fig. 1:
Step 101: a target character is selected from the plurality of stored character models and/or a target scene is selected from the plurality of stored scene models.
Specifically, the electronic device may run an engine capable of simulating 3D objects, for example Unreal Engine 4 (abbreviated "UE4") or the Unity3D engine; this embodiment takes the UE4 engine as an example. A plurality of character models and a plurality of scene models are stored in the running engine; character models and scene models can be added manually through an input interface, and each character model and each scene model is a three-dimensional model. A character material library may be set up to store the character models and a scene material library to store the scene models; the character models and scene models may also be stored together in a single material library. The character models may include an adult male, an adult female, an elderly male, an elderly female and a child; the scene models may include an office, a living room, a park, a kitchen, and so on.
A character model may be selected from the material library at random as the target character, or a selection command may be input manually through the input interface and the character model indicated by the command selected from the material library. Similarly, a scene model can be selected from the material library as the target scene.
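As a rough illustration of this selection logic, a Python sketch follows; the library contents and the names CHARACTER_LIBRARY, SCENE_LIBRARY and select_target are assumptions for illustration only, since the embodiment stores actual three-dimensional models inside the engine rather than strings.

```python
import random

# Hypothetical material libraries; in the embodiment these would be
# 3D models registered with the engine, not strings.
CHARACTER_LIBRARY = ["adult_male", "adult_female", "elderly_male",
                     "elderly_female", "child"]
SCENE_LIBRARY = ["office", "living_room", "park", "kitchen"]

def select_target(command=None):
    """Pick a target character and a target scene, either at random or
    as indicated by a manually entered selection command."""
    if command is not None:
        return command["character"], command["scene"]
    return random.choice(CHARACTER_LIBRARY), random.choice(SCENE_LIBRARY)

character, scene = select_target()  # random selection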
In the initial case, a character model is selected from the plurality of stored character models as the target character and a scene model is selected from the plurality of stored scene models as the target scene. If motion data has already been obtained, a new character model may be reselected from the stored character models as the target character; or only a new scene model may be reselected from the stored scene models as the target scene; or a new character model and a new scene model may be reselected at the same time.
In order to acquire accurate motion data, before step 102 controls the target character to move in the target scene, the skeleton motion range of the target character and the collision condition of the target character may be set. The skeleton motion range matches the skeleton motion range of a real human, and the collision condition is that the limbs of the target character may come into contact but must not overlap.
Specifically, setting the skeleton motion range of the target character may mean setting a motion range for each joint of the target character; for example, the skeleton motion range of the elbow joint may be set to 0 to 180 degrees. Setting the skeleton motion range prevents the target character from performing abnormal human motions and thus ensures the accuracy of the acquired motion data.
The collision condition of the target character is that limbs may come into contact with each other but the interiors of the limbs may not overlap. In Unreal Engine, for example, each character model is usually described by meshes, and each limb of a character model has its own mesh; the collision condition can then be defined as a collision occurring when the distance between the mesh edges of two limbs reaches 0, after which the mesh interiors are constrained from overlapping, so that, for instance, the hand mesh cannot be inserted into the abdomen mesh. Setting the collision condition further refines the motion of the target character so that it is closer to, or the same as, the motion of a real human.
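The two constraints can be sketched in Python as follows, assuming per-joint angle limits and a coarse axis-aligned bounding-box test; apart from the elbow range of 0 to 180 degrees, all limit values and names are hypothetical, and a real engine such as UE4 would use its own physics and mesh-collision machinery rather than this simplification.

```python
# Illustrative skeleton motion ranges in degrees; only the elbow
# limit (0-180) comes from the text, the others are assumptions.
BONE_RANGES = {
    "right_elbow": (0.0, 180.0),
    "right_knee": (0.0, 160.0),    # assumed
    "neck_pitch": (-45.0, 45.0),   # assumed
}

def clamp_joint(joint: str, angle: float) -> float:
    """Keep a joint angle inside its skeleton motion range so the
    target character cannot take an anatomically impossible pose."""
    lo, hi = BONE_RANGES[joint]
    return max(lo, min(hi, angle))

def interiors_overlap(box_a, box_b) -> bool:
    """Coarse stand-in for the mesh collision condition: limbs may
    touch (shared boundary) but their interiors may not overlap.
    Each box is (min_xyz, max_xyz) with 3-tuples of floats."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    return all(a_min[i] < b_max[i] and b_min[i] < a_max[i] for i in range(3))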
Step 102: controlling the target character to move in the target scene.
In one example, a character animation is selected for the target character; the character animation indicates the movement of each joint of the target character within a preset time length, and the target character is controlled to move as indicated by the character animation.
Specifically, the character animation indicates the movement of each joint of the target character within a preset time length; the animation may be walking, running, making a call, waving, and so on. For example, a "making a call" animation may indicate that the wrist joint moves within 4 seconds: as shown in the joint diagram of the target character in fig. 2, the right wrist of the target character rotates from position O to position A in the 1st second, from A to B in the 2nd second, back to A in the 3rd second, and back to the home position O in the 4th second. Through character animations the target character can be made to move in known human postures, such as the common postures of running or swimming, which simplifies the control of the target character and speeds it up.
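The wrist example above amounts to a small keyframe table plus interpolation, sketched below in Python; the 3D offsets chosen for positions O, A and B are invented, since the text gives no coordinates, and CALL_ANIMATION and joint_position are illustrative names.

```python
# Keyframes for the "making a call" wrist motion described above:
# O -> A -> B -> A -> O over a preset length of 4 seconds.
CALL_ANIMATION = {
    "right_wrist": [
        (0.0, (0.0, 0.0, 0.0)),   # t=0 s: home position O
        (1.0, (0.1, 0.2, 0.0)),   # t=1 s: position A (assumed offset)
        (2.0, (0.1, 0.3, 0.1)),   # t=2 s: position B (assumed offset)
        (3.0, (0.1, 0.2, 0.0)),   # t=3 s: back to A
        (4.0, (0.0, 0.0, 0.0)),   # t=4 s: back to O
    ],
}

def joint_position(animation, joint, t):
    """Linearly interpolate the joint position at time t from the
    character animation's keyframes."""
    keys = animation[joint]
    for (t0, p0), (t1, p1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return tuple(a + w * (b - a) for a, b in zip(p0, p1))
    return keys[-1][1]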
In another example, the positions of the joints of the target character may be controlled at random.
Specifically, randomly controlling the positions of the joints of the target character is another way to control its motion; because the joint positions are chosen at random, the target character can perform many different, previously unknown motions, which enriches the motion data that can be collected.
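Under the same assumed per-joint motion ranges, random control can be sketched as uniform sampling inside each joint's allowed range:

```python
import random

def random_pose(bone_ranges: dict) -> dict:
    """Sample every joint angle uniformly inside its skeleton motion
    range, yielding an unknown but anatomically plausible posture."""
    return {joint: random.uniform(lo, hi)
            for joint, (lo, hi) in bone_ranges.items()}

pose = random_pose({"right_elbow": (0.0, 180.0)})  # example call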
Step 103: acquiring motion data of the target character at different acquisition moments, wherein each item of motion data comprises an image of the target character and the position information of each joint of the target character in the image.
In one example, the substeps shown in fig. 3 are performed for each acquisition moment.
Substep S11: controlling the virtual camera to photograph the target character in the target scene, obtaining an image of the target character.
Specifically, the camera parameters of the virtual camera may be preset, and the virtual camera with those parameters is controlled to photograph the target character in the target scene, obtaining an image of the target character. The image of the target character includes a two-dimensional image of the target character and a three-dimensional image of the target character.
Substep S12: determining the position of each joint of the target character in the image according to the position information of the virtual camera, the position information of the target character in the target scene and the position information of each joint of the target character in the target scene.
Specifically, the position information of the virtual camera at the acquisition moment, the position information of the target character in the target scene and the position information of each joint of the target character in the target scene are obtained. If the captured image of the target character is a two-dimensional image, the 3D coordinates of each joint can be projected into the 2D coordinates of the image, so the position information of each joint in the captured two-dimensional image can be calculated. If the captured image is a three-dimensional image, the position information of the target character in the target scene and the position information of each joint in the target scene can be transformed directly into the camera's coordinate system according to the camera's position information, so the position information of each joint in the captured three-dimensional image can be calculated.
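For the two-dimensional case, sub-step S12 can be sketched with a standard pinhole-camera model; the rotation matrix R, translation t and intrinsics fx, fy, cx, cy stand in for the virtual camera's position information and preset camera parameters, and all numbers in the example are invented. This is a minimal sketch, not the engine's actual camera API.

```python
import numpy as np

def world_to_camera(p_world, R, t):
    """Transform a joint position from target-scene (world) coordinates
    into the virtual camera's coordinate system; R (3x3) and t (3,)
    encode the camera's position information."""
    return R @ (np.asarray(p_world, dtype=float) - t)

def project_to_image(p_world, R, t, fx, fy, cx, cy):
    """Project a 3D joint into 2D pixel coordinates of the captured
    image (the two-dimensional case of sub-step S12)."""
    x, y, z = world_to_camera(p_world, R, t)
    return (fx * x / z + cx, fy * y / z + cy)

# Invented example: one joint, camera 3 m behind the scene origin.
R = np.eye(3)                    # camera aligned with the world axes
t = np.array([0.0, 0.0, -3.0])   # camera position
u, v = project_to_image((0.2, 1.5, 0.0), R, t, fx=800, fy=800, cx=640, cy=360)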
It is understood that a plurality of virtual cameras may photograph the target character from different positions at the same time, obtaining a plurality of images of the target character; the positions of the joints of the target character in each image are then determined from the position information of the corresponding virtual camera, the position information of the target character in the target scene and the position information of each joint of the target character in the target scene.
It should be noted that each item of motion data may be stored; during storage, the image may be used as a query index, and the name of the action may also be used as a query index, so that the corresponding motion data can be retrieved as needed. The images in a large body of motion data can be used as training inputs and the position information of each joint in the image as training outputs, and a human body posture estimation model can then be obtained through a deep learning algorithm.
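One plausible shape for a stored item of motion data, queryable both by image identifier and by action name, is sketched below; the record fields and function names are assumptions for illustration, not a data structure defined by the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class MotionRecord:
    """One item of motion data at a single acquisition moment; the
    optional fields mirror the extra content listed in the second
    embodiment (scene label, character tag, and so on)."""
    image: bytes                               # captured image
    joints_2d: Dict[str, Tuple[float, float]]  # joint name -> (u, v)
    action_name: str = ""
    scene_tag: str = ""
    character_tag: str = ""

by_image: Dict[str, MotionRecord] = {}         # query by image id
by_action: Dict[str, List[MotionRecord]] = {}  # query by action name

def store(image_id: str, record: MotionRecord) -> None:
    """Index a record both ways so it can be retrieved as needed."""
    by_image[image_id] = record
    by_action.setdefault(record.action_name, []).append(record)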
It should be noted that, because various character animations are stored, the character animations not yet applied to the target character in the current target scene can also be set one by one in a traversal manner, controlling the target character to perform different posture motions and acquiring the corresponding motion data.
Compared with the prior art, the embodiment of the invention stores a plurality of character models and a plurality of scene models, so that motion data of different target characters in different scenes can be obtained by selecting a target character and/or a target scene. Because the motion data of the target character in the target scene does not have to be collected manually on site, manual operation and labour cost are reduced. At the same time, the position information of each joint of the target character in the image is acquired directly, so the images of the target character do not have to be annotated manually, which further reduces labour cost, allows a large amount of motion data to be obtained and improves the efficiency of data acquisition.
A second embodiment of the present invention relates to a data acquisition method. This embodiment is a further refinement of the first; the main improvement is that after the motion data of the target character at different acquisition moments has been acquired, the target character or the target scene is readjusted so as to obtain all possible motion data in the Unreal Engine. The specific flow is shown in fig. 4.
Step 201: a target character is selected from the stored plurality of character models and/or a target scene is selected from the stored plurality of scene models.
Step 202: controlling the target character to move in the target scene.
Step 203: acquiring the motion data of the target character at different acquisition moments.
Specifically, this step is substantially the same as step 103 in the first embodiment; in this embodiment it further includes acquiring a label of the target scene in which the target character is located. It is understood that one or any combination of the following may also be obtained: a character tag of the target character, the position information of the target character in the target scene, the rotation information of each joint of the target character, the end pose of each joint of the target character, or the parameter information of the virtual camera.
Step 204: judging whether motion data of the target character has been acquired under all scene models; if so, executing step 205; otherwise, returning to step 201 to reselect a scene model from the material library as the target scene.
Specifically, in order to cover as many combinations of target character and target scene as possible, all scene models are traversed for the current target character. If all scene models have been traversed, step 205 is executed. If not, the flow returns to step 201 to reselect a scene model from the stored scene models as the target scene; step 202 then controls the target character to move in the newly determined target scene, step 203 acquires the motion data of the target character in the newly determined target scene, and step 204 is executed again.
Step 205: detecting whether an unselected character model remains among the plurality of stored character models; if an unselected character model remains in the material library, returning to the part of step 201 that reselects a character model from the plurality of stored character models as the target character; otherwise, the flow ends.
Specifically, whether an unselected character model remains among the stored character models is detected until all character models have been traversed. If an unselected character model is detected, not all character models have been traversed yet, so the flow returns to the part of step 201 that reselects a character model as the target character; step 202 then controls the newly determined target character to move in the target scene, step 203 acquires the motion data of the newly determined target character in the target scene, and step 204 is executed again.
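The traversal of steps 201 to 205 reduces to a pair of nested loops over the two material libraries, as in the following sketch; acquire_motion_data is a placeholder for steps 202 and 203, not a function defined by the patent.

```python
def acquire_motion_data(character, scene):
    """Placeholder for steps 202-203: control the character's motion
    in the scene and collect motion data at each acquisition moment."""
    ...

def collect_all(character_models, scene_models):
    """Exhaust every character/scene combination: for each target
    character (step 205), traverse all scene models (step 204)."""
    for character in character_models:
        for scene in scene_models:
            acquire_motion_data(character, scene)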
The data acquisition method provided in this embodiment covers every combination of character model and scene model in the material library by traversal, acquiring as many different kinds of motion data as possible.
The steps of the above methods are divided only for clarity of description; in implementation they may be combined into one step, or a step may be split into several, as long as the same logical relationship is preserved, and all such variants fall within the protection scope of this patent. Adding insignificant modifications to an algorithm or flow, or introducing insignificant design changes without altering its core design, also falls within the protection scope of the patent.
A third embodiment of the present invention relates to an electronic device, the specific configuration of which is shown in fig. 5, and which includes: at least one processor 301; and a memory 302 communicatively coupled to the at least one processor 301; the memory 302 stores instructions executable by the at least one processor 301, and the instructions are executed by the at least one processor 301, so that the at least one processor 301 can execute the data acquisition method according to the first embodiment or the second embodiment.
The memory 302 and the processor 301 are connected by a bus, which may include any number of interconnected buses and bridges linking together the various circuits of the one or more processors 301 and the memory 302. The bus may also link various other circuits, such as peripherals, voltage regulators and power management circuits, which are well known in the art and are therefore not described further here. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or a plurality of elements, such as multiple receivers and transmitters, and provides a unit for communicating with various other apparatus over a transmission medium. Data processed by the processor 301 is transmitted over a wireless medium via an antenna, which also receives incoming data and forwards it to the processor 301.
The processor 301 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management and other control functions. The memory 302 may be used to store data used by the processor 301 in performing operations.
A fourth embodiment of the present invention relates to a computer-readable storage medium storing a computer program which, when executed by a processor, implements the data acquisition method of the first embodiment or the second embodiment.
Those skilled in the art will understand that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware. The program is stored in a storage medium and includes several instructions to make a device (which may be a single-chip microcomputer, a chip, etc.) or a processor execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the invention, and that various changes in form and detail may be made to them in practice without departing from the spirit and scope of the invention.

Claims (11)

1. A method of data acquisition, comprising:
selecting a target character from the stored plurality of character models and/or selecting a target scene from the stored plurality of scene models;
controlling the target person to move in the target scene;
acquiring motion data of the target person at different acquisition moments, wherein each item of motion data comprises: an image of the target person and position information of each joint of the target person in the image.
2. The data acquisition method of claim 1, wherein the acquiring of the motion data of the target person at different acquisition moments comprises:
performing the following processing for each acquisition moment:
controlling a virtual camera to shoot the target person in the target scene to obtain an image of the target person;
and determining the positions of the joints of the target person in the image according to the position information of the virtual camera, the position information of the target person in the target scene and the position information of the joints of the target person in the target scene.
3. The data acquisition method of claim 1 or 2, wherein the controlling the target person to move in the target scene comprises:
selecting a character animation for the target character, wherein the character animation is used for indicating the movement of each joint of the target character within a preset time length;
and controlling the target character to move according to the indication of the character animation.
4. The data collection method of any one of claims 1 to 3, wherein the controlling the target person to move in the target scene comprises:
randomly controlling the positions of all joints of the target person.
5. The data acquisition method according to any one of claims 1 to 4, wherein the acquiring motion data of the target person at different acquisition moments further comprises:
acquiring a label of the target scene where the target person is located.
6. The data collection method of claim 5, wherein after collecting the motion data of the target person at different collection times, the method further comprises:
judging whether motion data of the target character under all scene models are acquired;
if yes, returning to execute the step of reselecting a character model from a plurality of stored character models as a target character, controlling the motion of the re-determined target character in the target scene, and acquiring the motion data of the re-determined target character in the target scene until all the character models are traversed;
otherwise, returning to execute the reselection of a scene model from the plurality of stored scene models as a target scene, controlling the target character to move in the redetermined target scene, and acquiring the movement data of the target character in the redetermined target scene.
7. The data acquisition method according to claim 1 or 5, wherein the acquiring of the motion data of the target person at different acquisition moments further comprises:
obtaining one or any combination of the following: the character tag of the target character, the position information of the target character in the target scene, the rotation information of each joint of the target character, the terminal posture of each joint of the target character or the parameter information of the virtual camera.
8. The data collection method of any one of claims 1 to 7, wherein before the controlling the target person to move in the target scene, further comprising:
setting a skeleton motion range of the target person, wherein the skeleton motion range is matched with a skeleton motion range of a real human;
and setting a collision condition of the target person, wherein the collision condition is that the limbs of the target person may come into contact with each other and the interiors of the limbs do not overlap with each other.
9. The data collection method of any one of claims 1 to 7, wherein the image of the target person comprises: a two-dimensional image of the target person and a three-dimensional image of the target person.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a data acquisition method as claimed in any one of claims 1 to 9.
11. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the data acquisition method of any one of claims 1 to 9.
CN202010116897.XA 2020-02-25 2020-02-25 Data acquisition method, electronic equipment and storage medium Active CN111368667B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010116897.XA CN111368667B (en) 2020-02-25 2020-02-25 Data acquisition method, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111368667A (en) 2020-07-03
CN111368667B CN111368667B (en) 2024-03-26

Family

ID=71208258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010116897.XA Active CN111368667B (en) 2020-02-25 2020-02-25 Data acquisition method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111368667B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014085933A (en) * 2012-10-25 2014-05-12 Nippon Telegr & Teleph Corp <Ntt> Three-dimensional posture estimation apparatus, three-dimensional posture estimation method, and program
CN108227931A (en) * 2018-01-23 2018-06-29 北京市商汤科技开发有限公司 For controlling the method for virtual portrait, equipment, system, program and storage medium
CN109191548A (en) * 2018-08-28 2019-01-11 百度在线网络技术(北京)有限公司 Animation method, device, equipment and storage medium
CN109753150A (en) * 2018-12-11 2019-05-14 北京字节跳动网络技术有限公司 Figure action control method, device, storage medium and electronic equipment
CN110245638A (en) * 2019-06-20 2019-09-17 北京百度网讯科技有限公司 Video generation method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308910A (en) * 2020-10-10 2021-02-02 达闼机器人有限公司 Data generation method and device and storage medium
WO2022073415A1 (en) * 2020-10-10 2022-04-14 达闼机器人有限公司 Data generation method and apparatus, and storage medium
CN112308910B (en) * 2020-10-10 2024-04-05 达闼机器人股份有限公司 Data generation method, device and storage medium
CN114625343A (en) * 2022-02-14 2022-06-14 达闼机器人股份有限公司 Data generation method and device

Also Published As

Publication number Publication date
CN111368667B (en) 2024-03-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant