CN109753150A - Character action control method, apparatus, storage medium and electronic device - Google Patents
Character action control method, apparatus, storage medium and electronic device
- Publication number
- CN109753150A (application number CN201811512146.9A)
- Authority
- CN
- China
- Prior art keywords
- limb movement
- movement information
- character
- pose point
- target image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
This disclosure relates to a character action control method and apparatus, a storage medium, and an electronic device. The method includes: obtaining a target image, the target image containing first limb movement information of a user; obtaining a first pose point combination corresponding to the first limb movement information in the target image; inputting the first pose point combination into a preset image translation model to obtain second limb movement information corresponding to the first pose point combination; and controlling the movement of a preset character according to the second limb movement information. With this technical solution, a pose point combination is first acquired from an image containing the user's own movement information, and the movement information of the preset character is then generated from the acquired pose point combination by a pre-trained model, so that the preset character is controlled to move according to the user's own movements. In this way, the user can conveniently and efficiently control the movements of a specified preset virtual character directly with his or her own movements.
Description
Technical field
This disclosure relates to the field of image translation and, in particular, to a character action control method and apparatus, a storage medium, and an electronic device.
Background technique
In the prior art, only controlling the facial expression of a virtual character according to the facial expression of the user can currently be achieved; restoring the user's limb movements onto a virtual character in real time is not yet possible. That is, the user cannot control the limb movements of a virtual character in real time in the way that he or she controls its facial expressions. To drive the limb movements of a virtual character, one can generally only resort to computer graphics (Computer Graphics, CG) animation.
Summary of the invention
An object of the disclosure is to provide a character action control method and apparatus, a storage medium, and an electronic device, which enable a user to control the movements of a specified virtual character directly with his or her own movements.
To achieve the above object, the disclosure provides a character action control method and apparatus, a storage medium, and an electronic device, the method comprising:
obtaining a target image, the target image containing first limb movement information of a user;
obtaining a first pose point combination corresponding to the first limb movement information in the target image;
inputting the first pose point combination into a preset image translation model to obtain second limb movement information corresponding to the first pose point combination; and
controlling the movement of a preset character according to the second limb movement information.
Optionally, obtaining the first pose point combination corresponding to the first limb movement information in the target image includes:
extracting the first limb movement information from the target image, wherein the target image contains background information and the first limb movement information; and
obtaining the first pose point combination according to the first limb movement information.
Optionally, the preset image translation models and the preset characters are in one-to-one correspondence;
before the step of inputting the first pose point combination into the preset image translation model, the method further includes:
receiving a character selection signal, the character selection signal indicating the preset character to be controlled; and
determining the preset image translation model according to the character selection signal.
Optionally, the preset image translation model is obtained by training as follows:
obtaining a plurality of training images, the training images containing third limb movement information of the preset character to be trained;
extracting, from each training image, the third limb movement information and a second pose point combination corresponding to the third limb movement information; and
inputting the second pose point combinations and their corresponding third limb movement information, in pairs, into the preset image translation model for training.
The disclosure also provides a character action control apparatus, the apparatus comprising:
a first obtaining module, configured to obtain a target image, the target image containing first limb movement information of a user;
a second obtaining module, configured to obtain a first pose point combination corresponding to the first limb movement information in the target image;
a translation module, configured to input the first pose point combination into a preset image translation model to obtain second limb movement information corresponding to the first pose point combination; and
a control module, configured to control the movement of a preset character according to the second limb movement information.
Optionally, the second obtaining module includes:
an extracting submodule, configured to extract the first limb movement information from the target image, wherein the target image contains background information and the first limb movement information; and
a pose point combination obtaining submodule, configured to obtain the first pose point combination according to the first limb movement information.
Optionally, the preset image translation models and the preset characters are in one-to-one correspondence;
before the translation module inputs the first pose point combination into the preset image translation model, the apparatus further includes:
a receiving module, configured to receive a character selection signal, the character selection signal indicating the preset character to be controlled; and
a determining module, configured to determine the preset image translation model according to the character selection signal.
Optionally, the preset image translation model is obtained by training as follows:
obtaining a plurality of training images, the training images containing third limb movement information of the preset character to be trained;
extracting, from each training image, the third limb movement information and a second pose point combination corresponding to the third limb movement information; and
inputting the second pose point combinations and their corresponding third limb movement information, in pairs, into the preset image translation model for training.
The disclosure also provides a computer-readable storage medium on which a computer program is stored, the program implementing the steps of the above method when executed by a processor.
The disclosure also provides an electronic device, comprising:
a memory on which a computer program is stored; and
a processor, configured to execute the computer program in the memory to implement the steps of the above method.
With the above technical solution, a pose point combination is first acquired from an image containing the user's own movement information, and the movement information of the preset character is then generated from the acquired pose point combination by a pre-trained model, so that the preset character is controlled to move according to the user's own movements. In this way, the user can conveniently and efficiently control the movements of a specified preset virtual character directly with his or her own movements.
Other features and advantages of the disclosure will be described in detail in the following detailed description section.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the disclosure and constitute a part of the specification; together with the following detailed description, they serve to explain the disclosure but do not limit the disclosure. In the drawings:
Fig. 1 is a flowchart of a character action control method according to an exemplary embodiment of the disclosure.
Fig. 2 is a flowchart of a method for obtaining the first pose point combination in a character action control method according to an exemplary embodiment of the disclosure.
Fig. 3 is a flowchart of another character action control method according to an exemplary embodiment of the disclosure.
Fig. 4 is a structural block diagram of a character action control apparatus according to an exemplary embodiment of the disclosure.
Fig. 5 is a structural block diagram of the second obtaining module in a character action control apparatus according to an exemplary embodiment of the disclosure.
Fig. 6 is a structural block diagram of another character action control apparatus according to an exemplary embodiment of the disclosure.
Fig. 7 is a block diagram of an electronic device according to an exemplary embodiment.
Fig. 8 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed description
Specific embodiments of the disclosure are described in detail below in conjunction with the accompanying drawings. It should be understood that the specific embodiments described herein are only used to describe and explain the disclosure and are not intended to limit the disclosure.
Fig. 1 is a flowchart of a character action control method according to an exemplary embodiment of the disclosure. As shown in Fig. 1, the method includes steps 101 to 104.
In step 101, a target image is obtained, the target image containing first limb movement information of a user. The target image may be an image from any source; for example, it may be an image frame in a real-time video of the user shot by a camera, or a still picture containing the first limb movement information of the user. Any image containing the first limb movement information of the user will do.
In step 102, a first pose point combination corresponding to the first limb movement information in the target image is obtained. The first pose point combination is a series of pose points corresponding to the first limb movement information; taken together, the pose points reflect the movement that the user characterized by the first limb movement information performs in the target image. Any pose point acquisition method may be used to obtain the first pose point combination corresponding to the first limb movement information from the target image.
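Since the disclosure leaves the pose point acquisition method open, a pose point combination can be pictured as an ordered set of named 2-D keypoints. The sketch below is illustrative only: the keypoint names follow a common OpenPose-style convention and are an assumption, not part of the disclosure.

```python
# A minimal sketch of a "pose point combination": named 2-D keypoints
# extracted from the target image. The keypoint names are an illustrative
# OpenPose-style assumption, not part of the disclosure.
from dataclasses import dataclass
from typing import Dict, Tuple

KEYPOINT_NAMES = (
    "nose", "neck",
    "r_shoulder", "r_elbow", "r_wrist",
    "l_shoulder", "l_elbow", "l_wrist",
    "r_hip", "r_knee", "r_ankle",
    "l_hip", "l_knee", "l_ankle",
)

@dataclass(frozen=True)
class PosePointCombination:
    """First pose point combination: keypoint name -> (x, y) in image pixels."""
    points: Dict[str, Tuple[float, float]]

    def normalized(self, width: int, height: int) -> "PosePointCombination":
        """Scale pixel coordinates into [0, 1] so the combination is
        independent of the target image resolution."""
        return PosePointCombination(
            {name: (x / width, y / height) for name, (x, y) in self.points.items()}
        )

def make_combination(raw: Dict[str, Tuple[float, float]]) -> PosePointCombination:
    """Validate that every keypoint name is known before building the combination."""
    unknown = set(raw) - set(KEYPOINT_NAMES)
    if unknown:
        raise ValueError(f"unknown keypoints: {sorted(unknown)}")
    return PosePointCombination(dict(raw))
```

Normalizing the coordinates is one way to make the combination comparable across target images of different sizes before it is fed to a model.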
In step 103, the first pose point combination is input into a preset image translation model to obtain second limb movement information corresponding to the first pose point combination. After the pose point combination characterizing the first limb movement information of the user is obtained, it is input into the preset image translation model. The model has been trained in advance on certain training data and can produce the second limb movement information of the preset character from the input pose point combination, where the second limb movement information of the preset character corresponds to the first limb movement information of the user characterized by the input pose point combination. For example, suppose the user makes a finger-heart gesture. After the pose point combination corresponding to the first limb movement information is acquired from the image, the pose point combination is input into the preset image translation model to obtain the corresponding second limb movement information of the preset character; by then controlling the movement of the preset character according to the second limb movement information in step 104, a motion picture of the preset character making the finger-heart gesture is obtained, and the finger-heart gesture of the preset character corresponds to that of the user. The preset image translation model may be, for example, a pix2pix model, i.e., an image translation model based on adversarial neural networks.
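The disclosure names pix2pix only as an example, so the contract of step 103 can be sketched without a trained network. The stand-in below is not a pix2pix implementation; it only illustrates the mapping from the user's pose points to the preset character's pose points, with the character's scale and anchor position being invented parameters.

```python
# Sketch of step 103: the first pose point combination goes into a preset
# image translation model and second limb movement information comes out.
# A real implementation would rasterize the pose points into a skeleton
# image and run a pix2pix-style generator; this stand-in simply retargets
# normalized pose points onto a preset character of different proportions.
from typing import Dict, Tuple

Pose = Dict[str, Tuple[float, float]]

class PresetImageTranslationModel:
    """Illustrative stand-in for a trained pix2pix-style model."""

    def __init__(self, character_name: str, scale: float, anchor: Tuple[float, float]):
        self.character_name = character_name
        self.scale = scale    # character's overall size relative to the user
        self.anchor = anchor  # character's root position in the output frame

    def translate(self, pose: Pose) -> Pose:
        """Map the user's pose points to the preset character's pose points
        (the "second limb movement information")."""
        if "neck" not in pose:
            raise ValueError("pose must contain a 'neck' root point")
        nx, ny = pose["neck"]
        ax, ay = self.anchor
        return {
            name: (ax + (x - nx) * self.scale, ay + (y - ny) * self.scale)
            for name, (x, y) in pose.items()
        }
```

The key property shown here is the one the patent relies on: each output pose point corresponds one-to-one to an input pose point, so the character's gesture mirrors the user's.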
In step 104, the movement of the preset character is controlled according to the second limb movement information.
With the above technical solution, a pose point combination is first acquired from an image containing the user's own movement information, and the movement information of the preset character is then generated from the acquired pose point combination by a pre-trained model, so that the preset character is controlled to move according to the user's own movements. In this way, the user can conveniently and efficiently control the movements of a specified preset virtual character directly with his or her own movements.
Fig. 2 is a flowchart of a method for obtaining the first pose point combination in a character action control method according to an exemplary embodiment of the disclosure. As shown in Fig. 2, step 102 in Fig. 1 further includes steps 201 and 202.
In step 201, the first limb movement information is extracted from the target image, wherein the target image contains background information and the first limb movement information. Extracting the first limb movement information from the target image removes all image information in the target image other than the first limb movement information, such as the background information, leaving only the first limb movement information that embodies the user's movement. The method for extracting the first limb movement information from the target image may be, for example, image matting.
In step 202, the first pose point combination is obtained according to the first limb movement information. For example, after the first limb movement information is extracted from the target image by the image matting mentioned in step 201, the pose point combination is obtained directly from the extracted first limb movement information; any pose point acquisition method may be used to obtain the pose point combination from the first limb movement information.
With the above technical solution, the first limb movement information is first extracted from the target image, and pose points are then acquired from the first limb movement information to obtain the pose point combination corresponding to the first limb movement information; in this way, the pose point acquisition is more accurate and faster.
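Step 201 names image matting only as one possible extraction method. As a much simpler stand-in for the same intent, the sketch below uses background subtraction on grayscale frames: everything matching a known background is removed, leaving only the foreground that embodies the user's movement. The threshold value is an invented parameter.

```python
# Illustrative stand-in for step 201's extraction: background subtraction
# instead of true image matting. Pixels that match the known background are
# zeroed; the remaining foreground carries the first limb movement
# information.
from typing import List

Gray = List[List[int]]  # grayscale image as rows of 0-255 pixel values

def remove_background(target: Gray, background: Gray, threshold: int = 30) -> Gray:
    """Keep only pixels that differ from the background by more than the
    threshold; everything else (the background information) is removed."""
    return [
        [t if abs(t - b) > threshold else 0 for t, b in zip(t_row, b_row)]
        for t_row, b_row in zip(target, background)
    ]
```

A real matting method additionally estimates soft alpha values at object boundaries, which this threshold test does not attempt.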
Fig. 3 is a flowchart of another character action control method according to an exemplary embodiment of the disclosure. As shown in Fig. 3, the method further includes steps 301 and 302 before step 103 shown in Fig. 1, wherein the preset image translation models and the preset characters are in one-to-one correspondence.
In step 301, a character selection signal is received, the character selection signal indicating the preset character to be controlled. The character selection signal may be input by the user, who can select different characters according to his or her own needs. The character selection signal may also be generated automatically; for example, when the user makes no character selection, a default character selection signal is generated automatically, and the generation method of the automatically generated character selection signal is not limited in this embodiment.
In step 302, the preset image translation model is determined according to the character selection signal. Since the preset image translation models and the preset characters are in one-to-one correspondence, after the character selection signal is received, the preset image translation model to be used in step 103 needs to be determined according to the character selection signal, so that the second limb movement information of the specified preset character can be obtained from the pose point combination acquired from the target image.
With the above technical solution, the character selection signal by which the user selects a preset character can be received, and image translation is performed on the user's movement in the target image according to the preset character selected by the user, so that the character specified by the user reproduces the user's movement in the target image; in this way, the user experience can be further improved.
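Steps 301 and 302 amount to a lookup: each character selection signal identifies one preset character, and the one-to-one correspondence between characters and preset image translation models is a plain mapping. A minimal sketch, with illustrative character names and an assumed automatic default:

```python
# Sketch of steps 301-302: resolve a character selection signal to its
# preset image translation model via a one-to-one mapping. Character names
# and the default are illustrative assumptions.
from typing import Dict, Optional

DEFAULT_CHARACTER = "character_a"

def determine_model(selection_signal: Optional[str],
                    models: Dict[str, object]) -> object:
    """Return the preset image translation model for the selected character.
    A missing signal falls back to an automatically generated default, as
    described for step 301."""
    character = selection_signal if selection_signal is not None else DEFAULT_CHARACTER
    if character not in models:
        raise KeyError(f"no preset image translation model for {character!r}")
    return models[character]
```

The one-to-many variant mentioned below would simply map one model to several characters instead of enforcing a distinct model per character.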
In a possible embodiment, the correspondence between the preset image translation models and the preset characters may also be one-to-many, i.e., the second limb movement information of multiple preset characters may be obtained using the same preset image translation model.
In a possible embodiment, the preset image translation model is obtained by training as follows: a plurality of training images are obtained, the training images containing third limb movement information of the preset character to be trained; the third limb movement information and the second pose point combination corresponding to the third limb movement information are extracted from each training image; and the second pose point combinations and their corresponding third limb movement information are input, in pairs, into the preset image translation model for training. The method for extracting the third limb movement information from the training images may be the same as the method for extracting the first limb movement information in step 201 shown in Fig. 2, for example image matting in both cases, or may differ from the extraction method used in step 201; likewise, the method for extracting the second pose point combination may be the same as, or different from, the method for extracting the first pose point combination in step 202 in Fig. 2. Any method that can extract the first limb movement information or the third limb movement information from an image and obtain the first pose point combination or the second pose point combination will do.
Every preset character must be trained through its preset image translation model before the first limb movement information of the user in the target image can be reproduced on that character.
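The training procedure above pairs each second pose point combination with the third limb movement information it was extracted from; pix2pix-style models train on exactly such input/target pairs. A sketch of the pair-construction step, where the extraction callables are stubs standing in for matting and pose estimation:

```python
# Sketch of building the paired training set described above. The two
# extraction functions are caller-supplied stand-ins for matting and pose
# estimation; the Frame/Pose type aliases are illustrative simplifications.
from typing import Callable, List, Tuple

Frame = str               # stand-in for a training image
Pose = Tuple[float, ...]  # stand-in for a second pose point combination

def build_training_pairs(
    training_images: List[Frame],
    extract_limb_info: Callable[[Frame], Frame],
    extract_pose: Callable[[Frame], Pose],
) -> List[Tuple[Pose, Frame]]:
    """For each training image, extract the third limb movement information
    and its pose point combination, and emit (pose, limb_info) pairs in the
    paired form the preset image translation model is trained on."""
    pairs = []
    for image in training_images:
        limb_info = extract_limb_info(image)
        pose = extract_pose(limb_info)
        pairs.append((pose, limb_info))
    return pairs
```

The resulting list is what a conditional-GAN trainer would iterate over, with the pose as the model input and the limb movement information as the target.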
Fig. 4 is a structural block diagram of a character action control apparatus according to an exemplary embodiment of the disclosure. As shown in Fig. 4, the apparatus includes: a first obtaining module 10, configured to obtain a target image, the target image containing first limb movement information of a user; a second obtaining module 20, configured to obtain a first pose point combination corresponding to the first limb movement information in the target image; a translation module 30, configured to input the first pose point combination into a preset image translation model to obtain second limb movement information corresponding to the first pose point combination; and a control module 40, configured to control the movement of a preset character according to the second limb movement information.
With the above technical solution, a pose point combination is first acquired from an image containing the user's own movement information, and the movement information of the preset character is then generated from the acquired pose point combination by a pre-trained model, so that the preset character is controlled to move according to the user's own movements. In this way, the user can conveniently and efficiently control the movements of a specified preset virtual character directly with his or her own movements.
Fig. 5 is a structural block diagram of the second obtaining module in a character action control apparatus according to an exemplary embodiment of the disclosure. As shown in Fig. 5, the second obtaining module 20 includes: an extracting submodule 201, configured to extract the first limb movement information from the target image, wherein the target image contains background information and the first limb movement information; and a pose point combination obtaining submodule 202, configured to obtain the first pose point combination according to the first limb movement information.
With the above technical solution, the first limb movement information is first extracted from the target image, and pose points are then acquired from the first limb movement information to obtain the pose point combination corresponding to the first limb movement information; in this way, the pose point acquisition is more accurate and faster.
Fig. 6 is a structural block diagram of another character action control apparatus according to an exemplary embodiment of the disclosure, wherein the preset image translation models and the preset characters are in one-to-one correspondence. As shown in Fig. 6, before the translation module 30 inputs the first pose point combination into the preset image translation model, the apparatus further includes: a receiving module 50, configured to receive a character selection signal, the character selection signal indicating the preset character to be controlled; and a determining module 60, configured to determine the preset image translation model according to the character selection signal.
With the above technical solution, the character selection signal by which the user selects a preset character can be received, and image translation is performed on the user's movement in the target image according to the preset character selected by the user, so that the character specified by the user reproduces the user's movement in the target image; in this way, the user experience can be further improved.
In a possible embodiment, the preset image translation model is obtained by training as follows: a plurality of training images are obtained, the training images containing third limb movement information of the preset character to be trained; the third limb movement information and the second pose point combination corresponding to the third limb movement information are extracted from each training image; and the second pose point combinations and their corresponding third limb movement information are input, in pairs, into the preset image translation model for training.
Regarding the apparatus in the above embodiments, the specific manner in which each module performs its operation has been described in detail in the embodiments of the related method and will not be elaborated here.
Fig. 7 is a block diagram of an electronic device 700 according to an exemplary embodiment. As shown in Fig. 7, the electronic device 700 may include a processor 701 and a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700 so as to complete all or part of the steps of the above character action control method. The memory 702 is configured to store various types of data to support operation on the electronic device 700; these data may include, for example, instructions of any application or method operated on the electronic device 700 and application-related data such as contact data, sent and received messages, pictures, audio, and video. The memory 702 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 703 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is configured to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may be further stored in the memory 702 or sent through the communication component 705. The audio component also includes at least one loudspeaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules such as a keyboard, a mouse, or buttons; the buttons may be virtual buttons or physical buttons. The communication component 705 is configured for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, other 5G technologies, or a combination of one or more of them, which is not limited here. Accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module, an NFC module, and the like.
In an exemplary embodiment, the electronic device 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above character action control method.
In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided; the program instructions, when executed by a processor, implement the steps of the above character action control method. For example, the computer-readable storage medium may be the above memory 702 including program instructions, and the above program instructions may be executed by the processor 701 of the electronic device 700 to complete the above character action control method.
Fig. 8 is a block diagram of an electronic device 800 according to an exemplary embodiment. For example, the electronic device 800 may be provided as a server. Referring to Fig. 8, the electronic device 800 includes one or more processors 822 and a memory 832 for storing computer programs executable by the processor 822. A computer program stored in the memory 832 may include one or more modules each corresponding to a set of instructions. In addition, the processor 822 may be configured to execute the computer program so as to execute the above character action control method.
In addition, the electronic device 800 may also include a power supply component 826 and a communication component 850; the power supply component 826 may be configured to perform power management of the electronic device 800, and the communication component 850 may be configured to realize communication of the electronic device 800, for example wired or wireless communication. The electronic device 800 may also include an input/output (I/O) interface 858. The electronic device 800 may operate based on an operating system stored in the memory 832, such as Windows Server™, Mac OS X™, Unix™, or Linux™.
In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided; the program instructions, when executed by a processor, implement the steps of the above character action control method. For example, the computer-readable storage medium may be the above memory 832 including program instructions, and the above program instructions may be executed by the processor 822 of the electronic device 800 to complete the above character action control method.
The preferred embodiments of the disclosure have been described in detail above in conjunction with the accompanying drawings; however, the disclosure is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the disclosure, a variety of simple variants can be made to the technical solution of the disclosure, and these simple variants all belong to the protection scope of the disclosure.
It should be further noted that the specific technical features described in the above specific embodiments can be combined in any suitable manner provided there is no contradiction. In order to avoid unnecessary repetition, the disclosure does not further explain the various possible combinations.
In addition, the various different embodiments of the disclosure can also be combined arbitrarily; as long as such a combination does not depart from the idea of the disclosure, it should likewise be regarded as content disclosed by the disclosure.
Claims (10)
1. A character action control method, characterized in that the method comprises:
obtaining a target image, the target image containing first limb movement information of a user;
obtaining a first pose point combination corresponding to the first limb movement information in the target image;
inputting the first pose point combination into a preset image translation model to obtain second limb movement information corresponding to the first pose point combination; and
controlling the movement of a preset character according to the second limb movement information.
2. The method according to claim 1, characterized in that obtaining the first pose point combination corresponding to the first limb movement information in the target image comprises:
extracting the first limb movement information from the target image, wherein the target image contains background information and the first limb movement information; and
obtaining the first pose point combination according to the first limb movement information.
3. The method according to claim 1, characterized in that the preset image translation models and the preset characters are in one-to-one correspondence;
before the step of inputting the first pose point combination into the preset image translation model, the method further comprises:
receiving a character selection signal, the character selection signal indicating the preset character to be controlled; and
determining the preset image translation model according to the character selection signal.
4. The method according to claim 3, characterized in that the preset image translation model is trained as follows:
acquiring a plurality of training images, each training image containing third limb action information of the preset character to be trained;
extracting, from each training image, the third limb action information and a second posture point combination corresponding to the third limb action information; and
inputting each second posture point combination, paired with its corresponding third limb action information, into the preset image translation model for training.
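The training scheme of claim 4 is paired supervision in the spirit of paired image-to-image translation (pix2pix-style): each sample couples a posture point combination with the character's matching limb action information. A sketch under that assumption, with all names (`extract_pair`, `build_training_pairs`, `train`, the dictionary image format) illustrative and the optimization step reduced to a callback:

```python
from typing import Callable, Dict, List, Tuple

def extract_pair(training_image: Dict) -> Tuple[list, Dict]:
    # One training image yields the third limb action information and the
    # second posture point combination extracted from it.
    return training_image["keypoints"], training_image["limb_action"]

def build_training_pairs(training_images: List[Dict]) -> List[Tuple[list, Dict]]:
    """Assemble (posture point combination, limb action information) pairs."""
    return [extract_pair(img) for img in training_images]

def train(model_update: Callable, training_images: List[Dict], epochs: int = 1) -> int:
    # model_update(pose, target) stands in for one optimization step of the
    # image translation model on a single paired sample.
    pairs = build_training_pairs(training_images)
    for _ in range(epochs):
        for pose, target in pairs:
            model_update(pose, target)
    return len(pairs)

# usage: count optimization steps with a stub update function
steps = []
images = [
    {"keypoints": [(0.1, 0.2)], "limb_action": {"frame": "img_001"}},
    {"keypoints": [(0.3, 0.4)], "limb_action": {"frame": "img_002"}},
]
num_pairs = train(lambda pose, target: steps.append((pose, target)), images, epochs=2)
```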
5. A character action control device, characterized in that the device comprises:
a first acquisition module configured to acquire a target image, the target image containing first limb action information of a user;
a second acquisition module configured to acquire a first posture point combination corresponding to the first limb action information in the target image;
a translation module configured to input the first posture point combination into a preset image translation model to obtain second limb action information corresponding to the first posture point combination; and
a control module configured to control an action of a preset character according to the second limb action information.
6. The device according to claim 5, characterized in that the second acquisition module comprises:
an extraction submodule configured to extract the first limb action information from the target image, wherein the target image contains background information and the first limb action information; and
a posture point combination acquisition submodule configured to acquire the first posture point combination according to the first limb action information.
7. The device according to claim 5, characterized in that the preset image translation model corresponds one-to-one with the preset character;
before the translation module inputs the first posture point combination into the preset image translation model, the device further comprises:
a receiving module configured to receive a character selection signal, the character selection signal indicating the preset character to be controlled; and
a determining module configured to determine the preset image translation model according to the character selection signal.
8. The device according to claim 7, characterized in that the preset image translation model is trained as follows:
acquiring a plurality of training images, each training image containing third limb action information of the preset character to be trained;
extracting, from each training image, the third limb action information and a second posture point combination corresponding to the third limb action information; and
inputting each second posture point combination, paired with its corresponding third limb action information, into the preset image translation model for training.
9. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the steps of the method according to any one of claims 1 to 4 are implemented.
10. An electronic device, characterized by comprising:
a memory on which a computer program is stored; and
a processor configured to execute the computer program in the memory, so as to implement the steps of the method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811512146.9A CN109753150A (en) | 2018-12-11 | 2018-12-11 | Figure action control method, device, storage medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109753150A true CN109753150A (en) | 2019-05-14 |
Family
ID=66403525
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811512146.9A Pending CN109753150A (en) | 2018-12-11 | 2018-12-11 | Figure action control method, device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109753150A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040041812A1 (en) * | 2002-08-30 | 2004-03-04 | Roberts Brian Curtis | System and method for presenting three-dimensional data |
CN103179437A (en) * | 2013-03-15 | 2013-06-26 | 苏州跨界软件科技有限公司 | System and method for recording and playing virtual character videos |
CN103218843A (en) * | 2013-03-15 | 2013-07-24 | 苏州跨界软件科技有限公司 | Virtual character communication system and method |
CN106485773A * | 2016-09-14 | 2017-03-08 | 厦门幻世网络科技有限公司 | Method and apparatus for generating animation data |
CN107688391A * | 2017-09-01 | 2018-02-13 | 广州大学 | Monocular-vision-based gesture recognition method and device |
CN108227931A * | 2018-01-23 | 2018-06-29 | 北京市商汤科技开发有限公司 | Method, device, system, program, and storage medium for controlling a virtual character |
CN108664894A * | 2018-04-10 | 2018-10-16 | 天津大学 | Human-action radar image classification method based on a deep convolutional adversarial neural network |
CN108803874A * | 2018-05-30 | 2018-11-13 | 广东省智能制造研究所 | Machine-vision-based human-computer behavior interaction method |
CN108960086A * | 2018-06-20 | 2018-12-07 | 电子科技大学 | Multi-pose human target tracking method based on generative adversarial network positive-sample enhancement |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111368667A (en) * | 2020-02-25 | 2020-07-03 | 达闼科技(北京)有限公司 | Data acquisition method, electronic equipment and storage medium |
CN111368667B (en) * | 2020-02-25 | 2024-03-26 | 达闼科技(北京)有限公司 | Data acquisition method, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109919888B (en) | Image fusion method, model training method and related device | |
CN106161939B (en) | Photo shooting method and terminal | |
CN110349232B (en) | Image generation method and device, storage medium and electronic equipment | |
CN110349081B (en) | Image generation method and device, storage medium and electronic equipment | |
CN104869320B (en) | Electronic device and operating method for controlling an electronic device | |
CN109495688A (en) | Photographing preview method for an electronic device, graphical user interface, and electronic device | |
JP6924901B2 (en) | Photography method and electronic equipment | |
CN110475069B (en) | Image shooting method and device | |
CN107330418B (en) | Robot system | |
KR20150141808A (en) | Electronic Device Using Composition Information of Picture and Shooting Method of Using the Same | |
CN109064388A (en) | Facial image effect generation method, device and electronic equipment | |
CN112712578A (en) | Virtual character model creating method and device, electronic equipment and storage medium | |
CN108734754A (en) | Image processing method and device | |
WO2021232875A1 (en) | Method and apparatus for driving digital person, and electronic device | |
CN111131702A (en) | Method and device for acquiring image, storage medium and electronic equipment | |
CN106101575B (en) | Method and device for generating an augmented reality photo, and mobile terminal | |
CN109697446A (en) | Image key point extraction method, device, readable storage medium, and electronic device | |
CN114387445A (en) | Object key point identification method and device, electronic equipment and storage medium | |
CN113453027B (en) | Live video and virtual make-up image processing method and device and electronic equipment | |
CN113920229A (en) | Virtual character processing method and device and storage medium | |
CN110349577B (en) | Man-machine interaction method and device, storage medium and electronic equipment | |
CN108965699A (en) | Parameter adjustment method and device for a photographed object, terminal, and readable storage medium | |
CN115984447A (en) | Image rendering method, device, equipment and medium | |
CN109753150A (en) | Figure action control method, device, storage medium and electronic equipment | |
CN104869283B (en) | Image capturing method and electronic device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||