CN108986191A - Character action generation method and device and terminal equipment

Character action generation method and device and terminal equipment

Info

Publication number
CN108986191A
Authority
CN
China
Prior art keywords
object model
person
keywords related
keywords
collecting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810720342.9A
Other languages
Chinese (zh)
Other versions
CN108986191B (en)
Inventor
乔慧
李伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810720342.9A priority Critical patent/CN108986191B/en
Publication of CN108986191A publication Critical patent/CN108986191A/en
Application granted granted Critical
Publication of CN108986191B publication Critical patent/CN108986191B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Machine Translation (AREA)

Abstract

An embodiment of the present invention provides a character action generation method and device and a terminal device. The method is used for virtual reality and/or augmented reality and comprises: collecting expression information input by a user, wherein the expression information comprises keywords related to a person, and the keywords related to the person comprise keywords related to the body actions of the person; extracting the keywords related to the person from the expression information; acquiring an object model corresponding to the keywords from a pre-stored object model library; and processing the object model according to the expression information to obtain the limb action information of the object model. The character action generation method, device and terminal device provided by the embodiment of the present invention improve the acquisition efficiency of human limb action changes in a three-dimensional scene on the basis of automatically constructing the three-dimensional scene.

Description

Character action generation method and device and terminal equipment
Technical Field
The invention relates to the technical field of computers, in particular to a character action generation method and device and terminal equipment.
Background
With the continuous development of virtual reality and/or augmented reality technology, more and more three-dimensional models are used in applications, and three-dimensional scenes constructed from these three-dimensional models are widely applied in many fields. To a great extent, this can provide users with richer visual enjoyment and improve the user experience.
In the prior art, when a three-dimensional scene is constructed by using an existing three-dimensional model, a professional needs to first obtain the three-dimensional models of objects in the scene, and then combine the three-dimensional models of the objects in a manual mode, so as to generate the corresponding three-dimensional scene. For example, when a character model in the three-dimensional scene has a series of body motion changes, a professional needs to manually acquire each body motion of the character model and manually combine the character models corresponding to the body motions to obtain a set of body motion changes of the complete character model. Therefore, the existing mode is adopted, so that the acquisition efficiency of the limb movement change of the human body in the three-dimensional scene is not high.
Disclosure of Invention
The invention provides a character action generation method, a character action generation device and terminal equipment, which improve the acquisition efficiency of the action change of a human limb in a three-dimensional scene on the basis of realizing automatic construction of the three-dimensional scene.
In a first aspect, an embodiment of the present invention provides a method for generating a character action, where the method is used for virtual reality and/or augmented reality, and the method includes:
collecting expression information input by a user; the expression information comprises keywords related to the person, and the keywords related to the person comprise keywords related to the body action of the person;
extracting the keywords related to the person from the expression information;
acquiring an object model corresponding to the keyword from a pre-stored object model library;
and processing the object model according to the expression information to obtain the limb action information of the object model.
In a possible implementation manner, before the obtaining the object model corresponding to the keyword from the pre-stored object model library, the method further includes:
collecting the keywords related to the characters, and collecting object models corresponding to the keywords;
and establishing an object model library, wherein the object model library comprises an incidence relation between the key words and the object models.
In one possible implementation manner, the collecting the object models corresponding to the keywords includes:
collecting a plurality of different object models when a plurality of users express limb actions corresponding to the same keyword;
and training and clustering the plurality of different object models serving as training samples to obtain the object models corresponding to the keywords.
In a possible implementation manner, the collecting expression information input by a user includes:
collecting text information input by a user;
correspondingly, the extracting of keywords related to the person from the expression information comprises:
performing word segmentation processing on the text information according to a semantic model to obtain a word group;
and extracting the keywords related to the characters in the phrases.
In a possible implementation manner, the collecting expression information input by a user includes:
collecting voice information input by a user;
correspondingly, the extracting of keywords related to the person from the expression information comprises:
carrying out voice recognition on the voice information to obtain text information;
performing word segmentation processing on the text information according to a semantic model to obtain a word group;
and extracting keywords related to the character from the phrase.
In a second aspect, an embodiment of the present invention provides an apparatus for generating human actions, where the apparatus is used for virtual reality and/or augmented reality, and the apparatus includes:
a collecting unit, used for collecting the expression information input by the user; the expression information comprises keywords related to the person, and the keywords related to the person comprise keywords related to the body action of the person;
an obtaining unit, configured to extract the keywords related to the person from the expression information;
the obtaining unit is further used for acquiring an object model corresponding to the keyword from a pre-stored object model library;
and the processing unit is used for processing the object model according to the expression information to obtain the limb action information of the object model.
In a possible implementation manner, the apparatus for generating a character action further includes an establishing unit;
the acquisition unit is further used for collecting the keywords related to the characters and collecting object models corresponding to the keywords;
the establishing unit is used for establishing an object model library, and the object model library comprises the incidence relation between the key words and the object models.
In a possible implementation manner, the collection unit is specifically configured to collect a plurality of different object models when a plurality of users express limb actions corresponding to the same keyword; and training and clustering the plurality of different object models as training samples to obtain the object models corresponding to the keywords.
In a possible implementation manner, the collecting unit is specifically configured to collect text information input by a user;
correspondingly, the obtaining unit is specifically configured to perform word segmentation processing on the text information according to a semantic model to obtain a word group; and extracting the keywords related to the characters from the phrases.
In a possible implementation manner, the collecting unit is specifically configured to collect voice information input by a user;
correspondingly, the obtaining unit is specifically configured to perform voice recognition on the voice information to obtain text information; performing word segmentation processing on the text information according to a semantic model to obtain a word group; and extracting keywords related to the character from the phrase.
In a third aspect, an embodiment of the present invention further provides a terminal device, which may include a processor and a memory, where,
the memory is to store program instructions;
the processor is configured to read the program instructions in the memory, and execute the method for generating a human motion according to any one of the first aspect of the present invention according to the program instructions in the memory.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, wherein,
the computer-readable storage medium has stored thereon a computer program that, when executed by a processor, executes the method for generating a human motion according to any one of the first aspect.
According to the character action generation method, device and terminal device provided by the embodiments of the present invention, expression information input by a user is collected, wherein the expression information comprises keywords related to the person, and the keywords related to the person comprise keywords related to the body actions of the person; the keywords related to the person are extracted from the expression information; an object model corresponding to the keywords is then obtained from a pre-stored object model library; and the object model is then processed according to the expression information to obtain the limb action information of the object model. In this way, after the object model corresponding to the keywords is obtained, the object model can be processed directly according to the expression information to obtain its limb action information. Compared with the prior art, in which a complete set of limb action changes of a character model can be obtained only by manually combining the character models corresponding to the individual limb actions, this improves the acquisition efficiency of human limb action changes in the three-dimensional scene on the basis of automatically constructing the three-dimensional scene.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present invention;
fig. 2 is a schematic flowchart of a method for generating a character action according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating another method for generating a character action according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a character motion generation apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another human action generation apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For example, please refer to fig. 1, which is a schematic diagram of an application scenario provided in the embodiment of the present invention. When a user A listens to an audio novel through a terminal device (e.g., a mobile phone), a three-dimensional scene of the novel may be displayed on the mobile phone synchronously while the user A listens, so as to improve the user A's listening experience. For example, when the mobile phone acquires the text information "a 10-year-old girl, 145 cm tall and weighing 40 kg, wears a small skirt and is walking; she suddenly sees her mother and so runs to her mother holding a gift", the mobile phone may construct a three-dimensional scene including the girl, and the scene may include information such as the girl's limb action change (from walking to running). In order to display the limb action changes of the person in the three-dimensional scene and improve the efficiency of acquiring these changes, in the embodiment of the present invention, the expression information input by a user may first be collected, wherein the expression information comprises keywords related to the person, and the keywords related to the person comprise keywords related to the body actions of the person; the keywords related to the person are extracted from the expression information; an object model corresponding to the keywords is then obtained from a pre-stored object model library; and the object model is then processed according to the expression information to obtain the limb action information of the object model. Compared with the prior art, in which a complete set of limb action changes of a character model can be obtained only by manually combining the character models corresponding to the individual limb actions, this improves the acquisition efficiency of human limb action changes in the three-dimensional scene on the basis of automatically constructing the three-dimensional scene.
The following describes the technical solution of the present invention and how to solve the above technical problems with specific examples. The following specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
Fig. 2 is a flowchart of a method for generating a human motion according to an embodiment of the present invention, where the method for generating a human motion can be used for virtual reality and/or augmented reality, the method for generating a human motion can be executed by a human motion generating device, and the human motion generating device can be provided separately or integrated in a processor. Referring to fig. 2, the method for generating a human action may include:
s201, collecting expression information input by a user.
The expression information includes keywords related to the person, and the keywords related to the person include keywords related to the body movement of the person.
The expression information input by the user can be input in a text mode, namely text information; of course, the input may be by voice, i.e. voice information. For example, when the terminal device collects text information input by a user, the text information input by the user can be collected through a screen of the terminal device; when the terminal device collects the voice information input by the user, the voice information input by the user can be collected through a microphone of the terminal device. The expression information comprises at least one keyword related to the person, and the keywords related to the person comprise at least one keyword related to the body movement of the person.
It should be noted that the expression information in the embodiment of the present invention may be a single sentence, a paragraph of several sentences, or, of course, a complete text composed of several paragraphs.
S202, extracting keywords related to the person from the expression information.
After the terminal device collects the expression information input by the user through the above S201, the terminal device may extract keywords related to the person in the expression information. For example, the keyword related to the person may be a word representing the age, height, weight, movement of an arm, movement of a leg, and the like of the person.
S203, acquiring an object model corresponding to the keyword from a pre-stored object model library.
The object model corresponding to the keyword may be a three-dimensional model.
Before obtaining the object model corresponding to the keyword, an object model library is required to be established in advance, and a plurality of keywords and object models corresponding to the keywords are stored in the object model library. For a keyword, it may correspond to one or more object models. For example, for the keyword "run", the corresponding object model may be a running male character model, a running female character model, or a running character model with different postures. Of course, a plurality of keywords may correspond to one object model. The more keywords related to a person are acquired, the higher the accuracy of the corresponding object model acquired in the object model library.
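The role of the keyword-model associations described above can be illustrated with a small sketch. This is a minimal illustration only: a Python list of tagged entries stands in for the pre-stored object model library, each model is assumed to carry a tag set, and scoring by keyword overlap is one way to reflect the remark that more person-related keywords yield a more accurate model. None of the names below come from the patent.

```python
# Hypothetical sketch: one keyword may map to several candidate models, and a model
# may be matched by several keywords. The more person-related keywords that match a
# model's tags, the higher its score.

OBJECT_MODEL_LIBRARY = [
    {"model_id": "male_run_01",   "tags": {"run", "male"}},
    {"model_id": "female_run_01", "tags": {"run", "girl"}},
    {"model_id": "girl_walk_01",  "tags": {"walk", "girl", "small skirt"}},
]

def find_models(keywords):
    """Return model ids ordered by how many person-related keywords they match."""
    keywords = set(keywords)
    scored = []
    for entry in OBJECT_MODEL_LIBRARY:
        score = len(keywords & entry["tags"])
        if score > 0:
            scored.append((score, entry["model_id"]))
    # Higher overlap first: more matching keywords means a more specific model.
    return [model_id for score, model_id in sorted(scored, reverse=True)]

print(find_models(["run", "girl"]))  # 'female_run_01' ranked before 'male_run_01'
```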
After the keywords related to the person are extracted from the expression information in S202, the object model corresponding to the keywords may be searched in a pre-established object model library, so as to obtain the object model corresponding to the keywords.
S204, processing the object model according to the expression information to obtain the limb action information of the object model.
After the object model corresponding to the keyword is acquired from the pre-stored object model library in S203, the object model may be processed in combination with the expression information to obtain the limb action information of the object model. For example, if the expression information includes "a person walks and suddenly runs", after the walking character model and the running character model are respectively obtained, the two models are sorted and combined according to the expression information, and the limb action information of the object model described in the expression information can be obtained. Compared with the prior art, in which a complete set of limb action changes of a character model can be obtained only by manually combining the character models corresponding to the individual limb actions, this improves the acquisition efficiency of human limb action changes in the three-dimensional scene on the basis of automatically constructing the three-dimensional scene.
The method for generating the character action, provided by the embodiment of the invention, comprises the steps of collecting the expression information input by a user; the expression information comprises keywords related to the person, and the keywords related to the person comprise keywords related to the body action of the person; extracting keywords related to the person from the expression information; then obtaining an object model corresponding to the keyword from a pre-stored object model library; and then, processing the object model according to the expression information so as to obtain the limb action information of the object model. Therefore, according to the character action generation method provided by the embodiment of the invention, after the object model corresponding to the keyword is obtained, the object model can be directly processed according to the expression information, so that the limb action information of the object model is obtained, and compared with the prior art that the limb action changes of a set of complete character model can be obtained only by manually combining character models corresponding to the limb actions, the method improves the obtaining efficiency of the limb action changes of the human body in the three-dimensional scene on the basis of realizing the automatic construction of the three-dimensional scene.
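As a purely illustrative outline of steps S201 to S204, the following sketch strings them together into one pipeline. Every helper name (collect_expression_info, extract_person_keywords, lookup_object_models, generate_limb_action_info) is an assumption made for the example, not terminology from the patent, and the bodies are deliberately simplified.

```python
# Schematic outline of S201-S204; every helper below is a hypothetical placeholder.

def collect_expression_info():
    # S201: text typed on the screen or speech captured by a microphone.
    return "A 10-year-old girl walks and then suddenly runs"

def extract_person_keywords(expression_info):
    # S202: keep only the words that describe the person (age, actions, ...).
    person_vocabulary = {"girl", "10-year-old", "walks", "runs"}
    return [w for w in expression_info.lower().split() if w in person_vocabulary]

def lookup_object_models(keywords, model_library):
    # S203: fetch the pre-stored model(s) associated with each keyword.
    return [model_library[k] for k in keywords if k in model_library]

def generate_limb_action_info(models, expression_info):
    # S204: order and combine the models according to the expression information.
    return {"sequence": models, "source_text": expression_info}

model_library = {"walks": "girl_walk_model", "runs": "girl_run_model"}
text = collect_expression_info()
keywords = extract_person_keywords(text)
models = lookup_object_models(keywords, model_library)
print(generate_limb_action_info(models, text))
```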
To more clearly describe the method for generating a character action according to the embodiment of the present invention, please refer to fig. 3, where fig. 3 is a schematic flow chart of another method for generating a character action according to the embodiment of the present invention, in the embodiment shown in fig. 3, taking the expression information input by the user as text information as an example, the method for generating a character action may further include:
s301, collecting text information input by a user.
The expression information input by the user comprises keywords related to the person, and the keywords related to the person comprise keywords related to the body action of the person.
Similarly, the text information in the embodiment of the present invention may be a single sentence, a paragraph of several sentences, or, of course, a complete text composed of several paragraphs. The text information comprises at least one keyword related to the person, and the keywords related to the person comprise at least one keyword related to the body movement of the person.
Optionally, the terminal device may collect text information input by the user through a screen of the terminal device, and certainly, may also collect text information input by the user through other manners.
After acquiring the text information input by the user, the terminal device may extract the keywords related to the person in the text information, and optionally, in an embodiment of the present invention, the extracting of the keywords related to the person in the text information may be implemented by the following steps S302 to S303:
and S302, performing word segmentation processing on the text information according to the semantic model to obtain a word group.
After the text information input by the user is collected in the above S301, word segmentation processing may be performed on the text information according to the semantic model to obtain a word group. It should be noted that, the method for performing word segmentation processing on text information through a semantic model may refer to a method disclosed in the prior art, and here, the embodiment of the present invention is not described again.
For example, as shown in fig. 1, after acquiring the text information "a 10-year-old girl, 145 cm tall and weighing 40 kg, wears a small skirt and is walking; she suddenly sees her mother and so runs to her mother holding a gift", the terminal device may perform word segmentation processing on the text information through the semantic model to obtain a plurality of word groups, where the word groups at least include: height, 145 cm, weight, 40 kg, 10 years old, girl, wear, small skirt, walk, sudden, see, self, mom, hold, gift, run, etc.
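As a concrete but purely illustrative example of this step, the open-source jieba tokenizer can stand in for the "semantic model"; the patent does not name any particular segmentation tool, and the Chinese sentence below is only a paraphrase of the example in fig. 1.

```python
# Minimal segmentation sketch; jieba stands in for the semantic model mentioned above.
import jieba

# Paraphrase of the example sentence from the description (illustrative only).
text = "身高145厘米、体重40公斤的10岁小女孩穿着小裙子走着路，突然看见了自己的妈妈，于是抱着礼物向妈妈跑去"
phrases = jieba.lcut(text)
print(phrases)  # e.g. ['身高', '145', '厘米', ..., '小裙子', ..., '跑', '去']
```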
S303, extracting keywords related to the character from the phrases.
After word segmentation processing is carried out on the text information according to the semantic model to obtain word groups, keywords related to the characters can be extracted from the obtained word groups.
It should be noted that S301 to S303 above describe, by way of example, how to extract the keywords related to the person when the expression information is text information. Of course, the expression information may also be voice information. When the expression information is voice information, voice recognition may first be performed on the voice information to obtain the corresponding text information, so that the voice information input by the user is converted into text information; the keywords related to the person are then extracted from the text information in the same manner as in S302 to S303, which can be referred to above and is not described again here. A sketch of such a voice front end is given below.
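The voice front end mentioned above can be sketched, under assumptions, with the third-party SpeechRecognition package; the patent does not mandate this package, and the file name and language code below are placeholders.

```python
# Illustrative only: convert a recorded utterance to text, then reuse S302-S303.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("user_input.wav") as source:   # hypothetical recording
    audio = recognizer.record(source)

# recognize_google performs online speech recognition; the language code is an assumption.
text_info = recognizer.recognize_google(audio, language="zh-CN")
print(text_info)  # feed this text into the word segmentation of S302
```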
For example, after the word segmentation processing in S302 yields phrases such as height, 145 cm, weight, 40 kg, 10 years old, girl, wear, small skirt, walk, sudden, see, self, mom, hold, gift, run, and the like, the keywords related to the person may be extracted from these phrases. It can be seen that the keywords related to the person are: height, 145 cm, weight, 40 kg, 10 years old, girl, wear, small skirt, walk, see, hold, gift, run.
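Keyword extraction in S303 can then be sketched as filtering the segmented phrases against a vocabulary of person-related attributes and limb actions. The vocabulary, its grouping, and the function name are illustrative assumptions only.

```python
# Hypothetical filter: keep only phrases that describe the person or a limb action.
PERSON_ATTRIBUTE_WORDS = {"height", "145 cm", "weight", "40 kg", "10 years old",
                          "girl", "wear", "small skirt", "gift"}
LIMB_ACTION_WORDS = {"walk", "see", "hold", "run"}

def extract_person_keywords(phrases):
    keyword_set = PERSON_ATTRIBUTE_WORDS | LIMB_ACTION_WORDS
    return [p for p in phrases if p in keyword_set]

phrases = ["height", "145 cm", "weight", "40 kg", "10 years old", "girl", "wear",
           "small skirt", "walk", "sudden", "see", "self", "mom", "hold", "gift", "run"]
print(extract_person_keywords(phrases))
# ['height', '145 cm', 'weight', '40 kg', '10 years old', 'girl', 'wear',
#  'small skirt', 'walk', 'see', 'hold', 'gift', 'run']
```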
S304, collecting keywords related to the person, and collecting object models corresponding to the keywords.
Before acquiring the body motion information of the object model corresponding to the keyword, the keyword corresponding to the person needs to be collected, and the object model corresponding to the keyword needs to be collected. When collecting object models corresponding to keywords, a keyword may correspond to a plurality of object models, and of course, a plurality of keywords may correspond to a single object model.
Optionally, the attributes associated with the person may include characteristics such as height, weight, age, wear, arm movements and leg movements, and the words used to represent these attributes may be understood as keywords related to the person. After the keywords related to the person are determined, the object models corresponding to the keywords may be further collected. Optionally, in this embodiment of the present invention, collecting the object models corresponding to the keywords may include: collecting a plurality of different object models when a plurality of users express the limb action corresponding to the same keyword; and training and clustering the plurality of different object models as training samples to obtain the object model corresponding to the keyword. For example, for the keyword "stand", the standing postures of different users may differ; therefore, when determining the object model corresponding to the keyword "stand", the different object models collected from a plurality of users may be trained and clustered to obtain the object model corresponding to the keyword "stand".
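The "training and clustering" of the models collected from many users for one keyword can be illustrated with a small clustering sketch over pose feature vectors. The patent does not name an algorithm; scikit-learn's KMeans and the random joint-angle features below are stand-ins chosen only for the example.

```python
# Illustrative sketch: cluster pose feature vectors collected from many users for the
# keyword "stand", then take each cluster centre as a representative object model pose.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: each row is a flattened joint-angle vector from one user's pose.
rng = np.random.default_rng(0)
user_poses = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(20, 6)),   # users standing upright
    rng.normal(loc=0.5, scale=0.1, size=(20, 6)),   # users standing with arms folded
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(user_poses)
representative_poses = kmeans.cluster_centers_
print(representative_poses.shape)  # (2, 6): candidate object model poses for "stand"
```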
S305, establishing an object model library.
The object model library comprises incidence relations between the keywords and the object models.
After the keywords related to the person are collected and the object models corresponding to the keywords are collected, respectively, through S304, an object model library may be built according to the association relationship between the keywords and the object models.
It should be noted that there is no fixed order between S301-S303 and S304-S305: S301-S303 may be executed first and then S304-S305; S304-S305 may be executed first and then S301-S303; or S301-S303 and S304-S305 may be executed simultaneously. The embodiment of the present invention is described by taking the case in which S301-S303 are executed first and then S304-S305 as an example, but the embodiment of the present invention is not limited thereto. In general, S304-S305 may be performed first, that is, the keywords related to the person are collected, the object models corresponding to the keywords are collected, and the object model library is built in advance. S304-S305 do not need to be performed every time the limb action information of an object model is acquired: the object model library may be built when the limb action information is acquired for the first time, and thereafter, when a new keyword and an object model corresponding to the new keyword appear, they may be added to the object model library so as to update it. A sketch of building and updating such a library is given below.
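A minimal sketch of building and incrementally updating such an object model library, assuming it is simply a mapping from keywords to lists of models (the class and method names are illustrative, not from the patent):

```python
# Hypothetical object model library holding the association between keywords and models.
class ObjectModelLibrary:
    def __init__(self):
        self._models_by_keyword = {}          # keyword -> list of object models

    def add(self, keyword, object_model):
        # S304/S305: record the association; the same call later adds new keywords
        # and their models so the library can be updated incrementally.
        self._models_by_keyword.setdefault(keyword, []).append(object_model)

    def lookup(self, keyword):
        return self._models_by_keyword.get(keyword, [])

library = ObjectModelLibrary()
library.add("walk", "girl_walk_model")
library.add("run", "girl_run_model")
library.add("run", "boy_run_model")           # one keyword, several models
print(library.lookup("run"))                  # ['girl_run_model', 'boy_run_model']
```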
S306, acquiring an object model corresponding to the keyword from a pre-stored object model library.
It should be noted that, when an object model corresponding to a keyword is obtained from the pre-established object model library, if the keyword corresponds to at least two object models in the pre-stored object model library, one of the at least two object models may be selected arbitrarily as the object model corresponding to the keyword, or the at least two object models may be averaged to obtain the object model corresponding to the keyword. Of course, if the text information further constrains the object model, an object model matching the text information may be selected from the at least two object models, as sketched below.
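The selection strategies just described (pick one candidate arbitrarily, average the candidates, or match additional constraints from the text) can be sketched as follows; representing a model as a dictionary of tags and numeric parameters is an assumption made only for illustration.

```python
# Illustrative selection among several object models that share one keyword.
def choose_model(candidates, text_constraints=None):
    if text_constraints:
        # Prefer a model whose tags satisfy the additional limits found in the text.
        for model in candidates:
            if text_constraints <= model["tags"]:
                return model
    if len(candidates) == 1:
        return candidates[0]
    # Otherwise average the numeric parameters of the candidates (one of the options
    # mentioned above; arbitrarily picking candidates[0] would be the other).
    return {
        "tags": set.union(*(m["tags"] for m in candidates)),
        "height_cm": sum(m["height_cm"] for m in candidates) / len(candidates),
    }

candidates = [
    {"tags": {"run", "girl"}, "height_cm": 140.0},
    {"tags": {"run", "girl", "small skirt"}, "height_cm": 150.0},
]
print(choose_model(candidates, text_constraints={"small skirt"}))  # second model
print(choose_model(candidates))                                    # averaged model
```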
For example, after the keywords related to the character (height, 145 cm, weight, 40 kg, 10 years old, girl, wear, small skirt, walk, see, hold, gift, run) are extracted through S303 and the object model library is built through S305, a model of a walking 10-year-old girl, 145 cm tall and weighing 40 kg, wearing a small skirt and holding a gift, and a model of the same girl running may be searched for and obtained from the pre-built object model library according to the keywords related to the character. After the two models are acquired, they can be processed in combination with the text information to obtain the limb action information of the girl model; please refer to the following S307:
and S307, processing the object model according to the expression information to obtain the limb action information of the object model.
After the object model corresponding to the keyword is acquired from the pre-stored object model library in S306, the object model may be processed in combination with the expression information to obtain the limb action information of the object model. Compared with the prior art, in which a complete set of limb action changes of a character model can be obtained only by manually combining the character models corresponding to the individual limb actions, this improves the acquisition efficiency of human limb action changes in the three-dimensional scene on the basis of automatically constructing the three-dimensional scene.
For example, a model of a walking 10-year-old girl, 145 cm tall and weighing 40 kg, wearing a small skirt and holding a gift, and a model of the same girl running are searched for and obtained from the pre-stored object model library. From the text information "a 10-year-old girl, 145 cm tall and weighing 40 kg, wears a small skirt and is walking; she suddenly sees her mother and so runs to her mother holding a gift", it can be determined that the girl first performs the walking action and then performs the running action, so the two models can be sorted and combined according to the text information to obtain the limb action information of the girl model, as sketched below. Compared with the prior art, in which the complete change of the girl's limb actions can be obtained only by manually combining the girl models corresponding to the individual limb actions, this improves the acquisition efficiency of the girl's limb action changes in the three-dimensional scene on the basis of automatically constructing the three-dimensional scene.
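The sorting-and-combining step can be sketched by ordering the retrieved models by where their action keyword appears in the text; the function below is purely illustrative and not the patent's method.

```python
# Hypothetical combination step: order the retrieved models by the position of their
# action keyword in the text, yielding the girl's limb action change (walk -> run).
def order_models_by_text(text, models_by_action):
    ordered = sorted(models_by_action.items(), key=lambda item: text.find(item[0]))
    return [model for action, model in ordered if action in text]

text = "a 10-year-old girl wears a small skirt and walks, then holds a gift and runs"
models_by_action = {"runs": "girl_run_model", "walks": "girl_walk_model"}
print(order_models_by_text(text, models_by_action))
# ['girl_walk_model', 'girl_run_model']: walking first, then running
```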
Fig. 4 is a schematic structural diagram of a human motion generation apparatus 40 according to an embodiment of the present invention, please refer to fig. 4, in which the human motion generation apparatus 40 can be applied to virtual reality and/or augmented reality, and the human motion generation apparatus 40 can include:
a collecting unit 401, configured to collect the expression information input by the user; the expression information includes keywords related to the person, and the keywords related to the person include keywords related to the body movement of the person.
An obtaining unit 402 for extracting keywords related to the person from the expression information.
The obtaining unit 402 is further configured to obtain an object model corresponding to the keyword from a pre-stored object model library.
A processing unit 403, configured to process the object model according to the expression information, so as to obtain the body motion information of the object model.
Optionally, the generating device 40 of the human actions may further include an establishing unit 404, please refer to fig. 5, where fig. 5 is a schematic structural diagram of another generating device of human actions according to an embodiment of the present invention.
The collecting unit 401 is further configured to collect keywords related to the person, and collect object models corresponding to the keywords.
The establishing unit 404 is configured to establish an object model library, where the object model library includes an association relationship between a keyword and an object model.
Optionally, the collecting unit 401 is specifically configured to collect a plurality of different object models when a plurality of users express limb actions corresponding to the same keyword; and training and clustering a plurality of different object models as training samples to obtain object models corresponding to the keywords.
Optionally, the collecting unit 401 is specifically configured to collect text information input by a user.
Correspondingly, the obtaining unit 402 is specifically configured to perform word segmentation processing on the text information according to the semantic model to obtain a word group; and extracting keywords related to the character from the phrase.
Optionally, the collecting unit 401 is specifically configured to collect voice information input by a user.
Correspondingly, the obtaining unit 402 is specifically configured to perform voice recognition on the voice information to obtain text information; performing word segmentation processing on the text information according to the semantic model to obtain a word group; and extracting keywords related to the characters from the phrases.
The character motion generation apparatus 40 according to the embodiment of the present invention may implement the technical solution of the character motion generation method according to any of the above embodiments, and the implementation principle and the beneficial effect are similar, which are not described herein again.
Fig. 6 is a schematic structural diagram of a terminal device 60 according to an embodiment of the present invention, please refer to fig. 6, where the terminal device 60 may include a processor 601 and a memory 602. Wherein,
the memory 602 is used to store program instructions.
The processor 601 is configured to read the program instructions in the memory 602 and execute the human action generation method according to any of the above embodiments according to the program instructions in the memory 602.
The terminal device 60 shown in the embodiment of the present invention may execute the technical solution of the method for generating a character action shown in any one of the above embodiments, and the implementation principle and the beneficial effect are similar, which are not described herein again.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method for generating a character action according to any of the above embodiments is performed, and the implementation principle and the beneficial effects are similar, and are not described herein again.
The processor in the above embodiments may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software modules may be located in a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable ROM, an electrically erasable programmable memory, a register, or another storage medium well known in the art. The storage medium is located in the memory, and the processor reads the instructions in the memory and completes the steps of the above method in combination with its hardware.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method for generating human actions, the method being used for virtual reality and/or augmented reality, the method comprising:
collecting expression information input by a user; the expression information comprises keywords related to the person, and the keywords related to the person comprise keywords related to the body action of the person;
extracting the keywords related to the person from the expression information;
acquiring an object model corresponding to the keyword from a pre-stored object model library;
and processing the object model according to the expression information to obtain the limb action information of the object model.
2. The method according to claim 1, wherein before obtaining the object model corresponding to the keyword from the pre-stored object model library, the method further comprises:
collecting the keywords related to the characters, and collecting object models corresponding to the keywords;
and establishing an object model library, wherein the object model library comprises an incidence relation between the key words and the object models.
3. The method of claim 2, wherein the collecting the object models corresponding to the keywords comprises:
collecting a plurality of different object models when a plurality of users express limb actions corresponding to the same keyword;
and training and clustering the plurality of different object models serving as training samples to obtain the object models corresponding to the keywords.
4. The method according to any one of claims 1 to 3, wherein the collecting expression information input by a user comprises:
collecting text information input by a user;
correspondingly, the extracting of keywords related to the person from the expression information comprises:
performing word segmentation processing on the text information according to a semantic model to obtain a word group;
and extracting the keywords related to the characters in the phrases.
5. The method according to any one of claims 1 to 3, wherein the collecting expression information input by a user comprises:
collecting voice information input by a user;
correspondingly, the extracting of keywords related to the person from the expression information comprises:
carrying out voice recognition on the voice information to obtain text information;
performing word segmentation processing on the text information according to a semantic model to obtain a word group;
and extracting keywords related to the character from the phrase.
6. An apparatus for generating human actions, the apparatus being used for virtual reality and/or augmented reality, the apparatus comprising:
the acquisition unit is used for acquiring the expression information input by the user; the expression information comprises keywords related to the person, and the keywords related to the person comprise keywords related to the body action of the person;
an acquisition unit configured to extract the keyword related to the person from the expression information;
the acquiring unit is further used for acquiring an object model corresponding to the keyword from a pre-stored object model library;
and the processing unit is used for processing the object model according to the expression information to obtain the limb action information of the object model.
7. The apparatus of claim 6, further comprising a setup unit;
the acquisition unit is further used for collecting the keywords related to the characters and collecting object models corresponding to the keywords;
the establishing unit is used for establishing an object model library, and the object model library comprises the incidence relation between the key words and the object models.
8. The apparatus of claim 7,
the acquisition unit is specifically used for collecting a plurality of different object models when a plurality of users express limb actions corresponding to the same keyword; and training and clustering the plurality of different object models as training samples to obtain the object models corresponding to the keywords.
9. The apparatus according to any one of claims 6 to 8,
the acquisition unit is specifically used for acquiring text information input by a user;
correspondingly, the obtaining unit is specifically configured to perform word segmentation processing on the text information according to a semantic model to obtain a word group; and extracting the keywords related to the characters from the phrases.
10. The apparatus according to any one of claims 6 to 8,
the acquisition unit is specifically used for acquiring voice information input by a user;
correspondingly, the obtaining unit is specifically configured to perform voice recognition on the voice information to obtain text information; performing word segmentation processing on the text information according to a semantic model to obtain a word group; and extracting keywords related to the character from the phrase.
11. A terminal device, comprising a processor and a memory, wherein,
the memory is to store program instructions;
the processor is used for reading the program instructions in the memory and executing the human action generation method as claimed in any one of claims 1 to 5 according to the program instructions in the memory.
12. A computer-readable storage medium, characterized in that,
the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, executes the method for generating a human motion according to any one of claims 1 to 5.
CN201810720342.9A 2018-07-03 2018-07-03 Character action generation method and device and terminal equipment Active CN108986191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810720342.9A CN108986191B (en) 2018-07-03 2018-07-03 Character action generation method and device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810720342.9A CN108986191B (en) 2018-07-03 2018-07-03 Character action generation method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN108986191A (en) 2018-12-11
CN108986191B CN108986191B (en) 2023-06-27

Family

ID=64536039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810720342.9A Active CN108986191B (en) 2018-07-03 2018-07-03 Character action generation method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN108986191B (en)



Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2828572A1 (en) * 2001-08-13 2003-02-14 Olivier Cordoleani Method for creating a virtual three-dimensional person representing a real person in which a database of geometries, textures, expression, etc. is created with a motor then used to manage movement and expressions of the 3-D person
CN101482976A (en) * 2009-01-19 2009-07-15 腾讯科技(深圳)有限公司 Method for driving change of lip shape by voice, method and apparatus for acquiring lip cartoon
WO2010105216A2 (en) * 2009-03-13 2010-09-16 Invention Machine Corporation System and method for automatic semantic labeling of natural language texts
CN102903142A (en) * 2012-10-18 2013-01-30 天津戛唛影视动漫文化传播有限公司 Method for realizing three-dimensional augmented reality
CN103646425A (en) * 2013-11-20 2014-03-19 深圳先进技术研究院 A method and a system for body feeling interaction
CN104268166A (en) * 2014-09-09 2015-01-07 北京搜狗科技发展有限公司 Input method, device and electronic device
CN104317389A (en) * 2014-09-23 2015-01-28 广东小天才科技有限公司 Method and device for identifying character role through action
CN104461215A (en) * 2014-11-12 2015-03-25 深圳市东信时代信息技术有限公司 Augmented reality system and method based on virtual augmentation technology
CN104866308A (en) * 2015-05-18 2015-08-26 百度在线网络技术(北京)有限公司 Scenario image generation method and apparatus
CN105551084A (en) * 2016-01-28 2016-05-04 北京航空航天大学 Outdoor three-dimensional scene combined construction method based on image content parsing
CN205721630U (en) * 2016-04-26 2016-11-23 西安智道科技有限责任公司 A kind of new media promotes the man-machine interaction structure of machine
CN107316343A (en) * 2016-04-26 2017-11-03 腾讯科技(深圳)有限公司 A kind of model treatment method and apparatus based on data-driven
US20180052849A1 (en) * 2016-08-18 2018-02-22 International Business Machines Corporation Joint embedding of corpus pairs for domain mapping
CN106710590A (en) * 2017-02-24 2017-05-24 广州幻境科技有限公司 Voice interaction system with emotional function based on virtual reality environment and method
CN106951881A (en) * 2017-03-30 2017-07-14 成都创想空间文化传播有限公司 A kind of three-dimensional scenic rendering method, apparatus and system
CN106951095A (en) * 2017-04-07 2017-07-14 胡轩阁 Virtual reality interactive approach and system based on 3-D scanning technology
CN107272884A (en) * 2017-05-09 2017-10-20 聂懋远 A kind of control method and its control system based on virtual reality technology
CN108170278A (en) * 2018-01-09 2018-06-15 三星电子(中国)研发中心 Link up householder method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Tan Xiaohui et al., "Model-driven rapid construction of three-dimensional scenes" (模型驱动的快速三维场景构建), Journal of System Simulation (系统仿真学报), vol. 25, no. 10, 8 October 2013, pages 2397-2402 *
Zhao Yujing; Xu Xinze; Zhu Qidan; Zhang Zhi, "A human-computer interaction method based on natural-language keywords" (一种自然语言关键词的人机交互方法), Applied Science and Technology (应用科技), vol. 43, no. 06, pages 1-6

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109918509A (en) * 2019-03-12 2019-06-21 黑龙江世纪精彩科技有限公司 Scene generating method and scene based on information extraction generate the storage medium of system
CN109918509B (en) * 2019-03-12 2021-07-23 明白四达(海南经济特区)科技有限公司 Scene generation method based on information extraction and storage medium of scene generation system
CN113313792A (en) * 2021-05-21 2021-08-27 广州幻境科技有限公司 Animation video production method and device

Also Published As

Publication number Publication date
CN108986191B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
CN110459214B (en) Voice interaction method and device
CN111680562A (en) Human body posture identification method and device based on skeleton key points, storage medium and terminal
CN111833418A (en) Animation interaction method, device, equipment and storage medium
CN106897372B (en) Voice query method and device
CN107333071A (en) Video processing method and device, electronic equipment and storage medium
CN109815776B (en) Action prompting method and device, storage medium and electronic device
CN108345385A (en) Virtual accompany runs the method and device that personage establishes and interacts
CN109660865B (en) Method and device for automatically labeling videos, medium and electronic equipment
CN110174942B (en) Eye movement synthesis method and device
CN109409255A (en) A kind of sign language scene generating method and device
EP4300431A1 (en) Action processing method and apparatus for virtual object, and storage medium
CN115272537A (en) Audio driving expression method and device based on causal convolution
CN110516749A (en) Model training method, method for processing video frequency, device, medium and calculating equipment
CN109343695A (en) Exchange method and system based on visual human's behavioral standard
CN111191503A (en) Pedestrian attribute identification method and device, storage medium and terminal
CN112489036A (en) Image evaluation method, image evaluation device, storage medium, and electronic apparatus
CN114513678A (en) Face information generation method and device
CN108986191B (en) Character action generation method and device and terminal equipment
CN113744286A (en) Virtual hair generation method and device, computer readable medium and electronic equipment
CN110298925B (en) Augmented reality image processing method, device, computing equipment and storage medium
CN117689752A (en) Literary work illustration generation method, device, equipment and storage medium
CN108961431A (en) Generation method, device and the terminal device of facial expression
CN110781327B (en) Image searching method and device, terminal equipment and storage medium
CN101877189A (en) Machine translation method from Chinese text to sign language
CN111447379B (en) Method and device for generating information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant