WO2021196644A1 - Method, apparatus and device for driving interactive object, and storage medium - Google Patents


Info

Publication number
WO2021196644A1
Authority
WO
WIPO (PCT)
Prior art keywords
phoneme
sequence
interactive object
feature code
control vector
Prior art date
Application number
PCT/CN2020/129793
Other languages
French (fr)
Chinese (zh)
Inventor
吴文岩
吴潜溢
钱晨
白晨
Original Assignee
北京市商汤科技开发有限公司
Priority date
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司 filed Critical 北京市商汤科技开发有限公司
Priority to KR1020217027692A priority Critical patent/KR20210124307A/en
Priority to SG11202111909QA priority patent/SG11202111909QA/en
Priority to JP2021549562A priority patent/JP2022530935A/en
Publication of WO2021196644A1 publication Critical patent/WO2021196644A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis

Definitions

  • The present disclosure relates to the field of computer technology, and in particular to a method, apparatus, device, and storage medium for driving interactive objects.
  • the embodiments of the present disclosure provide a driving solution for interactive objects.
  • A method for driving an interactive object includes: obtaining a phoneme sequence corresponding to text data; obtaining a control parameter value of at least one local area of an interactive object matching the phoneme sequence; and controlling the posture of the interactive object according to the acquired control parameter value.
  • The method further includes: controlling the display device displaying the interactive object to display text according to the text data, and/or controlling the display device to output speech according to the phoneme sequence corresponding to the text data.
  • The control parameter of the local area of the interactive object includes the posture control vector of the local area. Obtaining the control parameter value of at least one local area of the interactive object matching the phoneme sequence includes: performing feature encoding on the phoneme sequence to obtain a first coding sequence corresponding to the phoneme sequence; obtaining a feature code corresponding to at least one phoneme according to the first coding sequence; and obtaining the posture control vector of at least one local area of the interactive object corresponding to the feature code.
  • Performing feature encoding on the phoneme sequence to obtain the first coding sequence corresponding to the phoneme sequence includes: for each of the multiple phonemes contained in the phoneme sequence, generating a sub-coding sequence corresponding to the phoneme; and obtaining the first coding sequence corresponding to the phoneme sequence according to the sub-coding sequences respectively corresponding to the multiple phonemes.
  • Generating the sub-coding sequence corresponding to a phoneme includes: detecting whether the phoneme is present at each time point; and obtaining the sub-coding sequence corresponding to the phoneme by setting the coding value at a time point where the phoneme is present to a first value and setting the coding value at a time point where the phoneme is absent to a second value.
  • The method further includes: for the sub-coding sequence corresponding to each of the multiple phonemes, using a Gaussian filter to perform a Gaussian convolution operation on the temporally continuous values of the phoneme.
  • Controlling the posture of the interactive object according to the obtained control parameter value includes: obtaining a sequence of posture control vectors corresponding to the second coding sequence; and controlling the posture of the interactive object according to the sequence of posture control vectors.
  • The method further includes: in a case where the time interval between phonemes in the phoneme sequence is greater than a set threshold, controlling the posture of the interactive object according to set control parameter values of the local area.
  • Obtaining the posture control vector of at least one local area of the interactive object corresponding to the feature code includes: inputting the feature code into a pre-trained recurrent neural network to obtain the posture control vector of at least one local area of the interactive object corresponding to the feature code.
  • The recurrent neural network is obtained through training with feature code samples. The method further includes: obtaining a video segment in which a character speaks, and obtaining, according to the video segment, a plurality of first image frames containing the character; extracting the corresponding speech segment from the video segment, obtaining a sample phoneme sequence according to the speech segment, and performing feature encoding on the sample phoneme sequence; obtaining the feature code of at least one phoneme corresponding to the first image frame; converting the first image frame into a second image frame containing the interactive object, and obtaining the posture control vector value of at least one local area corresponding to the second image frame; and annotating the feature code corresponding to the first image frame according to the posture control vector value to obtain the feature code sample.
  • The method further includes: training an initial recurrent neural network according to the feature code samples, and obtaining the recurrent neural network after the change in network loss satisfies a convergence condition, where the network loss includes the difference between the posture control vector value of the at least one local area predicted by the recurrent neural network and the annotated posture control vector value.
  • A driving device for an interactive object includes: a first acquiring unit for acquiring a phoneme sequence corresponding to text data; a second acquiring unit for acquiring a control parameter value of at least one local area of an interactive object matching the phoneme sequence; and a driving unit for controlling the posture of the interactive object according to the acquired control parameter value.
  • An electronic device includes a memory and a processor; the memory is used to store computer instructions runnable on the processor, and the processor is used to implement the method for driving interactive objects described in any embodiment provided in the present disclosure when executing the computer instructions.
  • A computer-readable storage medium has a computer program stored thereon; when the computer program is executed by a processor, the method for driving an interactive object according to any one of the embodiments provided in the present disclosure is implemented.
  • The driving method, apparatus, device, and computer-readable storage medium for an interactive object obtain a phoneme sequence corresponding to text data and obtain the control parameter value of at least one local area of an interactive object matching the phoneme sequence to control the posture of the interactive object, so that the interactive object can make a posture that matches the phonemes corresponding to the text data. The posture includes facial postures and body postures, giving the target object the feeling that the interactive object is speaking the text content and enhancing the interactive experience between the target object and the interactive object.
  • FIG. 1 is a schematic diagram of a display device in a method for driving interactive objects proposed by at least one embodiment of the present disclosure
  • FIG. 2 is a flowchart of a method for driving interactive objects proposed by at least one embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a process of feature encoding for a phoneme sequence proposed by at least one embodiment of the present disclosure
  • FIG. 4 is a schematic structural diagram of a driving device for interactive objects proposed in at least one embodiment of the present disclosure
  • FIG. 5 is a schematic structural diagram of an electronic device proposed in at least one embodiment of the present disclosure.
  • At least one embodiment of the present disclosure provides a method for driving interactive objects.
  • the driving method may be executed by electronic devices such as a terminal device or a server.
  • The terminal device may be a fixed terminal or a mobile terminal, such as a mobile phone, a tablet computer, a game console, a desktop computer, an advertising machine, an all-in-one machine, or a vehicle-mounted terminal.
  • the server includes a local server or a cloud server, etc., and the method can also be implemented by a processor calling computer-readable instructions stored in a memory.
  • the interaction object may be any virtual image capable of interacting with the target object.
  • the interactive object may be a virtual character, or may also be a virtual animal, virtual item, cartoon image, or other virtual images capable of implementing interactive functions.
  • the display form of the interactive object may be 2D or 3D, which is not limited in the present disclosure.
  • the target object may be a user, a robot, or other smart devices.
  • the interaction manner between the interaction object and the target object may be an active interaction manner or a passive interaction manner.
  • the target object can make a demand by making gestures or body movements, and trigger the interactive object to interact with it by means of active interaction.
  • the interactive object may actively greet the target object, prompt the target object to make an action, etc., so that the target object interacts with the interactive object in a passive manner.
  • The interactive objects may be displayed through terminal devices, which may be televisions, all-in-one machines with display functions, projectors, virtual reality (VR) devices, augmented reality (AR) devices, and the like; the present disclosure does not limit the specific form of the terminal device.
  • Fig. 1 shows a display device proposed by at least one embodiment of the present disclosure.
  • the display device has a transparent display screen, and a stereoscopic picture can be displayed on the transparent display screen to present a virtual scene and interactive objects with a stereoscopic effect.
  • the interactive objects displayed on the transparent display screen in FIG. 1 include virtual cartoon characters.
  • the terminal device described in the present disclosure may also be the above-mentioned display device with a transparent display screen.
  • the display device is configured with a memory and a processor, and the memory is used to store computer instructions that can run on the processor.
  • the processor is used to implement the method for driving the interactive object provided in the present disclosure when the computer instruction is executed, so as to drive the interactive object displayed on the transparent display screen to communicate or respond to the target object.
  • In response to sound-driven data for driving the interactive object to output speech, the interactive object may emit a specified voice to the target object.
  • the terminal device can generate sound-driven data according to the actions, expressions, identities, preferences, etc. of the target object around the terminal device to drive the interactive object to respond by issuing a specified voice, thereby providing anthropomorphic services for the target object.
  • the sound-driven data can also be generated in other ways, for example, generated by the server and sent to the terminal device.
  • At least one embodiment of the present disclosure proposes a method for driving an interactive object, so as to improve the interaction experience between the target object and the interactive object.
  • FIG. 2 shows a flowchart of a method for driving an interactive object according to at least one embodiment of the present disclosure. As shown in FIG. 2, the method includes steps 201 to 203.
  • Step 201 Obtain a phoneme sequence corresponding to the text data.
  • the text data may be driving data used to drive the interactive object.
  • the drive data can be drive data generated by the server or terminal device according to the actions, expressions, identity, preferences, etc. of the target object interacting with the interactive object, or drive data called by the terminal device from the internal memory.
  • the present disclosure does not limit the method of obtaining the text data.
  • the phoneme corresponding to the morpheme can be obtained according to the morphemes contained in the text, so as to obtain the phoneme sequence corresponding to the text.
  • the phoneme is the smallest phonetic unit divided according to the natural attributes of the speech, and a pronunciation action of a real person can form a phoneme.
  • In response to the text being Chinese text, the text can be converted into pinyin, a phoneme sequence can be generated from the pinyin, and a timestamp can be generated for each phoneme, as in the sketch below.
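As an illustration of this step, the following minimal Python sketch converts Chinese text into a phoneme-like sequence using the pypinyin library; the per-phoneme timestamps are hypothetical uniform durations standing in for a real alignment from a TTS engine, and the function name is ours, not the patent's.

```python
from pypinyin import lazy_pinyin, Style

def text_to_phoneme_sequence(text, dur_ms=100):
    # split each syllable into an initial and a final (with a tone number)
    initials = lazy_pinyin(text, style=Style.INITIALS, strict=False)
    finals = lazy_pinyin(text, style=Style.FINALS_TONE3, strict=False)
    phonemes, t = [], 0
    for ini, fin in zip(initials, finals):
        for p in (ini, fin):
            if p:  # some syllables have no initial
                phonemes.append({"phoneme": p, "start_ms": t, "end_ms": t + dur_ms})
                t += dur_ms
    return phonemes

print(text_to_phoneme_sequence("你好"))  # n, i3, h, ao3 with timestamps
```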
  • Step 202: Obtain a control parameter value of at least one local area of the interactive object that matches the phoneme sequence.
  • the local area is obtained by dividing the whole (including face and/or body) of the interactive object.
  • the control of one or more local areas of the face may correspond to a series of facial expressions or actions of the interactive object.
  • The control of the eye area may correspond to facial actions of the interactive object such as opening and closing the eyes, blinking, and changing the gaze direction;
  • the control of the mouth area can correspond to facial actions such as closing the mouth of the interactive object and opening the mouth to different degrees.
  • the control of one or more local areas of the body may correspond to a series of physical actions of the interactive object.
  • the control of the leg area may correspond to the actions of the interactive object such as walking, jumping, and kicking.
  • the control parameter of the local area of the interactive object includes the posture control vector of the local area.
  • The posture control vector of each local area is used to drive the local area of the interactive object to perform actions.
  • Different posture control vector values correspond to different motions or motion ranges. For example, for the posture control vector of the mouth area, one set of posture control vector values can make the mouth of the interactive object slightly open, and another set of posture control vector values can make the mouth of the interactive object open wider.
  • By adjusting the posture control vector values, the corresponding local areas can be made to perform different actions or actions with different amplitudes.
  • The local area can be selected according to the actions of the interactive object that need to be controlled. For example, when the face and limbs of the interactive object need to be controlled to perform actions at the same time, the posture control vector values of all the local areas can be obtained; when only the expression of the interactive object needs to be controlled, the posture control vector values of the local areas corresponding to the face can be obtained.
  • By performing feature encoding on the phoneme sequence, the control parameter value corresponding to the feature code can be determined, thereby determining the control parameter value corresponding to the phoneme sequence.
  • Different encoding methods can reflect different characteristics of the phoneme sequence. The present disclosure does not limit the specific encoding method.
  • the corresponding relationship between the feature code of the phoneme sequence corresponding to the text data and the control parameter value of the interactive object can be established in advance, so that the corresponding control parameter value can be obtained through the text data.
  • the specific method for obtaining the control parameter value matching the feature code of the phoneme sequence of the text data will be described in detail later.
  • Step 203: Control the posture of the interactive object according to the acquired control parameter value, such as the posture control vector value.
  • In some embodiments, the display device that displays the interactive object is controlled to display text according to the text data, and/or the display device is controlled to output speech according to the phoneme sequence corresponding to the text data; at the same time, the gesture made by the interactive object is synchronized with the output speech and/or the displayed text, thereby giving the target object the feeling that the interactive object is speaking.
  • By controlling the posture of the interactive object, the interactive object can be made to adopt a posture that matches the phonemes corresponding to the text data; the posture includes facial postures and body postures, so that the target object feels that the interactive object is speaking the text content, which improves the interactive experience of the target object.
  • the method is applied to a server, including a local server or a cloud server.
  • The server processes the text data, generates the control parameter values of the interactive object, and performs rendering with a three-dimensional rendering engine according to the control parameter values to obtain an animation of the interactive object.
  • The server may send the animation to the terminal for display to communicate with or respond to the target object, and may also send the animation to the cloud, so that the terminal can obtain the animation from the cloud to communicate with or respond to the target object.
  • the control parameter value may also be sent to the terminal, so that the terminal completes the process of rendering, generating animation, and performing display.
  • The method may also be applied to a terminal: the terminal processes the text data, generates the control parameter values of the interactive object, renders the interactive object with a three-dimensional rendering engine according to the control parameter values to obtain an animation of the interactive object, and displays the animation to communicate with or respond to the target object.
  • the display device displaying the interactive object may be controlled to display text according to the text data, and/or the display device may be controlled to output speech according to the phoneme sequence corresponding to the text data.
  • While outputting the speech and/or text according to the text data, the posture of the interactive object is controlled according to the control parameter values, so that the gesture made by the interactive object is synchronized with the output speech and/or displayed text, giving the target object the feeling that the interactive object is speaking.
  • In some embodiments, the control parameter of at least one local area of the interactive object includes a posture control vector, which can be obtained in the following manner.
  • the coding sequence corresponding to the phoneme sequence of the text data is called the first coding sequence, that is, the first coding sequence is obtained by performing feature coding on the phoneme sequence.
  • For each of the multiple phonemes contained in the phoneme sequence, a sub-coding sequence corresponding to the phoneme is generated. Taking a first phoneme (any one of the multiple phonemes) as an example, it is detected whether the first phoneme is present at each time point; the encoding value at a time point where the first phoneme is present is set to a first value, and the encoding value at a time point without the first phoneme is set to a second value. After assigning the encoding value at each time point, the sub-coding sequence corresponding to the first phoneme is obtained.
  • the code value at the time point when the first phoneme is present may be set to 1, and the code value at the time point when the first phoneme is not present may be set to 0.
  • Likewise, for each of the other phonemes, the encoding value at a time point where the phoneme is present is set to the first value, and the encoding value at a time point without the phoneme is set to the second value; the sub-coding sequence corresponding to the phoneme is obtained after assigning the encoding value at each time point.
  • the first coding sequence corresponding to the phoneme sequence is obtained according to the respective sub-coding sequences corresponding to the multiple phonemes.
  • In some embodiments, a Gaussian filter may be used to perform a Gaussian convolution operation on the temporally continuous values of the first phoneme, so as to filter and smooth the matrix corresponding to the feature encoding and to smooth the transition of the mouth area when phonemes change; a sketch of this smoothing follows.
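A minimal sketch of the sub-coding sequences and the Gaussian smoothing described above, assuming NumPy/SciPy and a 10 ms time step; the phoneme dictionaries follow the hypothetical format of the earlier sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def first_coding_sequence(phonemes, step_ms=10):
    """phonemes: list of {'phoneme', 'start_ms', 'end_ms'} dicts."""
    n_steps = max(p["end_ms"] for p in phonemes) // step_ms
    kinds = sorted({p["phoneme"] for p in phonemes})
    # one binary sub-coding sequence (row) per distinct phoneme
    codes = np.zeros((len(kinds), n_steps))  # second value: 0
    for p in phonemes:
        row = kinds.index(p["phoneme"])
        codes[row, p["start_ms"] // step_ms : p["end_ms"] // step_ms] = 1.0  # first value: 1
    # smooth each sub-coding sequence in time so 0->1 and 1->0 transitions are gradual
    smoothed = gaussian_filter1d(codes, sigma=2.0, axis=1)
    return kinds, smoothed
```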
  • FIG. 3 shows a schematic diagram of the process of feature encoding for a phoneme sequence proposed by at least one embodiment of the present disclosure.
  • The phoneme sequence 310 contains phonemes j, i1, and ie4 (for brevity, only some phonemes are shown), and corresponding sub-coding sequences 321, 322, and 323 are obtained for phonemes j, i1, and ie4, respectively.
  • In each sub-coding sequence, the code value at a time point where the phoneme is present is set to a first value (for example, 1), and the code value at a time point without the phoneme is set to a second value (for example, 0).
  • Taking the sub-coding sequence 321 as an example: at time points where the phoneme j is present, the value of the sub-coding sequence 321 is the first value 1; at time points without the phoneme j, the value is the second value 0. All the sub-coding sequences constitute the first coding sequence 320.
  • According to the first coding sequence, the feature code corresponding to at least one phoneme is obtained. According to the duration of j in the sub-coding sequence 321, the duration of i1 in the sub-coding sequence 322, and the duration of ie4 in the sub-coding sequence 323, the characteristic information of the sub-coding sequences 321, 322, and 323 can be obtained.
  • In some embodiments, a Gaussian filter may be used to perform Gaussian convolution operations on the temporally consecutive values of phonemes j, i1, and ie4 in the sub-coding sequences 321, 322, and 323, respectively, to smooth the feature encoding and obtain the smoothed first coding sequence 330. That is, the Gaussian convolution operation is performed on the temporally continuous values of each phoneme through the Gaussian filter, so that the change of the code value in each sub-coding sequence from the second value to the first value, or from the first value to the second value, becomes smooth.
  • After smoothing, the values of the coding sequence also take intermediate values, such as 0.2 and 0.3; the posture control vectors obtained from these intermediate values make the posture transitions and expression changes of the interactive character smoother and more natural, improving the interactive experience of the target object.
  • the feature code corresponding to at least one phoneme may be obtained by performing a sliding window on the first code sequence.
  • the first coding sequence may be a coding sequence after a Gaussian convolution operation.
  • a sliding window is performed on the coding sequence with a time window of a set length and a set step size, and the feature code in the time window is used as the feature code of the corresponding at least one phoneme.
  • According to the multiple feature codes obtained after the sliding window is completed, the second coding sequence 340 can be obtained. Since the duration of each phoneme differs and occupies a different proportion of the time window, the number of phonemes corresponding to the feature code in a time window may be one, two, or more, depending on the position of the time window. As shown in FIG. 3, sliding the window over the first coding sequence yields feature code 1, feature code 2, feature code 3, ..., feature code M.
  • M is a positive integer whose value is determined according to the length of the first coding sequence, the length of the time window, and the sliding step of the time window; the windowing step is sketched below.
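The sliding-window step might look as follows; the window length and step size are illustrative, and the (phoneme kinds x time steps) layout matches the sketch above.

```python
import numpy as np

def second_coding_sequence(first_code, win=10, stride=4):
    """Slide a fixed-length time window over the (smoothed) first coding
    sequence; the contents of each window form one feature code."""
    n_steps = first_code.shape[1]
    feature_codes = [
        first_code[:, s : s + win] for s in range(0, n_steps - win + 1, stride)
    ]
    return np.stack(feature_codes)  # shape (M, num_phoneme_kinds, win)
```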
  • Next, the posture control vector of at least one local area of the interactive object corresponding to each feature code is acquired. According to feature code 1, feature code 2, feature code 3, ..., feature code M, the corresponding posture control vector 1, posture control vector 2, posture control vector 3, ..., posture control vector M can be obtained respectively, thereby obtaining the sequence 350 of posture control vectors.
  • The sequence 350 of posture control vectors and the second coding sequence 340 are aligned in time. Since each feature code in the second coding sequence is obtained according to at least one phoneme in the phoneme sequence, each control vector in the sequence 350 is also obtained based on at least one phoneme in the phoneme sequence.
  • While playing the phoneme sequence corresponding to the text data, the interactive object is driven to make actions according to the sequence of posture control vectors; that is, the interactive object can be driven to emit the sound corresponding to the text content while making actions synchronized with that sound, giving the target object the feeling that the interactive object is speaking and improving the interactive experience of the target object.
  • Since the feature codes begin to be output only after a set duration, the posture control vector before the set time can be set to a default value; that is, when the phoneme sequence just starts to be played, the interactive object makes a default action, and after the set time, the interactive object is driven to make actions using the sequence of posture control vectors obtained according to the first coding sequence.
  • As shown in FIG. 3, feature code 1 starts to be output at time t0; before time t0, the default posture control vector applies.
  • The length of the time window is related to the amount of information contained in the feature code. When the amount of information contained in the time window is relatively large, the recurrent neural network processing will output a relatively uniform result. If the length of the time window is too large, the expression of the interactive object may not correspond to part of the text; if the length of the time window is too small, the expression of the interactive object may appear rigid when speaking. Therefore, the duration of the time window needs to be determined according to the minimum duration of the phonemes corresponding to the text data, so that the actions made by driving the interactive object are more strongly correlated with the sound.
  • the sliding step length of the time window is related to the time interval (frequency) of obtaining the attitude control vector, that is, it is related to the frequency of driving the interactive object to make an action.
  • the length and step length of the time window can be set according to the actual interactive scene, so that the expressions and actions made by the interactive object are more closely related to the sound, and are more vivid and natural.
  • When the time interval between phonemes in the phoneme sequence is greater than a set threshold, the interactive object is driven to make actions according to the set posture control vectors of the local areas. That is, when the interactive character pauses for a long time, the interactive object is driven to make a set action. For example, when the output speech pauses for a long time, the interactive object can be made to smile or sway the body slightly, to avoid the interactive object standing upright without expression during a long pause; this makes the interactive object's speech more natural and smooth, and improves the interaction between the target object and the interactive object. A sketch of this pause rule follows.
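A small sketch of the pause rule, under the same assumed data format; the 600 ms threshold and the callable `predicted` are illustrative stand-ins.

```python
def control_vector_at(t_ms, phonemes, predicted, default_vector, gap_ms=600):
    # find pauses between adjacent phonemes that exceed the threshold
    pauses = [
        (a["end_ms"], b["start_ms"])
        for a, b in zip(phonemes, phonemes[1:])
        if b["start_ms"] - a["end_ms"] > gap_ms
    ]
    if any(start <= t_ms < end for start, end in pauses):
        return default_vector  # set action: e.g. smile or sway slightly
    return predicted(t_ms)     # otherwise use the network's prediction
```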
  • In some embodiments, the feature code may be input to a pre-trained recurrent neural network, which outputs the posture control vector of at least one local area of the interactive object corresponding to the feature code. Since the recurrent neural network is a time-recurrent neural network, it can learn the historical information of the input feature codes and output the posture control vector of the at least one local area according to the feature code sequence.
  • the characteristic coding sequence includes a first coding sequence and a second coding sequence.
  • the recurrent neural network may be, for example, a long short-term memory network (Long Short-Term Memory, LSTM).
  • A pre-trained recurrent neural network is used to obtain the posture control vector of at least one local area of the interactive object corresponding to the feature code, and the historical feature information and current feature information of the feature codes are fused, so that historical posture control vectors influence the current posture control vector change, making the expression changes and body movements of the interactive character smoother and more natural; see the sketch below.
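A hedged PyTorch sketch of such a recurrent mapping; the class name, layer sizes, and the choice of a single LSTM layer with a linear head are our assumptions, not the patent's specification.

```python
import torch
import torch.nn as nn

class PostureLSTM(nn.Module):
    def __init__(self, feature_dim, control_dim, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, control_dim)

    def forward(self, feature_codes):
        # feature_codes: (batch, M, feature_dim), one flattened code per window
        out, _ = self.lstm(feature_codes)  # carries history across the sequence
        return self.head(out)              # (batch, M, control_dim)

model = PostureLSTM(feature_dim=40, control_dim=32)
codes = torch.randn(1, 20, 40)             # e.g. M = 20 windows
control_vectors = model(codes)             # one posture control vector per code
```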
  • the recurrent neural network can be trained in the following manner.
  • First, feature code samples are obtained; each feature code sample is annotated with a ground-truth value, and the ground-truth value is a posture control vector value of at least one local area of the interactive object.
  • The initial recurrent neural network is trained according to the feature code samples, and the recurrent neural network is obtained after the change in network loss satisfies a convergence condition, where the network loss includes the difference between the posture control vector value of the at least one local area predicted by the recurrent neural network and the ground-truth value. A training-loop sketch follows.
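Continuing the PyTorch sketch above (reusing `model`), a minimal training loop matching this description; the synthetic tensors are stand-in data, and MSE is one plausible choice for the stated difference-based loss.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# synthetic stand-in data: 100 samples of M=20 feature codes -> annotated vectors
codes = torch.randn(100, 20, 40)
labeled_vectors = torch.randn(100, 20, 32)  # annotated ground-truth values
train_loader = DataLoader(TensorDataset(codes, labeled_vectors), batch_size=8)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()  # difference between prediction and annotation

for codes_batch, target_batch in train_loader:
    pred = model(codes_batch)
    loss = loss_fn(pred, target_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # in practice, iterate epochs until the change in loss meets the convergence condition
```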
  • feature code samples can be obtained by the following method.
  • the manner of encoding the sample phoneme sequence is the same as the encoding manner of the phoneme sequence corresponding to the text data described above.
  • the feature code of at least one phoneme corresponding to the first image frame is obtained.
  • the at least one phoneme may be a phoneme within a set range of the appearance time of the first image frame.
  • the first image frame is converted into a second image frame containing the interactive object, and the attitude control vector value of at least one local area corresponding to the second image frame is obtained.
  • The posture control vector values may include the posture control vector values of all the local areas, or may include the posture control vector values of some of the local areas.
  • In some embodiments, the image frame of a real person can be converted into a second image frame containing the image represented by the interactive object, where the posture control vectors of the local areas of the real person correspond to the posture control vectors of the corresponding local areas of the interactive object, so that the posture control vector of each local area of the interactive object in the second image frame can be obtained.
  • The feature code of at least one phoneme corresponding to the first image frame obtained above is annotated according to the posture control vector value to obtain feature code samples.
  • In this way, the video segment of a speaking character is split into a plurality of corresponding first image frames and speech segments, and the first image frames containing the real person are converted into second image frames containing the interactive object, so as to obtain the posture control vector corresponding to the feature code of the phoneme. The feature code thus corresponds well to the posture control vector, yielding high-quality feature code samples and making the actions of the interactive object closer to the real actions of the corresponding character; a sketch of this pairing follows.
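The sample-assembly step can be summarized with the sketch below; `extract_control_vector`, which recovers posture control vector values from a converted second image frame, is a hypothetical stand-in for that conversion.

```python
def build_feature_code_samples(second_frames, feature_codes, extract_control_vector):
    """Pair each frame-aligned feature code with the posture control vector
    values recovered from the corresponding second image frame."""
    samples = []
    for frame, code in zip(second_frames, feature_codes):
        target = extract_control_vector(frame)  # annotation (ground truth)
        samples.append((code, target))
    return samples
```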
  • FIG. 4 shows a schematic structural diagram of a driving device for interactive objects according to at least one embodiment of the present disclosure.
  • The device may include: a first acquiring unit 401, configured to acquire a phoneme sequence corresponding to text data; a second acquiring unit 402, configured to acquire a control parameter value of at least one local area of an interactive object matching the phoneme sequence; and a driving unit 403, configured to control the posture of the interactive object according to the acquired control parameter value.
  • In some embodiments, the device further includes an output unit for controlling the display device displaying the interactive object to display text according to the text data, and/or controlling the display device to output speech according to the phoneme sequence corresponding to the text data.
  • The second acquiring unit is specifically configured to: perform feature encoding on the phoneme sequence to obtain a first coding sequence corresponding to the phoneme sequence; obtain the feature code corresponding to at least one phoneme according to the first coding sequence; and acquire the posture control vector of at least one local area of the interactive object corresponding to the feature code.
  • The second acquiring unit, when performing feature encoding on the phoneme sequence to obtain the first coding sequence corresponding to the phoneme sequence, is specifically configured to: for each of the multiple phonemes included in the phoneme sequence, generate a sub-coding sequence corresponding to the phoneme; and obtain the first coding sequence corresponding to the phoneme sequence according to the sub-coding sequences respectively corresponding to the multiple phonemes.
  • The second acquiring unit, when generating the sub-coding sequence corresponding to each of the multiple phonemes included in the phoneme sequence, is specifically configured to: detect whether a first phoneme is present at each time point, the first phoneme being any one of the multiple phonemes; and obtain the sub-coding sequence corresponding to the first phoneme by setting the code value at a time point where the first phoneme is present to a first value and setting the code value at a time point without the first phoneme to a second value.
  • In some embodiments, the device further includes a filtering unit: for the sub-coding sequence corresponding to the first phoneme, a Gaussian filter is used to perform a Gaussian convolution operation on the temporally continuous values of the first phoneme, where the first phoneme is any one of the multiple phonemes.
  • The second acquiring unit, when acquiring the feature code corresponding to at least one phoneme according to the first coding sequence, is specifically configured to: slide a time window of a set length with a set step over the first coding sequence, use the feature code within the time window as the feature code of the corresponding at least one phoneme, and obtain the second coding sequence according to the multiple feature codes obtained when the sliding window is completed.
  • the driving unit is specifically configured to: obtain a sequence of a posture control vector corresponding to the second coding sequence; and control the posture of the interactive object according to the sequence of the posture control vector.
  • In some embodiments, the device further includes a pause driving unit, which is used to control the posture of the interactive object according to the set control parameter values of the local area when the time interval between phonemes in the phoneme sequence is greater than a set threshold.
  • The second acquiring unit, when acquiring the posture control vector of at least one local area of the interactive object corresponding to the feature code, is specifically configured to: input the feature code into a pre-trained recurrent neural network to obtain the posture control vector of at least one local area of the interactive object corresponding to the feature code.
  • The recurrent neural network is obtained through training with feature code samples. The device further includes a sample acquisition unit for: acquiring a video segment in which a character speaks, and acquiring, according to the video segment, a plurality of first image frames containing the character; extracting the corresponding speech segment from the video segment, obtaining a sample phoneme sequence according to the speech segment, and performing feature encoding on the sample phoneme sequence; acquiring the feature code of at least one phoneme corresponding to the first image frame; converting the first image frame into a second image frame containing the interactive object, and acquiring the posture control vector value of at least one local area corresponding to the second image frame; and annotating the feature code corresponding to the first image frame according to the posture control vector value to obtain a feature code sample.
  • In some embodiments, the device further includes a training unit for training an initial recurrent neural network according to the feature code samples, and obtaining the recurrent neural network after the change in network loss satisfies a convergence condition, where the network loss includes the difference between the posture control vector value of the at least one local area predicted by the recurrent neural network and the annotated posture control vector value.
  • At least one embodiment of this specification also provides an electronic device.
  • the device includes a memory and a processor.
  • the memory is used to store computer instructions that can run on the processor.
  • The processor implements the method for driving interactive objects described in any embodiment of the present disclosure when executing the computer instructions.
  • At least one embodiment of this specification also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the method for driving an interactive object according to any embodiment of the present disclosure is realized.
  • One or more embodiments of this specification may be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of this specification may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • The embodiments of the subject matter and functional operations described in this specification can be implemented in: digital electronic circuits, tangible computer software or firmware, computer hardware including the structures disclosed in this specification and their structural equivalents, or a combination of one or more of them.
  • The embodiments of the subject matter described in this specification can be implemented as one or more computer programs, that is, one or more modules of computer program instructions encoded on a tangible non-transitory program carrier to be executed by a data processing device or to control the operation of the data processing device.
  • Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, such as a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information and transmit it to a suitable receiver device for execution by the data processing device.
  • the computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
  • the processing and logic flow described in this specification can be executed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating according to input data and generating output.
  • the processing and logic flow can also be executed by a dedicated logic circuit, such as FPGA (Field Programmable Gate Array) or ASIC (Application Specific Integrated Circuit), and the device can also be implemented as a dedicated logic circuit.
  • Computers suitable for executing computer programs include, for example, general-purpose and/or special-purpose microprocessors, or any other type of central processing unit.
  • the central processing unit will receive instructions and data from a read-only memory and/or a random access memory.
  • the basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data.
  • Generally, a computer will also include one or more mass storage devices for storing data, such as magnetic disks, magneto-optical disks, or optical disks, or the computer will be operatively coupled to such a mass storage device to receive data from it, transmit data to it, or both.
  • the computer does not have to have such equipment.
  • The computer can be embedded in another device, such as a mobile phone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a global positioning system (GPS) receiver, or a portable storage device such as a universal serial bus (USB) flash drive, to name a few.
  • Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and memory devices, including, for example, semiconductor memory devices (such as EPROM, EEPROM, and flash memory devices), magnetic disks (such as internal hard disks or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by or incorporated into a dedicated logic circuit.

Abstract

Disclosed are a method, apparatus and device for driving an interactive object, and a storage medium. The method comprises: acquiring a phoneme sequence corresponding to text data; acquiring a control parameter value of at least one local region of an interactive object that matches the phoneme sequence; and controlling the posture of the interactive object according to the acquired control parameter value.

Description

Method, apparatus and device for driving an interactive object, and storage medium
Cross-reference to related application
This application is based on, and claims priority to, the Chinese patent application with application number 2020102458024 filed on March 31, 2020, the entire content of which is incorporated into this application by reference.
Technical field
The present disclosure relates to the field of computer technology, and in particular to a method, apparatus, device, and storage medium for driving an interactive object.
Background
Human-computer interaction is mostly based on key, touch, and voice input, with responses presented as images, text, or virtual characters on a display screen. At present, virtual characters are mostly improved on the basis of voice assistants.
Summary of the invention
The embodiments of the present disclosure provide a driving solution for interactive objects.
According to an aspect of the present disclosure, there is provided a method for driving an interactive object, the method including: obtaining a phoneme sequence corresponding to text data; obtaining a control parameter value of at least one local area of the interactive object matching the phoneme sequence; and controlling the posture of the interactive object according to the obtained control parameter value.
With reference to any embodiment provided in the present disclosure, the method further includes: controlling a display device displaying the interactive object to display text according to the text data, and/or controlling the display device to output speech according to the phoneme sequence corresponding to the text data.
With reference to any embodiment provided in the present disclosure, the control parameter of the local area of the interactive object includes a posture control vector of the local area, and obtaining the control parameter value of at least one local area of the interactive object matching the phoneme sequence includes: performing feature encoding on the phoneme sequence to obtain a first coding sequence corresponding to the phoneme sequence; obtaining a feature code corresponding to at least one phoneme according to the first coding sequence; and obtaining the posture control vector of at least one local area of the interactive object corresponding to the feature code.
With reference to any embodiment provided in the present disclosure, performing feature encoding on the phoneme sequence to obtain the first coding sequence corresponding to the phoneme sequence includes: for each of the multiple phonemes contained in the phoneme sequence, generating a sub-coding sequence corresponding to the phoneme; and obtaining the first coding sequence corresponding to the phoneme sequence according to the sub-coding sequences respectively corresponding to the multiple phonemes.
With reference to any embodiment provided in the present disclosure, for each of the multiple phonemes contained in the phoneme sequence, generating the sub-coding sequence corresponding to the phoneme includes: detecting whether the phoneme is present at each time point; and obtaining the sub-coding sequence corresponding to the phoneme by setting the coding value at a time point where the phoneme is present to a first value and setting the coding value at a time point without the phoneme to a second value.
With reference to any embodiment provided in the present disclosure, the method further includes: for the sub-coding sequence corresponding to each of the multiple phonemes, using a Gaussian filter to perform a Gaussian convolution operation on the temporally continuous values of the phoneme.
With reference to any embodiment provided in the present disclosure, controlling the posture of the interactive object according to the obtained control parameter value includes: obtaining a sequence of posture control vectors corresponding to the second coding sequence; and controlling the posture of the interactive object according to the sequence of posture control vectors.
With reference to any embodiment provided in the present disclosure, the method further includes: in a case where the time interval between phonemes in the phoneme sequence is greater than a set threshold, controlling the posture of the interactive object according to set control parameter values of the local area.
With reference to any embodiment provided in the present disclosure, obtaining the posture control vector of at least one local area of the interactive object corresponding to the feature code includes: inputting the feature code into a pre-trained recurrent neural network to obtain the posture control vector of at least one local area of the interactive object corresponding to the feature code.
With reference to any embodiment provided in the present disclosure, the recurrent neural network is obtained through training with feature code samples; the method further includes: obtaining a video segment in which a character speaks, and obtaining, according to the video segment, a plurality of first image frames containing the character; extracting the corresponding speech segment from the video segment, obtaining a sample phoneme sequence according to the speech segment, and performing feature encoding on the sample phoneme sequence; obtaining the feature code of at least one phoneme corresponding to the first image frame; converting the first image frame into a second image frame containing the interactive object, and obtaining the posture control vector value of at least one local area corresponding to the second image frame; and annotating the feature code corresponding to the first image frame according to the posture control vector value to obtain the feature code sample.
With reference to any embodiment provided in the present disclosure, the method further includes: training an initial recurrent neural network according to the feature code samples, and obtaining the recurrent neural network after the change in network loss satisfies a convergence condition, where the network loss includes the difference between the posture control vector value of the at least one local area predicted by the recurrent neural network and the annotated posture control vector value.
According to an aspect of the present disclosure, there is provided a driving device for an interactive object, the device including: a first acquiring unit, configured to acquire a phoneme sequence corresponding to text data; a second acquiring unit, configured to acquire a control parameter value of at least one local area of an interactive object matching the phoneme sequence; and a driving unit, configured to control the posture of the interactive object according to the acquired control parameter value.
According to an aspect of the present disclosure, there is provided an electronic device including a memory and a processor, where the memory is used to store computer instructions runnable on the processor, and the processor is used to implement the method for driving an interactive object described in any embodiment provided in the present disclosure when executing the computer instructions.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method for driving an interactive object described in any embodiment provided in the present disclosure is implemented.
According to the method, apparatus, device, and computer-readable storage medium for driving an interactive object of one or more embodiments of the present disclosure, by obtaining a phoneme sequence corresponding to text data and obtaining the control parameter value of at least one local area of an interactive object matching the phoneme sequence to control the posture of the interactive object, the interactive object can be made to adopt a posture matching the phonemes corresponding to the text data, including facial postures and body postures, so that the target object feels that the interactive object is speaking the text content, which improves the interactive experience between the target object and the interactive object.
Brief description of the drawings
In order to more clearly explain one or more embodiments of this specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in one or more embodiments of this specification; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
FIG. 1 is a schematic diagram of a display device in a method for driving interactive objects proposed by at least one embodiment of the present disclosure;
FIG. 2 is a flowchart of a method for driving interactive objects proposed by at least one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a process of feature encoding for a phoneme sequence proposed by at least one embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a driving device for interactive objects proposed by at least one embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device proposed by at least one embodiment of the present disclosure.
Detailed Description
Exemplary embodiments will be described in detail here, with examples shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.

The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
At least one embodiment of the present disclosure provides a method for driving an interactive object. The driving method may be executed by an electronic device such as a terminal device or a server. The terminal device may be a fixed or mobile terminal, such as a mobile phone, a tablet computer, a game console, a desktop computer, an advertising machine, an all-in-one machine, or a vehicle-mounted terminal; the server includes a local server, a cloud server, or the like. The method may also be implemented by a processor invoking computer-readable instructions stored in a memory.
In the embodiments of the present disclosure, the interactive object may be any virtual image capable of interacting with a target object. In an embodiment, the interactive object may be a virtual character, or a virtual animal, a virtual item, a cartoon figure, or any other virtual image capable of implementing interactive functions. The interactive object may be presented in 2D or 3D form, which is not limited in the present disclosure. The target object may be a user, a robot, or another smart device. The interaction between the interactive object and the target object may be active or passive. In one example, the target object may express a demand by making gestures or body movements, thereby actively triggering the interactive object to interact with it. In another example, the interactive object may actively greet the target object or prompt the target object to make an action, so that the target object interacts with the interactive object in a passive manner.

The interactive object may be displayed through a terminal device, which may be a television, an all-in-one machine with a display function, a projector, a virtual reality (VR) device, an augmented reality (AR) device, or the like; the present disclosure does not limit the specific form of the terminal device.
FIG. 1 shows a display device according to at least one embodiment of the present disclosure. As shown in FIG. 1, the display device has a transparent display screen on which a stereoscopic picture can be displayed to present a virtual scene and an interactive object with a stereoscopic effect. For example, the interactive object displayed on the transparent display screen in FIG. 1 includes a virtual cartoon character. In some embodiments, the terminal device described in the present disclosure may also be the above display device with a transparent display screen. The display device is provided with a memory and a processor, the memory being configured to store computer instructions executable on the processor, and the processor being configured to implement the method for driving an interactive object provided in the present disclosure when executing the computer instructions, so as to drive the interactive object displayed on the transparent display screen to communicate with or respond to the target object.
In some embodiments, in response to sound-driven data used to drive the interactive object to output speech, the interactive object may utter a specified speech to the target object. The terminal device may generate sound-driven data according to the actions, expressions, identity, preferences, and the like of the target object around the terminal device, to drive the interactive object to respond by uttering the specified speech, thereby providing an anthropomorphic service for the target object. It should be noted that the sound-driven data may also be generated in other ways, for example, generated by a server and sent to the terminal device.

During the interaction between the interactive object and the target object, when the interactive object is driven by the sound-driven data to utter the specified speech, it may be impossible to drive the interactive object to make facial movements synchronized with that speech, so that the interactive object appears dull and unnatural while speaking, which degrades the interaction experience between the target object and the interactive object. Based on this, at least one embodiment of the present disclosure proposes a method for driving an interactive object, so as to improve the experience of the target object in interacting with the interactive object.
FIG. 2 shows a flowchart of a method for driving an interactive object according to at least one embodiment of the present disclosure. As shown in FIG. 2, the method includes steps 201 to 203.
In step 201, a phoneme sequence corresponding to text data is obtained.

The text data may be driving data used to drive the interactive object. The driving data may be generated by a server or a terminal device according to the actions, expressions, identity, preferences, and the like of the target object interacting with the interactive object, or may be driving data invoked by the terminal device from an internal memory. The present disclosure does not limit the manner in which the text data is obtained.

In the embodiments of the present disclosure, the phonemes corresponding to the morphemes contained in the text can be obtained, so as to obtain the phoneme sequence corresponding to the text. A phoneme is the smallest speech unit divided according to the natural attributes of speech; one articulatory action of a real person forms one phoneme.

In an embodiment, in response to the text being a Chinese text, the Chinese characters may be converted into pinyin, a phoneme sequence may be generated from the pinyin, and a timestamp may be generated for each phoneme.
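As an illustration only, this conversion might be sketched as below in Python. The use of the `pypinyin` library, the split of each syllable into an initial and a final, and the fixed nominal duration per phoneme are assumptions made for the sketch; in practice, the timestamps would typically come from the speech synthesis engine.

```python
# A minimal sketch of step 201, assuming the pypinyin library and a fixed
# nominal duration per phoneme; real timestamps would come from a TTS engine.
from pypinyin import pinyin, Style

def text_to_phoneme_sequence(text, phoneme_duration=0.1):
    initials = pinyin(text, style=Style.INITIALS, strict=False)
    finals = pinyin(text, style=Style.FINALS_TONE3, strict=False)
    sequence, t = [], 0.0
    for (ini,), (fin,) in zip(initials, finals):
        for ph in (ini, fin):
            if ph:  # some syllables lack an initial
                sequence.append({"phoneme": ph, "start": t,
                                 "end": t + phoneme_duration})
                t += phoneme_duration
    return sequence

# text_to_phoneme_sequence("你好") would yield the phonemes
# n / i3 / h / ao3, each annotated with start and end timestamps.
```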
In step 202, a control parameter value of at least one local area of the interactive object matching the phoneme sequence is obtained.

A local area is obtained by dividing the whole of the interactive object (including the face and/or the body). Control of one or more local areas of the face may correspond to a series of facial expressions or movements of the interactive object; for example, control of the eye area may correspond to facial movements of the interactive object such as opening the eyes, closing the eyes, blinking, and changing the viewing angle, and control of the mouth area may correspond to facial movements such as closing the mouth and opening the mouth to different degrees. Control of one or more local areas of the body may correspond to a series of body movements of the interactive object; for example, control of the leg area may correspond to movements of the interactive object such as walking, jumping, and kicking.

The control parameter of a local area of the interactive object includes a posture control vector of the local area. The posture control vector of each local area is used to drive that local area of the interactive object to perform movements. Different posture control vector values correspond to different movements or movement amplitudes. For example, for the posture control vector of the mouth area, one set of values may make the mouth of the interactive object open slightly, while another set of values may make the mouth open wide. By driving the interactive object with different posture control vector values, the corresponding local areas can be made to perform different movements or movements of different amplitudes.

The local areas can be selected according to the movements of the interactive object to be controlled. For example, when the face and the limbs of the interactive object need to be controlled to move simultaneously, the posture control vector values of all local areas can be obtained; when the expression of the interactive object needs to be controlled, the posture control vector values of the local areas corresponding to the face can be obtained.
In the embodiments of the present disclosure, the control parameter value corresponding to a feature code can be determined by feature-encoding the phoneme sequence, thereby determining the control parameter value corresponding to the phoneme sequence. Different encoding methods can reflect different characteristics of the phoneme sequence. The present disclosure does not limit the specific encoding method.

In the embodiments of the present disclosure, the correspondence between the feature codes of the phoneme sequence corresponding to the text data and the control parameter values of the interactive object can be established in advance, so that the corresponding control parameter values can be obtained from the text data. The specific method for obtaining the control parameter values matching the feature codes of the phoneme sequence of the text data will be described in detail later.
In step 203, the posture of the interactive object is controlled according to the obtained control parameter value.

The control parameter value, for example a posture control vector value, matches the phoneme sequence contained in the text data. For example, when a display device displaying the interactive object is controlled to display text according to the text data, and/or the display device is controlled to output speech according to the phoneme sequence corresponding to the text data, the postures made by the interactive object are synchronized with the output speech and/or the displayed text, thereby giving the target object the feeling that the interactive object is speaking.

In the embodiments of the present disclosure, by obtaining the phoneme sequence corresponding to the text data and obtaining the control parameter value of at least one local area of the interactive object matching the phoneme sequence to control the posture of the interactive object, the interactive object can be made to adopt postures, including facial postures and body postures, that match the phonemes corresponding to the text data, so that the target object has the feeling that the interactive object is speaking the text content, which improves the interaction experience of the target object.
In some embodiments, the method is applied to a server, including a local server or a cloud server. The server processes the text data, generates the control parameter values of the interactive object, and performs rendering using a three-dimensional rendering engine according to the control parameter values to obtain an animation of the interactive object. The server may send the animation to a terminal for display to communicate with or respond to the target object, or may send the animation to the cloud so that the terminal can obtain the animation from the cloud to communicate with or respond to the target object. After generating the control parameter values of the interactive object, the server may also send the control parameter values to the terminal, so that the terminal completes the processes of rendering, generating the animation, and displaying it.

In some embodiments, the method is applied to a terminal. The terminal processes the text data, generates the control parameter values of the interactive object, and performs rendering using a three-dimensional rendering engine according to the control parameter values to obtain an animation of the interactive object. The terminal can display the animation to communicate with or respond to the target object.

In some embodiments, the display device displaying the interactive object may be controlled to display text according to the text data, and/or the display device may be controlled to output speech according to the phoneme sequence corresponding to the text data.

In the embodiments of the present disclosure, since the control parameter values match the phoneme sequence of the text data, when the output of speech and/or text according to the text data is synchronized with the control of the posture of the interactive object according to the control parameter values, the postures made by the interactive object are synchronized with the output speech and/or the displayed text, giving the target object the feeling that the interactive object is speaking.
In some embodiments, the control parameter of at least one local area of the interactive object includes a posture control vector, which can be obtained in the following manner.

First, feature encoding is performed on the phoneme sequence to obtain an encoding sequence corresponding to the phoneme sequence. To distinguish it from encoding sequences mentioned later, the encoding sequence corresponding to the phoneme sequence of the text data is referred to as a first encoding sequence; that is, the first encoding sequence is obtained by feature-encoding the phoneme sequence.

For the multiple phonemes contained in the phoneme sequence, a sub-encoding sequence corresponding to each phoneme is generated.
In one example, it is detected whether a first phoneme, which is any one of the multiple phonemes, is present at each time point; the encoding value at each time point where the first phoneme is present is set to a first value, and the encoding value at each time point where the first phoneme is absent is set to a second value. After the encoding values at all time points are assigned, the sub-encoding sequence corresponding to the first phoneme is obtained. For example, the encoding value at a time point where the first phoneme is present may be set to 1, and the encoding value at a time point where it is absent may be set to 0. That is, for each of the multiple phonemes contained in the phoneme sequence, it is detected whether that phoneme is present at each time point; the encoding value at each time point where the phoneme is present is set to the first value, and the encoding value at each time point where it is absent is set to the second value, so that after assigning the encoding values at all time points, the sub-encoding sequence corresponding to that phoneme is obtained. Those skilled in the art should understand that the above encoding values are merely examples; the encoding values may be set to other values, which is not limited in the present disclosure.

The first encoding sequence corresponding to the phoneme sequence is obtained from the sub-encoding sequences respectively corresponding to the multiple phonemes.
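As an illustrative sketch of this binary scheme, the following builds one 0/1 row per distinct phoneme on a uniform time grid; the 10 ms resolution and the data layout are assumptions made only for illustration.

```python
# A sketch of building the first encoding sequence: one binary sub-encoding
# row per distinct phoneme, sampled on a 10 ms grid (an assumed resolution).
import numpy as np

def build_first_encoding_sequence(phoneme_sequence, dt=0.01):
    total = max(p["end"] for p in phoneme_sequence)
    timeline = np.arange(0.0, total, dt)
    kinds = sorted({p["phoneme"] for p in phoneme_sequence})
    # first value 1 where the phoneme is present, second value 0 elsewhere
    codes = np.zeros((len(kinds), len(timeline)), dtype=np.float32)
    for p in phoneme_sequence:
        row = kinds.index(p["phoneme"])
        mask = (timeline >= p["start"]) & (timeline < p["end"])
        codes[row, mask] = 1.0
    return kinds, codes  # codes: the first encoding sequence as a matrix
```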
In one example, for the sub-encoding sequence corresponding to the first phoneme, a Gaussian filter may be used to perform a Gaussian convolution operation on the temporally consecutive values of the first phoneme, so as to filter the matrix corresponding to the feature encoding and smooth the transitional movements of the mouth area at each phoneme transition.

FIG. 3 shows a schematic diagram of a method for driving an interactive object according to at least one embodiment of the present disclosure. As shown in FIG. 3, the phoneme sequence 310 contains the phonemes j, i1, j, and ie4 (for brevity, only some phonemes are shown), and corresponding sub-encoding sequences 321, 322, and 323 are obtained for the phonemes j, i1, and ie4 respectively. In each sub-encoding sequence, the encoding value at each time point where the phoneme is present is set to a first value (for example, 1), and the encoding value at each time point where the phoneme is absent is set to a second value (for example, 0). Taking the sub-encoding sequence 321 as an example, at the time points in the phoneme sequence 310 where the phoneme j is present, the value of the sub-encoding sequence 321 is the first value 1, and at the time points where the phoneme j is absent, its value is the second value 0. All the sub-encoding sequences constitute the first encoding sequence 320.
Next, a feature code corresponding to at least one phoneme is obtained according to the first encoding sequence.

From the encoding values of the sub-encoding sequences 321, 322, and 323 corresponding to the phonemes j, i1, and ie4, and the durations of the corresponding phonemes in these three sub-encoding sequences, that is, the duration of j in the sub-encoding sequence 321, the duration of i1 in the sub-encoding sequence 322, and the duration of ie4 in the sub-encoding sequence 323, the feature information of the sub-encoding sequences 321, 322, and 323 can be obtained.
In one example, a Gaussian filter may be used to perform a Gaussian convolution operation on the temporally consecutive values of the phonemes j, i1, and ie4 in the sub-encoding sequences 321, 322, and 323 respectively, so as to smooth the feature encoding and obtain a smoothed first encoding sequence 330. That is, a Gaussian convolution operation is performed on the temporally consecutive values of each phoneme through the Gaussian filter, so that the transitions of the encoding values in each encoding sequence from the second value to the first value, or from the first value to the second value, become smooth. For example, besides 0 and 1, the values of the encoding sequence then also take intermediate values such as 0.2 and 0.3, and the posture control vectors obtained from these intermediate values make the movement transitions and expression changes of the interactive character gentler and more natural, improving the interaction experience of the target object.
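A sketch of this smoothing step, operating on the matrix produced by the previous sketch and using `scipy.ndimage.gaussian_filter1d` as one possible Gaussian filter; the filter width is an assumed value that would be tuned in practice.

```python
# Smooth each sub-encoding sequence along the time axis so that 0/1 steps
# become gradual transitions with intermediate values such as 0.2 or 0.3.
from scipy.ndimage import gaussian_filter1d

def smooth_encoding_sequence(codes, sigma=2.0):
    # sigma is measured in time-grid samples; an assumed width
    return gaussian_filter1d(codes, sigma=sigma, axis=1)
```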
In some embodiments, a feature code corresponding to at least one phoneme can be obtained by sliding a window over the first encoding sequence, where the first encoding sequence may be the encoding sequence after the Gaussian convolution operation.

The encoding sequence is traversed with a sliding time window of a set length and a set stride, and the feature encoding within the time window is taken as the feature code of the corresponding at least one phoneme; after the sliding-window traversal is completed, a second encoding sequence is obtained from the resulting feature codes. Since the phonemes differ in duration, and their durations stand in different proportions to the length of the time window, the number of phonemes covered by the feature encoding within the time window may be 1, 2, or more, depending on the position of the window. As shown in FIG. 3, by sliding a time window of the set length over the first encoding sequence 320 or the smoothed first encoding sequence 330, feature code 1, feature code 2, feature code 3, and so on are obtained in turn; after the first encoding sequence has been traversed, feature codes 1 through M are obtained, yielding the second encoding sequence 340. Here M is a positive integer whose value is determined by the length of the first encoding sequence, the length of the time window, and the stride of the sliding window.
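The sliding-window extraction might be sketched as follows; the window length and stride are placeholders to be set according to the considerations discussed further below.

```python
# Slide a fixed-length window over the (optionally smoothed) first encoding
# sequence; each window becomes one feature code of the second encoding sequence.
import numpy as np

def sliding_window_features(codes, window_len, stride):
    n_steps = codes.shape[1]
    features = [
        codes[:, start:start + window_len]
        for start in range(0, n_steps - window_len + 1, stride)
    ]
    return np.stack(features)  # shape: (M, n_phoneme_kinds, window_len)
```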
Finally, the posture control vector of at least one local area of the interactive object corresponding to each feature code is obtained.

From feature code 1, feature code 2, feature code 3, ..., feature code M, the corresponding posture control vector 1, posture control vector 2, posture control vector 3, ..., posture control vector M can be obtained respectively, yielding the sequence of posture control vectors 350.

The sequence of posture control vectors 350 is aligned in time with the second encoding sequence 340. Since each feature code in the second encoding sequence is obtained from at least one phoneme in the phoneme sequence, each control vector in the sequence of posture control vectors 350 is likewise obtained from at least one phoneme in the phoneme sequence. By driving the interactive object to move according to the sequence of posture control vectors while the phoneme sequence corresponding to the text data is played, the interactive object can be driven to utter the sound corresponding to the text content while making movements synchronized with that sound, giving the target object the feeling that the interactive object is speaking and improving the interaction experience of the target object.
Assuming that output of the feature codes starts at a set moment within the first time window, the posture control vector values before the set moment may be set to default values; that is, when the phoneme sequence just begins to play, the interactive object performs a default movement, and after the set moment the interactive object is driven to move using the sequence of posture control vectors obtained from the first encoding sequence. Taking FIG. 3 as an example, feature code 1 starts to be output at time t0, and before time t0 the default posture control vector applies.

The length of the time window is related to the amount of information contained in the feature code. When the amount of information contained in the time window is large, processing by the recurrent neural network yields relatively uniform results. If the time window is too long, the expression of the interactive object while speaking may fail to correspond to some of the words; if it is too short, the expression of the interactive object while speaking may appear stiff. Therefore, the duration of the time window should be determined according to the minimum duration of the phonemes corresponding to the text data, so that the movements made by driving the interactive object correlate more strongly with the sound.

The stride of the sliding time window is related to the time interval (frequency) at which the posture control vectors are obtained, that is, to the frequency at which the interactive object is driven to move. The length and the stride of the time window can be set according to the actual interaction scenario, so that the expressions and movements made by the interactive object correlate more closely with the sound and appear more vivid and natural.
In some embodiments, when the time interval between phonemes in the phoneme sequence is greater than a set threshold, the interactive object is driven to move according to set posture control vectors of the local areas. That is, when the interactive character pauses in speech for a relatively long time, the interactive object is driven to perform a set movement. For example, when the output speech pauses for a long time, the interactive object can be made to smile or sway its body slightly, so as to prevent it from standing upright with a blank expression during a long pause, thereby making the speaking process of the interactive object more natural and fluent and improving the interaction experience between the target object and the interactive object.
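A sketch of this rule, reusing the timestamped phoneme sequence from the earlier sketch; the threshold value and the idle pose vector are assumed placeholders.

```python
# Substitute a preset "idle" pose vector during long inter-phoneme pauses.
PAUSE_THRESHOLD = 0.5  # seconds; an assumed value

def select_control_vector(prev_phoneme_end, next_phoneme_start,
                          predicted_vector, idle_pose_vector):
    if next_phoneme_start - prev_phoneme_end > PAUSE_THRESHOLD:
        return idle_pose_vector  # e.g. a slight smile or gentle body sway
    return predicted_vector
```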
In some embodiments, the feature code may be input into a pre-trained recurrent neural network, which outputs, based on the first encoding sequence, the posture control vector of at least one local area of the interactive object corresponding to the feature code. Since the recurrent neural network is a time-recurrent neural network, it can learn the historical information of the input feature codes and output the posture control vectors of the at least one local area according to the feature encoding sequence, where the feature encoding sequence includes the first encoding sequence and the second encoding sequence. The recurrent neural network may be, for example, a Long Short-Term Memory (LSTM) network.

In the embodiments of the present disclosure, the pre-trained recurrent neural network is used to obtain the posture control vector of at least one local area of the interactive object corresponding to the feature code, fusing the historical feature information of the feature codes with the current feature information, so that the historical posture control vectors influence the changes of the current posture control vector, making the expression changes and body movements of the interactive character gentler and more natural.
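A minimal PyTorch sketch of such a network; the layer sizes and the flattening of each feature code into a single input vector per time step are assumptions, not details specified by the disclosure.

```python
# A sketch of an LSTM mapping a sequence of feature codes to a sequence of
# posture control vectors (one control vector per feature code).
import torch
import torch.nn as nn

class PoseControlLSTM(nn.Module):
    def __init__(self, feature_dim, hidden_dim, control_dim):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, control_dim)

    def forward(self, feature_codes):
        # feature_codes: (batch, M, feature_dim), each row a flattened window
        hidden, _ = self.lstm(feature_codes)
        return self.head(hidden)  # (batch, M, control_dim)
```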
In some embodiments, the recurrent neural network can be trained in the following manner.

First, feature code samples are obtained; each feature code sample is annotated with a ground-truth value, which is a posture control vector value of at least one local area of the interactive object.

After the feature code samples are obtained, an initial recurrent neural network is trained on them, and the recurrent neural network is obtained after the change in the network loss satisfies a convergence condition, where the network loss includes the difference between the posture control vector values of the at least one local area predicted by the recurrent neural network and the ground-truth values.
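A sketch of such a training loop under the definitions above; the mean-squared-error loss, the optimizer, and the hyperparameters are assumed choices that are consistent with, but not specified by, the description.

```python
# Train the network until the loss (the difference between predicted and
# ground-truth posture control vectors) converges.
import torch

def train(model, loader, epochs=100, lr=1e-3, tol=1e-5):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.MSELoss()
    prev_loss = float("inf")
    for epoch in range(epochs):
        total = 0.0
        for features, target_vectors in loader:  # annotated feature code samples
            optimizer.zero_grad()
            loss = criterion(model(features), target_vectors)
            loss.backward()
            optimizer.step()
            total += loss.item()
        if abs(prev_loss - total) < tol:  # a simple convergence condition
            break
        prev_loss = total
    return model
```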
In some embodiments, the feature code samples can be obtained by the following method.

First, a video segment in which a character utters speech is obtained, and a plurality of first image frames containing the character are obtained from the video segment. For example, a video segment in which a real person is speaking can be obtained.

Next, the corresponding speech segment is extracted from the video segment, a sample phoneme sequence is obtained from the speech segment, and feature encoding is performed on the sample phoneme sequence. The sample phoneme sequence is encoded in the same manner as the phoneme sequence corresponding to the text data described above.

From the sample encoding sequence obtained by feature-encoding the sample phoneme sequence, the feature code of at least one phoneme corresponding to each first image frame is obtained, where the at least one phoneme may be a phoneme within a set range of the appearance time of the first image frame.

Then, the first image frame is converted into a second image frame containing the interactive object, and the posture control vector values of at least one local area corresponding to the second image frame are obtained. These posture control vector values may include those of all local areas, or those of only some of the local areas.

Taking the case where the first image frame contains a real person as an example, the image frame of the real person can be converted into a second image frame containing the figure represented by the interactive object, with the posture control vectors of the local areas of the real person corresponding to those of the local areas of the interactive object, so that the posture control vectors of the local areas of the interactive object in the second image frame can be obtained.

Finally, the feature code of the at least one phoneme corresponding to the first image frame, obtained above, is annotated with the posture control vector values to obtain a feature code sample.

In the embodiments of the present disclosure, by splitting the video segment of a character into the corresponding first image frames and speech segments, and by converting the first image frames containing a real person into second image frames containing the interactive object to obtain the posture control vectors corresponding to the feature codes of the phonemes, a good correspondence between the feature codes and the posture control vectors is achieved, so that high-quality feature code samples are obtained and the movements of the interactive object are closer to the real movements of the corresponding character.
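The pairing of frames with feature codes might be sketched as below; the frame rate and the nearest-window alignment rule are assumptions, and `frame_to_pose_vector` merely stands in for the conversion of a real-person frame into an interactive-object frame and the extraction of its pose vector, which are application-specific and only stubbed here.

```python
# Pair each video frame with the feature code whose window start is nearest
# in time, producing (feature code, ground-truth pose vector) samples.
def build_samples(frames, feature_codes, window_starts, fps,
                  frame_to_pose_vector):
    samples = []
    for i, frame in enumerate(frames):
        t = i / fps
        nearest = min(range(len(window_starts)),
                      key=lambda k: abs(window_starts[k] - t))
        samples.append((feature_codes[nearest], frame_to_pose_vector(frame)))
    return samples
```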
FIG. 4 shows a schematic structural diagram of an apparatus for driving an interactive object according to at least one embodiment of the present disclosure. As shown in FIG. 4, the apparatus may include: a first acquiring unit 401 configured to acquire a phoneme sequence corresponding to text data; a second acquiring unit 402 configured to acquire a control parameter value of at least one local area of the interactive object matching the phoneme sequence; and a driving unit 403 configured to control the posture of the interactive object according to the acquired control parameter value.

In some embodiments, the apparatus further includes an output unit configured to control a display device displaying the interactive object to display text according to the text data, and/or control the display device to output speech according to the phoneme sequence corresponding to the text data.
In some embodiments, the second acquiring unit is specifically configured to: perform feature encoding on the phoneme sequence to obtain a first encoding sequence corresponding to the phoneme sequence; obtain a feature code corresponding to at least one phoneme according to the first encoding sequence; and obtain a posture control vector of at least one local area of the interactive object corresponding to the feature code.

In some embodiments, when performing feature encoding on the phoneme sequence to obtain the first encoding sequence corresponding to the phoneme sequence, the second acquiring unit is specifically configured to: generate, for the multiple phonemes contained in the phoneme sequence, a sub-encoding sequence corresponding to each phoneme; and obtain the first encoding sequence corresponding to the phoneme sequence from the sub-encoding sequences respectively corresponding to the multiple phonemes.

In some embodiments, when generating the sub-encoding sequence corresponding to each of the multiple phonemes contained in the phoneme sequence, the second acquiring unit is specifically configured to: detect whether a first phoneme, which is any one of the multiple phonemes, is present at each time point; and obtain the sub-encoding sequence corresponding to the first phoneme by setting the encoding value at each time point where the first phoneme is present to a first value and setting the encoding value at each time point where the first phoneme is absent to a second value.

In some embodiments, the apparatus further includes a filtering unit configured to perform, for the sub-encoding sequence corresponding to each of the multiple phonemes, a Gaussian convolution operation on the temporally consecutive values of the phoneme using a Gaussian filter. In an embodiment, for the sub-encoding sequence corresponding to the first phoneme, which is any one of the multiple phonemes, a Gaussian filter is used to perform a Gaussian convolution operation on the temporally consecutive values of the first phoneme.

In some embodiments, when obtaining the feature code corresponding to at least one phoneme according to the first encoding sequence, the second acquiring unit is specifically configured to: traverse the encoding sequence with a sliding time window of a set length and a set stride, take the feature encoding within the time window as the feature code of the corresponding at least one phoneme, and obtain a second encoding sequence from the feature codes obtained upon completion of the sliding-window traversal.

In some embodiments, the driving unit is specifically configured to: obtain a sequence of posture control vectors corresponding to the second encoding sequence; and control the posture of the interactive object according to the sequence of posture control vectors.

In some embodiments, the apparatus further includes a pause driving unit configured to control the posture of the interactive object according to set control parameter values of the local areas when the time interval between phonemes in the phoneme sequence is greater than a set threshold.

In some embodiments, when obtaining the posture control vector of at least one local area of the interactive object corresponding to the feature code, the second acquiring unit is specifically configured to input the feature code into a pre-trained recurrent neural network to obtain the posture control vector of at least one local area of the interactive object corresponding to the feature code.

In some embodiments, the neural network is trained on phoneme sequence samples, and the apparatus further includes a sample acquiring unit configured to: acquire a video segment in which a character utters speech, and acquire a plurality of first image frames containing the character from the video segment; extract the corresponding speech segment from the video segment, obtain a sample phoneme sequence from the speech segment, and perform feature encoding on the sample phoneme sequence; acquire a feature code of at least one phoneme corresponding to the first image frame; convert the first image frame into a second image frame containing the interactive object, and acquire posture control vector values of at least one local area corresponding to the second image frame; and annotate the feature code corresponding to the first image frame with the posture control vector values to obtain feature code samples.

In some embodiments, the apparatus further includes a training unit configured to train an initial recurrent neural network on the feature code samples, and obtain the recurrent neural network after the change in the network loss satisfies a convergence condition, where the network loss includes the difference between the posture control vector values of the at least one local area predicted by the recurrent neural network and the annotated posture control vector values.
At least one embodiment of this specification further provides an electronic device. As shown in FIG. 5, the device includes a memory and a processor, the memory being configured to store computer instructions executable on the processor, and the processor being configured to implement the method for driving an interactive object according to any embodiment of the present disclosure when executing the computer instructions.

At least one embodiment of this specification further provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the method for driving an interactive object according to any embodiment of the present disclosure.
Those skilled in the art should understand that one or more embodiments of this specification may be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.

The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, since the data processing device embodiment is substantially similar to the method embodiment, its description is relatively brief, and reference may be made to the relevant parts of the description of the method embodiment.

Specific embodiments of this specification have been described above. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The embodiments of the subject matter and functional operations described in this specification may be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware, in computer hardware including the structures disclosed in this specification and their structural equivalents, or in a combination of one or more of them. Embodiments of the subject matter described in this specification may be implemented as one or more computer programs, that is, one or more modules of computer program instructions encoded on a tangible non-transitory program carrier for execution by, or to control the operation of, a data processing apparatus. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, such as a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information for transmission to a suitable receiver apparatus for execution by the data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.

The processes and logic flows described in this specification may be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating output. The processes and logic flows may also be performed by special-purpose logic circuitry, such as an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit), and the apparatus may also be implemented as special-purpose logic circuitry.

Computers suitable for executing a computer program include, for example, general-purpose and/or special-purpose microprocessors, or any other type of central processing unit. Generally, a central processing unit receives instructions and data from a read-only memory and/or a random access memory. The essential components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer also includes, or is operatively coupled to receive data from or transfer data to, one or more mass storage devices for storing data, such as magnetic disks, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer may be embedded in another device, such as a mobile phone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device such as a universal serial bus (USB) flash drive, to name just a few.

Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and memory devices, including, for example, semiconductor memory devices (such as EPROM, EEPROM, and flash memory devices), magnetic disks (such as internal hard disks or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, special-purpose logic circuitry.
Although this specification contains many specific implementation details, these should not be construed as limiting the scope of any invention or of what may be claimed, but rather as describing features of specific embodiments of particular inventions. Certain features described in this specification in the context of multiple embodiments may also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or a variation of a sub-combination.

Similarly, although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve the desired results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

Thus, specific embodiments of the subject matter have been described. Other embodiments are within the scope of the appended claims. In some cases, the actions recited in the claims may be performed in a different order and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing may be advantageous.

The above are merely preferred embodiments of one or more embodiments of this specification and are not intended to limit one or more embodiments of this specification. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of one or more embodiments of this specification shall be included within the scope of protection of one or more embodiments of this specification.

Claims (20)

  1. A method for driving an interactive object, comprising:
    obtaining a phoneme sequence corresponding to text data;
    obtaining a control parameter value of at least one local area of the interactive object matching the phoneme sequence; and
    controlling a posture of the interactive object according to the obtained control parameter value.
  2. The method according to claim 1, further comprising: controlling a display device displaying the interactive object to display text according to the text data, and/or controlling the display device to output speech according to the phoneme sequence corresponding to the text data.
  3. The method according to claim 1 or 2, wherein the control parameter of a local area of the interactive object comprises a posture control vector of the local area;
    obtaining the control parameter value of at least one local area of the interactive object matching the phoneme sequence comprises:
    performing feature encoding on the phoneme sequence to obtain a first encoding sequence corresponding to the phoneme sequence;
    obtaining a feature code corresponding to at least one phoneme according to the first encoding sequence; and
    obtaining a posture control vector of at least one local area of the interactive object corresponding to the feature code.
  4. The method according to claim 3, wherein performing feature encoding on the phoneme sequence to obtain the first encoding sequence corresponding to the phoneme sequence comprises:
    generating, for each of multiple phonemes contained in the phoneme sequence, a sub-encoding sequence corresponding to the phoneme; and
    obtaining the first encoding sequence corresponding to the phoneme sequence from the sub-encoding sequences respectively corresponding to the multiple phonemes.
  5. The method according to claim 4, wherein, for each of the plurality of phonemes included in the phoneme sequence, generating the sub-encoding sequence corresponding to the phoneme comprises:
    detecting whether the phoneme is present at each time point;
    obtaining the sub-encoding sequence corresponding to the phoneme by setting the encoding value at each time point where the phoneme is present to a first value and setting the encoding value at each time point where the phoneme is absent to a second value.
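A minimal sketch of the sub-encoding step of claims 4 and 5, assuming a time axis discretized into fixed frames: for each distinct phoneme, the encoding value is a first value (1) at time points where the phoneme is present and a second value (0) elsewhere, and stacking the sub-encoding sequences yields the first encoding sequence. The frame timeline below is invented for illustration.

import numpy as np

timeline = ["k", "k", "a", "a", "a", "", "t", "t"]  # one phoneme label per frame, "" = silence
phonemes = sorted({p for p in timeline if p})        # distinct phonemes in the sequence

# One binary sub-encoding sequence per phoneme; the rows stacked together form
# the first encoding sequence (num_phonemes x num_time_points).
first_encoding_sequence = np.stack([
    np.array([1.0 if frame == p else 0.0 for frame in timeline])
    for p in phonemes
])
print(phonemes)                 # ['a', 'k', 't']
print(first_encoding_sequence)
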
  6. The method according to claim 5, further comprising:
    for the sub-encoding sequence corresponding to each of the plurality of phonemes, performing a Gaussian convolution operation on the temporally consecutive values of the phoneme with a Gaussian filter.
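A sketch of the Gaussian smoothing of claim 6, applied to a binary sub-encoding sequence like the one above: the filter turns the hard 0/1 transitions at phoneme boundaries into gradual ones, so the driven motion does not jump. The sigma value is an illustrative assumption.

import numpy as np
from scipy.ndimage import gaussian_filter1d

sub_encoding = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

# Gaussian convolution over the temporally consecutive values of one phoneme.
smoothed = gaussian_filter1d(sub_encoding, sigma=1.0)
print(np.round(smoothed, 3))
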
  7. The method according to any one of claims 3 to 6, wherein obtaining the feature code corresponding to at least one phoneme according to the first encoding sequence comprises:
    performing a sliding-window operation on the first encoding sequence with a time window of a set length and a set stride, taking the feature code within the time window as the feature code of the corresponding at least one phoneme, and obtaining a second encoding sequence according to the plurality of feature codes obtained upon completion of the sliding-window operation; and
    wherein controlling the posture of the interactive object according to the obtained control parameter value comprises:
    obtaining a sequence of posture control vectors corresponding to the second encoding sequence;
    controlling the posture of the interactive object according to the sequence of posture control vectors.
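A sketch of the sliding-window step of claim 7. The window length and stride here are illustrative assumptions; each window position yields one feature code, and the collected codes form the second encoding sequence that is later mapped to posture control vectors.

import numpy as np

def sliding_feature_codes(first_seq: np.ndarray, win: int = 3, stride: int = 1) -> np.ndarray:
    # first_seq: (num_phonemes, num_time_points), e.g. the first encoding sequence.
    codes = [
        first_seq[:, t:t + win]  # the feature code for the phoneme(s) inside this window
        for t in range(0, first_seq.shape[1] - win + 1, stride)
    ]
    return np.stack(codes)       # second encoding sequence: (num_windows, num_phonemes, win)

first_seq = np.random.rand(3, 8)               # e.g. 3 phonemes x 8 time points
print(sliding_feature_codes(first_seq).shape)  # (6, 3, 3)
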
  8. The method according to any one of claims 1 to 7, further comprising:
    in a case where a time interval between the phonemes in the phoneme sequence is greater than a set threshold, controlling the posture of the interactive object according to a set control parameter value of the local region.
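A sketch of the pause handling of claim 8. The threshold (0.3 s) and the resting parameter value are illustrative assumptions: when the gap between phonemes exceeds the threshold, the preset control parameter values for the local region take over.

PAUSE_THRESHOLD = 0.3            # seconds; illustrative assumption
RESTING_PARAMS = {"mouth": 0.0}  # preset values for the local region

def params_for_gap(prev_end: float, next_start: float, speaking_params: dict) -> dict:
    if next_start - prev_end > PAUSE_THRESHOLD:
        return RESTING_PARAMS    # the speaker is pausing: return to the resting posture
    return speaking_params       # otherwise keep the phoneme-driven values

print(params_for_gap(1.00, 1.50, {"mouth": 0.8}))  # {'mouth': 0.0}
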
  9. The method according to claim 3, wherein obtaining the posture control vector of at least one local region of the interactive object corresponding to the feature code comprises:
    inputting the feature code into a pre-trained recurrent neural network to obtain the posture control vector of at least one local region of the interactive object corresponding to the feature code.
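A sketch of claim 9, assuming PyTorch and an LSTM as the recurrent network; the disclosure does not fix the architecture or the sizes used here (feature dimension 24, hidden size 64, 6-dimensional posture control vector), so all of them are assumptions.

import torch
import torch.nn as nn

class FeatureCodeToPose(nn.Module):
    def __init__(self, feat_dim: int = 24, hidden: int = 64, ctrl_dim: int = 6):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, ctrl_dim)

    def forward(self, codes: torch.Tensor) -> torch.Tensor:
        # codes: (batch, time, feat_dim) -> posture control vectors (batch, time, ctrl_dim)
        h, _ = self.rnn(codes)
        return self.head(h)

model = FeatureCodeToPose()
codes = torch.randn(1, 10, 24)  # a second encoding sequence of 10 feature codes
print(model(codes).shape)       # torch.Size([1, 10, 6])
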
  10. The method according to claim 9, wherein the recurrent neural network is obtained by training with feature code samples;
    the method further comprising:
    acquiring a video segment of a character uttering speech, and acquiring, according to the video segment, a plurality of first image frames containing the character;
    extracting a corresponding speech segment from the video segment, obtaining a sample phoneme sequence according to the speech segment, and performing feature encoding on the sample phoneme sequence;
    acquiring a feature code of at least one phoneme corresponding to the first image frame;
    converting the first image frame into a second image frame containing the interactive object, and obtaining a posture control vector value of at least one local region corresponding to the second image frame;
    labeling the feature code corresponding to the first image frame with the posture control vector value to obtain the feature code sample.
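A sketch of the final labeling step of claim 10: each feature code aligned to a first image frame is paired with the posture control vector value extracted from the corresponding second image frame, producing one feature code sample. The arrays stand in for the outputs of the video and audio pipeline, which is not reproduced here.

import numpy as np

num_frames = 100
feature_codes = np.random.rand(num_frames, 24)  # one feature code per first image frame
pose_vectors = np.random.rand(num_frames, 6)    # one control vector per second image frame

# Labeling: pair each feature code with its posture control vector value.
samples = list(zip(feature_codes, pose_vectors))
print(len(samples), samples[0][0].shape, samples[0][1].shape)  # 100 (24,) (6,)
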
  11. The method according to claim 10, further comprising:
    training an initial recurrent neural network according to the feature code samples, and obtaining the recurrent neural network after a change in the network loss satisfies a convergence condition, wherein the network loss comprises a difference between the posture control vector value of the at least one local region predicted by the recurrent neural network and the labeled posture control vector value.
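A sketch of the training loop of claim 11, using a compact stand-in for the claim-9 network. The MSE loss realizes the stated difference between predicted and labeled posture control vector values; the convergence tolerance, learning rate, and network sizes are assumptions.

import torch
import torch.nn as nn

class TinyPoseRNN(nn.Module):  # stand-in for the claim-9 recurrent network
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(24, 64, batch_first=True)
        self.head = nn.Linear(64, 6)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.head(h)

def train(model, codes, labels, lr=1e-3, tol=1e-5, max_epochs=500):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # difference between predicted and labeled control vectors
    prev_loss = float("inf")
    for _ in range(max_epochs):
        opt.zero_grad()
        loss = loss_fn(model(codes), labels)
        loss.backward()
        opt.step()
        if abs(prev_loss - loss.item()) < tol:  # change in network loss meets the convergence condition
            break
        prev_loss = loss.item()
    return model

trained = train(TinyPoseRNN(), torch.randn(8, 10, 24), torch.randn(8, 10, 6))
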
  12. An apparatus for driving an interactive object, comprising:
    a first acquiring unit, configured to acquire a phoneme sequence corresponding to text data;
    a second acquiring unit, configured to acquire a control parameter value of at least one local region of the interactive object matching the phoneme sequence;
    a driving unit, configured to control a posture of the interactive object according to the acquired control parameter value.
  13. The apparatus according to claim 12, further comprising an output unit configured to control a display device that displays the interactive object to display text according to the text data, and/or control the display device to output speech according to the phoneme sequence corresponding to the text data.
  14. The apparatus according to claim 12 or 13, wherein the second acquiring unit is configured to:
    perform feature encoding on the phoneme sequence to obtain a first encoding sequence corresponding to the phoneme sequence;
    obtain a feature code corresponding to at least one phoneme according to the first encoding sequence;
    obtain a posture control vector of at least one local region of the interactive object corresponding to the feature code;
    wherein performing feature encoding on the phoneme sequence to obtain the first encoding sequence corresponding to the phoneme sequence comprises:
    for each of a plurality of phonemes included in the phoneme sequence, generating a sub-encoding sequence corresponding to the phoneme;
    obtaining the first encoding sequence corresponding to the phoneme sequence according to the sub-encoding sequences respectively corresponding to the plurality of phonemes.
  15. The apparatus according to claim 14, wherein, when obtaining the feature code corresponding to at least one phoneme according to the first encoding sequence, the second acquiring unit is configured to:
    perform a sliding-window operation on the first encoding sequence with a time window of a set length and a set stride, take the feature code within the time window as the feature code of the corresponding at least one phoneme, and obtain a second encoding sequence according to the plurality of feature codes obtained upon completion of the sliding-window operation;
    the driving unit is configured to:
    obtain a sequence of posture control vectors corresponding to the second encoding sequence;
    control the posture of the interactive object according to the sequence of posture control vectors.
  16. The apparatus according to any one of claims 12 to 15, further comprising:
    a pause driving unit, configured to, in a case where a time interval between the phonemes in the phoneme sequence is greater than a set threshold, control the posture of the interactive object according to a set control parameter value of the local region.
  17. The apparatus according to claim 14, wherein, when obtaining the posture control vector of at least one local region of the interactive object corresponding to the feature code, the second acquiring unit is configured to: input the feature code into a pre-trained recurrent neural network to obtain the posture control vector of at least one local region of the interactive object corresponding to the feature code.
  18. The apparatus according to claim 17, further comprising a sample acquiring unit configured to:
    acquire a video segment of a character uttering speech, and acquire, according to the video segment, a plurality of first image frames containing the character;
    extract a corresponding speech segment from the video segment, obtain a sample phoneme sequence according to the speech segment, and perform feature encoding on the sample phoneme sequence;
    acquire a feature code of at least one phoneme corresponding to the first image frame;
    convert the first image frame into a second image frame containing the interactive object, and obtain a posture control vector value of at least one local region corresponding to the second image frame;
    label the feature code corresponding to the first image frame with the posture control vector value to obtain the feature code sample;
    the apparatus further comprising a training unit configured to train an initial recurrent neural network according to the feature code samples, and to obtain the recurrent neural network after a change in the network loss satisfies a convergence condition, wherein the network loss comprises a difference between the posture control vector value of the at least one local region predicted by the recurrent neural network and the labeled posture control vector value.
  19. An electronic device, comprising a memory and a processor, wherein the memory is configured to store computer instructions executable on the processor, and the processor is configured to implement the method according to any one of claims 1 to 11 when executing the computer instructions.
  20. A computer-readable storage medium having a computer program stored thereon, wherein, when the computer program is executed by a processor, the method according to any one of claims 1 to 11 is implemented.
PCT/CN2020/129793 2020-03-31 2020-11-18 Method, apparatus and device for driving interactive object, and storage medium WO2021196644A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020217027692A KR20210124307A (en) 2020-03-31 2020-11-18 Interactive object driving method, apparatus, device and recording medium
SG11202111909QA SG11202111909QA (en) 2020-03-31 2020-11-18 Methods and apparatuses for driving an interactive object, devices and storage media
JP2021549562A JP2022530935A (en) 2020-03-31 2020-11-18 Interactive target drive methods, devices, devices, and recording media

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010245802.4A CN111460785B (en) 2020-03-31 2020-03-31 Method, device and equipment for driving interactive object and storage medium
CN202010245802.4 2020-03-31

Publications (1)

Publication Number Publication Date
WO2021196644A1 (en)

Family

ID=71683475

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/129793 WO2021196644A1 (en) 2020-03-31 2020-11-18 Method, apparatus and device for driving interactive object, and storage medium

Country Status (6)

Country Link
JP (1) JP2022530935A (en)
KR (1) KR20210124307A (en)
CN (1) CN111460785B (en)
SG (1) SG11202111909QA (en)
TW (1) TW202138992A (en)
WO (1) WO2021196644A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111459450A (en) * 2020-03-31 2020-07-28 北京市商汤科技开发有限公司 Interactive object driving method, device, equipment and storage medium
CN111460785B (en) * 2020-03-31 2023-02-28 北京市商汤科技开发有限公司 Method, device and equipment for driving interactive object and storage medium
KR102601159B1 (en) * 2022-09-30 2023-11-13 주식회사 아리아스튜디오 Virtual human interaction generating device and method therof
CN115662388A (en) * 2022-10-27 2023-01-31 维沃移动通信有限公司 Avatar face driving method, apparatus, electronic device and medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003058908A (en) * 2001-08-10 2003-02-28 Minolta Co Ltd Method and device for controlling face image, computer program and recording medium
CN102609969B (en) * 2012-02-17 2013-08-07 上海交通大学 Method for processing face and speech synchronous animation based on Chinese text drive
JP2015038725A (en) * 2013-07-18 2015-02-26 国立大学法人北陸先端科学技術大学院大学 Utterance animation generation device, method, and program
JP5913394B2 (en) * 2014-02-06 2016-04-27 Psソリューションズ株式会社 Audio synchronization processing apparatus, audio synchronization processing program, audio synchronization processing method, and audio synchronization system
JP2015166890A (en) * 2014-03-03 2015-09-24 ソニー株式会社 Information processing apparatus, information processing system, information processing method, and program
CN106056989B (en) * 2016-06-23 2018-10-16 广东小天才科技有限公司 A kind of interactive learning methods and device, terminal device
CN107704169B (en) * 2017-09-26 2020-11-17 北京光年无限科技有限公司 Virtual human state management method and system
CN107891626A (en) * 2017-11-07 2018-04-10 嘉善中奥复合材料有限公司 Urea-formaldehyde moulding powder compression molding system
CN110176284A (en) * 2019-05-21 2019-08-27 杭州师范大学 A kind of speech apraxia recovery training method based on virtual reality

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110876024A (en) * 2018-08-31 2020-03-10 百度在线网络技术(北京)有限公司 Method and device for determining lip action of avatar
CN109377540A (en) * 2018-09-30 2019-02-22 网易(杭州)网络有限公司 Synthetic method, device, storage medium, processor and the terminal of FA Facial Animation
CN110136698A (en) * 2019-04-11 2019-08-16 北京百度网讯科技有限公司 For determining the method, apparatus, equipment and storage medium of nozzle type
CN111145322A (en) * 2019-12-26 2020-05-12 上海浦东发展银行股份有限公司 Method, apparatus and computer-readable storage medium for driving avatar
CN111459452A (en) * 2020-03-31 2020-07-28 北京市商汤科技开发有限公司 Interactive object driving method, device, equipment and storage medium
CN111459450A (en) * 2020-03-31 2020-07-28 北京市商汤科技开发有限公司 Interactive object driving method, device, equipment and storage medium
CN111460785A (en) * 2020-03-31 2020-07-28 北京市商汤科技开发有限公司 Interactive object driving method, device, equipment and storage medium
CN111459454A (en) * 2020-03-31 2020-07-28 北京市商汤科技开发有限公司 Interactive object driving method, device, equipment and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115409920A (en) * 2022-08-30 2022-11-29 重庆爱车天下科技有限公司 Virtual object lip driving system

Also Published As

Publication number Publication date
CN111460785A (en) 2020-07-28
SG11202111909QA (en) 2021-11-29
CN111460785B (en) 2023-02-28
TW202138992A (en) 2021-10-16
KR20210124307A (en) 2021-10-14
JP2022530935A (en) 2022-07-05

Similar Documents

Publication Publication Date Title
WO2021196644A1 (en) Method, apparatus and device for driving interactive object, and storage medium
WO2021169431A1 (en) Interaction method and apparatus, and electronic device and storage medium
WO2021196643A1 (en) Method and apparatus for driving interactive object, device, and storage medium
WO2021196646A1 (en) Interactive object driving method and apparatus, device, and storage medium
JP7227395B2 (en) Interactive object driving method, apparatus, device, and storage medium
US11514634B2 (en) Personalized speech-to-video with three-dimensional (3D) skeleton regularization and expressive body poses
CN112528936B (en) Video sequence arrangement method, device, electronic equipment and storage medium
WO2022252890A1 (en) Interaction object driving and phoneme processing methods and apparatus, device and storage medium
WO2021232876A1 (en) Method and apparatus for driving virtual human in real time, and electronic device and medium
CN113689880A (en) Method, device, electronic equipment and medium for driving virtual human in real time
WO2021196647A1 (en) Method and apparatus for driving interactive object, device, and storage medium
CN110166844B (en) Data processing method and device for data processing
Heisler et al. Making an android robot head talk
KR102514580B1 (en) Video transition method, apparatus and computer program
CN116958328A (en) Method, device, equipment and storage medium for synthesizing mouth shape

Legal Events

Date Code Title Description
ENP  Entry into the national phase. Ref document number: 2021549562; Country of ref document: JP; Kind code of ref document: A
ENP  Entry into the national phase. Ref document number: 20217027692; Country of ref document: KR; Kind code of ref document: A
121  Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 20929350; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase. Ref country code: DE
122  Ep: pct application non-entry in european phase. Ref document number: 20929350; Country of ref document: EP; Kind code of ref document: A1
WWE  Wipo information: entry into national phase. Ref document number: 521430720; Country of ref document: SA