CN109326151A - Implementation method, client and server based on semantics-driven virtual image - Google Patents
- Publication number
- CN109326151A (application CN201811292531.7A)
- Authority
- CN
- China
- Prior art keywords
- broadcast information
- client
- specified
- shape
- virtual image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
- G09B5/065—Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Abstract
This application discloses an implementation method, client, and server for a semantics-driven virtual avatar. The method includes: the client performs feature extraction on broadcast information, obtaining each specified syllable feature in the broadcast information and the playback time corresponding to each specified syllable feature; the client matches each specified syllable feature to a mouth-shape feature of a specified virtual avatar; while playing the broadcast information, the client changes the displayed mouth shape of the specified avatar based on the mouth-shape feature and playback time of each specified syllable feature. By extracting and analyzing semantic features of the information, the application can drive corresponding changes in the avatar's expression, mouth shape, limbs, and so on, thereby solving the technical problem in the related art that virtual avatars have a poor anthropomorphic (human-like) effect.
Description
Technical field
This application relates to the field of data application technology, and in particular to an implementation method, client, and server for a semantics-driven virtual avatar.
Background
With the spread of English education, parents' demand for English instruction for their children has grown increasingly strong. Current education applications commonly suffer from high prices, uneven teacher quality, and inconvenient class schedules.
Existing English teaching generally takes one of two forms: 1. Offline classes taught by local or foreign teachers, with students traveling to an offline institution on a fixed schedule. This mode is typically expensive, far away, and inconvenient. 2. Online one-on-one or one-to-many video lessons with foreign teachers. This mode commonly suffers from poor network conditions that cause blurry or stuttering video, and from time-zone differences that disrupt the foreign teachers' schedules and make booking classes difficult. Beyond this, both modes share several major problems: 1. learning cannot happen anytime, anywhere; 2. lecture content cannot be personalized; 3. the teaching style cannot be personalized; 4. long-term, continuous guidance and feedback are lacking. Moreover, when a remote 3D virtual avatar replaces the teacher, the avatar cannot produce movements matched to the voice being played, so the virtual display effect is poor.
No technical solution has been disclosed in the related art for the problem that 3D virtual avatars have a poor anthropomorphic effect.
Summary of the invention
The main purpose of this application is to provide an implementation method, client, and server for a semantics-driven virtual avatar, so as to solve the problem in the related art that virtual avatars have a poor anthropomorphic effect.
To achieve the above goal, in a first aspect, an embodiment of this application provides an implementation method for a semantics-driven virtual avatar.
The implementation method according to this application includes:
the client performs feature extraction on broadcast information, obtaining each specified syllable feature in the broadcast information and the playback time corresponding to each specified syllable feature;
the client matches each specified syllable feature to a mouth-shape feature of a specified virtual avatar;
while playing the broadcast information, the client changes the displayed mouth shape of the specified avatar based on the mouth-shape feature and playback time of each specified syllable feature.
Optionally, the broadcast information is a voice message, and the client performing feature extraction on the broadcast information includes:
the client extracts each vowel syllable in the voice message, obtaining each vowel syllable in the voice message and the playback time at which each vowel syllable occurs during playback of the voice message, where the specified syllable features include the vowel syllables in the played message.
Optionally, the method further includes:
the client receives the broadcast information and the expression features corresponding to the broadcast information;
while playing the broadcast information, the client changes the displayed expression of the specified avatar based on the expression features.
In a second aspect, an embodiment of this application further provides an implementation method for a semantics-driven virtual avatar, including:
the server performs sentiment analysis on sentence information, obtaining the emotional feature corresponding to the sentence information;
the server generates broadcast information from the sentence information and matches the expression feature corresponding to the emotional feature;
the server sends the broadcast information and its corresponding expression feature to the client, so that the client performs feature extraction on the broadcast information, obtains each specified syllable feature and the playback time corresponding to each specified syllable feature, matches each specified syllable feature to a mouth-shape feature of the specified avatar, and, while playing the broadcast information, changes the displayed expression and mouth shape of the specified avatar based on the expression feature and on the mouth-shape features and playback times of the specified syllable features.
In a third aspect, an embodiment of this application further provides a client, including:
an extraction module, configured to perform feature extraction on broadcast information and obtain each specified syllable feature in the broadcast information and the playback time corresponding to each specified syllable feature;
a matching module, configured to match each specified syllable feature to a mouth-shape feature of a specified virtual avatar;
a display module, configured to, while the broadcast information is playing, change the displayed mouth shape of the specified avatar based on the mouth-shape feature and playback time of each specified syllable feature.
Optionally, the broadcast information is a voice message, and the extraction module is configured to:
extract each vowel syllable in the voice message, obtaining each vowel syllable in the voice message and the playback time at which each vowel syllable occurs during playback, where the specified syllable features include the vowel syllables in the played message.
Optionally, the client further includes a receiving module;
the receiving module is configured to receive the broadcast information and the expression features corresponding to the broadcast information;
the display module is further configured to change the displayed expression of the specified avatar based on the expression features while the broadcast information is playing.
In a fourth aspect, an embodiment of this application further provides a server, including:
a semantic analysis module, configured to perform sentiment analysis on sentence information and obtain the emotional feature corresponding to the sentence information;
a generating and matching module, configured to generate broadcast information from the sentence information and match the expression feature corresponding to the emotional feature;
an information sending module, configured to send the broadcast information and its corresponding expression feature to the client, so that the client performs feature extraction on the broadcast information, obtains each specified syllable feature and its corresponding playback time, matches each specified syllable feature to a mouth-shape feature of the specified avatar, and, while playing the broadcast information, changes the displayed expression and mouth shape of the avatar based on the expression feature and on the mouth-shape features and playback times.
In a fifth aspect, an embodiment of this application further provides a computer-readable storage medium storing computer code; when the computer code is executed, the above implementation method for a semantics-driven virtual avatar is performed.
In a sixth aspect, an embodiment of this application further provides a computer device, including:
one or more processors; and
a memory for storing one or more computer programs;
when the one or more computer programs are executed by the one or more processors, the one or more processors carry out the above implementation method for a semantics-driven virtual avatar.
In the implementation method provided in the embodiments of this application, the client performs feature extraction on broadcast information, obtaining each specified syllable feature and the playback time corresponding to each specified syllable feature; the client matches each specified syllable feature to a mouth-shape feature of a specified virtual avatar; and, while playing the broadcast information, the client changes the displayed mouth shape of the avatar based on the mouth-shape features and playback times. In this way, by extracting syllable features from the broadcast information and matching them to avatar mouth shapes, the avatar's mouth shape is changed while the message plays so that it matches the played message. This achieves precise matching between the avatar's movement and the played message, improves the avatar's anthropomorphic effect, and thereby solves the technical problem in the related art that virtual avatars have a poor anthropomorphic effect.
Brief description of the drawings
The accompanying drawings, which form part of this application, are provided to give a further understanding of the application and to make its other features, objects, and advantages more apparent. The illustrative embodiment drawings and their explanations serve to explain the application and do not unduly limit it. In the drawings:
Fig. 1 is a flowchart of an implementation method for a semantics-driven virtual avatar provided by an embodiment of this application;
Fig. 2 is a flowchart of another implementation method for a semantics-driven virtual avatar provided by an embodiment of this application;
Fig. 3 is a schematic structural diagram of a client provided by an embodiment of this application;
Fig. 4 is a schematic structural diagram of another client provided by an embodiment of this application;
Fig. 5 is a schematic structural diagram of a server provided by an embodiment of this application.
Detailed description of embodiments
To help those skilled in the art better understand the solution of this application, the technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are plainly only some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application, without creative effort, fall within the protection scope of this application.
It should be noted that, where no conflict arises, the embodiments of this application and the features in them may be combined with one another. The application is described in detail below with reference to the drawings and in conjunction with the embodiments.
An embodiment of this application provides an implementation method for a semantics-driven virtual avatar, applied to a client. Fig. 1 is a flowchart of the method; as shown in Fig. 1, the method includes the following steps S110 to S130:
S110: the client performs feature extraction on broadcast information, obtaining each specified syllable feature in the broadcast information and the playback time corresponding to each specified syllable feature.
Here, the client may be a mobile terminal, a personal computer (PC), or another device with image display and audio playback capability. The broadcast information may be voice information, video information, or text information; when it is text, the text must first be converted into voice before playback. The playback time corresponding to each specified syllable feature is simply the moment during playback at which that feature occurs.
Specifically, after receiving the broadcast information, the client extracts the specified syllable features from it, obtains each specified syllable feature in the broadcast information, and determines the playback position at which each specified syllable feature occurs as that feature's corresponding playback time.
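As an illustrative sketch of step S110, suppose the broadcast voice message has already been force-aligned into (phoneme, start-time) pairs; the alignment format and phoneme labels below are assumptions, since the patent does not specify them:

```python
# ARPABET-style vowel labels (an assumption; any label set would work).
VOWELS = {"AA", "AE", "AH", "AO", "EH", "ER", "IH", "IY", "OW", "UH", "UW"}

def extract_vowel_features(alignment):
    """Keep only vowel syllables, each paired with the playback time (ms)
    at which it occurs in the message."""
    return [(ph, t) for ph, t in alignment if ph in VOWELS]

# "hello" roughly aligned as HH-AH-L-OW: only the vowels survive.
features = extract_vowel_features([("HH", 0), ("AH", 80), ("L", 160), ("OW", 240)])
```

The output pairs are exactly the "specified syllable feature plus playback time" that the later steps consume.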
S120, client match the shape of the mouth as one speaks feature of the specified virtual image of corresponding each specified syllable characteristic.
Specifically, based on each specified syllable feature, the client searches a local database or another database (such as the cloud) for the mouth-shape feature of the specified avatar corresponding to that syllable feature. The specified virtual avatar may be a high-precision 3D avatar stored on the local terminal, or an avatar selected by the user.
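A minimal sketch of the lookup in step S120 might be a table from vowel syllable to mouth-shape (viseme) name; the mapping below is purely hypothetical, since the patent leaves the database contents open:

```python
# Hypothetical local database of avatar mouth-shape features per vowel.
MOUTH_SHAPES = {"AA": "wide_open", "AH": "open", "IY": "spread", "UW": "round"}

def match_mouth_shape(syllable, database=MOUTH_SHAPES, fallback="neutral"):
    """Look up the avatar mouth-shape feature for one vowel syllable;
    syllables missing from the database fall back to a neutral shape."""
    return database.get(syllable, fallback)
```

A cloud database could be substituted by swapping the `database` argument for a remote lookup.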
S130: while playing the broadcast information, the client changes the displayed mouth shape of the specified avatar based on the mouth-shape feature and playback time of each specified syllable feature.
Specifically, the specified avatar can be displayed in real time in the client's display interface. While the broadcast information is playing, the client uses the elapsed playback duration to determine whether the playback time of some specified syllable feature has been reached; if so, it configures the specified avatar with the mouth-shape feature matched to that syllable feature, thereby changing the mouth shape of the currently displayed avatar.
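The playback-time check in step S130 can be sketched as picking, for the current playhead position, the most recent syllable feature whose playback time has been reached; all names here are illustrative:

```python
import bisect

def current_mouth_shape(features, mouth_map, playhead_ms, default="closed"):
    """Return the mouth shape for the most recent syllable feature at or
    before the current playback position (features sorted by time)."""
    times = [t for _, t in features]
    i = bisect.bisect_right(times, playhead_ms) - 1  # last feature reached
    if i < 0:
        return default          # nothing reached yet: mouth stays closed
    return mouth_map.get(features[i][0], default)
```

Calling this once per rendered frame with the audio playhead would keep the displayed mouth shape in step with the played message.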
In a feasible embodiment, the broadcast information is a voice message, and in step S110 the client performing feature extraction on the broadcast information includes:
the client extracts each vowel syllable in the voice message, obtaining each vowel syllable in the voice message and the playback time at which each vowel syllable occurs during playback, where the specified syllable features include the vowel syllables in the played message.
Specifically, when the broadcast information is a voice message, feature extraction can be performed directly on the audio features of the voice message: each vowel syllable in the audio is extracted and used as a specified syllable feature.
The voice message may be composed of Chinese or a foreign language (such as English); when it is composed of English, the specified syllable features are the 20 vowel sounds of English.
In a feasible embodiment, the implementation method further includes:
the client receives the broadcast information and the expression features corresponding to the broadcast information;
while playing the broadcast information, the client changes the displayed expression of the specified avatar based on the expression features.
Specifically, while receiving the broadcast information the client also receives the expression features corresponding to it; these are expression features of the specified avatar. While playing the broadcast information, the client configures the specified avatar with the expression features, thereby changing the expression of the currently displayed avatar. For example, if the received expression feature denotes a smile and the avatar's current expression is calm, then while the client plays the corresponding broadcast information, the avatar's expression is changed from calm to a smile.
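The expression switch in the example above amounts to replacing the avatar's current expression state with the received feature; a minimal sketch, with the state dictionary and expression names ("calm", "smile") purely illustrative:

```python
def apply_expression(avatar_state, expression_feature):
    """Configure the avatar with the received expression feature and
    return the expression it replaced."""
    previous = avatar_state["expression"]
    avatar_state["expression"] = expression_feature
    return previous

avatar = {"expression": "calm"}
replaced = apply_expression(avatar, "smile")  # avatar now shows a smile
```

A real client would restore or re-evaluate the expression once the broadcast finishes, which the patent leaves unspecified.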
In the implementation method provided in this embodiment: in S110, the client performs feature extraction on the broadcast information, obtaining each specified syllable feature and its corresponding playback time; in S120, the client matches each specified syllable feature to a mouth-shape feature of the specified avatar; in S130, while playing the broadcast information, the client changes the displayed mouth shape based on the mouth-shape features and playback times. In this way, by extracting syllable features from the broadcast information and matching them to avatar mouth shapes, the avatar's mouth shape is changed while the message plays so that it matches the played message. This achieves precise matching between the avatar's movement and the played message, improves the avatar's anthropomorphic effect, and thereby solves the technical problem in the related art that virtual avatars have a poor anthropomorphic effect.
Based on the same technical idea, an embodiment of this application further provides an implementation method for a semantics-driven virtual avatar, applied to a server. Fig. 2 is a flowchart of this method; as shown in Fig. 2, the method includes the following steps S210 to S230:
S210: the server performs sentiment analysis on sentence information, obtaining the emotional feature corresponding to the sentence information;
S220: the server generates broadcast information from the sentence information and matches the expression feature corresponding to the emotional feature;
S230: the server sends the broadcast information and its corresponding expression feature to the client, so that the client performs feature extraction on the broadcast information, obtains each specified syllable feature and its corresponding playback time, matches each specified syllable feature to a mouth-shape feature of the specified avatar, and, while playing the broadcast information, changes the displayed expression and mouth shape of the avatar based on the expression feature and on the mouth-shape features and playback times.
In this implementation method, the server first performs sentiment analysis (one application of semantic analysis) on the sentence information to obtain its mood, then matches, from a database, the expression feature of the avatar displayed by the client according to that mood, and sends the expression feature together with the broadcast information generated from the sentence information to the client. The client extracts syllable features from the broadcast information, matches avatar mouth shapes, and, while playing the message, changes the avatar's mouth shape and expression so that they match the played message. This achieves precise matching between the avatar's movement and the played message, improves the avatar's anthropomorphic effect, and thereby solves the technical problem in the related art that virtual avatars have a poor anthropomorphic effect.
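The server flow S210-S230 can be sketched with a toy keyword-based sentiment score; a real server could use any sentiment model, which the patent leaves open, and the word lists and expression names below are assumptions:

```python
# Hypothetical sentiment lexicon and mood-to-expression database.
POSITIVE = {"great", "good", "well"}
NEGATIVE = {"wrong", "bad"}
EXPRESSIONS = {"happy": "smile", "sad": "frown", "neutral": "calm"}

def analyze_sentiment(sentence):
    """S210: map sentence information to an emotional feature."""
    words = sentence.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "happy" if score > 0 else "sad" if score < 0 else "neutral"

def build_broadcast(sentence):
    """S220-S230: pair the broadcast information with the matched
    expression feature before sending both to the client."""
    return {"broadcast": sentence,
            "expression": EXPRESSIONS[analyze_sentiment(sentence)]}
```

In the patented flow the `broadcast` field would be synthesized speech rather than raw text; the pairing of message and expression feature is the point of the sketch.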
Based on the same technical idea, an embodiment of this application further provides a client. Fig. 3 is a schematic structural diagram of the client; as shown in Fig. 3, the client includes:
an extraction module 10, configured to perform feature extraction on broadcast information and obtain each specified syllable feature in the broadcast information and the playback time corresponding to each specified syllable feature;
a matching module 20, configured to match each specified syllable feature to a mouth-shape feature of a specified virtual avatar;
a display module 30, configured to, while the broadcast information is playing, change the displayed mouth shape of the specified avatar based on the mouth-shape feature and playback time of each specified syllable feature.
Optionally, the broadcast information is a voice message, and the extraction module 10 is configured to:
extract each vowel syllable in the voice message, obtaining each vowel syllable in the voice message and the playback time at which each vowel syllable occurs during playback, where the specified syllable features include the vowel syllables in the played message.
Optionally, Fig. 4 is a schematic structural diagram of another client provided by an embodiment of this application; as shown in Fig. 4, the client further includes a receiving module 40, configured to receive the broadcast information and the expression features corresponding to it; the display module 30 is further configured to change the displayed expression of the specified avatar based on the expression features while the broadcast information is playing.
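The division of labor among modules 10, 20, and 30 can be illustrated by composing them as injected callables; the patent defines only their responsibilities, so the stubs here are hypothetical:

```python
class Client:
    """Illustrative composition of the claimed client modules (Fig. 3)."""

    def __init__(self, extractor, matcher, display):
        self.extractor = extractor  # extraction module 10
        self.matcher = matcher      # matching module 20
        self.display = display      # display module 30

    def play(self, broadcast):
        features = self.extractor(broadcast)                     # (syllable, time) pairs
        shapes = [self.matcher(syl) for syl, _ in features]      # mouth-shape per syllable
        times = [t for _, t in features]
        return self.display(shapes, times)                       # render during playback
```

The receiving module 40 of Fig. 4 would sit in front of `play`, handing it the broadcast information and expression features received from the server.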
Based on the same technical idea, an embodiment of this application further provides a server. Fig. 5 is a schematic structural diagram of the server; as shown in Fig. 5, the server includes:
a semantic analysis module 50, configured to perform sentiment analysis on sentence information and obtain the emotional feature corresponding to the sentence information;
a generating and matching module 60, configured to generate broadcast information from the sentence information and match the expression feature corresponding to the emotional feature;
an information sending module 70, configured to send the broadcast information and its corresponding expression feature to the client, so that the client performs feature extraction on the broadcast information, obtains each specified syllable feature and its playback time, matches each specified syllable feature to a mouth-shape feature of the specified avatar, and, while playing the broadcast information, changes the displayed expression and mouth shape based on the expression feature and on the mouth-shape features and playback times.
Based on the same technical idea, an embodiment of this application further provides a computer-readable storage medium storing computer code; when the computer code is executed, the above implementation method for a semantics-driven virtual avatar is performed.
Based on the same technical idea, an embodiment of this application further provides a computer program product; when the program product is executed by a computer device, the above implementation method for a semantics-driven virtual avatar is performed.
Based on the same technical idea, an embodiment of this application further provides a computer device, including:
one or more processors; and
a memory for storing one or more computer programs;
when the one or more computer programs are executed by the one or more processors, the one or more processors carry out the above implementation method for a semantics-driven virtual avatar.
Obviously, those skilled in the art should understand that the modules or steps above may be implemented with a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they may be implemented as program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; alternatively, they may be fabricated as individual integrated-circuit modules, or multiple of their modules or steps may be fabricated as a single integrated-circuit module. The present invention is thus not limited to any specific combination of hardware and software.
The computer programs involved in this application may be stored in a computer-readable storage medium, which may include: any physical device capable of carrying computer program code, a virtual device, a flash drive, a removable hard disk, a magnetic disk, an optical disc, computer memory, read-only memory (ROM), random-access memory (RAM), an electrical carrier signal, a telecommunication signal, and other software distribution media.
The foregoing are merely preferred embodiments of the present application and are not intended to limit it; for those skilled in the art, various modifications and changes to the present application are possible. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the scope of protection of the present application.
Claims (10)
1. An implementation method based on a semantics-driven virtual image, characterized in that the method comprises:
a client performing feature extraction on broadcast information to obtain each specified syllable feature in the broadcast information and a play time corresponding to each specified syllable feature;
the client matching a mouth-shape feature of a specified virtual image corresponding to each specified syllable feature; and
the client, while playing the broadcast information, changing a displayed mouth shape of the specified virtual image based on the mouth-shape feature and the play time of each specified syllable feature.
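The flow of claim 1 can be illustrated with a minimal sketch. Everything here is an assumption for demonstration: the syllable features, the mouth-shape (viseme) table, and the function names are hypothetical stand-ins, not the patent's actual implementation.

```python
# Assumed mapping from a specified syllable feature (here, a vowel)
# to a mouth-shape label; the real set of shapes is not specified.
MOUTH_SHAPES = {"a": "open_wide", "o": "rounded", "e": "spread",
                "i": "narrow", "u": "pursed"}

def extract_syllable_features(broadcast_info):
    """Feature-extraction stub: returns (syllable, play_time_s) pairs.
    A real client would derive these from the broadcast information's
    audio or text; the values below are hard-coded for illustration."""
    return [("a", 0.0), ("i", 0.4), ("o", 0.9)]

def build_mouth_shape_timeline(broadcast_info):
    """Match each specified syllable feature to a mouth-shape feature,
    keyed by its play time, so a renderer can switch shapes on cue."""
    timeline = []
    for syllable, play_time in extract_syllable_features(broadcast_info):
        shape = MOUTH_SHAPES.get(syllable, "neutral")  # fallback shape
        timeline.append((play_time, shape))
    return sorted(timeline)  # chronological order for playback

timeline = build_mouth_shape_timeline("demo broadcast info")
```

During playback the client would consume this timeline, swapping the displayed mouth shape of the virtual image whenever the current playback position reaches the next play time.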
2. The implementation method based on a semantics-driven virtual image according to claim 1, characterized in that the broadcast information is a speech message, and the client performing feature extraction on the broadcast information comprises:
the client extracting each vowel syllable in the speech message to obtain each vowel syllable in the speech message and the play time corresponding to each vowel syllable during playback of the speech message, wherein the specified syllable features comprise the vowel syllables in the broadcast information.
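Claim 2's vowel extraction can be sketched as follows. Treating the speech message as space-separated, pinyin-like text and assuming a fixed per-syllable duration are simplifications for demonstration only; a real client would work from audio timing.

```python
VOWELS = set("aeiou")  # assumed vowel inventory for the sketch

def extract_vowel_syllables(speech_text, syllable_duration=0.25):
    """Return (vowel, play_time) pairs, where play_time is the assumed
    offset at which the syllable containing that vowel is played."""
    results = []
    for index, syllable in enumerate(speech_text.split()):
        for ch in syllable:
            if ch in VOWELS:
                # Record the first vowel found as the specified
                # syllable feature for this syllable.
                results.append((ch, index * syllable_duration))
                break
    return results

pairs = extract_vowel_syllables("ni hao ma")
```

Each pair then feeds the mouth-shape matching step of claim 1.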
3. The implementation method based on a semantics-driven virtual image according to claim 1, characterized in that the method further comprises:
the client receiving the broadcast information and expression features corresponding to the broadcast information; and
the client, while playing the broadcast information, changing a displayed expression shape of the specified virtual image based on the expression features.
4. An implementation method based on a semantics-driven virtual image, characterized in that the method comprises:
a server performing sentiment analysis on sentence information to obtain emotional features corresponding to the sentence information;
the server generating broadcast information from the sentence information and matching expression features corresponding to the emotional features; and
the server sending the broadcast information and the corresponding expression features to a client, so that the client performs feature extraction on the broadcast information to obtain each specified syllable feature in the broadcast information and the play time corresponding to each specified syllable feature; the client matches a mouth-shape feature of a specified virtual image corresponding to each specified syllable feature; and the client, while playing the broadcast information, changes a displayed expression shape and mouth shape of the specified virtual image based on the expression features and on the mouth-shape feature and play time of each specified syllable feature.
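The server side of claim 4 can be sketched like this. The keyword-based sentiment score stands in for real sentiment analysis, and the emotion-to-expression lookup is an assumed example; all names, word lists, and thresholds are illustrative.

```python
# Toy polarity lexicons; a production server would use a trained
# sentiment model rather than keyword counting.
POSITIVE = {"great", "happy", "good", "love"}
NEGATIVE = {"bad", "sad", "angry", "hate"}

# Assumed mapping from an emotional feature to an expression feature.
EXPRESSION_FEATURES = {"positive": "smile", "negative": "frown",
                       "neutral": "neutral_face"}

def analyze_sentiment(sentence):
    """Return the emotional feature of the sentence information."""
    words = sentence.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

def prepare_broadcast(sentence):
    """Pair the broadcast information with its matched expression
    feature, as the server would before sending both to the client."""
    emotion = analyze_sentiment(sentence)
    return {"broadcast_info": sentence,
            "expression_feature": EXPRESSION_FEATURES[emotion]}

payload = prepare_broadcast("I am happy today")
```

The client then uses `expression_feature` to drive the expression shape while the syllable features drive the mouth shape.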
5. A client, characterized in that the client comprises:
an extraction module, configured to perform feature extraction on broadcast information to obtain each specified syllable feature in the broadcast information and a play time corresponding to each specified syllable feature;
a matching module, configured to match a mouth-shape feature of a specified virtual image corresponding to each specified syllable feature; and
a display module, configured to, while the broadcast information is played, change a displayed mouth shape of the specified virtual image based on the mouth-shape feature and the play time of each specified syllable feature.
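The module split of claim 5 can be mirrored structurally in a short sketch. The three-class decomposition follows the claim (extraction, matching, display); all internals and sample values are illustrative assumptions.

```python
class ExtractionModule:
    def extract(self, broadcast_info):
        # Stub: return (syllable_feature, play_time) pairs; a real
        # module would derive these from the broadcast information.
        return [("a", 0.0), ("o", 0.5)]

class MatchingModule:
    SHAPES = {"a": "open_wide", "o": "rounded"}  # assumed shape table
    def match(self, features):
        return [(t, self.SHAPES.get(s, "neutral")) for s, t in features]

class DisplayModule:
    def __init__(self):
        self.rendered = []
    def play(self, timeline):
        # A real display module would swap the virtual image's mouth
        # shape at each play time; here we just record the schedule.
        for play_time, shape in timeline:
            self.rendered.append((play_time, shape))

class Client:
    def __init__(self):
        self.extraction = ExtractionModule()
        self.matching = MatchingModule()
        self.display = DisplayModule()
    def handle(self, broadcast_info):
        features = self.extraction.extract(broadcast_info)
        self.display.play(self.matching.match(features))
        return self.display.rendered

result = Client().handle("demo broadcast info")
```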
6. The client according to claim 5, characterized in that the broadcast information is a speech message, and the extraction module is configured to:
extract each vowel syllable in the speech message to obtain each vowel syllable in the speech message and the play time corresponding to each vowel syllable during playback of the speech message, wherein the specified syllable features comprise the vowel syllables in the broadcast information.
7. The client according to claim 5, characterized in that the client further comprises a receiving module;
the receiving module is configured to receive the broadcast information and expression features corresponding to the broadcast information; and
the display module is further configured to, while the broadcast information is played, change a displayed expression shape of the specified virtual image based on the expression features.
8. A server, characterized in that the server comprises:
a semantic analysis module, configured to perform sentiment analysis on sentence information to obtain emotional features corresponding to the sentence information;
a generating and matching module, configured to generate broadcast information from the sentence information and to match expression features corresponding to the emotional features; and
an information sending module, configured to send the broadcast information and the corresponding expression features to a client, so that the client performs feature extraction on the broadcast information to obtain each specified syllable feature of the broadcast information and the play time corresponding to each specified syllable feature; the client matches a mouth-shape feature of a specified virtual image corresponding to each specified syllable feature; and the client, while playing the broadcast information, changes a displayed expression shape and mouth shape of the specified virtual image based on the expression features and on the mouth-shape feature and play time of each specified syllable feature.
9. A computer-readable storage medium storing computer code which, when executed, performs the implementation method based on a semantics-driven virtual image according to any one of claims 1-4.
10. A computer device, comprising:
one or more processors; and
a memory for storing one or more computer programs which, when executed by the one or more processors, cause the one or more processors to implement the implementation method based on a semantics-driven virtual image according to any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811292531.7A CN109326151A (en) | 2018-11-01 | 2018-11-01 | Implementation method, client and server based on semantics-driven virtual image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811292531.7A CN109326151A (en) | 2018-11-01 | 2018-11-01 | Implementation method, client and server based on semantics-driven virtual image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109326151A true CN109326151A (en) | 2019-02-12 |
Family
ID=65259993
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811292531.7A Pending CN109326151A (en) | 2018-11-01 | 2018-11-01 | Implementation method, client and server based on semantics-driven virtual image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109326151A (en) |
2018-11-01 CN CN201811292531.7A patent/CN109326151A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002108382A (en) * | 2000-09-27 | 2002-04-10 | Sony Corp | Animation method and device for performing lip sinchronization |
CN101364309A (en) * | 2008-10-09 | 2009-02-11 | 中国科学院计算技术研究所 | Cartoon generating method for mouth shape of source virtual characters |
CN101937570A (en) * | 2009-10-11 | 2011-01-05 | 上海本略信息科技有限公司 | Animation mouth shape automatic matching implementation method based on voice and text recognition |
CN106485774A (en) * | 2016-12-30 | 2017-03-08 | 当家移动绿色互联网技术集团有限公司 | Expression based on voice Real Time Drive person model and the method for attitude |
CN107808191A (en) * | 2017-09-13 | 2018-03-16 | 北京光年无限科技有限公司 | The output intent and system of the multi-modal interaction of visual human |
CN108447474A (en) * | 2018-03-12 | 2018-08-24 | 北京灵伴未来科技有限公司 | A kind of modeling and the control method of virtual portrait voice and Hp-synchronization |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109872724A (en) * | 2019-03-29 | 2019-06-11 | 广州虎牙信息科技有限公司 | Virtual image control method, virtual image control device and electronic equipment |
WO2020200081A1 (en) * | 2019-03-29 | 2020-10-08 | 广州虎牙信息科技有限公司 | Live streaming control method and apparatus, live streaming device, and storage medium |
CN113050794A (en) * | 2021-03-24 | 2021-06-29 | 北京百度网讯科技有限公司 | Slider processing method and device for virtual image |
US11842457B2 (en) | 2021-03-24 | 2023-12-12 | Beijing Baidu Netcom Science Technology Co., Ltd. | Method for processing slider for virtual character, electronic device, and storage medium |
CN113194348A (en) * | 2021-04-22 | 2021-07-30 | 清华珠三角研究院 | Virtual human lecture video generation method, system, device and storage medium |
CN113194348B (en) * | 2021-04-22 | 2022-07-22 | 清华珠三角研究院 | Virtual human lecture video generation method, system, device and storage medium |
CN113506360A (en) * | 2021-07-12 | 2021-10-15 | 北京顺天立安科技有限公司 | Virtual character expression driving method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110033659B (en) | Remote teaching interaction method, server, terminal and system | |
CN109326151A (en) | Implementation method, client and server based on semantics-driven virtual image | |
Tayebinik et al. | Mobile learning to support teaching English as a second language | |
CN103646574B (en) | A kind of classroom interactions's teaching method based on panorama study system platform | |
CN102819969B (en) | Implementation method for multimedia education platform and multimedia education platform system | |
CN111651497B (en) | User tag mining method and device, storage medium and electronic equipment | |
CN110600033A (en) | Learning condition evaluation method and device, storage medium and electronic equipment | |
CN104021326A (en) | Foreign language teaching method and foreign language teaching tool | |
CN106027485A (en) | Rich media display method and system based on voice interaction | |
CN110795917A (en) | Personalized handout generation method and system, electronic equipment and storage medium | |
CN109360458A (en) | Interest assistant teaching method, device and robot | |
Díaz-Cintas | 10 Audiovisual Translation in Mercurial Mediascapes | |
Coupland | Social context, style, and identity in sociolinguistics | |
CN114969282B (en) | Intelligent interaction method based on rich media knowledge graph multi-modal emotion analysis model | |
Xu | The new media environment presents challenges and opportunities for music education in higher education | |
CN103927907B (en) | A kind of foreign language teaching aid | |
CN116881412A (en) | Chinese character multidimensional information matching training method and device, electronic equipment and storage medium | |
CN113038259B (en) | Method and system for feeding back class quality of Internet education | |
KR20200039907A (en) | Smart language learning services using scripts and their service methods | |
CN107844552A (en) | One kind sketches the contours frame knowledge base content providing and device | |
CN114765033A (en) | Information processing method and device based on live broadcast room | |
US20160372154A1 (en) | Substitution method and device for replacing a part of a video sequence | |
CN112541493A (en) | Topic explaining method and device and electronic equipment | |
Kit et al. | Perception of university students towards the use of artificial intelligence-generated voice in explainer videos | |
Levy et al. | Quality requirements for multimedia interactive informative systems |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190212 |