CN109522427B - Intelligent robot-oriented story data processing method and device - Google Patents


Info

Publication number
CN109522427B
CN109522427B CN201811154962.7A
Authority
CN
China
Prior art keywords
text
information
played
story
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811154962.7A
Other languages
Chinese (zh)
Other versions
CN109522427A (en)
Inventor
贾志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Guangnian Infinite Technology Co ltd
Original Assignee
Beijing Guangnian Infinite Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Guangnian Infinite Technology Co ltd
Priority to CN201811154962.7A
Publication of CN109522427A
Application granted
Publication of CN109522427B

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00: Electrically-operated educational appliances
    • G09B5/06: Electrically-operated educational appliances with both visual and audible presentation of the material to be studied

Abstract

A story data processing method and device for an intelligent robot are provided, wherein the method comprises the following steps: step one, acquiring the text information currently to be played, wherein the text information is a text fragment of story content that has undergone natural language processing, and the text information carries text and text attributes; and step two, calling up and playing the corresponding music audio according to the text attributes. During playback of the story text, the method can play different background music for different playing scenes, so that the background music sets the scene for, carries, and connects the story text being played; the user thus becomes more immersed in the story, and the user experience is improved.

Description

Intelligent robot-oriented story data processing method and device
Technical Field
The invention relates to the technical field of robots, in particular to a story data processing method and device for an intelligent robot.
Background
With the continuous development of science and technology, and the introduction of information technology, computer technology, and artificial intelligence technology, robotics research has gradually moved beyond the industrial field and expanded into medical care, health care, the home, entertainment, the service industry, and other fields. People's expectations of robots have likewise risen from simple, repetitive mechanical actions to intelligent robots capable of anthropomorphic question answering, autonomy, and interaction with other robots; human-computer interaction has accordingly become an important factor in the development of intelligent robots. Therefore, improving the interaction capability of intelligent robots and enhancing their human-likeness and intelligence are pressing problems to be solved.
Disclosure of Invention
In order to solve the above problems, the present invention provides a story data processing method for an intelligent robot, the method including:
step one, acquiring the text information currently to be played, wherein the text information is a text fragment of story content that has undergone natural language processing, and the text information carries text and text attributes;
and step two, calling and playing corresponding music audio according to the text attribute.
According to an embodiment of the present invention, the text attribute includes position information of a sentence in which the text fragment is located.
According to one embodiment of the invention, if the position information of the text information to be played currently is a beginning position, a first music audio is called and played;
if the position information of the text information to be played currently is the middle position, calling and playing a second music audio;
and if the position information of the text information to be played currently is the ending position, calling and playing a third music audio after the text information to be played currently is played completely.
According to an embodiment of the present invention, when the position information of the text information currently to be played is the start position or the end position, the corresponding music audio is played in a fade-in and fade-out manner.
According to an embodiment of the present invention, if the position information of the text information to be currently played is an intermediate position, it is detected whether the text information to be currently played is dialog text information, and if so, the playing of the second music audio is paused, and the playing of the second music audio is continued after the dialog is ended.
According to an embodiment of the present invention, the method further determines whether the current user is a child user according to the obtained user information, wherein if the current user is a child user, in the second step, a corresponding music audio is called from the child database according to the text attribute and played.
The invention also provides a program product having stored thereon program code executable to perform the method steps of any of the above.
The invention also provides a story data processing device facing the intelligent robot, which comprises:
the system comprises a text information acquisition module, a text information playing module and a text playing module, wherein the text information acquisition module is used for acquiring text information to be played currently, the text information is a text fragment of story content processed by natural language, and the text information carries text and text attributes;
and the playing control module is used for calling and playing corresponding music audio according to the text attribute.
According to an embodiment of the present invention, the text attribute includes position information of a sentence in which the text fragment is located.
The invention also provides a child-dedicated device, which executes the above program code in cooperation with a cloud server and plays the music audio called up thereby.
The invention also provides a multimodal interaction system, comprising:
the cloud server is used for calling corresponding music audio by executing the intelligent robot-oriented story data processing method, and generating a corresponding music playing instruction according to the called music audio;
the children's special device is used for playing the corresponding music audio according to the received music playing instruction.
The story data processing method and device for an intelligent robot provided by the invention can play different music audio for different playing scenes while the story text is being played, so that the music sets the scene for, carries, and connects the played story text; the user thus becomes more immersed in the story, and the user experience is improved.
Meanwhile, for different played story texts, the method and device can automatically call up different music audio according to actual needs (for example, calling up more ominous music for horror story texts and more soothing music for prose story texts), which improves the expressiveness of the story texts.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following briefly introduces the drawings required in the description of the embodiments or the prior art:
fig. 1 is a schematic flow chart of an implementation of a story data processing method for an intelligent robot according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating the implementation of music audio calling based on the position information of the text message according to one embodiment of the present invention;
FIG. 3 is a flow diagram illustrating an implementation of playing a second music audio according to one embodiment of the invention;
fig. 4 is a schematic structural diagram of a story data processing apparatus oriented to an intelligent robot according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a multimodal interaction system in accordance with one embodiment of the invention.
Detailed Description
The following detailed description of the embodiments of the present invention will be provided with reference to the drawings and examples, so that how to apply the technical means to solve the technical problems and achieve the technical effects can be fully understood and implemented. It should be noted that, as long as there is no conflict, the embodiments and the features of the embodiments of the present invention may be combined with each other, and the technical solutions formed are within the scope of the present invention.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some of these specific details or with other methods described herein.
Additionally, the steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions and, although a logical order is illustrated in the flow charts, in some cases, the steps illustrated or described may be performed in an order different than here.
With the development of artificial intelligence technology, it has come to be applied in many fields. Taking daily life as an example, story-telling robots serving children have been developed; they can greatly reduce the burden on parents and cultivate children's reading habits by reading stories to them. Existing story-telling robots generally either store a fixed set of story recordings and play the corresponding story according to the user's selection, or convert story text into speech and output it to the user in real time. Such robots can only recite stories to users: the speech they output corresponds directly to the story text and nothing more, which does not give users a particularly good experience, so users easily grow tired of them over time.
To address these problems in the prior art, the present invention provides a novel story data processing method and system for an intelligent robot. The inventor found that music audio can play a valuable role in creating atmosphere, carrying story plots, and connecting story chapters throughout the playing of a story; if music audio can be introduced into the story-telling process of an intelligent robot, its user experience can be greatly improved.
Fig. 1 shows a schematic implementation flow diagram of the story data processing method for an intelligent robot provided by the embodiment.
As shown in fig. 1, in the story data processing method for an intelligent robot according to this embodiment, first, in step S101, text information to be played currently is acquired. The text information to be played at present is a text fragment of the story content after natural language processing (namely NLP), and the text information carries a text corresponding to the story content and a corresponding text attribute.
After obtaining the text information currently to be played, the method calls up and plays the corresponding music audio according to the text information in step S102. In this embodiment, the text attribute of the text information to be played preferably includes position information of the sentence in which the text segment is located. Specifically, in this embodiment, this position information preferably distinguishes a start position, a middle position, and an end position.
Specifically, as shown in fig. 2, if the position information of the text information to be currently played is the beginning position (i.e. the story text has not been formally started to be played currently), then the method calls the first music audio and plays it in step S201; if the position information of the text information to be played currently is an intermediate position (i.e. currently during playing of the story text), then the method will call a second music audio and play in step S202; if the position information of the text information to be currently played is the ending position (i.e. the playing of the story text is about to end), then the method calls up the third music audio and plays it in step S203.
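The branch logic of steps S201-S203 reduces to a lookup from position to music audio. The following Python sketch is illustrative only; the `Position` enum and the audio identifiers are assumptions, not names used in the patent:

```python
from enum import Enum

class Position(Enum):
    BEGINNING = "beginning"  # story text has not formally started playing
    MIDDLE = "middle"        # story text is currently playing
    END = "end"              # story text playback is about to end

# Hypothetical audio identifiers standing in for the first, second,
# and third music audio of steps S201-S203.
MUSIC_BY_POSITION = {
    Position.BEGINNING: "first_music_audio",
    Position.MIDDLE: "second_music_audio",
    Position.END: "third_music_audio",
}

def select_music(position: Position) -> str:
    """Return the music audio to call up for the given text position."""
    return MUSIC_BY_POSITION[position]
```

A real implementation would map these identifiers to actual audio files or streams chosen for the particular story.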
In this embodiment, the first music audio and the third music audio are preferably different music. Of course, in other embodiments of the present invention, the first music audio and the third music audio may be the same music according to actual needs, and the present invention is not limited thereto.
Since the speech of the story text is usually not being played at the start and end positions, in order to prevent the first and third music audio from sounding too abrupt, in this embodiment the method preferably plays the first music audio and the third music audio in a fade-in and fade-out manner. That is, while playing the first or third music audio, the method gradually raises the volume to a first preset volume threshold at the start of playback, continues playing at that threshold, and then gradually lowers the volume from the first preset volume threshold to a second preset volume threshold as playback ends.
It should be noted that, in different embodiments of the present invention, the specific values of the first and second preset volume thresholds may be configured to different reasonable values according to actual needs, where the first preset volume threshold is greater than the second preset volume threshold, and audio whose volume is below the second preset volume threshold can no longer be clearly perceived by the human ear. For example, in one embodiment of the present invention, the first preset volume threshold may be configured as 89 dB and the second preset volume threshold as 70 dB.
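The fade-in and fade-out behavior described above can be sketched as a simple volume envelope. The ramp duration below is an assumed parameter; the 89 dB / 70 dB values are only the example thresholds given in this embodiment:

```python
def fade_volume(t: float, total: float, ramp: float = 2.0,
                hi: float = 89.0, lo: float = 70.0) -> float:
    """Return the target volume (dB) at time t of a clip lasting `total` seconds.

    Fade in over the first `ramp` seconds up to the first preset volume
    threshold `hi`, hold, then fade out to the second threshold `lo`.
    """
    if t < ramp:                      # fade in: ramp up toward the first threshold
        return lo + (hi - lo) * (t / ramp)
    if t > total - ramp:              # fade out: ramp down toward the second threshold
        return lo + (hi - lo) * ((total - t) / ramp)
    return hi                         # steady playback at the first threshold
```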
In this embodiment, when obtaining the text information to be currently played, the method preferably reads the text to be currently played and the text in a subsequent first preset duration interval. Subsequently, the method judges whether the text to be played currently is empty, and simultaneously judges whether the text in a subsequent first preset duration interval is empty.
If the text to be played currently is empty and the text in the subsequent first preset duration interval is not empty, it means that the corresponding text is not required to be played currently, but the corresponding text is to be played in the first preset duration thereafter, so that the method can determine that the position information of the sentence in which the text segment to be played currently is located is the beginning position of the whole story.
If the text to be played currently is not empty, the corresponding text is indicated to be played currently, and at this time, the method can determine that the position information of the sentence where the text clip to be played currently is located is the middle position of the whole story.
If the text to be played currently is empty and the text in the subsequent first preset duration interval is also empty, it means that the corresponding text is not required to be played currently, and the corresponding text is not required to be played in the first preset duration thereafter, so that the method can determine that the position information of the sentence in which the text segment to be played currently is the end position of the whole story.
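The three cases above can be summarized in one small classifier. This is a hedged sketch of the described empty-text checks, not the patent's implementation:

```python
from typing import Optional

def classify_position(current_text: Optional[str],
                      upcoming_text: Optional[str]) -> str:
    """Classify the story position from the text currently due for playback
    and the text within the following first preset duration interval."""
    current_empty = not current_text
    upcoming_empty = not upcoming_text
    if current_empty and not upcoming_empty:
        return "beginning"   # nothing playing yet, but text is about to start
    if not current_empty:
        return "middle"      # text is currently being played
    return "end"             # nothing now and nothing upcoming
```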
It should be noted that, in other embodiments of the present invention, the method may also determine, in other reasonable manners, whether the position information of the text information to be currently played is a start position, an intermediate position, or an end position, and call up different music audios and play the music audios according to different position information, which is not limited in this respect.
To prevent the second music audio from being mixed with the speech of the story text while that speech is playing (which would keep the user from hearing the story clearly), as shown in fig. 3, in this embodiment the method preferably determines in step S301 whether the text information currently to be played is dialog text information. If it is, the method pauses the second music audio in step S302 and plays it again after the dialog has ended.
It should be noted that, in different embodiments of the present invention, the replaying the second music audio after the session is ended may be to continue playing the music before the pause, or may replay the entire second music audio, which is not limited by the present invention.
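A minimal sketch of the dialog handling of steps S301-S302 follows; the `MusicPlayer` class and the quote-based `is_dialog` heuristic are illustrative assumptions (the patent does not specify how dialog text is detected):

```python
class MusicPlayer:
    """Toy stand-in for the background-music playback channel."""
    def __init__(self):
        self.playing = False
        self.events = []

    def play(self):
        self.playing = True
        self.events.append("play")

    def pause(self):
        self.playing = False
        self.events.append("pause")

def is_dialog(text: str) -> bool:
    # Naive assumed heuristic: treat quoted text as dialog.
    return text.startswith('"') or text.startswith('\u201c')

def handle_fragment(text: str, player: MusicPlayer) -> None:
    """Pause the second music audio during dialog; resume it afterwards."""
    if is_dialog(text):
        if player.playing:
            player.pause()
    elif not player.playing:
        player.play()
```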
Meanwhile, it should be noted that in other embodiments of the present invention, the method may further determine whether the current user is a child user according to the acquired user information. If the current user is a child user, the method calls up the corresponding audio from a child database according to the acquired text information to be played and plays it.
Specifically, according to actual needs, the method can judge whether the current user is a child user through the acquired user picture, judge whether the current user is a child user through the acquired user audio input by the current user, or judge whether the current user is a child user through other reasonable modes.
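The child-user branch amounts to a database dispatch. The dictionary-backed databases below are an assumption for illustration; the patent only requires that a child database be consulted when a child user is detected:

```python
def retrieve_music(text_attribute: str, is_child_user: bool,
                   child_db: dict, public_db: dict) -> str:
    """Call up music audio, preferring the child database for child users."""
    db = child_db if is_child_user else public_db
    return db[text_attribute]
```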
The story data processing method provided by the embodiment is realized in a computer system. The computer system may be provided, for example, in a control core processor of the robot. For example, the methods described herein may be implemented as software executable with control logic that is executed by a CPU in a robotic operating system. The functionality described herein may be implemented as a set of program instructions stored in a non-transitory tangible computer readable medium.
When implemented in this manner, the computer program comprises a set of instructions which, when executed by a computer, cause the computer to perform a method capable of carrying out the functions described above. Programmable logic may be temporarily or permanently installed in a non-transitory tangible computer-readable medium, such as a read-only memory chip, computer memory, disk, or other storage medium. In addition to being implemented in software, the logic described herein may be embodied using discrete components, integrated circuits, programmable logic used in conjunction with a programmable logic device such as a Field Programmable Gate Array (FPGA) or microprocessor, or any other device including any combination thereof. All such embodiments are intended to fall within the scope of the present invention.
As can be seen from the above description, the story data processing method for an intelligent robot provided by the invention can play different music audio for different playing scenes while the story text is being played, so that the music sets the scene for, carries, and connects the played story text; the user thus becomes more immersed in the story, and the user experience is improved.
Meanwhile, for different played story texts, the method can automatically call up different music audio according to actual needs (for example, calling up more ominous music audio for horror story texts and more soothing music audio for prose story texts), which improves the expressiveness of the story texts.
The present invention also provides a program product storing program code that, when executed by an operating system, is capable of implementing the intelligent robot-oriented story data processing method as described above.
Meanwhile, the invention also provides a story data processing device facing the intelligent robot. Fig. 4 shows a schematic structural diagram of the story data processing apparatus in this embodiment.
As shown in fig. 4, the story data processing apparatus facing an intelligent robot provided in this embodiment preferably includes: a text information acquisition module 401 and a play control module 402. The text information obtaining module 401 can obtain the text information to be currently played. The text information to be played at present is a text fragment of the story content after natural language processing (i.e. NLP), and the text information carries a text corresponding to the story content and a corresponding text attribute.
After obtaining the text information currently to be played, the text information acquisition module 401 transmits it to the play control module 402 connected to it, so that the play control module 402 calls up and plays the corresponding music audio according to the text information. In this embodiment, the text attribute of the text information to be played preferably includes position information of the sentence in which the text segment is located. Specifically, in this embodiment, this position information preferably distinguishes a start position, a middle position, and an end position.
The position information of the currently played text information can represent the position of the currently played text information in the entire story text (for example, the start position, the middle position, the end position, and the like of the entire story text), and according to the difference of the position information of the currently played text information, the play control module 402 may call different music audios and play the music audios.
It should be noted that in this embodiment, the principle and the process of the text information obtaining module 401 and the playing control module 402 for implementing their respective functions are similar to those disclosed in the foregoing steps S101 and S102, and therefore detailed descriptions of the text information obtaining module 401 and the playing control module 402 are not repeated herein.
According to actual needs, optionally, the story data processing apparatus for the intelligent robot may further include a user attribute determination module 403. The user attribute determining module 403 may further determine whether the current user is a child user according to the obtained user information. Wherein, if the current user is a child user, the user attribute determination module 403 generates a corresponding child user indication signal.
Specifically, according to actual needs, the user attribute determining module 403 may determine whether the current user is a child user by using the obtained user picture, may determine whether the current user is a child user by using the obtained user audio input by the current user, or may determine whether the current user is a child user by using other reasonable manners.
The play control module 402 is connected to the user attribute determination module 403 and, based on the child user indication signal, calls up and plays the corresponding audio from the child database according to the acquired text information to be played. The child database may be provided in the intelligent robot's own data memory, or stored in the cloud server. In this embodiment, optionally, the child database may further be made to correspond to the user ID of a child user; that is, different child databases correspond to different child users.
In addition, as shown in fig. 5, the present invention further provides a multi-modal interaction system, which includes a child-specific device 501 and a cloud server 502. The child-dedicated device 501 can execute the program code that can implement the aforementioned intelligent robot-oriented story data processing method in cooperation with the cloud server 502, and then play the tuned music audio.
Specifically, in this embodiment, the cloud server 502 is configured to call corresponding music audio by executing the foregoing intelligent robot-oriented story data processing method, generate a corresponding music playing instruction according to the called music audio, and send the music playing instruction to the child-dedicated device 501. The music playing instruction generated by the cloud server 502 can represent music audio to be played.
After receiving the music playing instruction, the child-dedicated device 501 may play the corresponding music audio according to the instruction. That is, if the position information of the text information currently to be played is the start position, it plays the first music audio; if the position information is the middle position, it plays the second music audio; and if the position information is the end position, it plays the third music audio after the text information currently to be played has finished playing.
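As an illustration, the music playing instruction sent from the cloud server 502 to the child-dedicated device 501 might be a small structured message; the JSON field names below are assumptions, not part of the patent:

```python
import json

def build_play_instruction(audio_id: str, position: str, fade: bool) -> str:
    """Serialize a hypothetical music playing instruction for the device."""
    return json.dumps({
        "type": "play_music",
        "audio_id": audio_id,      # which music audio to play
        "position": position,      # beginning / middle / end
        "fade_in_out": fade,       # fade for start and end positions
    })
```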
Therefore, the multi-modal interaction system provided by the invention can play the corresponding music audio while playing a children's story, so that the story better attracts the attention of the child user 503, improving the user experience and user stickiness of the device.
Meanwhile, the multi-modal interaction system rapidly calls out the required music audio by utilizing the powerful data processing capacity of the cloud server 502, so that the requirement on the data processing capacity of the special equipment 501 for children can be reduced, the interaction efficiency can be improved, and the size and the cost of the special equipment 501 for children can be effectively reduced.
In this embodiment, in addition to completing the music audio retrieval, the cloud server 502 can provide a child database and/or a public database for the child-dedicated device 501 according to actual needs.
It is noted that in various embodiments of the present invention, optionally, for the multi-modal interactive system, part of the data processing function may also be implemented by the child-specific device 501, and the present invention is not limited thereto.
In different embodiments of the present invention, the child-dedicated device 501 may be an intelligent device that includes input/output modules supporting perception and control, such as a tablet computer, robot, mobile phone, story machine, or picture-book reading robot, and that can tell stories to children, answer questions posed by children in real time, and offer rich functionality.
It is to be understood that the disclosed embodiments of the invention are not limited to the particular structures or process steps disclosed herein, but extend to equivalents thereof as would be understood by those skilled in the relevant art. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
While the above examples are illustrative of the principles of the present invention in one or more applications, it will be apparent to those of ordinary skill in the art that various changes in form, usage and details of implementation can be made without departing from the principles and concepts of the invention. Accordingly, the invention is defined by the appended claims.

Claims (9)

1. A story data processing method facing an intelligent robot is characterized by comprising the following steps:
step one, acquiring the text information currently to be played, wherein the text information is a text fragment of story content that has undergone natural language processing, and the text information carries text and text attributes; the text attributes at least comprise position information of the text fragment in the story data, and specifically, the position information of the current text fragment relative to the story data is determined by reading the text currently to be played and the text in a subsequent preset duration interval and judging according to the reading result;
the process of judging the position information of the current text clip relative to the story data according to the reading result comprises the following steps: judging whether the text to be played currently and the text in a subsequent first preset duration interval are empty or not, and determining the position information of the current text clip relative to story data according to the comprehensive judgment result;
if the text to be played currently is empty and the text in the subsequent first preset time interval is not empty, judging that the position information of the text clip to be played currently is the beginning position of the whole story;
if the text to be played currently is not empty, judging that the position information of the text clip to be played currently is the middle position of the whole story;
if the text to be played currently is empty and the text in the subsequent first preset duration interval is also empty, judging that the position information of the text clip to be played currently is the tail position of the whole story;
step two, calling and playing corresponding music audio according to the text attribute;
if the text attribute indicates that the position information of the text information to be played currently is a start position, calling and playing a first music audio;
if the text attribute indicates that the position information of the text information to be played currently is the middle position, calling and playing a second music audio;
if the text attribute indicates that the position information of the text information to be played currently is an end position, calling and playing a third music audio after the text information to be played currently is played;
and if the position information of the text information to be played currently is the middle position, detecting whether the text information to be played currently is the dialogue text information, if so, pausing the playing of the second music audio, and continuing to play the second music audio after the dialogue is finished.
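The position-determination rule and the position-to-audio mapping recited in claim 1 can be sketched as follows. This is an illustrative reading of the claim only; all identifiers (`classify_position`, `pick_music`, the audio labels) are hypothetical and not part of the patent.

```python
# Sketch of claim 1's position-determination and music-selection logic.
# All names here are hypothetical, chosen for illustration.

def classify_position(current_text, upcoming_text):
    """Classify a text fragment's position within the story data.

    current_text:  the text to be played currently ('' if empty)
    upcoming_text: the text within the subsequent first preset
                   duration interval ('' if empty)
    """
    if current_text == "" and upcoming_text != "":
        return "begin"    # story has not started yet
    if current_text != "":
        return "middle"   # somewhere inside the story
    return "end"          # both current and upcoming text are empty

def pick_music(position):
    """Map a position label to the music audio called in step two."""
    return {"begin": "first_music_audio",
            "middle": "second_music_audio",
            "end": "third_music_audio"}[position]

print(pick_music(classify_position("", "Once upon a time...")))  # first_music_audio
print(pick_music(classify_position("the wolf appeared", "")))    # second_music_audio
print(pick_music(classify_position("", "")))                     # third_music_audio
```

The three `if` branches correspond one-to-one to the three judgment clauses of claim 1, and the mapping to first/second/third music audio mirrors step two.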
2. The method of claim 1, wherein the text attributes further comprise position information of the sentence in which the text fragment is located.
3. The method according to claim 1, wherein when the position information of the text information to be played currently is the beginning position or the end position, the corresponding music audio is played with a fade-in or fade-out effect.
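The fade-in/fade-out playback of claim 3 amounts to applying a gain envelope to the music audio at the beginning and end positions. A minimal sketch, assuming a linear ramp (the function name and parameters are assumptions, not from the patent):

```python
# Linear fade envelope as one possible realization of claim 3's
# fade-in / fade-out playback. Names and the linear shape are assumptions.

def fade_gains(n_samples, fade_len, fade_in=True):
    """Return per-sample gain factors for a linear fade."""
    gains = []
    for i in range(n_samples):
        if fade_in:
            g = min(1.0, i / fade_len)                    # ramp 0 -> 1
        else:
            g = min(1.0, (n_samples - 1 - i) / fade_len)  # ramp 1 -> 0
        gains.append(g)
    return gains

print(fade_gains(6, 4, fade_in=True))   # [0.0, 0.25, 0.5, 0.75, 1.0, 1.0]
print(fade_gains(6, 4, fade_in=False))  # [1.0, 1.0, 0.75, 0.5, 0.25, 0.0]
```

In practice the first music audio would be multiplied by the fade-in envelope and the third music audio by the fade-out envelope before playback.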
4. The method according to claim 2 or 3, wherein the method further comprises determining whether the current user is a child user according to the acquired user information, and, if the current user is a child user, calling the corresponding music audio from a children's database in step two and playing it.
5. A storage medium having stored thereon program code which, when executed, performs the method steps of any one of claims 1-4.
6. A story data processing apparatus for an intelligent robot, the apparatus comprising:
a text information acquisition module, configured to acquire the text information to be played currently, wherein the text information is a text fragment of story content that has been processed by natural language processing, and the text information carries text and text attributes; the text attributes at least include the position information of the text fragment within the story data; specifically, the text information acquisition module is configured to read the text to be played currently and the text within a subsequent preset duration interval, and to determine the position information of the current text fragment relative to the story data according to the reading result;
the text information acquisition module is configured to perform the following operations to determine the position information of the current text fragment relative to the story data:
judging whether the text to be played currently and the text within a subsequent first preset duration interval are empty, and determining the position information of the current text fragment relative to the story data according to the combined judgment result;
if the text to be played currently is empty and the text within the subsequent first preset duration interval is not empty, determining that the position information of the text fragment to be played currently is the beginning position of the whole story;
if the text to be played currently is not empty, determining that the position information of the text fragment to be played currently is a middle position of the whole story;
if the text to be played currently is empty and the text within the subsequent first preset duration interval is also empty, determining that the position information of the text fragment to be played currently is the end position of the whole story;
a play control module, configured to call and play the corresponding music audio according to the text attributes;
the play control module is configured to: if the text attributes indicate that the position information of the text information to be played currently is the beginning position, call and play a first music audio;
if the text attributes indicate that the position information of the text information to be played currently is the middle position, call and play a second music audio;
if the text attributes indicate that the position information of the text information to be played currently is the end position, call and play a third music audio after the text information to be played currently has finished playing;
the play control module is further configured to: if the position information of the text information to be played currently is the middle position, detect whether the text information to be played currently is dialogue text information; if so, pause the playing of the second music audio, and continue playing the second music audio after the dialogue is finished.
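The dialogue-handling behaviour of the play control module in claim 6 (pause the background music while dialogue text plays, resume afterwards) can be sketched as follows. The class, method, and audio names are hypothetical, not taken from the patent.

```python
# Sketch of claim 6's play control module for the middle position:
# background music is paused during dialogue text and resumed after.
# All identifiers are hypothetical.

class PlayControlModule:
    def __init__(self):
        self.log = []  # stands in for actual audio-device commands

    def play_fragment(self, text, is_dialogue):
        if is_dialogue:
            self.log.append("pause second_music_audio")
            self.log.append(f"speak dialogue: {text}")
            self.log.append("resume second_music_audio")
        else:
            self.log.append(f"speak narration: {text}")

ctl = PlayControlModule()
ctl.play_fragment("The sun rose over the hills.", is_dialogue=False)
ctl.play_fragment("'Who goes there?' asked the guard.", is_dialogue=True)
for line in ctl.log:
    print(line)
```

The log records the command sequence: narration plays over the second music audio, while dialogue is bracketed by a pause/resume pair.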
7. The apparatus of claim 6, wherein the text attributes further comprise position information of the sentence in which the text fragment is located.
8. A child-specific device that plays the called music audio by executing the program code of claim 5 in cooperation with a cloud server.
9. A multimodal interaction system, the system comprising:
a cloud server, configured to call the corresponding music audio by executing the intelligent-robot-oriented story data processing method of any one of claims 1-4, and to generate a corresponding music playing instruction according to the called music audio;
and the child-specific device of claim 8, configured to play the corresponding music audio according to the received music playing instruction.
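In the system of claim 9, the cloud server selects the audio and the device only acts on the instruction it receives. One plausible shape for that music playing instruction, sketched as a small JSON message; the field names and functions are assumptions for illustration, not defined by the patent:

```python
# Hypothetical cloud-server / device exchange for claim 9's
# music playing instruction. JSON field names are assumptions.

import json

def build_play_instruction(position, audio_id, fade):
    """Cloud-server side: wrap the called music audio in an instruction."""
    return json.dumps({"cmd": "play_music",
                       "position": position,   # begin / middle / end
                       "audio_id": audio_id,
                       "fade": fade})          # fade in/out at begin/end

def handle_instruction(message):
    """Device side: decode the instruction and play accordingly."""
    msg = json.loads(message)
    assert msg["cmd"] == "play_music"
    return f"playing {msg['audio_id']} (fade={msg['fade']})"

inst = build_play_instruction("begin", "first_music_audio", fade=True)
print(handle_instruction(inst))  # playing first_music_audio (fade=True)
```

Keeping audio selection on the server and playback on the device matches the division of labour recited in the claim.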
CN201811154962.7A 2018-09-30 2018-09-30 Intelligent robot-oriented story data processing method and device Active CN109522427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811154962.7A CN109522427B (en) 2018-09-30 2018-09-30 Intelligent robot-oriented story data processing method and device


Publications (2)

Publication Number Publication Date
CN109522427A CN109522427A (en) 2019-03-26
CN109522427B true CN109522427B (en) 2021-12-10

Family

ID=65772185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811154962.7A Active CN109522427B (en) 2018-09-30 2018-09-30 Intelligent robot-oriented story data processing method and device

Country Status (1)

Country Link
CN (1) CN109522427B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110087270B (en) * 2019-05-15 2021-09-17 深圳市沃特沃德信息有限公司 Reading method and device, storage medium and computer equipment
CN111652344A (en) * 2020-05-29 2020-09-11 百度在线网络技术(北京)有限公司 Method and apparatus for presenting information

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101174448A (en) * 2007-12-10 2008-05-07 北京炬力北方微电子有限公司 Talking picture playing method and device, method for generating index file of talking picture
CN101640058A (en) * 2009-07-24 2010-02-03 王祐凡 Multimedia synchronization method, player and multimedia data making device
WO2012167276A1 (en) * 2011-06-03 2012-12-06 Apple Inc. Automatically creating a mapping between text data and audio data
CN103403797A (en) * 2011-08-01 2013-11-20 松下电器产业株式会社 Speech synthesis device and speech synthesis method
CN104391980A (en) * 2014-12-08 2015-03-04 百度在线网络技术(北京)有限公司 Song generating method and device
CN105335455A (en) * 2015-08-28 2016-02-17 广东小天才科技有限公司 Text reading method and apparatus
CN106557298A (en) * 2016-11-08 2017-04-05 北京光年无限科技有限公司 Background dubbing output method and device for an intelligent robot

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103988193B (en) * 2011-03-23 2018-01-16 奥德伯公司 Manage the broadcasting of synchronizing content
TWI488174B (en) * 2011-06-03 2015-06-11 Apple Inc Automatically creating a mapping between text data and audio data
CN106033678A (en) * 2015-03-18 2016-10-19 珠海金山办公软件有限公司 Playing content display method and apparatus thereof
CN105096932A (en) * 2015-07-14 2015-11-25 百度在线网络技术(北京)有限公司 Voice synthesis method and apparatus of talking book
CN105609096A (en) * 2015-12-30 2016-05-25 小米科技有限责任公司 Text data output method and device
CN107103795A (en) * 2017-06-28 2017-08-29 广州播比网络科技有限公司 A kind of interactive player method of Story machine


Also Published As

Publication number Publication date
CN109522427A (en) 2019-03-26

Similar Documents

Publication Publication Date Title
US11544274B2 (en) Context-based digital assistant
WO2017012511A1 (en) Voice control method and device, and projector apparatus
JP2019117623A (en) Voice dialogue method, apparatus, device and storage medium
CN109360567B (en) Customizable wake-up method and apparatus
CN107256707B (en) Voice recognition method, system and terminal equipment
CN108664472B (en) Natural language processing method, device and equipment
CN105446146A (en) Intelligent terminal control method based on semantic analysis, system and intelligent terminal
US20190333514A1 (en) Method and apparatus for dialoguing based on a mood of a user
US9495450B2 (en) Audio animation methods and apparatus utilizing a probability criterion for frame transitions
CN105702253A (en) Voice awakening method and device
CN109522427B (en) Intelligent robot-oriented story data processing method and device
CN107016070B (en) Man-machine conversation method and device for intelligent robot
CN110267113B (en) Video file processing method, system, medium, and electronic device
CN109036374A (en) Data processing method and device
CN106792048B (en) Method and device for recognizing voice command of smart television user
CN109460548B (en) Intelligent robot-oriented story data processing method and system
CN108986810A (en) A kind of method and device for realizing interactive voice by earphone
CN109686372B (en) Resource playing control method and device
CN108492826B (en) Audio processing method and device, intelligent equipment and medium
CN109195016B (en) Intelligent terminal equipment-oriented voice interaction method and terminal system for video barrage and intelligent terminal equipment
CN111063375A (en) Music playing control system, method, equipment and medium
CN114244821A (en) Data processing method, device, equipment, electronic equipment and storage medium
CN111063356A (en) Electronic equipment response method and system, sound box and computer readable storage medium
CN116825105A (en) Speech recognition method based on artificial intelligence
CN109271480B (en) Voice question searching method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant