CN106960051B - Audio playing method and device based on electronic book and terminal equipment - Google Patents


Info

Publication number
CN106960051B
CN106960051B (Application No. CN201710209671.2A)
Authority
CN
China
Prior art keywords
electronic book
data
tag data
audio
reading position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710209671.2A
Other languages
Chinese (zh)
Other versions
CN106960051A (en)
Inventor
陈继良
韩飞
雷雪琪
陈尧一
王晨
袁艳
卢嘉兴
于聪聪
Current Assignee
Zhangyue Technology Co Ltd
Original Assignee
Zhangyue Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhangyue Technology Co Ltd filed Critical Zhangyue Technology Co Ltd
Priority to CN201710209671.2A priority Critical patent/CN106960051B/en
Publication of CN106960051A publication Critical patent/CN106960051A/en
Application granted granted Critical
Publication of CN106960051B publication Critical patent/CN106960051B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 - Information retrieval of audio data
    • G06F16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/686 - Retrieval using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 - Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 - Digital recording or reproducing
    • G11B20/10527 - Audio or video recording; Data buffering arrangements
    • G11B2020/10537 - Audio or video recording
    • G11B2020/10546 - Audio or video recording specifically adapted for audio data

Abstract

The embodiment of the invention provides an electronic book-based audio playing method, apparatus, and terminal device, wherein the method comprises the following steps: determining a current reading position of the electronic book, and acquiring tag data corresponding to the current reading position; acquiring audio data corresponding to the tag data; and playing the audio data. According to the embodiment of the invention, audio can be played while the user reads the electronic book, which enriches the reading mode of the electronic book. In addition, the played audio corresponds to the tag data in the electronic book, so that the electronic book is associated with the audio, and the interest of reading the electronic book is improved by playing the audio.

Description

Audio playing method and device based on electronic book and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of data processing, and in particular to an electronic book-based audio playing method, apparatus, and terminal device.
Background
With the development of network technology, people can obtain various electronic resources through different devices and channels, and these electronic resources greatly enrich people's work and life.
For example, reading electronic books through a corresponding APP (application) has become a trend. At present, however, reading an electronic book only displays its text content; the reading mode is fixed and monotonous, and reading interest is poor.
Disclosure of Invention
The embodiment of the invention provides an electronic book-based audio playing method, apparatus, and terminal device, aiming to solve the problem that the conventional electronic book reading mode is fixed and monotonous and reading interest is poor.
According to an aspect of an embodiment of the present invention, there is provided an electronic book-based audio playing method, including: determining a current reading position of the electronic book, and acquiring tag data corresponding to the current reading position; acquiring audio data corresponding to the tag data; and playing the audio data.
According to another aspect of the embodiments of the present invention, there is also provided an electronic book-based audio playing apparatus, including: a tag data acquisition module, configured to determine the current reading position of the electronic book and acquire tag data corresponding to the current reading position; a first audio data acquisition module, configured to acquire audio data corresponding to the tag data; and an audio data playing module, configured to play the audio data.
According to another aspect of the embodiments of the present invention, there is also provided a terminal device, including: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus; the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the electronic book-based audio playing method.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium storing: executable instructions for determining a current reading position of the electronic book and acquiring tag data corresponding to the current reading position; executable instructions for acquiring audio data corresponding to the tag data; and executable instructions for playing the audio data.
According to the technical scheme provided by the embodiment of the invention, the current reading position of the electronic book is determined, the tag data is preset in the electronic book, the tag data corresponding to the current reading position is obtained, the audio data corresponding to the tag data is obtained, and the audio data is played. According to the embodiment of the invention, the audio can be played while the user reads the electronic book, so that the reading mode of the electronic book is enriched. In addition, the played audio corresponds to the tag data in the electronic book, so that the electronic book is associated with the audio, and the interest of reading the electronic book is improved by playing the audio.
Drawings
Fig. 1 is a flowchart illustrating the steps of an electronic book-based audio playing method according to a first embodiment of the present invention;
Fig. 2 is a flowchart illustrating the steps of an electronic book-based audio playing method according to a second embodiment of the present invention;
Fig. 3 is a block diagram of an electronic book-based audio playing apparatus according to a third embodiment of the present invention;
Fig. 4 is a block diagram of an electronic book-based audio playing apparatus according to a fourth embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a terminal device according to a fifth embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention is provided in conjunction with the accompanying drawings (like numerals indicate like elements throughout the several views) and examples. The following examples are intended to illustrate the invention, but not to limit its scope.
It will be understood by those skilled in the art that the terms "first," "second," and the like in the embodiments of the present invention are used merely to distinguish one element, step, device, or module from another, and do not denote any particular technical meaning or logical order between them.
Example one
Referring to Fig. 1, a flowchart illustrating the steps of an electronic book-based audio playing method according to the first embodiment of the present invention is shown.
The electronic book-based audio playing method provided by this embodiment comprises the following steps.
Step S100: determining the current reading position of the electronic book, and acquiring tag data corresponding to the current reading position.
The electronic book in this embodiment includes, in addition to the text information related to its content, tag data. The tag data may be preset at appropriate positions in the electronic book.
For example, tag data B10 may be set on page 10 of the electronic book.
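As a concrete illustration of the idea above, the following Python sketch keys tag data by page number. The class and method names are hypothetical; the patent does not specify an API.

```python
# Minimal sketch (names hypothetical, not from the patent) of tag data
# preset at appropriate positions of an electronic book.

class EBook:
    """An e-book whose preset tag data is keyed by page number."""

    def __init__(self, title):
        self.title = title
        self.tags = {}  # page number -> tag data

    def set_tag(self, page, tag_data):
        self.tags[page] = tag_data

    def get_tag(self, page):
        """Return the tag data preset for this page, or None if absent."""
        return self.tags.get(page)

book = EBook("Example Book")
book.set_tag(10, "B10")  # e.g., tag data B10 set on page 10
```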
Step S102: acquiring audio data corresponding to the tag data.
The audio data corresponding to the tag data may be acquired in step S102 in the following two ways.
Mode one: if the electronic book has been adapted into a film or television work, the audio data corresponding to the tag data is acquired from the adaptation. The audio data acquired in this way may be theme songs, background sounds, and the like from the film or television work.
Mode two: if the electronic book has not been adapted into a film or television work, the audio data corresponding to the tag data is acquired from a preset audio data set. The audio data acquired in this way may be a song or the like.
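The two acquisition modes just described can be sketched as a single dispatch function. The dictionaries and file names below are illustrative assumptions; the patent does not prescribe any data structures.

```python
# Hypothetical sketch of the two acquisition modes: audio from an adapted
# film/TV work if one exists, otherwise from a preset audio data set.

FILM_AUDIO = {"B10": "theme_song.mp3"}       # audio keyed by tag, from the adaptation
PRESET_AUDIO_SET = {"B10": "calm_song.mp3"}  # general preset audio data set

def get_audio_for_tag(tag_data, is_adapted):
    if is_adapted:  # mode one: the book has a film/TV adaptation
        return FILM_AUDIO.get(tag_data)
    return PRESET_AUDIO_SET.get(tag_data)  # mode two: use the preset set
```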
Step S104: playing the audio data.
After the audio data is acquired in step S102, it may be played while the electronic book is being read. If audio data for one audio file is acquired, that file may be played in a loop while the user reads from the current reading position to the next tag data; if audio data for several audio files is acquired, any one of them may be played at random while the user reads from the current reading position to the next tag data.
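The playback rule above (loop a single file, pick randomly among several) can be sketched as follows; `pick_next` is a hypothetical helper, not a name from the patent.

```python
import random

def pick_next(audio_files, rng=random):
    """Return the index of the next file to play before the next tag data
    is reached: a single file simply repeats (loop playback); with several
    files, one is chosen at random each time."""
    if len(audio_files) == 1:
        return 0
    return rng.randrange(len(audio_files))
```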
The audio data in this embodiment may be in any of various audio formats, such as MPEG, MP3, MIDI, WMA, FLAC, or APE; this embodiment does not limit the specific format of the audio data.
According to the technical scheme provided by the embodiment, the current reading position of the electronic book is determined, the tag data is preset in the electronic book, the tag data corresponding to the current reading position is obtained, the audio data corresponding to the tag data is obtained, and the audio data is played. The embodiment can play audio while the user reads the electronic book, thereby enriching the reading mode of the electronic book. In addition, the played audio corresponds to the tag data in the electronic book, so that the electronic book is associated with the audio, and the interest of reading the electronic book is improved by playing the audio.
Example two
Referring to Fig. 2, a flowchart illustrating the steps of an electronic book-based audio playing method according to the second embodiment of the present invention is shown.
This embodiment builds on the above embodiment; the differences are emphasized here, and for the common points reference may be made to the descriptions above, which are not repeated.
The electronic book-based audio playing method provided by this embodiment comprises the following steps.
Step S200: setting tag data for the electronic book.
The tag data in this embodiment is used to indicate plot information and/or emotion information corresponding to the current reading position of the electronic book. The tag data may be set in the electronic book either during or after its generation.
(I) If the tag data is used to indicate the plot information corresponding to the current reading position, in step S200 plot analysis may be performed on the content of the electronic book, and tag data may be set for the electronic book according to the plot analysis result.
Optionally, in one feasible implementation, plot analysis is performed on one or more of the chapter information, paragraph information, and page information of the electronic book, and tag data is set for the electronic book at positions in the corresponding chapters, paragraphs, and/or pages according to the plot analysis result. The chapter information may indicate the chapter number, section number, etc., to which the text content of the electronic book belongs; the page information may indicate the page number; the paragraph information may indicate the paragraph number. The chapter, page, and paragraph information each correspond to the plot structure of the electronic book; for example, the first and second chapters may correspond to the beginning of the plot, and the third paragraph of the eighth page may correspond to the plot climax.
By analyzing the plot of the electronic book, for example through semantic analysis or manual annotation, the positions of the corresponding plot content in the electronic book can be determined, such as which plot stage the fourth chapter belongs to, or which plot stage a range of chapters belongs to. Corresponding tag data is then set at the beginning of each plot stage. Different tag data can be set for different plot stages: if the plot stages include at least one of the plot beginning, plot padding, plot climax, and plot transition, the corresponding tag data includes at least one of a plot beginning tag, a plot padding tag, a plot climax tag, and a plot transition tag. In one feasible implementation, the plot stage corresponding to a reading position is determined from the character count of the electronic book; for example, if the electronic book has 5,000 characters, the reading position at the 4,000th character may be determined to be the climax, but this is not limiting. This embodiment does not specifically limit the technical means adopted for plot analysis, which may be implemented in any appropriate manner.
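The character-count heuristic mentioned above might look like the following sketch. The stage thresholds are assumptions chosen for illustration, not values given in the patent.

```python
# Map a reading position (in characters) to a plot-stage tag.
# The ratio thresholds below are illustrative assumptions.

def plot_stage(char_position, total_chars):
    ratio = char_position / total_chars
    if ratio < 0.2:
        return "plot beginning tag"
    if ratio < 0.7:
        return "plot padding tag"
    if ratio < 0.9:
        return "plot climax tag"  # e.g., the 4,000th of 5,000 characters
    return "plot transition tag"
```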
(II) If the tag data is used to indicate emotion information corresponding to the current reading position, in step S200 semantic analysis may be performed on the content of the electronic book, at least one piece of emotional content in the electronic book may be determined from the semantic analysis result, and tag data may be set for the electronic book according to the determined emotional content.
The semantic analysis in this embodiment is used to analyze the semantic types expressed by the text content of the electronic book. In one feasible implementation, the content of the electronic book may be analyzed according to the content types of the audio data; for example, if the content types of the audio data include relaxed, quiet, excited, and mysterious, the content of the electronic book can be analyzed against those types to find the positions of emotional content of each semantic type. If a certain part of the electronic book contains relaxed emotional content, relaxed tag data is set for that part; if a part contains excited emotional content, excited tag data is set for it; and so on, with emotional content of different semantic types corresponding to different tag data. In another feasible implementation, if the content types of the audio data cannot accurately express the semantic types of the text content, the text content may instead be analyzed with any appropriate semantic analysis method to obtain suitable semantic types.
If semantic analysis of the electronic book's content yields several kinds of emotional content, it can further be determined which kind carries the greatest weight, and that kind is taken as the emotional content of the current reading position. The score or the count of each kind of emotional content may be tallied and its weight determined accordingly; this embodiment does not limit the technical means for determining the weight. In one possible semantic analysis implementation, the words in the electronic book are identified and, for each reading position, the number of words belonging to each semantic type is counted. The reading position in this embodiment indicates where the currently displayed content of the electronic book is located, and is not limited to a specific paragraph, line, or character. The semantic types may include an excited type, a quiet type, and so on; for example, if a reading position (which may be a page, a paragraph, or the text of a chapter) contains 20 words of the excited type and 3 words of the quiet type, the semantic type corresponding to that reading position may be determined to be the excited type.
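The word-count comparison in the example above can be sketched like this; the keyword lists per semantic type are invented for illustration.

```python
from collections import Counter

TYPE_WORDS = {  # hypothetical keyword lists for each semantic type
    "excited": {"roar", "burst", "thrilling"},
    "quiet": {"calm", "still", "hush"},
}

def semantic_type(words):
    """Count keywords of each semantic type and return the majority type,
    or None if no keyword of any type occurs."""
    counts = Counter()
    for word in words:
        for sem_type, vocab in TYPE_WORDS.items():
            if word in vocab:
                counts[sem_type] += 1
    return counts.most_common(1)[0][0] if counts else None
```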
Step S202: determining the current reading position of the electronic book, and acquiring tag data corresponding to the current reading position.
In this embodiment, the current reading position may correspond to one piece of tag data, to several pieces of tag data, or to none; the three cases are described below. (1) The current reading position corresponds to one piece of tag data: the acquired piece of tag data is taken as the tag data corresponding to the current reading position.
(2) In one feasible implementation, the current reading position corresponds to several pieces of tag data. In step S202, the several pieces of tag data corresponding to the current reading position may be acquired and classified, the number in each class counted, and the tag data of the most numerous class determined to be the tag data corresponding to the current reading position.
For example, suppose the current reading position (which may be a page, a paragraph, or the text of a chapter) corresponds to four pieces of tag data: B1, B2, B3, and B4, where B1 and B4 belong to the first class, B2 to the second class, and B3 to the third class. The first class is the most numerous, so the tag data corresponding to the current reading position is B1 and B4.
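A minimal sketch of this majority rule follows; the tag-to-class mapping is hypothetical, standing in for whatever classification scheme an implementation would use.

```python
from collections import Counter

# Hypothetical mapping from tag data to its class.
TAG_CLASS = {"B1": "first", "B2": "second", "B3": "third", "B4": "first"}

def majority_tags(tags):
    """Keep only the tags belonging to the most numerous class."""
    counts = Counter(TAG_CLASS[t] for t in tags)
    winning_class = counts.most_common(1)[0][0]
    return [t for t in tags if TAG_CLASS[t] == winning_class]
```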
(3) The current reading position has no corresponding tag data. In one feasible implementation, step S202 may be: determining the current reading position of the electronic book and judging whether it corresponds to tag data; if there is no tag data, acquiring the tag data nearest to the current reading position both before and after it, and determining whichever of the acquired tag data is closest to the current reading position as the tag data corresponding to it.
For example, if the current reading position is determined to be page 10 and page 10 has no corresponding tag data, tag data is sought on the preceding pages (page 9, page 8, and so on) and the following pages (page 11, page 12, and so on). If tag data is found on page 9 and on page 12, then since page 9 is closer to the current reading position than page 12, the tag data of page 9 is determined to be the tag data for page 10.
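The nearest-tag lookup in this example can be sketched as follows. Ties between an earlier and a later page are broken toward the earlier page here; that is one possible design choice the patent leaves open.

```python
def nearest_tag(tags_by_page, current_page):
    """Return the tag data for current_page, falling back to the closest
    tagged page before or after it (the earlier page wins a tie)."""
    if current_page in tags_by_page:
        return tags_by_page[current_page]
    if not tags_by_page:
        return None
    best = min(tags_by_page,
               key=lambda page: (abs(page - current_page), page))
    return tags_by_page[best]
```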
Step S204: acquiring audio data corresponding to the tag data.
In this embodiment, the content type of the audio data corresponding to the tag data matches the plot information and/or emotion information indicated by the tag data, where the content type of the audio data may be understood as a label or classification of the audio data, such as relaxed, quiet, excited, or mysterious. Matching may be understood as expressing the same or a similar meaning; for example, if tag data B1 indicates that the current reading position corresponds to plot information J1, then the content type of the audio data Y1 corresponding to B1 matches J1, i.e., the content type of Y1 can express the same or a similar meaning as J1. The audio data corresponding to the tag data may be acquired in step S204 in the following two ways.
Mode one: acquiring the audio data corresponding to the tag data from an audio/video file generated by adapting the electronic book, the audio/video file being preset with audio tags corresponding to the tag data.
The audio/video file generated by adapting the electronic book may be a movie, a television series, or the like, and the audio tags preset in it may include a plot beginning audio tag, a plot padding audio tag, a plot climax audio tag, a plot transition audio tag, and so on, corresponding respectively to the plot beginning tag, plot padding tag, plot climax tag, and plot transition tag.
Mode two: extracting the audio data corresponding to the tag data from a preset audio database, in which the audio data is classified and stored by type.
The preset audio database may be an audio database specific to the current electronic book or a general-purpose one, and its audio data may be audio data already classified in an existing audio application.
Step S206: playing the audio data.
In step S206, the acquired audio data may be played while the electronic book is being read. If audio data for one audio file is acquired, that file may be played in a loop while the user reads from the current reading position to the next tag data; if audio data for several audio files is acquired, any one of them may be played at random while the user reads from the current reading position to the next tag data, or an audio file selection instruction may be received and the selected file played. In one feasible implementation, the audio being played may also be paused, stopped, or switched; this embodiment does not limit the technical means adopted for playing the audio data.
Optionally, in one feasible implementation, if the electronic book is a Text To Speech (TTS) electronic book, a TTS electronic book containing multiple pieces of audio data may be generated before step S202. Specifically, the text content of the TTS electronic book, in which several pieces of tag data are set, may be acquired; the audio data of the audio file corresponding to each piece of tag data may be acquired; and the TTS electronic book may be generated from the text content, the several pieces of tag data, and the audio data of the corresponding audio files. For the technical means of acquiring the audio data of the audio files, reference may be made to the related contents of this embodiment and the first embodiment, which are not repeated here. TTS is a speech synthesis technology that can intelligently convert text into natural speech output; any appropriate technical means may be adopted for generating the TTS electronic book, and this embodiment does not limit it.
For example, after semantic or other relevant analysis of the electronic book's text content, several pieces of tag data are set at the corresponding text positions, and the audio data corresponding to each piece of tag data is acquired. While the text content of the electronic book is being converted by TTS, whenever tag data is detected, the audio data corresponding to it is synthesized together with the speech data converted from the text, finally producing a speech stream that includes both the speech data for the text content and the corresponding audio data.
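The synthesis flow described above might be sketched as follows. The segment representation and the `synthesize` callback are assumptions for illustration, since the patent does not fix a data format.

```python
def build_tts_stream(segments, tag_audio, synthesize):
    """segments: list of (text, tag_data_or_None); synthesize: text -> speech.
    Whenever tag data is detected, the corresponding audio is mixed with the
    speech converted from the text; otherwise plain speech is emitted."""
    stream = []
    for text, tag in segments:
        speech = synthesize(text)
        if tag is not None:
            stream.append(("mixed", speech, tag_audio[tag]))
        else:
            stream.append(("speech", speech, None))
    return stream

# Toy synthesizer standing in for a real TTS engine.
stream = build_tts_stream(
    [("Chapter one.", None), ("The storm broke.", "B1")],
    {"B1": "storm.mp3"},
    lambda text: "<speech:" + text + ">",
)
```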
According to the technical scheme provided by the embodiment, the current reading position of the electronic book is determined, the tag data is preset in the electronic book, the tag data corresponding to the current reading position is obtained, the audio data corresponding to the tag data is obtained, and the audio data is played. The embodiment can play audio while the user reads the electronic book, thereby enriching the reading mode of the electronic book. In addition, the played audio corresponds to the tag data in the electronic book, so that the electronic book is associated with the audio, and the interest of reading the electronic book is improved by playing the audio.
The tag data in the present embodiment may indicate plot information and/or emotion information depending on whether the electronic book is adapted as an audio-video file. If the electronic book is adapted to be an audio/video file, the tag data indicates the plot information, audio data corresponding to the tag data can be obtained from the adapted audio/video file, the obtained audio data is played, and the correlation between the audio data and the plot of the electronic book is realized; if the electronic book is not adapted to be an audio/video file, the tag data indicates emotion information, audio data corresponding to the tag data can be acquired from a preset audio database, and the acquired audio data is played, so that the emotion correlation between the audio data and the electronic book is realized.
In this embodiment, if the current reading position of the electronic book corresponds to multiple tag data, for example, two or more tag data, the multiple tag data may be further classified, and the tag data with the largest number after classification is determined as the tag data corresponding to the current reading position, so as to adapt to the situation that the reading position corresponds to multiple tag data, and improve flexibility for determining the tag data.
If the current reading position of the electronic book in this embodiment does not have corresponding tag data, the tag data before and after the current reading position and closest to the current reading position may be obtained, and the tag data closest to the current reading position is determined as the tag data corresponding to the current reading position, so that the situation that no audio is played when the current reading position does not have corresponding tag data is avoided, and the continuity of audio playing in the electronic book reading process is ensured.
If the electronic book is a TTS electronic book, a new TTS electronic book containing a plurality of audio data can be generated according to the text content of the electronic book, a plurality of tag data in the electronic book and the audio data corresponding to each tag data, so that the audio data in the TTS electronic book are enriched.
Example three
Referring to Fig. 3, a block diagram of an electronic book-based audio playing apparatus according to the third embodiment of the present invention is shown.
This embodiment provides an electronic book-based audio playing apparatus, comprising: a tag data acquisition module 300, configured to determine the current reading position of the electronic book and acquire tag data corresponding to the current reading position; a first audio data acquisition module 302, configured to acquire audio data corresponding to the tag data; and an audio data playing module 304, configured to play the audio data.
According to the technical scheme provided by the embodiment, the current reading position of the electronic book is determined, the tag data is preset in the electronic book, the tag data corresponding to the current reading position is obtained, the audio data corresponding to the tag data is obtained, and the audio data is played. The embodiment can play audio while the user reads the electronic book, thereby enriching the reading mode of the electronic book. In addition, the played audio corresponds to the tag data in the electronic book, so that the electronic book is associated with the audio, and the interest of reading the electronic book is improved by playing the audio.
Example four
Referring to Fig. 4, a block diagram of an electronic book-based audio playing apparatus according to the fourth embodiment of the present invention is shown.
This embodiment provides an electronic book-based audio playing apparatus, comprising: a tag data acquisition module 400, configured to determine the current reading position of the electronic book and acquire tag data corresponding to the current reading position; a first audio data acquisition module 402, configured to acquire audio data corresponding to the tag data; and an audio data playing module 404, configured to play the audio data.
Optionally, the tag data is used to indicate the plot information and/or emotion information corresponding to the current reading position, and the content type of the audio data corresponding to the tag data matches that plot information and/or emotion information.
Optionally, the electronic book-based audio playing apparatus provided in this embodiment further includes: the plot analysis module 406 is configured to, if the tag data is used to indicate plot information corresponding to the current reading position, perform plot analysis on the content of the electronic book before the tag data acquisition module 400 determines the current reading position of the electronic book, and set tag data for the electronic book according to a plot analysis result.
Optionally, the plot analysis module 406 is configured to perform plot analysis on one or more of the chapter information, paragraph information, and page information of the electronic book, and to set tag data for the electronic book according to at least one position of the plot analysis result within the corresponding chapters, paragraphs, and/or pages of the electronic book.
Optionally, the tag data comprises at least one of: a plot beginning tag, a plot build-up (foreshadowing) tag, a plot climax tag, and a plot transition tag.
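A minimal sketch of how the described tag placement might be represented, assuming the plot analysis yields (chapter, paragraph, tag) triples; the data shapes and names are illustrative, not taken from the patent.

```python
def set_tags_from_analysis(analysis_results):
    """Map each plot-analysis hit to tag data at its (chapter, paragraph)
    position in the electronic book; returns a position -> tag mapping."""
    tags = {}
    for chapter, paragraph, tag in analysis_results:
        tags[(chapter, paragraph)] = tag
    return tags

# Hypothetical plot-analysis output: the story opens in chapter 1 and
# reaches its climax in chapter 3, paragraph 7.
analysis = [(1, 1, "plot_beginning"), (3, 7, "plot_climax")]
tags = set_tags_from_analysis(analysis)
```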
Optionally, the electronic book-based audio playing apparatus provided in this embodiment further includes: a semantic analysis module 408, configured to, if the tag data is used to indicate emotion information corresponding to the current reading position, perform semantic analysis on the content of the electronic book before the tag data acquisition module 400 determines the current reading position, determine at least one piece of emotional content in the electronic book according to the semantic analysis result, and set tag data for the electronic book according to the determined emotional content.
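The patent does not specify how the semantic analysis is performed; purely as a hedged illustration, a keyword-lexicon stand-in for the emotion-tagging step might look like this (the word lists and tag names are invented for the example).

```python
# Invented lexicons -- stand-ins for a real semantic analyzer.
SAD_WORDS = {"wept", "grief", "tears"}
JOY_WORDS = {"laughed", "joy", "celebrated"}

def emotion_tag_for_paragraph(text):
    """Return an emotion tag for a paragraph of the electronic book,
    or None when no emotional content is detected."""
    words = set(text.lower().split())
    if words & SAD_WORDS:
        return "emotion_sad"
    if words & JOY_WORDS:
        return "emotion_happy"
    return None
```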
Optionally, the tag data obtaining module 400 includes: a first tag data obtaining sub-module 4000, configured to determine the current reading position of the electronic book and obtain a plurality of tag data corresponding to the current reading position; a tag data classification sub-module 4002, configured to classify the plurality of tag data to obtain the count of each type of tag data; and a first tag data determining sub-module 4004, configured to determine the most numerous type of tag data as the tag data corresponding to the current reading position.
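The classification-and-count selection performed by sub-modules 4000-4004 amounts to a majority vote over the tags found at the reading position; a sketch under that reading (function and tag names are illustrative):

```python
from collections import Counter

def select_tag(tags):
    """Classify the tag data at the reading position by type and return
    the type with the largest count. Assumes `tags` is non-empty."""
    return Counter(tags).most_common(1)[0][0]
```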
Optionally, the tag data obtaining module 400 includes: a reading position determining sub-module 4006, configured to determine the current reading position of the electronic book; a tag data judgment sub-module 4008, configured to judge whether the current reading position corresponds to tag data; a second tag data obtaining sub-module 4010, configured to, if there is no tag data at the current reading position, obtain the tag data nearest to the current reading position before and after it; and a second tag data determining sub-module 4012, configured to determine, among the obtained tag data, the tag data closest to the current reading position as the tag data corresponding to the current reading position.
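The fallback performed by sub-modules 4006-4012 — take the nearest tag before and the nearest tag after an untagged position, and keep whichever is closer — can be sketched as follows; representing tag positions as a dict of character offsets is an assumption.

```python
def nearest_tag(tag_positions, position):
    """Return the tag at `position`, or the tag whose position is
    closest on either side; None if the book has no tags at all.
    tag_positions: {offset: tag_data}."""
    if position in tag_positions:
        return tag_positions[position]
    offsets = sorted(tag_positions)
    before = max((o for o in offsets if o < position), default=None)
    after = min((o for o in offsets if o > position), default=None)
    candidates = [o for o in (before, after) if o is not None]
    if not candidates:
        return None
    closest = min(candidates, key=lambda o: abs(o - position))
    return tag_positions[closest]
```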
Optionally, the first audio data obtaining module 402 is configured to obtain audio data corresponding to the tag data from an audio/video file generated by adapting the electronic book; the audio/video file is preset with an audio tag corresponding to the tag data.
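One plausible (assumed) representation of the preset audio tags in the adapted audio/video file is an index from tag data to clip time ranges; the patent does not define the file layout, so this is only a sketch.

```python
def clip_for_tag(av_index, tag):
    """av_index: audio-tag index preset in the adapted audio/video
    file, mapping tag data to (start_s, end_s) clip boundaries;
    returns the boundaries of the clip to play, or None."""
    return av_index.get(tag)

# Hypothetical index: the climax scene occupies 903.0-951.5 seconds.
av_index = {"plot_climax": (903.0, 951.5)}
```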
Optionally, the first audio data obtaining module 402 is configured to extract audio data corresponding to the tag data from a preset audio database; the audio data in the audio database are classified and stored according to the type of the audio data.
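A sketch of the type-classified audio database just described, assuming clips are grouped under their tag type and that any clip of the matching type may be played (the grouping and the random choice are assumptions):

```python
import random

def pick_audio(audio_db, tag):
    """audio_db stores clips classified by type; return one clip whose
    type matches the tag, or None if the type has no clips."""
    clips = audio_db.get(tag, [])
    return random.choice(clips) if clips else None
```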
Optionally, the electronic book-based audio playing apparatus provided in this embodiment further includes: a text content obtaining module 410, configured to, if the electronic book is a TTS electronic book, obtain text content of the electronic book before the tag data obtaining module 400 determines a current reading position of the electronic book, where a plurality of tag data are set in the text content; a second audio data obtaining module 412, configured to obtain audio data corresponding to each tag data; and a TTS e-book generating module 414, configured to generate a TTS e-book according to the text content, the multiple tag data, and the corresponding audio data.
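How modules 410-414 might bundle the text content, its preset tags, and the audio fetched per tag into a new TTS e-book, sketched with an assumed dictionary structure (the patent does not specify the output format):

```python
def build_tts_ebook(text, tag_map, audio_db):
    """Generate a TTS e-book structure from the text content, the tag
    data set at positions in it, and the audio clip for each tag.
    Tags with no matching audio are skipped."""
    return {
        "text": text,
        "tracks": {pos: audio_db[tag]
                   for pos, tag in tag_map.items()
                   if tag in audio_db},
    }
```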
According to the technical scheme provided by this embodiment, the current reading position of the electronic book is determined, tag data preset in the electronic book and corresponding to that position is obtained, audio data corresponding to the tag data is obtained, and the audio data is played. This embodiment can play audio while the user reads the electronic book, thereby enriching the ways the electronic book can be read. In addition, because the played audio corresponds to the tag data in the electronic book, the electronic book is associated with the audio, and playing the audio makes reading the electronic book more engaging.
The tag data in this embodiment may indicate plot information and/or emotion information, depending on whether the electronic book has been adapted into an audio/video file. If it has, the tag data indicates plot information; audio data corresponding to the tag data can be obtained from the adapted audio/video file and played, associating the audio data with the plot of the electronic book. If it has not, the tag data indicates emotion information; audio data corresponding to the tag data can be obtained from a preset audio database and played, associating the audio data with the emotions of the electronic book.
In this embodiment, if the current reading position of the electronic book corresponds to multiple pieces of tag data (two or more), the tag data may be classified by type, and the type with the largest count is determined as the tag data corresponding to the current reading position. This accommodates reading positions that carry several tags and makes tag determination more flexible.
If the current reading position has no corresponding tag data, the tag data nearest to the current reading position on either side may be obtained, and whichever is closer is determined as the tag data corresponding to the current reading position. This avoids playing no audio at untagged reading positions and ensures the continuity of audio playback while the electronic book is read.
If the electronic book is a TTS electronic book, a new TTS electronic book containing multiple pieces of audio data can be generated from the text content of the electronic book, the multiple tag data set in it, and the audio data corresponding to each tag, thereby enriching the audio data in the TTS electronic book.
Example five
Fig. 5 is a schematic structural diagram of a terminal device according to a fifth embodiment of the present invention; this embodiment does not limit the specific implementation of the terminal device.
As shown in fig. 5, the terminal device may include: a processor (processor)502, a Communications Interface 504, a memory 506, and a communication bus 508.
Wherein:
The processor 502, the communication interface 504, and the memory 506 communicate with one another via the communication bus 508.
The communication interface 504 is used for communicating with network elements of other devices, such as clients or other servers.
The processor 502 is configured to execute the program 510, and may specifically perform the relevant steps in the above-described electronic book-based audio playing method embodiments.
In particular, the program 510 may include program code comprising computer operating instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention. The terminal device comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 506 is used for storing a first data set, a second data set, and the program 510. The memory 506 may comprise high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
The program 510 may specifically be used to cause the processor 502 to perform the following operations: determining a current reading position of the electronic book, and acquiring label data corresponding to the current reading position; acquiring audio data corresponding to the tag data; and playing the audio data.
In an optional embodiment, the tag data is used for indicating plot information and/or emotion information corresponding to the current reading position; the content type of the audio data corresponding to the tag data matches the plot information and/or emotion information.
In an alternative embodiment, the program 510 is configured to enable the processor 502 to, if the tag data is used to indicate plot information corresponding to the current reading position, perform plot analysis on the content of the electronic book before determining the current reading position of the electronic book, and set the tag data for the electronic book according to the plot analysis result.
In an alternative embodiment, the program 510 is further configured to enable the processor 502 to, when performing plot analysis on the content of the electronic book and setting tag data for the electronic book according to the plot analysis result, perform plot analysis on one or more of the chapter information, paragraph information, and page information of the electronic book, and set tag data for the electronic book according to at least one position of the plot analysis result within the corresponding chapters, paragraphs, and/or pages of the electronic book.
In an alternative embodiment, the tag data includes at least one of: a plot beginning tag, a plot build-up (foreshadowing) tag, a plot climax tag, and a plot transition tag.
In an optional implementation manner, the program 510 is further configured to enable the processor 502 to perform semantic analysis on the content of the electronic book before determining the current reading position of the electronic book if the tag data is used to indicate emotion information corresponding to the current reading position, and determine at least one emotion content in the electronic book according to a result of the semantic analysis; and setting label data for the electronic book according to the determined emotional content.
In an optional implementation manner, the program 510 is further configured to enable the processor 502 to determine a current reading position of the electronic book and obtain a plurality of tag data corresponding to the current reading position when determining the current reading position of the electronic book and obtaining the tag data corresponding to the current reading position; classifying the plurality of label data to obtain the number of various types of label data; and determining the tag data with the maximum number as the tag data corresponding to the current reading position.
In an alternative embodiment, the program 510 is further configured to enable the processor 502 to determine the current reading position of the electronic book when determining the current reading position of the electronic book and acquiring tag data corresponding to the current reading position; judging whether the current reading position corresponds to the label data or not; if no label data exists, acquiring label data which are before and after the current reading position and are closest to the current reading position; and determining the label data closest to the current reading position in the acquired label data as the label data corresponding to the current reading position.
In an alternative embodiment, the program 510 is further configured to enable the processor 502 to, when acquiring the audio data corresponding to the tag data, acquire the audio data corresponding to the tag data from an audio/video file generated by adapting the electronic book; the audio/video file is preset with an audio tag corresponding to the tag data.
In an alternative embodiment, the program 510 is further configured to enable the processor 502 to extract the audio data corresponding to the tag data from a preset audio database when the audio data corresponding to the tag data is acquired; the audio data in the audio database are classified and stored according to the type of the audio data.
In an optional implementation manner, the program 510 is further configured to enable the processor 502, if the electronic book is a TTS electronic book, to obtain text content of the electronic book before determining a current reading position of the electronic book, where the text content is provided with a plurality of tag data; acquiring audio data corresponding to each tag data; and generating the TTS e-book according to the text content, the plurality of label data and the corresponding audio data.
For specific implementation of each step in the program 510, reference may be made to corresponding steps and corresponding descriptions in units in the foregoing electronic book-based audio playing embodiment, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
According to the technical scheme provided by this embodiment, the current reading position of the electronic book is determined, tag data preset in the electronic book and corresponding to that position is obtained, audio data corresponding to the tag data is obtained, and the audio data is played. This embodiment can play audio while the user reads the electronic book, thereby enriching the ways the electronic book can be read. In addition, because the played audio corresponds to the tag data in the electronic book, the electronic book is associated with the audio, and playing the audio makes reading the electronic book more engaging.
The tag data in this embodiment may indicate plot information and/or emotion information, depending on whether the electronic book has been adapted into an audio/video file. If it has, the tag data indicates plot information; audio data corresponding to the tag data can be obtained from the adapted audio/video file and played, associating the audio data with the plot of the electronic book. If it has not, the tag data indicates emotion information; audio data corresponding to the tag data can be obtained from a preset audio database and played, associating the audio data with the emotions of the electronic book.
In this embodiment, if the current reading position of the electronic book corresponds to multiple pieces of tag data (two or more), the tag data may be classified by type, and the type with the largest count is determined as the tag data corresponding to the current reading position. This accommodates reading positions that carry several tags and makes tag determination more flexible.
If the current reading position has no corresponding tag data, the tag data nearest to the current reading position on either side may be obtained, and whichever is closer is determined as the tag data corresponding to the current reading position. This avoids playing no audio at untagged reading positions and ensures the continuity of audio playback while the electronic book is read.
If the electronic book is a TTS electronic book, a new TTS electronic book containing multiple pieces of audio data can be generated from the text content of the electronic book, the multiple tag data set in it, and the audio data corresponding to each tag, thereby enriching the audio data in the TTS electronic book.
It should be noted that, according to the implementation requirement, each component/step described in the embodiment of the present invention may be divided into more components/steps, and two or more components/steps or partial operations of the components/steps may also be combined into a new component/step to achieve the purpose of the embodiment of the present invention.
The above-described method according to an embodiment of the present invention may be implemented in hardware, firmware, or as software or computer code storable in a recording medium such as a CD ROM, a RAM, a floppy disk, a hard disk, or a magneto-optical disk, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium downloaded through a network and to be stored in a local recording medium, so that the method described herein may be stored in such software processing on a recording medium using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware such as an ASIC or FPGA. It will be appreciated that the computer, processor, microprocessor controller or programmable hardware includes memory components (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code that, when accessed and executed by the computer, processor or hardware, implements the processing methods described herein. Further, when a general-purpose computer accesses code for implementing the processes shown herein, execution of the code transforms the general-purpose computer into a special-purpose computer for performing the processes shown herein.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The above embodiments are only intended to illustrate the embodiments of the present invention, not to limit them; those skilled in the art can make various changes and modifications without departing from the spirit and scope of the embodiments of the present invention, so all equivalent technical solutions also fall within the scope of the embodiments of the present invention, and the scope of patent protection of the embodiments of the present invention shall be defined by the claims.
The embodiment of the invention discloses A1, an electronic book-based audio playing method, comprising the following steps:
Determining a current reading position of the electronic book, and acquiring label data corresponding to the current reading position;
Acquiring audio data corresponding to the tag data;
And playing the audio data.
A2, the method according to A1, wherein the tag data is used for indicating the plot information and/or emotion information corresponding to the current reading position;
The content type of the audio data corresponding to the tag data matches the plot information and/or emotion information.
A3, the method according to A2, wherein if the tag data is used to indicate plot information corresponding to a current reading position, before the determining the current reading position of the electronic book, the method further includes:
And performing plot analysis on the content of the electronic book, and setting label data for the electronic book according to a plot analysis result.
A4, the method according to A3, wherein the performing plot analysis on the content of the electronic book, and setting label data for the electronic book according to the plot analysis result includes:
Performing plot analysis on one or more combinations of chapter information, paragraph information and page information of the electronic book;
And setting label data for the electronic book according to at least one position of the plot analysis result in one or more combinations of chapters, paragraphs and pages of the corresponding electronic book.
A5, the method of A4, wherein the tag data includes at least one of: a plot beginning tag, a plot build-up (foreshadowing) tag, a plot climax tag, and a plot transition tag.
A6, the method according to a2, wherein if the tag data is used to indicate emotion information corresponding to the current reading position, before the determining the current reading position of the electronic book, the method further includes:
Performing semantic analysis on the content of the electronic book, and determining at least one emotional content in the electronic book according to a semantic analysis result; and setting label data for the electronic book according to the determined emotional content.
A7, the method according to A6, wherein the determining the current reading position of the electronic book and acquiring the tag data corresponding to the current reading position includes:
Determining a current reading position of the electronic book, and acquiring a plurality of label data corresponding to the current reading position;
classifying the plurality of label data to obtain the number of various types of label data;
and determining the tag data with the maximum number as the tag data corresponding to the current reading position.
A8, the method according to any one of A1-A7, wherein the determining a current reading position of the electronic book and acquiring tag data corresponding to the current reading position include:
determining a current reading position of the electronic book;
judging whether the current reading position corresponds to the label data or not;
If no label data exists, acquiring label data which are before and after the current reading position and are closest to the current reading position;
And determining the label data closest to the current reading position in the acquired label data as the label data corresponding to the current reading position.
A9, the method according to any one of A1-A7, wherein the obtaining audio data corresponding to the tag data includes:
Acquiring audio data corresponding to the tag data from an audio/video file generated by adapting the electronic book;
and the audio/video file is preset with an audio tag corresponding to the tag data.
A10, the method according to any one of A1-A7, wherein the obtaining audio data corresponding to the tag data includes:
Extracting audio data corresponding to the tag data from a preset audio database;
And the audio data in the audio database are classified and stored according to the type of the audio data.
A11, the method according to A1, wherein if the electronic book is a TTS electronic book, before the determining the current reading position of the electronic book, the method further includes:
acquiring the text content of the electronic book, wherein a plurality of label data are arranged in the text content;
acquiring audio data corresponding to each tag data;
And generating a TTS electronic book according to the text content, the plurality of label data and the corresponding audio data.
The embodiment of the invention also discloses B12, an electronic book-based audio playing apparatus, comprising:
A tag data acquisition module, configured to determine the current reading position of the electronic book and acquire tag data corresponding to the current reading position;
A first audio data acquisition module, configured to acquire audio data corresponding to the tag data;
And an audio data playing module, configured to play the audio data.
B13, the device according to B12, wherein the tag data is used for indicating plot information and/or emotion information corresponding to the current reading position;
The content type of the audio data corresponding to the tag data matches the plot information and/or emotion information.
B14, the apparatus according to B13, wherein the apparatus further comprises:
The plot analysis module is configured to, if the tag data is used to indicate plot information corresponding to the current reading position, perform plot analysis on the content of the electronic book before the tag data acquisition module determines the current reading position of the electronic book, and set tag data for the electronic book according to the plot analysis result.
B15, the device according to B14, wherein the plot analysis module is configured to perform plot analysis on one or more of the chapter information, paragraph information, and page information of the electronic book, and to set tag data for the electronic book according to at least one position of the plot analysis result within the corresponding chapters, paragraphs, and/or pages of the electronic book.
B16, the apparatus of B15, wherein the tag data includes at least one of: a plot beginning tag, a plot build-up (foreshadowing) tag, a plot climax tag, and a plot transition tag.
B17, the apparatus according to B13, wherein the apparatus further comprises:
And the semantic analysis module is used for performing semantic analysis on the content of the electronic book before the tag data acquisition module determines the current reading position of the electronic book if the tag data is used for indicating emotion information corresponding to the current reading position, determining at least one emotion content in the electronic book according to a semantic analysis result, and setting tag data for the electronic book according to the determined emotion content.
B18, the apparatus according to B17, wherein the tag data obtaining module includes:
The first tag data acquisition submodule is used for determining the current reading position of the electronic book and acquiring a plurality of tag data corresponding to the current reading position;
The tag data classification submodule is used for classifying the plurality of tag data to obtain the count of each type of tag data;
And the first tag data determining submodule is used for determining the most numerous tag data as the tag data corresponding to the current reading position.
B19, the device according to any one of B12-B18, wherein the tag data acquisition module comprises:
the reading position determining submodule is used for determining the current reading position of the electronic book;
The tag data judgment submodule is used for judging whether the current reading position corresponds to tag data or not;
The second tag data acquisition submodule is used for acquiring tag data which are before and after the current reading position and are closest to the current reading position if the current reading position has no tag data;
And the second tag data determining submodule is used for determining the tag data closest to the current reading position, among the acquired tag data, as the tag data corresponding to the current reading position.
B20, the device according to any one of B12-B18, wherein the first audio data acquisition module is configured to acquire audio data corresponding to the tag data from an audio/video file generated by adapting the electronic book;
And the audio/video file is preset with an audio tag corresponding to the tag data.
B21, the device according to any one of B12-B18, wherein the first audio data acquisition module is configured to extract audio data corresponding to the tag data from a preset audio database;
And the audio data in the audio database are classified and stored according to the type of the audio data.
B22, the apparatus according to B12, wherein the apparatus further comprises:
A text content acquisition module, configured to, if the electronic book is a TTS electronic book, acquire the text content of the electronic book before the tag data acquisition module determines the current reading position of the electronic book, wherein a plurality of tag data are set in the text content;
the second audio data acquisition module is used for acquiring audio data corresponding to each tag data;
And the TTS electronic book generating module is used for generating a TTS electronic book according to the text content, the plurality of label data and the corresponding audio data.
The embodiment of the invention also discloses C23, a terminal device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
The memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the electronic book-based audio playing method according to any one of A1-A11.

Claims (13)

1. An electronic book-based audio playing method comprises the following steps:
Determining a current reading position of the electronic book, and acquiring label data corresponding to the current reading position; the tag data is used for indicating the plot information and/or emotion information corresponding to the current reading position; if a plurality of label data corresponding to the current reading position are obtained, classifying the plurality of label data to obtain the number of various types of label data, and determining the label data with the maximum number as the label data corresponding to the current reading position; if the current reading position has no label data, acquiring label data which are before and after the current reading position and are closest to the current reading position, acquiring the label data which are closest to the current reading position from the label data, and determining the label data as the label data corresponding to the current reading position;
Acquiring audio data corresponding to the tag data; the content type of the audio data corresponding to the tag data matches the plot information and/or emotion information;
Playing the audio data;
if the tag data is used to indicate the episode information corresponding to the current reading position, before determining the current reading position of the electronic book, the method further includes:
Performing plot analysis on one or more combinations of chapter information, paragraph information and page information of the electronic book;
and setting label data for the electronic book according to at least one position of the plot analysis result in one or more combinations of chapters, paragraphs and pages of the corresponding electronic book.
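The tag-selection logic recited in claim 1 — a majority vote when several tags cover the current position, and a nearest-tag fallback when none do — can be sketched as follows. This is an illustration only; the `Tag` structure, the character-offset positions, and the `window` parameter are assumptions, not part of the claim:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Tag:
    position: int  # character offset of the tag in the book text
    kind: str      # tag type, e.g. "climax", "foreshadowing", "sad"

def select_tag_kind(tags, current_pos, window=0):
    """Pick the tag type for the current reading position.

    If one or more tags sit at the position, return the most frequent
    type (the majority vote of claim 1); otherwise fall back to the
    single nearest tag before or after the position."""
    at_pos = [t for t in tags if abs(t.position - current_pos) <= window]
    if at_pos:
        # Classify the co-located tags by type and take the largest class.
        counts = Counter(t.kind for t in at_pos)
        return counts.most_common(1)[0][0]
    if not tags:
        return None
    # No tag at the position: take the closer of the nearest tags
    # before and after the current reading position.
    nearest = min(tags, key=lambda t: abs(t.position - current_pos))
    return nearest.kind
```

The returned tag type would then drive the audio lookup in the subsequent step of the method.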
2. The method of claim 1, wherein the tag data comprises at least one of: a plot beginning tag, a plot foreshadowing tag, a plot climax tag, and a plot transition tag.
3. The method of claim 1, wherein, if the tag data indicates emotion information corresponding to the current reading position, before the determining of the current reading position of the electronic book, the method further comprises:
performing semantic analysis on the content of the electronic book, determining at least one piece of emotional content in the electronic book according to a result of the semantic analysis, and setting tag data for the electronic book according to the determined emotional content.
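The emotion-tagging step of claim 3 can be sketched with a deliberately naive stand-in for the semantic analysis: scan the text for emotion-bearing keywords and emit a tag at each hit. A real implementation would use a proper semantic model; the keyword lists here are assumptions for illustration:

```python
# Hypothetical keyword lists standing in for a semantic analyzer.
EMOTION_KEYWORDS = {
    "sad": ["wept", "tears", "grief"],
    "joy": ["laughed", "celebrate", "delight"],
}

def tag_emotions(text):
    """Return (position, emotion) tags for each keyword hit,
    sorted by position in the text."""
    tags = []
    lowered = text.lower()
    for emotion, words in EMOTION_KEYWORDS.items():
        for word in words:
            start = lowered.find(word)
            while start != -1:
                tags.append((start, emotion))
                start = lowered.find(word, start + 1)
    return sorted(tags)
```

The resulting (position, emotion) pairs correspond to the tag data that claim 3 sets for the electronic book.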
4. The method of any one of claims 1-3, wherein the acquiring of the audio data corresponding to the tag data comprises:
acquiring the audio data corresponding to the tag data from an audio/video file generated by adapting the electronic book,
wherein the audio/video file is preset with an audio tag corresponding to the tag data.
5. The method of any one of claims 1-3, wherein the acquiring of the audio data corresponding to the tag data comprises:
extracting the audio data corresponding to the tag data from a preset audio database,
wherein the audio data in the audio database are classified and stored by audio data type.
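The database lookup of claim 5 — audio clips classified and stored by type, retrieved by the tag's type — reduces to a keyed lookup. The file names and type keys below are illustrative placeholders, not part of the claim:

```python
# A preset audio database keyed by audio-data type, as in claim 5.
# File names are hypothetical placeholders.
AUDIO_DB = {
    "climax": ["tense_strings.mp3", "drums.mp3"],
    "foreshadowing": ["low_pad.mp3"],
    "sad": ["slow_piano.mp3"],
}

def audio_for_tag(tag_kind, db=AUDIO_DB):
    """Return the audio clips stored under the tag's type,
    or an empty list if the type has no stored audio."""
    return db.get(tag_kind, [])
```

Storing clips pre-classified by type keeps the per-position lookup constant-time, which matters when the reading position changes on every page turn.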
6. The method of claim 1, wherein, if the electronic book is a TTS electronic book, before the determining of the current reading position of the electronic book, the method further comprises:
acquiring text content of the electronic book, wherein a plurality of pieces of tag data are set in the text content;
acquiring audio data corresponding to each piece of tag data; and
generating the TTS electronic book according to the text content, the plurality of pieces of tag data, and the corresponding audio data.
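The assembly step of claim 6 — combining the text content, the embedded tag data, and the audio resolved for each tag into a TTS electronic book — can be sketched as building one record. The dictionary layout and the `audio_lookup` callable are assumptions for illustration:

```python
def build_tts_book(text, tags, audio_lookup):
    """Assemble a TTS electronic book record from the text content,
    the embedded (position, kind) tag data, and the audio reference
    resolved for each tag, per claim 6."""
    return {
        "text": text,
        "tags": [
            {"position": pos, "kind": kind, "audio": audio_lookup(kind)}
            for pos, kind in tags
        ],
    }
```

A TTS player could then synthesize the text while cueing each tag's audio as the synthesized reading reaches the tag's position.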
7. An electronic book-based audio playing apparatus, comprising:
a tag data acquisition module, configured to determine a current reading position of the electronic book and acquire tag data corresponding to the current reading position, wherein the tag data indicates plot information and/or emotion information corresponding to the current reading position;
a first audio data acquisition module, configured to acquire audio data corresponding to the tag data, wherein a content type of the audio data corresponding to the tag data matches the plot information and/or emotion information; and
an audio data playing module, configured to play the audio data;
the apparatus further comprising:
a plot analysis module, configured to, if the tag data indicates the plot information corresponding to the current reading position, perform plot analysis on one or more combinations of chapter information, paragraph information, and page information of the electronic book before the tag data acquisition module determines the current reading position, and set tag data for the electronic book at at least one position in the corresponding chapters, paragraphs, and pages according to a result of the plot analysis;
wherein the tag data acquisition module comprises:
a reading position determining submodule, configured to determine the current reading position of the electronic book;
a first tag data acquisition submodule, configured to acquire a plurality of pieces of tag data corresponding to the current reading position;
a tag data classification submodule, configured to classify the plurality of pieces of tag data by type and count the number of pieces of each type;
a first tag data determining submodule, configured to determine the type with the largest number as the tag data corresponding to the current reading position;
a tag data judging submodule, configured to judge whether the current reading position has corresponding tag data;
a second tag data acquisition submodule, configured to, if the current reading position has no tag data, acquire the tag data before and after the current reading position that are closest to it; and
a second tag data determining submodule, configured to determine, among the acquired tag data, the tag data closest to the current reading position as the tag data corresponding to the current reading position.
8. The apparatus of claim 7, wherein the tag data comprises at least one of: a plot beginning tag, a plot foreshadowing tag, a plot climax tag, and a plot transition tag.
9. The apparatus of claim 7, further comprising:
a semantic analysis module, configured to, if the tag data indicates emotion information corresponding to the current reading position, perform semantic analysis on the content of the electronic book before the tag data acquisition module determines the current reading position, determine at least one piece of emotional content in the electronic book according to a result of the semantic analysis, and set tag data for the electronic book according to the determined emotional content.
10. The apparatus of any one of claims 7-9, wherein the first audio data acquisition module is configured to acquire the audio data corresponding to the tag data from an audio/video file generated by adapting the electronic book,
wherein the audio/video file is preset with an audio tag corresponding to the tag data.
11. The apparatus of any one of claims 7-9, wherein the first audio data acquisition module is configured to extract the audio data corresponding to the tag data from a preset audio database,
wherein the audio data in the audio database are classified and stored by audio data type.
12. The apparatus of claim 7, further comprising:
a text content acquisition module, configured to, if the electronic book is a TTS electronic book, acquire text content of the electronic book before the tag data acquisition module determines the current reading position, wherein a plurality of pieces of tag data are set in the text content;
a second audio data acquisition module, configured to acquire audio data corresponding to each piece of tag data; and
a TTS electronic book generating module, configured to generate the TTS electronic book according to the text content, the plurality of pieces of tag data, and the corresponding audio data.
13. A terminal device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another via the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the electronic book-based audio playing method according to any one of claims 1-6.
CN201710209671.2A 2017-03-31 2017-03-31 Audio playing method and device based on electronic book and terminal equipment Active CN106960051B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710209671.2A CN106960051B (en) 2017-03-31 2017-03-31 Audio playing method and device based on electronic book and terminal equipment


Publications (2)

Publication Number Publication Date
CN106960051A CN106960051A (en) 2017-07-18
CN106960051B true CN106960051B (en) 2019-12-10

Family

ID=59483203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710209671.2A Active CN106960051B (en) 2017-03-31 2017-03-31 Audio playing method and device based on electronic book and terminal equipment

Country Status (1)

Country Link
CN (1) CN106960051B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107369462B (en) * 2017-07-21 2020-06-26 阿里巴巴(中国)有限公司 Electronic book voice playing method and device and terminal equipment
CN108121758A (en) * 2017-11-16 2018-06-05 五八有限公司 Methods of exhibiting, device, equipment and the system of details page
CN108230428B (en) * 2017-12-29 2019-02-01 掌阅科技股份有限公司 E-book rendering method, electronic equipment and storage medium based on augmented reality
CN108053696A (en) * 2018-01-04 2018-05-18 广州阿里巴巴文学信息技术有限公司 A kind of method, apparatus and terminal device that sound broadcasting is carried out according to reading content
CN110532213A (en) * 2018-05-23 2019-12-03 广州阿里巴巴文学信息技术有限公司 Rendering method, device and the equipment of e-book
CN108877764B (en) * 2018-06-28 2019-06-07 掌阅科技股份有限公司 Audio synthetic method, electronic equipment and the computer storage medium of talking e-book
CN110032355B (en) * 2018-12-24 2022-05-17 阿里巴巴集团控股有限公司 Voice playing method and device, terminal equipment and computer storage medium
CN109726308A (en) * 2018-12-27 2019-05-07 上海连尚网络科技有限公司 A kind of method and apparatus for the background music generating novel
CN110362744B (en) * 2019-06-26 2023-10-24 联通沃悦读科技文化有限公司 Reading recommendation method and system, terminal equipment, computer equipment and medium
CN111008287B (en) * 2019-12-19 2023-08-04 Oppo(重庆)智能科技有限公司 Audio and video processing method and device, server and storage medium
CN111125314B (en) * 2019-12-25 2020-11-10 掌阅科技股份有限公司 Display method of book query page, electronic device and computer storage medium
CN111459446B (en) * 2020-03-27 2021-08-17 掌阅科技股份有限公司 Resource processing method of electronic book, computing equipment and computer storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102136199A (en) * 2011-03-10 2011-07-27 刘超 On-line electronic book reader and on-line electronic book editor
CN102687182A (en) * 2009-11-10 2012-09-19 杜尔塞塔有限公司 Dynamic audio playback of soundtracks for electronic visual works
CN105095422A (en) * 2015-07-15 2015-11-25 百度在线网络技术(北京)有限公司 Multimedia display method and device and talking pen
CN105335455A (en) * 2015-08-28 2016-02-17 广东小天才科技有限公司 Text reading method and apparatus


Also Published As

Publication number Publication date
CN106960051A (en) 2017-07-18

Similar Documents

Publication Publication Date Title
CN106960051B (en) Audio playing method and device based on electronic book and terminal equipment
CN108986826A (en) Automatically generate method, electronic device and the readable storage medium storing program for executing of minutes
CN109754783B (en) Method and apparatus for determining boundaries of audio sentences
CN105245917A (en) System and method for generating multimedia voice caption
CN108242238B (en) Audio file generation method and device and terminal equipment
CN104980790B (en) The generation method and device of voice subtitle, playing method and device
WO2021259300A1 (en) Sound effect adding method and apparatus, storage medium, and electronic device
US20170092277A1 (en) Search and Access System for Media Content Files
CN103053173B (en) Interest interval determines that device, interest interval determine that method and interest interval determine integrated circuit
US9666211B2 (en) Information processing apparatus, information processing method, display control apparatus, and display control method
CN104994404A (en) Method and device for obtaining keywords for video
CN111046226B (en) Tuning method and device for music
CN106550268B (en) Video processing method and video processing device
CN110312161B (en) Video dubbing method and device and terminal equipment
US8069044B1 (en) Content matching using phoneme comparison and scoring
CN110555117B (en) Data processing method and device and electronic equipment
CN111243618B (en) Method, device and electronic equipment for determining specific voice fragments in audio
WO2023029984A1 (en) Video generation method and apparatus, terminal, server, and storage medium
CN109410972A (en) Generate the method, apparatus and storage medium of sound effect parameters
CN104978404B (en) A kind of generation method and device of video album title
Hazel et al. Transcription linking software: integrating the ephemeral and the fixed in interaction research
CN114520931A (en) Video generation method and device, electronic equipment and readable storage medium
CN111462736B (en) Image generation method and device based on voice and electronic equipment
US20150220629A1 (en) Sound Melody as Web Search Query
CN112135201A (en) Video production method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant