CN110717066A - Intelligent searching method based on audio electronic book and electronic equipment


Info

Publication number
CN110717066A
Authority
CN
China
Prior art keywords
query result
full
target object
information
electronic book
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910965391.3A
Other languages
Chinese (zh)
Inventor
刘洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ireader Technology Co Ltd
Zhangyue Technology Co Ltd
Original Assignee
Zhangyue Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhangyue Technology Co Ltd
Priority to CN201910965391.3A
Publication of CN110717066A
Legal status: Pending

Classifications

    • G06F16/635 - Information retrieval of audio data; querying; filtering based on additional data, e.g. user or group profiles
    • G06F16/638 - Information retrieval of audio data; querying; presentation of query results
    • G06F16/686 - Information retrieval of audio data; retrieval characterised by using manually generated metadata, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings
    • G10L15/08 - Speech recognition; speech classification or search
    • G10L15/22 - Speech recognition; procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/088 - Speech classification or search; word spotting
    • G10L2015/223 - Procedures used during a speech recognition process; execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Library & Information Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an intelligent searching method based on an audio electronic book, and an electronic device. The method includes: during playback of the audio electronic book, when an intelligent search instruction is received, determining a target object corresponding to the intelligent search instruction; obtaining an object query result matched with the target object; and displaying the object query result matched with the target object in a search result page. In this way, questions and confusion that arise while the user listens can be answered and resolved, improving the user's reading quality.

Description

Intelligent searching method based on audio electronic book and electronic equipment
Technical Field
The invention relates to the field of computers, and in particular to an intelligent searching method based on an audio electronic book, and a corresponding electronic device.
Background
Currently, electronic book applications are increasingly widespread. To make reading more convenient, many e-book applications have introduced audio electronic books, converting traditional "reading" into "listening to a book". Through an audio electronic book, a user can listen directly to the audio content corresponding to the text of the electronic book, which protects the user's eyesight, prevents visual fatigue, and lets the user acquire knowledge in the many situations where reading is inconvenient.
However, the above prior-art solutions have at least the following drawback: existing audio electronic books cannot interact with users during listening, and most users perceive audio less readily than text, so misunderstanding the context easily causes confusion while listening. An existing audio electronic book therefore cannot answer the questions a user forms while listening, which degrades the user's reading quality.
Disclosure of Invention
In view of the above, the present invention provides an intelligent searching method based on an audio electronic book, and an electronic device, which overcome or at least partially solve the above problems.
According to an aspect of the present invention, there is provided an intelligent search method based on an audio electronic book, including:
during playback of the audio electronic book, when an intelligent search instruction is received, determining a target object corresponding to the intelligent search instruction;
obtaining an object query result matched with the target object;
and displaying the object query result matched with the target object in a search result page.
According to another aspect of the present invention, there is provided an electronic apparatus including: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to:
during playback of the audio electronic book, when an intelligent search instruction is received, determining a target object corresponding to the intelligent search instruction;
obtaining an object query result matched with the target object;
and displaying the object query result matched with the target object in a search result page.
According to yet another aspect of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to:
during playback of the audio electronic book, when an intelligent search instruction is received, determining a target object corresponding to the intelligent search instruction;
obtaining an object query result matched with the target object;
and displaying the object query result matched with the target object in a search result page.
According to the intelligent searching method and electronic device based on the audio electronic book, an intelligent search instruction can be received during playback of the audio electronic book, the target object corresponding to the instruction is determined, and the object query result matched with the target object is then obtained and displayed. The object query result corresponding to the target object can thus be obtained through the intelligent search instruction, helping the user understand the current content. In this way, questions and confusion that arise while the user listens can be answered and resolved, improving the user's reading quality.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flowchart illustrating an intelligent searching method based on an audio electronic book according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an intelligent searching method based on an audio electronic book according to another embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an electronic device according to another embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Example one
FIG. 1 shows a flowchart of an intelligent searching method based on an audio electronic book according to an embodiment of the present invention. As shown in FIG. 1, the method comprises the following steps:
step S110: in the playing process of the sound electronic book, when an intelligent searching instruction is received, a target object corresponding to the intelligent searching instruction is determined.
The intelligent search instruction may be a voice instruction, or an instruction triggered through a preset search entry; the invention does not limit its specific form. The target object corresponding to the intelligent search instruction can likewise be determined flexibly, in various ways. For example, when the intelligent search instruction is a voice instruction, semantic recognition may be performed on the voice instruction and the target object determined from the recognition result. As another example, when the intelligent search instruction is triggered through a preset search entry, the content the audio electronic book was playing at the moment of triggering is determined from the trigger time of the instruction, and the target object is then extracted from that content.
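The two determination paths just described might be sketched as follows. This is a minimal illustration: the `SearchInstruction` structure, the `extract_entity` helper, and the time-indexed object map are assumptions, since the patent does not specify data structures.

```python
from dataclasses import dataclass

@dataclass
class SearchInstruction:
    kind: str                  # "voice" or "search_entry"
    speech_text: str = ""      # recognized text of a voice instruction
    trigger_time: float = 0.0  # playback position (seconds) when triggered

# Hypothetical stand-in for real entity extraction (NER / semantic parsing).
KNOWN_OBJECTS = ["Zhang San", "Li Si"]

def extract_entity(text):
    return next((name for name in KNOWN_OBJECTS if name in text), None)

def determine_target_object(instr, time_to_object):
    """Step S110: resolve the target object of an intelligent search instruction.

    time_to_object maps (start_sec, end_sec) playback ranges to the object
    being discussed there, assumed to have been analyzed in advance."""
    if instr.kind == "voice":
        # Voice instruction: semantic recognition of the spoken query.
        return extract_entity(instr.speech_text)
    # Search-entry trigger: use the trigger time to find the playing content.
    for (start, end), obj in time_to_object.items():
        if start <= instr.trigger_time < end:
            return obj
    return None
```

For instance, `determine_target_object(SearchInstruction("voice", speech_text="who is Zhang San"), {})` would yield "Zhang San".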
Step S120: acquiring an object query result matched with the target object.
The object query result matched with the target object is used to explain the target object, and the object query result may be in a text form, or may be in a picture or audio form, which is not limited in this invention. In specific implementation, the object query result may be extracted directly from the audio e-book, or extracted from a text e-book or a comic e-book corresponding to the audio e-book, or even extracted according to information such as user comments, author answers, and the like, which is not limited in the present invention.
Step S130: displaying the object query result matched with the target object in the search result page.
The search result page may be a page overlaid on the audio playing interface as a floating layer or the like, or may be a local page area within the audio playing interface. In addition, while the object query result is displayed, the audio content being played may be paused or may continue playing normally; the invention does not limit this.
In this way, the object query result corresponding to the target object can be obtained through the intelligent search instruction, helping the user understand the current content. Questions and confusion that arise while the user listens can thus be answered and resolved, improving the user's reading quality.
Example two
FIG. 2 is a flowchart illustrating an intelligent searching method based on an audio electronic book according to another embodiment of the present invention. As shown in FIG. 2, the method comprises the following steps:
step S210: target objects contained in a talking electronic book are analyzed in advance.
A target object is representative content in the electronic book that is of interest to the user, for example a character, an event, or an action. In a specific implementation, the target objects contained in the electronic book can be extracted as follows:
firstly, text information corresponding to the sound electronic book is obtained, and a plurality of target keywords contained in the text information are identified. For the audio e-book with the text, the text information corresponding to the audio e-book can be directly determined according to the text of the e-book; for the audio e-book without the original text, the text information corresponding to the audio e-book can be obtained by performing speech recognition on the speech content played in the current time period. In specific implementation, word segmentation is performed on text information corresponding to the audio electronic book, part-of-speech recognition is performed on each word obtained after word segmentation, and nouns, actions and the like in the words are selected as target keywords according to a part-of-speech recognition result. Wherein, the target keyword includes: a person name type keyword, an event type keyword, and/or an action type keyword.
Then, target objects are extracted from the target keywords according to the occurrence frequency of each target keyword in the audio electronic book and/or the user interaction data corresponding to each target keyword. Since the text information generally contains a large number of target keywords, the keywords of interest to the user can be filtered out as target objects in at least one of the following ways:
In the first mode, the occurrence frequency of each target keyword in the audio electronic book is counted, and target objects are extracted according to that frequency. For example, the target keywords with higher occurrence frequency are extracted as the target objects for the corresponding time period. Keywords that occur frequently are usually important content in the electronic book, so screening them as target objects helps improve the reading experience.
In the second mode, the interaction frequency of the user interaction data corresponding to each target keyword is counted, and target objects are extracted according to the interaction frequency. For example, the target keywords with higher interaction frequency are determined to be target objects. Extracting frequently interacted-with keywords as target objects helps ensure that what gets searched is content the user cares about.
In the third mode, the interaction type of the user interaction data corresponding to each target keyword is determined, and the target keywords whose user interaction data is of a preset interaction type are extracted as target objects. Interaction types include a comment type, a search type, a note type, a sharing type, a collection type, and the like; the preset interaction types may include those that reflect user interest, such as the search type, comment type, sharing type, and collection type. Because the preset interaction types reflect what users find interesting, target objects extracted in this way are generally content users like, and the search results provided for such content better meet user needs. Preferably, the preset interaction type is set to the search type, so as to target the content most users frequently search for.
The three modes described above may be used alone or in combination. For example, different weights may be set in advance for the various interaction types; the interaction counts of each target keyword are then weighted by interaction type to obtain a weighted score, and target objects are selected according to the magnitude of that score.
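A sketch of this combined scoring follows; the weights are illustrative assumptions, not values from the patent.

```python
# Hypothetical per-type weights; "deeper" interactions count more.
INTERACTION_WEIGHTS = {"search": 3.0, "comment": 2.0, "share": 2.0,
                       "collect": 1.5, "note": 1.0}

def select_target_objects(occurrence_counts, interactions_by_keyword, top_k=10):
    """Score each keyword by occurrence frequency plus its weighted
    interaction counts, and keep the top_k scorers as target objects.

    occurrence_counts: dict keyword -> occurrences in the book.
    interactions_by_keyword: dict keyword -> {interaction_type: count}.
    """
    scores = {}
    for kw, freq in occurrence_counts.items():
        weighted = sum(INTERACTION_WEIGHTS.get(kind, 0.0) * n
                       for kind, n in interactions_by_keyword.get(kw, {}).items())
        scores[kw] = freq + weighted
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```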
The target objects in an audio electronic book generally include character entities, animal and plant entities, and/or non-biological entities, etc. When the target object is a character entity, note that some audio electronic books consist of multiple audio segments read by different narrators, each corresponding to a different character entity; in that case, the character entity for each time segment of the audio electronic book can be determined from the audio attribute information of that segment.
Step S220: configuring a corresponding full-text search query result for each target object according to the analysis results, and storing each target object in association with its configured full-text search query result in a database.
Wherein the full-text search query results include: text type query results, image type query results, and/or audio type query results.
In one implementation of this embodiment, the full-text search query result is an audio-type query result, specifically the audio segments corresponding to the target object in the audio electronic book. In a specific implementation, the audio-type query result is determined as follows: the audio information corresponding to each occurrence of the target object in the audio electronic book is acquired; then, the audio segments contained in the audio-type query result are extracted from this audio information according to the full-text order of appearance of each piece of audio information and/or the user interaction data corresponding to it. For example, when the target object is a preset person name, the audio information for each occurrence of that name in the audio electronic book is acquired first, so that all audio content in the book referring to the name is extracted. The extracted audio information is then screened to obtain the audio-type query result. Because the amount of audio information containing the name is often large, the important parts can be extracted according to the full-text order of appearance: for example, the first occurrence is extracted (an author usually gives the key description of a character at the character's first appearance), or the first N occurrences are extracted, where N is a natural number, for example 3. The extraction can also be based on the user interaction data corresponding to each piece of audio information, specifically the number and/or type of interactions; interaction types include comment, search, collection, and the like, and audio information the user is interested in can be extracted as the audio-type query result according to these types. Accordingly, in this embodiment several pieces of audio information containing the preset person name are extracted as the object query result for the target object, so the user can quickly get to know the character, which provides guidance for subsequent reading.
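A sketch of this screening, under the assumption that each occurrence has already been located as an (order, audio segment) pair:

```python
def screen_audio_occurrences(occurrences, n=3, interaction_counts=None):
    """occurrences: list of (full_text_order, audio_segment) pairs for every
    mention of the target object, e.g. a preset person name.

    Default rule: keep the first n occurrences, since an author usually
    gives the key description when a character first appears. If
    interaction_counts (order -> count) is supplied, keep the n segments
    users interacted with most instead."""
    if interaction_counts:
        ranked = sorted(occurrences,
                        key=lambda p: interaction_counts.get(p[0], 0),
                        reverse=True)
    else:
        ranked = sorted(occurrences, key=lambda p: p[0])
    return [segment for _, segment in ranked[:n]]
```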
In another implementation of this embodiment, the full-text search query result is a text-type query result, specifically the text segments for the target object in the text information corresponding to the audio electronic book. In a specific implementation, the text-type query result is determined as follows: the text content corresponding to each occurrence of the target object in the text information is acquired; then, the text segments contained in the text-type query result are extracted from these text contents according to the full-text order of appearance of each text content and/or the user interaction data corresponding to it. The specific implementation is similar to that of the audio-type query result and is not described again here.
In yet another implementation of this embodiment, the full-text search query result is an image-type query result; correspondingly, object image information is configured in advance for each target object to serve as its image-type query result. For example, when the target object is a character, the object image information may be a photograph of the actor portraying the character or a cartoon/animation image; when the target object is an animal, it may be a corresponding animal photograph or cartoon picture.
The three implementations described above can be used alone or in combination. For example, a target object may be configured with a full-text search query result that matches its object type. Correspondingly, a mapping between object types and result types of the full-text search query result is established in advance, and the full-text search query result for each target object is configured based on that mapping. Object types can be divided in various ways: by category, such as characters, animals, events, and actions; or by importance. In a preferred implementation, object types are divided into three categories: characters, events, and non-character entities. Correspondingly, audio-type query results, such as character dialogue, are configured for character objects; text-type query results are configured for event objects, so as to comprehensively present the event background; and image-type query results are configured for non-character entity objects, so as to present them visually.
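The preferred type-to-result mapping might be configured as simply as the following sketch; the type names are assumptions.

```python
# Illustrative mapping from object type to the configured result type.
TYPE_TO_RESULT_TYPE = {
    "character": "audio",  # e.g. the character's dialogue segments
    "event": "text",       # comprehensive textual event background
    "entity": "image",     # visual presentation of non-character entities
}

def result_type_for(object_type):
    """Look up the result type configured for an object type."""
    return TYPE_TO_RESULT_TYPE.get(object_type, "text")  # default to text
```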
Step S230: during playback of the audio electronic book, when an intelligent search instruction is received, a target object corresponding to the intelligent search instruction is determined.
In one implementation, the intelligent search instruction is a voice evoking instruction; for example, the user may trigger the intelligent search instruction with a preset evoking word. The intelligent search instruction also contains the voice search content spoken by the user. Correspondingly, the voice search content contained in the voice evoking instruction (apart from the evoking word) is recognized, and the target object corresponding to the intelligent search instruction is determined from the recognition result.
In yet another implementation, the intelligent search instruction is a search instruction triggered through a search entry. The search entry may be implemented as a search button, a search hotspot, or the like in the interface. Correspondingly, after the search instruction is received, the e-book playing content corresponding to the intelligent search instruction is obtained, and the target object is determined from that content. That is, in this manner, the content the audio electronic book was playing when the search instruction was triggered is acquired, and the target object is determined based on that content. For example, if the audio electronic book is playing a dialogue about character A when the user triggers the search instruction, character A is determined to be the target object.
The e-book playing content corresponding to the intelligent search instruction, and the target object determined from it, can be obtained through speech recognition; alternatively, the target object corresponding to each time point of the audio electronic book can be analyzed in advance, and the mapping between each time point and its corresponding target object stored in advance.
Step S240: acquiring an object query result matched with the target object. Specifically, the full-text search query result matched with the target object is obtained first, for example by querying the database for the full-text search query result stored in association with the target object. A local query result is then extracted from the full-text search query result, and that local query result is determined to be the object query result matched with the target object.
Specifically, this step may be achieved in at least one of the following ways:
In one implementation, the top N query results are extracted from the full-text search query result as the local query result, where N is a natural number; for example, the top 3. The value of N may further be determined from the current playing progress: for example, if the current playing progress is one third of the full book and the full-text search query result contains 30 items, the top 10 items are extracted as the local query result, so that the local query result matches the current playing progress.
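For instance, this progress-proportional variant could be sketched as:

```python
import math

def progress_matched_top_n(full_results, progress_fraction):
    """Keep roughly the first progress_fraction of the full-text results:
    with 30 results and one third of the book played, the top 10."""
    n = math.ceil(len(full_results) * progress_fraction)
    return full_results[:n]
```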
In yet another implementation, several non-adjacent query results are extracted from the full-text search query result at a preset interval as the local query result. For example, starting from the first full-text search query result, every M-th query result is extracted, where M is a natural number. When M is 1, one local query result is extracted every other item, i.e. the 1st, 3rd, 5th, and so on, so the query results for the target object are presented in the order the timeline unfolds.
In yet another implementation, based on the user interaction data corresponding to the full-text search query result, the query results whose interaction count exceeds a preset number and/or whose interaction type belongs to a preset type are extracted as the local query result. For example, showing the results with the most interactions, or those with deep interaction types such as comments and searches, makes it easy to surface the climactic passages associated with the target object.
Preferably, in this step the local query result is extracted from the full-text search query result according to the current playing progress information; this preferred mode may be used alone or combined with at least one of the modes above. The local query result then contains no information corresponding to the unplayed part of the audio electronic book. In this embodiment, extracting a local query result that matches the current playing progress avoids telling the user about subsequent plot developments, which would dampen the user's enthusiasm for reading and cause unnecessary annoyance.
In a specific implementation, the full-text search query result includes a plurality of content segments carrying time paragraph information. Correspondingly, the content segment matched with the current playing progress information is determined according to the time paragraph information of each content segment, and the local query result is extracted from the full-text search query result according to that matched segment. The time paragraph information includes timeline information and/or chapter paragraph information.

Specifically, the current time paragraph corresponding to the current playing progress information is determined; the difference between the time paragraph information of each content segment in the full-text search query result and the current time paragraph is determined; and the content segments whose difference does not exceed a preset range are determined to be the content segments matched with the current playing progress information. The current time paragraph is simply the time point or paragraph corresponding to the current playing progress: for example, if playback has reached 17 minutes 30 seconds, the current time paragraph is 17 minutes 30 seconds; if playback is in chapter 3, section 2, the current time paragraph is chapter 3, section 2.

When determining these differences, the time paragraph information of each content segment is first matched against the current time paragraph to find segments whose time paragraph information coincides with it, and any such segment is taken as the segment matched with the current playing progress information. The time paragraph information of the content segments may not coincide exactly with the current time paragraph, however. Suppose the current time paragraph is 17 minutes 30 seconds and the full-text search query result contains two content segments, at 17 minutes 40 seconds and 17 minutes 28 seconds respectively; following the principle that the difference must lie within the preset range and should be minimal, the segment at 17 minutes 28 seconds is selected as the one matched with the current playing progress information. Correspondingly, the local query result is determined from the content segments preceding the matched segment; it may be determined from those preceding segments alone, or may additionally include the matched segment itself, which the invention does not limit.
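The timeline variant of this matching might be sketched as follows; representing segment timestamps in seconds is an assumption.

```python
def extract_local_query_result(content_segments, current_sec, preset_range=30.0):
    """content_segments: list of (time_sec, segment) in full-text order.

    Find the segment whose time paragraph differs least from the current
    playing position (and by no more than preset_range), then keep the
    segments up to and including it, so nothing from the unplayed part
    of the book is revealed."""
    diffs = [(abs(t - current_sec), i)
             for i, (t, _) in enumerate(content_segments)
             if abs(t - current_sec) <= preset_range]
    if not diffs:
        # No segment near the current position: keep only played content.
        return [seg for t, seg in content_segments if t <= current_sec]
    _, matched_idx = min(diffs)
    # Per the description, the matched segment itself may be included or not.
    return [seg for _, seg in content_segments[:matched_idx + 1]]
```

With the example above (current position 17:30, segments at 17:28 and 17:40), the 17:28 segment wins with a 2-second difference and everything up to it is returned.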
The operation of extracting the local query result from the full-text search query result may be performed in real time in this step, or may be performed in advance in step S210: for example, the local query result corresponding to the target object at each time point is analyzed beforehand, and the mapping between the time period in which the target object appears and the corresponding local query result is stored, so that the object query result can later be determined quickly from the pre-stored mapping.
Step S250: displaying the object query result matched with the target object in the search result page.
The search result page may be a page overlaid on the audio playing interface as a floating layer or the like, or may be a local page area within the audio playing interface. In addition, while the object query result is displayed, the audio content being played may be paused or may continue playing normally; the invention does not limit this.
In addition, the object image information in this embodiment may take various forms, such as still picture information, moving picture information, animation model information, and/or video information. For example, still pictures, moving pictures, and animation models may each be configured for different target objects in the same audio electronic book, depending on the specific scene. In this embodiment, to diversify the presentation, the information types of the object image information include at least two of: a still picture type, a moving picture type, and an animation model type. Correspondingly, the target objects in the audio electronic book are divided in advance into at least two object types, and each object type is configured with a matching information type. For example, the target objects may be divided into first-class, second-class, and third-class objects. First-class objects are the most important, usually the main characters or key described objects of the electronic book; accordingly, they are configured with animation-model-type object image information, and the animation model can be customized as needed for the best display effect. Second-class objects are slightly less important, typically secondary characters, and are configured with moving-picture-type object image information. Third-class objects are the least important, usually minor supporting roles, and are configured with still-picture-type object image information, which saves data traffic and reduces transmission delay. In short, configuring different types of object image information for different types of target objects highlights the important characters and meets users' differing search needs for roles of different levels.
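An illustrative sketch of this tiered configuration; the tier numbers and type names are assumptions.

```python
# Richer image formats for more important roles; cheaper ones save traffic.
TIER_TO_IMAGE_TYPE = {
    1: "animation_model",  # main characters / key described objects
    2: "moving_picture",   # secondary characters
    3: "still_picture",    # minor supporting roles
}

def image_type_for(tier):
    """Return the object-image information type configured for a tier."""
    return TIER_TO_IMAGE_TYPE.get(tier, "still_picture")
```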
Those skilled in the art may also make various modifications and variations to the above steps, for example, the steps may be combined into fewer steps or split into more steps, and the execution sequence of each step may also be adjusted, which is not limited by the present invention.
In addition, the full-text search query result configured for each target object can be determined directly from the audio content (or its speech-to-text transcription) of the audio electronic book, and can further be determined in combination with other types of electronic book corresponding to it, such as a text electronic book or a comic electronic book. For the same copyrighted content, several types of electronic book are often published at once: besides the audio electronic book that is convenient to listen to, there may be a text electronic book with fine, comprehensive textual description and a comic electronic book with rich artwork; the subject matter is the same, but the forms of presentation differ. A text electronic book, for example, describes events very comprehensively, so text-type query results may be extracted from it; a comic electronic book depicts character images very vividly, so image-type query results may be extracted from it. Content is thereby shared among the various types of electronic book, expanding what the user can read and improving reading efficiency.
It can be seen that, in this embodiment, the object query result includes at least one of the following: a text object query result determined from the text electronic book corresponding to the audio electronic book, an image object query result determined from the image electronic book (such as a comic electronic book) corresponding to the audio electronic book, and an audio object query result determined from the audio segments in the audio electronic book itself. Correspondingly, when the object query result matched with the target object is displayed in the search result page, a jump entry corresponding to the object query result is displayed alongside it; the jump entry is used to jump to the context information corresponding to the object query result in the relevant electronic book. For example, when the object query result is a text object query result, the jump entry jumps to the corresponding context position in the text electronic book; when it is an image object query result, the jump entry jumps to the corresponding context position in the image electronic book; and when it is an audio object query result, the jump entry jumps to the corresponding context position in the audio electronic book so as to play the context of the result.
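A sketch of a query result carrying its jump entry; the field names are assumptions, and `open_book` stands for the reader app's navigation callback.

```python
from dataclasses import dataclass

@dataclass
class ObjectQueryResult:
    kind: str        # "text", "image", or "audio"
    content: str     # payload rendered on the search result page
    book_id: str     # the corresponding e-book the jump entry opens
    position: float  # context position in that book (time or paragraph index)

def follow_jump_entry(result, open_book):
    """Jump to the context of a result: open the corresponding e-book
    (text, comic, or the audiobook itself) at the recorded position."""
    open_book(result.book_id, result.position)
```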
In summary, in the above manner of the present invention, the object query result corresponding to the target object can be obtained through the intelligent search instruction and determined according to the current playing progress information, which helps the user understand the current content. Questions and confusion that arise while the user listens can thus be answered and resolved, improving the user's reading quality, and the object query result can be presented in various forms.
Example three
An embodiment of the application provides a non-volatile computer storage medium, where the computer storage medium stores at least one executable instruction, and the executable instruction can perform the audio-electronic-book-based intelligent search method of any of the above method embodiments.
The executable instructions may be specifically configured to cause the processor to:
during playback of the audio electronic book, when an intelligent search instruction is received, determining a target object corresponding to the intelligent search instruction;
obtaining an object query result matched with the target object;
and displaying the object query result matched with the target object in a search result page.
In an alternative, the executable instructions cause the processor to:
acquiring a full-text search query result matched with the target object;
extracting local query results from the full-text search query results;
and determining the local query result as the object query result matched with the target object.
In an alternative, the executable instructions cause the processor to:
extracting the first N query results from the full-text search query results as local query results, wherein N is a natural number; and/or,
extracting a plurality of non-adjacent query results from the full-text search query results at a preset interval as local query results; and/or,
and extracting the query result with the interaction times larger than the preset times and/or the interaction type belonging to the preset type from the full-text search query result as a local query result according to the user interaction data corresponding to the full-text search query result.
In an alternative, the executable instructions cause the processor to:
extracting a local query result from the full-text search query result according to the current playing progress information;
and the local query result does not contain information content corresponding to the unplayed part of the audio electronic book.
In an optional manner, the full-text search query result includes a plurality of content segments carrying time paragraph information;
the executable instructions cause the processor to:
determining the content segments matched with the current playing progress information according to the time segment information of each content segment;
extracting a local query result from the full-text search query result according to the content segment matched with the current playing progress information;
wherein the time section information includes: timeline information, and/or chapter section information.
In an alternative, the executable instructions cause the processor to:
determining a current time segment corresponding to the current playing progress information;
determining a difference between the temporal segment information of each content segment contained in the full-text search query result and the current temporal segment;
and determining the content segment with the difference value not larger than the preset range as the content segment matched with the current playing progress information.
In an alternative, the executable instructions cause the processor to:
target objects contained in the audio electronic book are analyzed in advance, and corresponding full-text search query results are configured for all the target objects according to analysis results;
storing each target object and a full text search query result configured for each target object in a database in a correlated manner;
then the obtaining full-text search query results that match the target object includes: and querying and obtaining full-text search query results stored in association with the target object from the database.
In an alternative, the executable instructions cause the processor to:
acquiring text information corresponding to the audio electronic book, and identifying a plurality of target keywords contained in the text information;
extracting target objects from a plurality of target keywords according to the occurrence frequency of each target keyword in the audio electronic book and/or user interaction data corresponding to each target keyword;
wherein the target keywords comprise: a person name type keyword, an event type keyword, and/or an action type keyword.
In an alternative approach, the full-text search query results include: text type query results, image type query results and/or audio type query results;
wherein the audio query result comprises: and the target object corresponds to an audio segment in the audio electronic book.
In an alternative, the executable instructions cause the processor to:
acquiring audio information corresponding to the target object each time the target object appears in the audio electronic book;
and extracting each audio segment contained in the audio query result from the plurality of audio information according to the full-text appearance sequence corresponding to each audio information and/or the user interaction data corresponding to each audio information.
In an alternative, the intelligent search instruction includes: a voice evoking instruction, and/or a search instruction triggered through a search entry;
the executable instructions cause the processor to:
recognizing voice search content contained in the voice evoking instruction, and determining a target object corresponding to the intelligent search instruction according to a recognition result; and/or,
and acquiring electronic book playing content corresponding to the intelligent search instruction, and determining a target object corresponding to the intelligent search instruction according to the electronic book playing content.
In an alternative approach, the object query result includes at least one of: a text object query result determined from a text electronic book corresponding to the audio electronic book, an image object query result determined from an image electronic book corresponding to the audio electronic book, and an audio object query result determined from the audio segments in the audio electronic book;
the executable instructions cause the processor to:
displaying an object query result matched with the target object and a jump entry corresponding to the object query result in a search result page; and the jump entry is used for jumping to the context information corresponding to the object query result in the electronic book.
Example four
Fig. 3 is a schematic structural diagram of an electronic device according to another embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
As shown in fig. 3, the electronic device may include: a processor 302, a communication interface 304, a memory 306, and a communication bus 308.
The processor 302, the communication interface 304, and the memory 306 communicate with one another via the communication bus 308. The communication interface 304 communicates with network elements of other devices, such as clients or other servers. The processor 302 is configured to execute the program 310, and may specifically perform the relevant steps of the audio-electronic-book-based intelligent search method embodiments above.
In particular, program 310 may include program code comprising computer operating instructions.
The processor 302 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention. The electronic device comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 306 is configured to store the program 310. The memory 306 may comprise high-speed RAM memory and may also include non-volatile memory, such as at least one disk memory.
The program 310 may specifically be configured to cause the processor 302 to perform the following operations:
during playback of the audio electronic book, when an intelligent search instruction is received, determining a target object corresponding to the intelligent search instruction;
obtaining an object query result matched with the target object;
and displaying the object query result matched with the target object in a search result page.
In an alternative, the executable instructions cause the processor to:
acquiring a full-text search query result matched with the target object;
extracting local query results from the full-text search query results;
and determining the local query result as the object query result matched with the target object.
In an alternative, the executable instructions cause the processor to:
extracting the first N query results from the full-text search query results as local query results, wherein N is a natural number; and/or,
extracting a plurality of non-adjacent query results from the full-text search query results at a preset interval as local query results; and/or,
and extracting the query result with the interaction times larger than the preset times and/or the interaction type belonging to the preset type from the full-text search query result as a local query result according to the user interaction data corresponding to the full-text search query result.
In an alternative, the executable instructions cause the processor to:
extracting a local query result from the full-text search query result according to the current playing progress information;
and the local query result does not contain information content corresponding to the unplayed part of the audio electronic book.
In an optional manner, the full-text search query result includes a plurality of content segments carrying time paragraph information;
the executable instructions cause the processor to:
determining the content segments matched with the current playing progress information according to the time segment information of each content segment;
extracting a local query result from the full-text search query result according to the content segment matched with the current playing progress information;
wherein the time section information includes: timeline information, and/or chapter section information.
In an alternative, the executable instructions cause the processor to:
determining a current time segment corresponding to the current playing progress information;
determining a difference between the temporal segment information of each content segment contained in the full-text search query result and the current temporal segment;
and determining the content segment with the difference value not larger than the preset range as the content segment matched with the current playing progress information.
In an alternative, the executable instructions cause the processor to:
target objects contained in the audio electronic book are analyzed in advance, and corresponding full-text search query results are configured for all the target objects according to analysis results;
storing each target object and a full text search query result configured for each target object in a database in a correlated manner;
then the obtaining full-text search query results that match the target object includes: and querying and obtaining full-text search query results stored in association with the target object from the database.
In an alternative, the executable instructions cause the processor to:
acquiring text information corresponding to the audio electronic book, and identifying a plurality of target keywords contained in the text information;
extracting target objects from a plurality of target keywords according to the occurrence frequency of each target keyword in the audio electronic book and/or user interaction data corresponding to each target keyword;
wherein the target keywords comprise: a person name type keyword, an event type keyword, and/or an action type keyword.
In an alternative approach, the full-text search query results include: text type query results, image type query results and/or audio type query results;
wherein the audio query result comprises: and the target object corresponds to an audio segment in the audio electronic book.
In an alternative, the executable instructions cause the processor to:
acquiring audio information corresponding to the target object each time the target object appears in the audio electronic book;
and extracting each audio segment contained in the audio query result from the plurality of audio information according to the full-text appearance sequence corresponding to each audio information and/or the user interaction data corresponding to each audio information.
In an alternative, the intelligent search instruction includes: a voice evoking instruction, and/or a search instruction triggered through a search entry;
the executable instructions cause the processor to:
recognizing voice search content contained in the voice evoking instruction, and determining a target object corresponding to the intelligent search instruction according to a recognition result; and/or,
and acquiring electronic book playing content corresponding to the intelligent search instruction, and determining a target object corresponding to the intelligent search instruction according to the electronic book playing content.
In an alternative approach, the object query result includes at least one of: a text object query result determined from a text electronic book corresponding to the audio electronic book, an image object query result determined from an image electronic book corresponding to the audio electronic book, and an audio object query result determined from the audio segments in the audio electronic book;
the executable instructions cause the processor to:
displaying an object query result matched with the target object and a jump entry corresponding to the object query result in a search result page; and the jump entry is used for jumping to the context information corresponding to the object query result in the electronic book.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, etc., does not indicate any ordering; these words may be interpreted as names.
The invention also discloses A1. An intelligent searching method based on an audio electronic book, comprising the following steps:
during playback of the audio electronic book, when an intelligent search instruction is received, determining a target object corresponding to the intelligent search instruction;
obtaining an object query result matched with the target object;
and displaying the object query result matched with the target object in a search result page (a flow sketch follows).
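By way of non-limiting illustration, a minimal Python sketch of the A1 flow follows. Every identifier in it (resolve_target_object, fetch_object_query_result, render_search_result_page) is a hypothetical placeholder invented for this sketch, not an API named by the disclosure.

def resolve_target_object(instruction: dict, playback_s: float) -> str:
    # Prefer explicit search text carried by the instruction; otherwise
    # fall back to the content playing at the current position (cf. A11).
    return instruction.get("query") or f"object@{playback_s:.0f}s"

def fetch_object_query_result(target: str) -> list:
    # Placeholder lookup; a real system would assemble the text, image,
    # and/or audio results configured for the target (cf. A2, A9, A12).
    return [f"{target}: excerpt {i}" for i in range(1, 4)]

def render_search_result_page(target: str, results: list) -> None:
    # Stand-in for displaying the results in a search result page.
    print(f"Search results for '{target}':")
    for line in results:
        print("  -", line)

# Simulated intelligent search instruction received during playback.
instruction = {"query": "protagonist"}
target = resolve_target_object(instruction, playback_s=125.0)
render_search_result_page(target, fetch_object_query_result(target))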
A2. The method of A1, wherein the obtaining an object query result matched with the target object includes:
acquiring a full-text search query result matched with the target object;
extracting local query results from the full-text search query results;
and determining the local query result as the object query result matched with the target object.
A3. The method of A2, wherein the extracting a local query result from the full-text search query result comprises:
extracting the first N query results from the full-text search query result as the local query result, where N is a natural number; and/or
extracting a plurality of mutually non-adjacent query results from the full-text search query result at a preset interval as the local query result; and/or
and extracting, according to the user interaction data corresponding to the full-text search query result, the query results whose interaction count is larger than a preset count and/or whose interaction type belongs to a preset type as the local query result (these three strategies are sketched below).
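A minimal sketch of the three A3 extraction strategies, assuming a simple list-of-dicts shape for the full-text search query result that the disclosure does not prescribe:

def first_n(results: list, n: int) -> list:
    # Strategy 1: the first N query results become the local query result.
    return results[:n]

def at_interval(results: list, interval: int) -> list:
    # Strategy 2: mutually non-adjacent results taken at a preset interval.
    return results[::interval]

def by_interaction(results: list, min_count: int, preset_types: set) -> list:
    # Strategy 3: results whose interaction count exceeds a preset count
    # and/or whose interaction type belongs to a preset type.
    return [r for r in results
            if r["interactions"] > min_count or r["type"] in preset_types]

hits = [{"text": f"hit {i}", "interactions": i, "type": "note" if i % 2 else "like"}
        for i in range(10)]
print(first_n(hits, 3))                   # hits 0-2
print(at_interval(hits, 3))               # hits 0, 3, 6, 9
print(by_interaction(hits, 7, {"note"}))  # heavily used and/or annotated hits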
A4. The method of A2 or A3, wherein the extracting a local query result from the full-text search query result comprises:
extracting the local query result from the full-text search query result according to the current playing progress information;
wherein the local query result does not contain information content corresponding to the unplayed part of the audio electronic book.
A5. The method according to A4, wherein the full-text search query result includes a plurality of content segments carrying time-paragraph information;
the extracting a local query result from the full-text search query result according to the current playing progress information then comprises:
determining the content segments matched with the current playing progress information according to the time-paragraph information of each content segment;
extracting the local query result from the full-text search query result according to the content segments matched with the current playing progress information;
wherein the time-paragraph information includes: timeline information and/or chapter section information.
A6. The method according to A5, wherein the determining the content segments matched with the current playing progress information comprises:
determining a current time paragraph corresponding to the current playing progress information;
determining the difference between the time-paragraph information of each content segment contained in the full-text search query result and the current time paragraph;
and determining a content segment whose difference does not exceed a preset range as a content segment matched with the current playing progress information (see the matching sketch below).
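A minimal sketch of the A4-A6 matching step, assuming each content segment carries a start time in seconds; the field names, and excluding unplayed content via the sign of the difference, are illustrative assumptions:

def match_segments(segments: list, now_s: float, preset_range_s: float) -> list:
    matched = []
    for seg in segments:
        diff = now_s - seg["start_s"]
        # A negative diff means the segment lies in the unplayed part and
        # is excluded; larger positive diffs fall outside the preset range.
        if 0 <= diff <= preset_range_s:
            matched.append(seg)
    return matched

segments = [{"start_s": 40.0, "text": "early mention"},
            {"start_s": 118.0, "text": "recent mention"},
            {"start_s": 300.0, "text": "unplayed mention"}]
print(match_segments(segments, now_s=125.0, preset_range_s=60.0))
# -> only the "recent mention" segment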
A7. The method according to any one of A1-A6, further comprising, before the method is performed:
analyzing in advance the target objects contained in the audio electronic book, and configuring corresponding full-text search query results for the target objects according to the analysis results;
storing each target object in a database in association with the full-text search query result configured for it;
the obtaining a full-text search query result matched with the target object then comprises: querying the database for the full-text search query result stored in association with the target object (a storage sketch follows).
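A minimal storage sketch for A7 using the standard-library sqlite3 module; the table layout and the JSON encoding of results are assumptions, not part of the disclosure:

import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE object_results (target TEXT PRIMARY KEY, results TEXT)")

def configure(target: str, results: list) -> None:
    # Store the pre-configured full-text search query result for a target.
    conn.execute("INSERT OR REPLACE INTO object_results VALUES (?, ?)",
                 (target, json.dumps(results)))

def lookup(target: str) -> list:
    # Query the result stored in association with the target object.
    row = conn.execute("SELECT results FROM object_results WHERE target = ?",
                       (target,)).fetchone()
    return json.loads(row[0]) if row else []

configure("protagonist", ["chapter 1 excerpt", "chapter 4 excerpt"])
print(lookup("protagonist"))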
A8. The method of A7, wherein the pre-analyzing the target objects contained in the audio electronic book comprises:
acquiring text information corresponding to the audio electronic book, and identifying a plurality of target keywords contained in the text information;
extracting the target objects from the plurality of target keywords according to the occurrence frequency of each target keyword in the audio electronic book and/or the user interaction data corresponding to each target keyword;
wherein the target keywords comprise: person-name keywords, event keywords, and/or action keywords (a frequency-based sketch follows).
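A frequency-based sketch of A8; the whitespace tokenizer, the thresholds, and the interaction counts are illustrative assumptions:

from collections import Counter

def extract_targets(text: str, keywords: set, interactions: dict,
                    min_freq: int, min_inter: int) -> list:
    # Count how often each candidate keyword occurs in the book text.
    counts = Counter(w for w in text.lower().split() if w in keywords)
    # Promote keywords that are frequent and/or heavily interacted with.
    return sorted(k for k in keywords
                  if counts[k] >= min_freq or interactions.get(k, 0) >= min_inter)

text = "alice met the rabbit and alice followed the rabbit down"
print(extract_targets(text, {"alice", "rabbit", "queen"},
                      {"queen": 12}, min_freq=2, min_inter=10))
# -> ['alice', 'queen', 'rabbit']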
A9. The method of any of A2-A5, wherein the full-text search query result includes: a text-type query result, an image-type query result, and/or an audio-type query result;
wherein the audio query result comprises: the audio segments corresponding to the target object in the audio electronic book.
A10. The method according to A9, wherein the audio query result is determined by:
acquiring the audio information corresponding to each occurrence of the target object in the audio electronic book;
and extracting the audio segments contained in the audio query result from the acquired pieces of audio information according to the full-text appearance order of each piece of audio information and/or the user interaction data corresponding to each piece of audio information (a selection sketch follows).
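A selection sketch of A10, assuming each occurrence record carries its full-text appearance order and an interaction count:

def select_audio_segments(occurrences: list, limit: int) -> list:
    # Rank occurrences by interaction data first, then by full-text
    # appearance order, and keep the top ones as the audio query result.
    ranked = sorted(occurrences, key=lambda o: (-o["interactions"], o["order"]))
    return ranked[:limit]

occurrences = [{"order": 0, "interactions": 1, "clip": "clip_00.mp3"},
               {"order": 1, "interactions": 9, "clip": "clip_01.mp3"},
               {"order": 2, "interactions": 4, "clip": "clip_02.mp3"}]
print(select_audio_segments(occurrences, limit=2))
# -> clip_01.mp3 and clip_02.mp3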
A11. The method of any of A1-A10, wherein the smart search instruction comprises: a voice wake-up instruction, and/or a search instruction triggered through a search entry;
the determining the target object corresponding to the smart search instruction comprises:
recognizing the voice search content contained in the voice wake-up instruction, and determining the target object corresponding to the intelligent search instruction according to the recognition result; and/or
acquiring the electronic book playing content corresponding to the intelligent search instruction, and determining the target object corresponding to the intelligent search instruction according to the electronic book playing content (a resolution sketch follows).
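A resolution sketch of A11; the voice recognizer is stubbed as already-transcribed text since the disclosure does not name one, and the playback fallback is deliberately crude:

from typing import Optional

def resolve_target(voice_text: Optional[str], playing_text: str) -> str:
    if voice_text:
        # Voice wake-up path: the recognized search content is the target.
        return voice_text.strip().lower()
    # Search-entry path without explicit text: derive the target from the
    # electronic book content playing at the moment of the instruction.
    return playing_text.split()[0]

print(resolve_target("Rabbit", "alice met the rabbit"))   # -> 'rabbit'
print(resolve_target(None, "alice met the rabbit"))       # -> 'alice'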
A12. The method of A1, wherein the object query result includes at least one of: a text object query result determined according to the text electronic book corresponding to the audio electronic book, an image object query result determined according to the image electronic book corresponding to the audio electronic book, and an audio object query result determined according to the audio segments in the audio electronic book;
the displaying the object query result matched with the target object in a search result page comprises:
displaying the object query result matched with the target object and a jump entry corresponding to the object query result in the search result page, the jump entry being used for jumping to the context information corresponding to the object query result in the electronic book (a rendering sketch follows).
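A rendering sketch of A12; the ebook:// URI scheme used for the jump entry is invented purely for illustration:

def build_result_page(results: list) -> list:
    # Pair each object query result with a jump entry pointing back into
    # the e-book context where the result occurs.
    return [{"title": r["title"],
             "jump_entry": f"ebook://{r['book_id']}/chapter/{r['chapter']}#{r['offset']}"}
            for r in results]

page = build_result_page([{"title": "First appearance", "book_id": "bk1",
                           "chapter": 3, "offset": 120}])
print(page)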
B13. An electronic device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to:
during playback of the audio electronic book, when an intelligent search instruction is received, determining a target object corresponding to the intelligent search instruction;
obtaining an object query result matched with the target object;
and displaying the object query result matched with the target object in a search result page.
B14. The electronic device of B13, wherein the executable instructions cause the processor to:
acquiring a full-text search query result matched with the target object;
extracting local query results from the full-text search query results;
and determining the local query result as the object query result matched with the target object.
B15. The electronic device of B14, wherein the executable instructions cause the processor to:
extracting the first N query results from the full-text search query result as the local query result, where N is a natural number; and/or
extracting a plurality of mutually non-adjacent query results from the full-text search query result at a preset interval as the local query result; and/or
and extracting, according to the user interaction data corresponding to the full-text search query result, the query results whose interaction count is larger than a preset count and/or whose interaction type belongs to a preset type as the local query result.
B16. The electronic device of B14 or 15, wherein the executable instructions cause the processor to:
extracting the local query result from the full-text search query result according to the current playing progress information;
wherein the local query result does not contain information content corresponding to the unplayed part of the audio electronic book.
B17. The electronic device according to B16, wherein the full-text search query result includes a plurality of content segments carrying time-paragraph information;
the executable instructions cause the processor to:
determining the content segments matched with the current playing progress information according to the time-paragraph information of each content segment;
extracting the local query result from the full-text search query result according to the content segments matched with the current playing progress information;
wherein the time-paragraph information includes: timeline information and/or chapter section information.
B18. The electronic device of B17, wherein the executable instructions cause the processor to:
determining a current time paragraph corresponding to the current playing progress information;
determining the difference between the time-paragraph information of each content segment contained in the full-text search query result and the current time paragraph;
and determining a content segment whose difference does not exceed a preset range as a content segment matched with the current playing progress information.
B19. The electronic device of any of B13-18, wherein the executable instructions cause the processor to:
analyzing in advance the target objects contained in the audio electronic book, and configuring corresponding full-text search query results for the target objects according to the analysis results;
storing each target object in a database in association with the full-text search query result configured for it;
the obtaining a full-text search query result matched with the target object then includes: querying the database for the full-text search query result stored in association with the target object.
B20. The electronic device of B19, wherein the executable instructions cause the processor to:
acquiring text information corresponding to the audio electronic book, and identifying a plurality of target keywords contained in the text information;
extracting the target objects from the plurality of target keywords according to the occurrence frequency of each target keyword in the audio electronic book and/or the user interaction data corresponding to each target keyword;
wherein the target keywords comprise: person-name keywords, event keywords, and/or action keywords.
B21. The electronic device of any of B14-17, wherein the full-text search query result includes: a text-type query result, an image-type query result, and/or an audio-type query result;
wherein the audio query result comprises: the audio segments corresponding to the target object in the audio electronic book.
B22. The electronic device of B21, wherein the executable instructions cause the processor to:
acquiring the audio information corresponding to each occurrence of the target object in the audio electronic book;
and extracting the audio segments contained in the audio query result from the acquired pieces of audio information according to the full-text appearance order of each piece of audio information and/or the user interaction data corresponding to each piece of audio information.
B23. The electronic device of any of B13-22, wherein the smart search instruction includes: a voice wake-up instruction, and/or a search instruction triggered through a search entry;
the executable instructions cause the processor to:
recognizing the voice search content contained in the voice wake-up instruction, and determining the target object corresponding to the intelligent search instruction according to the recognition result; and/or
acquiring the electronic book playing content corresponding to the intelligent search instruction, and determining the target object corresponding to the intelligent search instruction according to the electronic book playing content.
B24. The electronic device of B13, wherein the object query result includes at least one of: a text object query result determined according to the text electronic book corresponding to the audio electronic book, an image object query result determined according to the image electronic book corresponding to the audio electronic book, and an audio object query result determined according to the audio segments in the audio electronic book;
the executable instructions cause the processor to:
displaying the object query result matched with the target object and a jump entry corresponding to the object query result in a search result page, the jump entry being used for jumping to the context information corresponding to the object query result in the electronic book.
C25. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to:
during playback of the audio electronic book, when an intelligent search instruction is received, determining a target object corresponding to the intelligent search instruction;
obtaining an object query result matched with the target object;
and displaying the object query result matched with the target object in a search result page.
C26. The computer storage medium of C25, wherein the executable instructions cause the processor to:
acquiring a full-text search query result matched with the target object;
extracting local query results from the full-text search query results;
and determining the local query result as the object query result matched with the target object.
C27. The computer storage medium of C26, wherein the executable instructions cause the processor to:
extracting the first N query results from the full-text search query result as the local query result, where N is a natural number; and/or
extracting a plurality of mutually non-adjacent query results from the full-text search query result at a preset interval as the local query result; and/or
and extracting, according to the user interaction data corresponding to the full-text search query result, the query results whose interaction count is larger than a preset count and/or whose interaction type belongs to a preset type as the local query result.
C28. The computer storage medium of C26 or 27, wherein the executable instructions cause the processor to:
extracting the local query result from the full-text search query result according to the current playing progress information;
wherein the local query result does not contain information content corresponding to the unplayed part of the audio electronic book.
C29. The computer storage medium of C28, wherein the full-text search query result includes a plurality of content segments carrying time-paragraph information;
the executable instructions cause the processor to:
determining the content segments matched with the current playing progress information according to the time-paragraph information of each content segment;
extracting the local query result from the full-text search query result according to the content segments matched with the current playing progress information;
wherein the time-paragraph information includes: timeline information and/or chapter section information.
C30. The computer storage medium of C29, wherein the executable instructions cause the processor to:
determining a current time paragraph corresponding to the current playing progress information;
determining the difference between the time-paragraph information of each content segment contained in the full-text search query result and the current time paragraph;
and determining a content segment whose difference does not exceed a preset range as a content segment matched with the current playing progress information.
C31. The computer storage medium of any of C25-30, wherein the executable instructions cause the processor to:
analyzing in advance the target objects contained in the audio electronic book, and configuring corresponding full-text search query results for the target objects according to the analysis results;
storing each target object in a database in association with the full-text search query result configured for it;
the obtaining a full-text search query result matched with the target object then includes: querying the database for the full-text search query result stored in association with the target object.
C32. The computer storage medium of C31, wherein the executable instructions cause the processor to:
acquiring text information corresponding to the audio electronic book, and identifying a plurality of target keywords contained in the text information;
extracting the target objects from the plurality of target keywords according to the occurrence frequency of each target keyword in the audio electronic book and/or the user interaction data corresponding to each target keyword;
wherein the target keywords comprise: person-name keywords, event keywords, and/or action keywords.
C33. The computer storage medium of any of C26-29, wherein the full-text search query result includes: a text-type query result, an image-type query result, and/or an audio-type query result;
wherein the audio query result comprises: the audio segments corresponding to the target object in the audio electronic book.
C34. The computer storage medium of C33, wherein the executable instructions cause the processor to:
acquiring the audio information corresponding to each occurrence of the target object in the audio electronic book;
and extracting the audio segments contained in the audio query result from the acquired pieces of audio information according to the full-text appearance order of each piece of audio information and/or the user interaction data corresponding to each piece of audio information.
C35. The computer storage medium of any of C25-34, wherein the smart search instruction comprises: a voice wake-up instruction, and/or a search instruction triggered through a search entry;
the executable instructions cause the processor to:
recognizing the voice search content contained in the voice wake-up instruction, and determining the target object corresponding to the intelligent search instruction according to the recognition result; and/or
acquiring the electronic book playing content corresponding to the intelligent search instruction, and determining the target object corresponding to the intelligent search instruction according to the electronic book playing content.
C36. The computer storage medium of C25, wherein the object query result includes at least one of: a text object query result determined according to the text electronic book corresponding to the audio electronic book, an image object query result determined according to the image electronic book corresponding to the audio electronic book, and an audio object query result determined according to the audio segments in the audio electronic book;
the executable instructions cause the processor to:
displaying the object query result matched with the target object and a jump entry corresponding to the object query result in a search result page, the jump entry being used for jumping to the context information corresponding to the object query result in the electronic book.

Claims (10)

1. An intelligent searching method based on an audio electronic book, comprising the following steps:
during playback of the audio electronic book, when an intelligent search instruction is received, determining a target object corresponding to the intelligent search instruction;
obtaining an object query result matched with the target object;
and displaying the object query result matched with the target object in a search result page.
2. The method of claim 1, wherein the obtaining object query results that match the target object comprises:
acquiring a full-text search query result matched with the target object;
extracting local query results from the full-text search query results;
and determining the local query result as the object query result matched with the target object.
3. The method of claim 2, wherein said extracting local query results from the full-text search query results comprises:
extracting the first N query results from the full-text search query result as the local query result, where N is a natural number; and/or
extracting a plurality of mutually non-adjacent query results from the full-text search query result at a preset interval as the local query result; and/or
and extracting, according to the user interaction data corresponding to the full-text search query result, the query results whose interaction count is larger than a preset count and/or whose interaction type belongs to a preset type as the local query result.
4. The method of claim 2 or 3, wherein said extracting local query results from the full-text search query results comprises:
extracting a local query result from the full-text search query result according to the current playing progress information;
wherein the local query result does not contain information content corresponding to the unplayed part of the audio electronic book.
5. The method of claim 4, wherein the full-text search query result comprises a plurality of content segments carrying time-paragraph information;
the extracting a local query result from the full-text search query result according to the current playing progress information then comprises:
determining the content segments matched with the current playing progress information according to the time-paragraph information of each content segment;
extracting the local query result from the full-text search query result according to the content segments matched with the current playing progress information;
wherein the time-paragraph information comprises: timeline information and/or chapter section information.
6. The method of claim 5, wherein the determining the content segments matched with the current playing progress information comprises:
determining a current time paragraph corresponding to the current playing progress information;
determining the difference between the time-paragraph information of each content segment contained in the full-text search query result and the current time paragraph;
and determining a content segment whose difference does not exceed a preset range as a content segment matched with the current playing progress information.
7. The method of any of claims 1-6, further comprising, before the method is performed:
analyzing in advance the target objects contained in the audio electronic book, and configuring corresponding full-text search query results for the target objects according to the analysis results;
storing each target object in a database in association with the full-text search query result configured for it;
the obtaining a full-text search query result matched with the target object then comprises: querying the database for the full-text search query result stored in association with the target object.
8. The method of claim 7, wherein the pre-analyzing the target objects contained in the audio electronic book comprises:
acquiring text information corresponding to the audio electronic book, and identifying a plurality of target keywords contained in the text information;
extracting the target objects from the plurality of target keywords according to the occurrence frequency of each target keyword in the audio electronic book and/or the user interaction data corresponding to each target keyword;
wherein the target keywords comprise: person-name keywords, event keywords, and/or action keywords.
9. An electronic device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to:
during playback of the audio electronic book, when an intelligent search instruction is received, determining a target object corresponding to the intelligent search instruction;
obtaining an object query result matched with the target object;
and displaying the object query result matched with the target object in a search result page.
10. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to:
during playback of the audio electronic book, when an intelligent search instruction is received, determining a target object corresponding to the intelligent search instruction;
obtaining an object query result matched with the target object;
and displaying the object query result matched with the target object in a search result page.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910965391.3A CN110717066A (en) 2019-10-11 2019-10-11 Intelligent searching method based on audio electronic book and electronic equipment


Publications (1)

Publication Number Publication Date
CN110717066A true CN110717066A (en) 2020-01-21

Family

ID=69212491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910965391.3A Pending CN110717066A (en) 2019-10-11 2019-10-11 Intelligent searching method based on audio electronic book and electronic equipment

Country Status (1)

Country Link
CN (1) CN110717066A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102314454A (en) * 2010-06-30 2012-01-11 百度在线网络技术(北京)有限公司 Method and system for automatically adding internal links
CN102902661A (en) * 2012-10-24 2013-01-30 广东欧珀移动通信有限公司 Method for realizing hyperlinks of electronic books
CN106462636A (en) * 2014-06-20 2017-02-22 谷歌公司 Clarifying audible verbal information in video content
CN106844679A (en) * 2017-01-24 2017-06-13 广州朗锐数字传媒科技有限公司 A kind of audiobook illustration display systems and method
CN107515871A (en) * 2016-06-15 2017-12-26 北京陌上花科技有限公司 Searching method and device
CN110225387A (en) * 2019-05-20 2019-09-10 北京奇艺世纪科技有限公司 A kind of information search method, device and electronic equipment


Similar Documents

Publication Publication Date Title
US10096145B2 (en) Method and system for assembling animated media based on keyword and string input
CN110719518A (en) Multimedia data processing method, device and equipment
CN109558513B (en) Content recommendation method, device, terminal and storage medium
CN106462640B (en) Contextual search of multimedia content
US11822868B2 (en) Augmenting text with multimedia assets
US20070294295A1 (en) Highly meaningful multimedia metadata creation and associations
KR20180107147A (en) Multi-variable search user interface
CN105224581B (en) The method and apparatus of picture are presented when playing music
WO2022111249A1 (en) Information presentation method, apparatus, and computer storage medium
US20080215548A1 (en) Information search method and system
CN106096003B (en) Data searching method and client
CN104008180B (en) Association method of structural data with picture, association device thereof
CN111046225B (en) Audio resource processing method, device, equipment and storage medium
US10769196B2 (en) Method and apparatus for displaying electronic photo, and mobile device
CN110347866B (en) Information processing method, information processing device, storage medium and electronic equipment
CN113536172B (en) Encyclopedia information display method and device and computer storage medium
CN111125314B (en) Display method of book query page, electronic device and computer storage medium
CN109979450A (en) Information processing method, device and electronic equipment
CN110727629A (en) Playing method of audio electronic book, electronic equipment and computer storage medium
JP2008268985A (en) Method for attaching tag
JP5942052B1 (en) Data analysis system, data analysis method, and data analysis program
CN113450804A (en) Voice visualization method and device, projection equipment and computer readable storage medium
WO2012145561A1 (en) Systems and methods for assembling and/or displaying multimedia objects, modules or presentations
CN109145261B (en) Method and device for generating label
CN110717066A (en) Intelligent searching method based on audio electronic book and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination