CN108415995B - Searching method and device - Google Patents

Searching method and device

Info

Publication number: CN108415995B
Application number: CN201810149957.0A
Authority: CN (China)
Prior art keywords: search, played, information, animation image, person
Legal status: Active (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN108415995A (en)
Inventors: 杜奕岐, 朱虹烨
Assignee (current and original): Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd; priority to CN201810149957.0A
Publication of application CN108415995A, followed by grant and publication of CN108415995B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/903 Querying
    • G06F 16/9038 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9535 Search customisation based on user profiles and personalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Processing Or Creating Images (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a search method and apparatus. The method includes: acquiring a search question; sending the search question to a server; generating, from the search answer acquired from the server for answering the search question, information to be played that is phrased in the first person; and displaying the information to be played through an animated character associated with it. Displaying the answer through a vivid animated character makes the presentation of search answers more engaging, increases the interaction frequency, and yields a good user experience.

Description

Searching method and device
Technical Field
The present invention relates to the field of search technologies, and in particular, to a search method and apparatus.
Background
With the development of internet technology, search technology brings convenience to the life of people, and the current search interaction mode in the whole industry is developing towards a more intelligent and more human-friendly human-computer interaction mode.
In the related art, a user's search requirement is met by keyword matching: the corresponding search results are crawled from the Internet and presented to the user. This mode of presentation is monotonous and involves little human-computer interaction.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first objective of the present invention is to provide a search method that displays the information to be played corresponding to a search answer through an animated character associated with that information. The animated character is vivid, which makes the display of search answers more engaging, increases the interaction frequency, and improves the user experience.
A second object of the present invention is to provide a search apparatus.
A third object of the invention is to propose a computer device.
A fourth object of the invention is to propose a non-transitory computer-readable storage medium.
A fifth object of the invention is to propose a computer program product.
In order to achieve the above object, an embodiment of a first aspect of the present invention provides a search method, including:
acquiring a search question;
sending the search question to a server;
generating information to be played, phrased in the first person, according to a search answer acquired from the server for answering the search question;
and displaying the information to be played through an animated character associated with the information to be played.
According to the search method, a search question is acquired and sent to the server; information to be played, phrased in the first person, is generated from the search answer acquired from the server for answering the search question; and the information to be played is displayed through an animated character associated with it. Displaying the answer through a vivid animated character makes the presentation of search answers more engaging, increases the interaction frequency, and yields a good user experience.
In order to achieve the above object, an embodiment of a second aspect of the present invention provides a search apparatus, including:
the acquisition module is used for acquiring a search question;
the sending module is used for sending the search question to a server;
the generating module is used for generating information to be played, phrased in the first person, according to the search answer acquired from the server for answering the search question;
and the display module is used for displaying the information to be played by adopting the animation image associated with the information to be played.
In the search apparatus of the embodiment of the present invention, the acquisition module acquires a search question; the sending module sends the search question to the server; the generating module generates information to be played, phrased in the first person, from the search answer acquired from the server for answering the search question; and the display module displays the information to be played through an animated character associated with it. Displaying the answer through a vivid animated character makes the presentation of search answers more engaging, increases the interaction frequency, and yields a good user experience.
To achieve the above object, an embodiment of a third aspect of the present invention provides a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the search method according to the first aspect is implemented.
In order to achieve the above object, a fourth aspect of the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the search method according to the first aspect.
In order to achieve the above object, an embodiment of a fifth aspect of the present invention provides a computer program product, wherein when the instructions of the computer program product are executed by a processor, the searching method according to the first aspect is implemented.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a searching method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of another search method provided in an embodiment of the present application;
FIG. 3 is a schematic interface display diagram of a search method provided herein;
fig. 4 is a schematic structural diagram of a search apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of another searching apparatus according to an embodiment of the present invention; and
FIG. 6 illustrates a block diagram of an exemplary computer device suitable for implementing embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
In the related art, search answers crawled from the Internet are simply displayed as-is. The human-computer interface is unfriendly and lacks interest, users have no desire to interact further, and there is little interaction between human and machine.
According to the search method of the present application, the information to be played corresponding to the search answer is displayed through an animated character associated with that information. The animated character is vivid, which makes the display of search answers more engaging, leaves a deep impression, stimulates the user's desire for human-computer interaction, and improves the user experience.
A search method and apparatus according to an embodiment of the present invention are described below with reference to the drawings.
Fig. 1 is a schematic flowchart of a search method according to an embodiment of the present invention.
As shown in fig. 1, the method comprises the steps of:
step 101, a search question is obtained.
The execution subject of the search method provided by the embodiment of the present application is the search apparatus provided by the embodiment, which may be configured in an electronic device to perform the search process. The electronic device may take many forms, such as a desktop computer or a mobile device; mobile devices include tablet computers, notebook computers, mobile phones, and the like.
Specifically, a search question input by the user is acquired. The user may input the search question in various ways. One possible implementation is input through the keyboard of the electronic device, where the input may be text and/or pictures; another possible implementation is speech input through the microphone of the electronic device.
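The two input paths above can be sketched as a small dispatcher; the `recognize` callback is a hypothetical stand-in for whatever speech-recognition service the device uses, which the patent does not name.

```python
def get_search_question(typed=None, audio=None, recognize=None):
    """Step 101 sketch: accept a typed question, or transcribe a speech recording."""
    if typed:
        return typed.strip()
    if audio is not None and recognize is not None:
        # Speech input via the device microphone, transcribed by a hypothetical recognizer.
        return recognize(audio)
    raise ValueError("no input provided")

# Typed input is used as-is; audio is routed through the recognizer callback.
q1 = get_search_question(typed="  How long can a sheep live?  ")
q2 = get_search_question(audio=b"<pcm bytes>", recognize=lambda a: "How long can a sheep live?")
```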
Step 102, sending a search question to a server.
Specifically, after the search question input by the user is received, it is sent to the server so that the server can match a corresponding search answer to it.
Step 103, generating information to be played, phrased in the first person, according to the search answer acquired from the server for answering the search question.
Specifically, the search answer returned by the server for answering the search question is received; the object entities involved in the answer are determined from the object entities recorded in a preset knowledge base; and grammatical-person conversion is performed on the answer according to those entities, so as to obtain information to be played that is phrased in the first person.
And 104, displaying the information to be played by adopting the animation image associated with the information to be played.
Specifically, according to the object entity corresponding to the information to be played, the associated animated character is determined; speech is generated for the information to be played; a video of the animated character is generated from the speech and the character's image; a thumbnail of the video is displayed on the search-result page; and when an operation to play the video is detected, the video is played within that page.
It should be noted that the facial movements of the animated character in the video match the facial movements corresponding to the syllables played synchronously in the speech, where the facial movements include mouth shape and/or expression.
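Steps 101-104 can be sketched end to end as a minimal client-side pipeline. The answer text, entity tag, and animation library below are hypothetical stand-ins; the patent does not define these data structures, and the person conversion is reduced to a single possessive rewrite.

```python
from dataclasses import dataclass

@dataclass
class PlayableInfo:
    text: str    # the answer rephrased in the first person (step 103)
    entity: str  # the object entity that the first person refers to

def to_first_person(answer: str, entity: str) -> PlayableInfo:
    # Simplified person conversion: rewrite "<entity>'s ..." as "My ...".
    return PlayableInfo(answer.replace(entity + "'s", "My", 1), entity)

def present(info: PlayableInfo, animation_library: dict) -> dict:
    # Step 104: pick the animated character associated with the entity.
    character = animation_library.get(info.entity, "default-character")
    return {"character": character, "speech": info.text}

# Hypothetical server answer for "How long can a sheep live?" (steps 101-102).
answer = "The sheep's average lifespan is 10-15 years"
result = present(to_first_person(answer, "The sheep"),
                 {"The sheep": "pleasant_goat"})
```

The returned dictionary is what a renderer would consume: which character to draw and what line to speak.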
According to the search method, a search question is acquired and sent to the server; information to be played, phrased in the first person, is generated from the search answer acquired from the server for answering the search question; and the information to be played is displayed through an animated character associated with it. Displaying the answer through a vivid animated character makes the presentation of search answers more engaging, increases the interaction frequency, and yields a good user experience.
The above embodiment describes determining a search answer from the user's search question, generating the voice information to be played phrased in the first person, and playing it through the associated animated character. Before the information can be displayed through the animated character, however, the associated animated character must be determined. To this end, this embodiment proposes another search method that further explains the process.
Fig. 2 is a schematic flowchart of another search method provided in an embodiment of the present application, and as shown in fig. 2, the method may include the following steps:
step 201, a search question is obtained.
Specifically, the user's search question is acquired; for example, the search question is "the birthday of Yi Yangqianxi".
Step 202, sending the search question to a server.
Specifically, the search question is transmitted to the server so that the server generates an answer corresponding to the search question.
Step 203, performing grammatical-person conversion on the search answer acquired from the server for answering the search question, to generate information to be played that is phrased in the first person.
Specifically, the search answer acquired from the server is segmented by a word-segmentation algorithm; stop words such as auxiliary words, prepositions, and adjectives are removed according to a stop-word list; the object entities contained in the text of the answer are identified; and the grammatical role of each object entity in the text is determined, i.e., whether it serves as the subject, an attributive, or the object. Here, an object entity is an objectively existing thing. Then, according to a preset identification strategy, the object entity requiring person conversion is determined, and the characters corresponding to that entity are replaced with the characters corresponding to the first person.
The preset identification strategy may specifically be: the object entity to undergo person conversion is a living thing and serves in the text as the subject or as an attributive of the subject. For example, suppose the search answer is: "Huang Xiaoming's home is in the European classical decoration style." According to the preset identification strategy, among the object entities contained in the answer, the one that is a living thing and is the subject or an attributive of the subject is "Huang Xiaoming"; therefore "Huang Xiaoming" is determined to be the object entity requiring person conversion.
Accordingly, the information to be played obtained by converting the search answer into the first person is: "My home is in the European classical decoration style."
In a possible scenario of the embodiment of the present application, when the object entity requiring person conversion in the answer is not a living thing, a target object entity related to it is searched for in a preset knowledge base, and the answer is converted into first-person text according to the relation between the two. For example, suppose the search question is: "How big is the Forbidden City?" and the search answer is: "The Forbidden City covers 720,000 square meters." The object entity requiring person conversion is "the Forbidden City", which is not a living thing and cannot speak for itself. An object entity that is most strongly related to "the Forbidden City" and is a living thing, such as "Emperor Kangxi", is found in the preset knowledge base, and person conversion is performed according to the relation that the Forbidden City is Emperor Kangxi's "home", yielding the information to be played, phrased in the first person: "My home covers 720,000 square meters."
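Step 203, including the knowledge-base fallback for non-living entities, can be sketched as follows. The knowledge base and its schema are invented for illustration; the patent does not specify a concrete data model, and the text rewriting is reduced to simple string replacement.

```python
# Hypothetical knowledge base: entity -> (is_living_thing, {related living entity: relation}).
KNOWLEDGE_BASE = {
    "Huang Xiaoming": (True, {}),
    "the Forbidden City": (False, {"Emperor Kangxi": "home"}),
}

def convert_to_first_person(answer: str, entity: str) -> str:
    is_living, relations = KNOWLEDGE_BASE[entity]
    if is_living:
        # A living subject can speak for itself: "X's ..." becomes "My ...".
        return answer.replace(entity + "'s", "My", 1)
    # Otherwise, speak from the viewpoint of a related living entity,
    # e.g. the Forbidden City described as Emperor Kangxi's "home".
    for related, relation in relations.items():
        return answer.replace(entity, "my " + relation, 1).capitalize()
    return answer  # no conversion possible

a1 = convert_to_first_person(
    "Huang Xiaoming's home is in the European classical style", "Huang Xiaoming")
a2 = convert_to_first_person(
    "the Forbidden City covers 720,000 square meters", "the Forbidden City")
```

The first call exercises the living-subject branch; the second exercises the knowledge-base fallback.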
Step 204, determining the animation image associated with the information to be played.
Specifically, according to the object entity converted into the first-person description in the search answer, the label of each animated character in a preset animation library is queried; the animated character whose label matches the object entity is determined and used as the character associated with the information to be played.
As a possible situation of this embodiment, if, after querying the labels in the preset animation library, no animated character whose label matches the object entity is found, the user is asked to select one. As a possible implementation, the labels of animated characters related to the object entity may be listed in a pop-up window for the user to choose from.
It should be noted that the animated character the user is asked to select is associated with the information to be played determined in step 203: the character corresponds to the first-person subject in the information to be played, or to an attributive of that subject. For example, when the search answer is "The Forbidden City covers 720,000 square meters", the first-person information to be played is "My home covers 720,000 square meters", and the object entity corresponding to "my home" is "the Forbidden City"; the related animated character the user is asked to select then corresponds to the subject "I" and may be, for example, Emperor Kangxi or Emperor Qianlong.
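The label-matching logic of step 204, with the user-selection fallback, can be sketched as below. The library contents and character names are illustrative only, and the pop-up selection is abstracted into a callback.

```python
# Hypothetical preset animation library: each entry tags a character with a label.
ANIMATION_LIBRARY = [
    {"label": "sheep", "character": "pleasant_goat"},
    {"label": "Emperor Kangxi", "character": "kangxi"},
]

def find_character(entity, ask_user=None):
    """Step 204 sketch: match the entity against library labels."""
    for item in ANIMATION_LIBRARY:
        if item["label"] == entity:
            return item["character"]
    # No matching label: list related characters in a pop-up and let the
    # user pick one (the pop-up is abstracted as the ask_user callback).
    return ask_user(entity) if ask_user else None

c1 = find_character("sheep")
c2 = find_character("the Forbidden City", ask_user=lambda e: "kangxi")
```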
And step 205, displaying the information to be played by using the animation image associated with the information to be played.
Specifically, according to the object entity corresponding to the information to be played, the associated animated character is determined; speech is generated for the information; a video of the animated character is generated from the speech and the character's image; and a thumbnail of the video is displayed on the search-result page. When an operation to play the video is detected, the video is played within that page. As one possible implementation, a play button is set in the thumbnail and the video starts when the user clicks it; as another possible implementation, the video plays automatically once its thumbnail is displayed in the interface.
It should be noted that the facial movements of the animated character in the video match the facial movements corresponding to the syllables played synchronously in the speech, where the facial movements include mouth shape and/or expression. Presenting the information to be played as a video spoken in the animated character's own voice makes the display more interesting and the interface friendlier, improving the user experience.
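The lip-sync constraint above amounts to emitting one mouth-shape keyframe per spoken syllable. A minimal sketch follows; the vowel-to-mouth-shape table and the fixed syllable duration are invented for illustration and are far simpler than a real viseme model.

```python
# Hypothetical vowel-to-mouth-shape table (a toy viseme map).
MOUTH_SHAPES = {"a": "open", "i": "wide", "u": "round"}

def mouth_track(syllables, syllable_ms=200):
    """Return (start_ms, mouth_shape) keyframes, one per syllable,
    so the character's face can be matched to the synchronously played audio."""
    track = []
    for i, syllable in enumerate(syllables):
        # Pick the first vowel in the syllable that the table knows about.
        vowel = next((c for c in syllable if c in MOUTH_SHAPES), None)
        track.append((i * syllable_ms, MOUTH_SHAPES.get(vowel, "neutral")))
    return track

keyframes = mouth_track(["ma", "mi", "mu"])
```

Each keyframe pairs a playback time with the mouth shape the renderer should show at that instant.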
As a possible implementation, a summary text of the information to be played may be displayed while the video plays, so that the user can conveniently read the corresponding text. Presenting the information through two channels gives the user more ways to view it, improves convenience, makes the interface friendlier, and improves the user experience.
Further, after the information to be played has been played, the user may continue to ask questions, i.e., steps 201-205 are repeated until the user obtains the desired answer, thereby implementing multi-round human-computer interaction and increasing user stickiness.
To further illustrate the search method of this embodiment, take a mobile phone as an example, combined with a question from a real application scenario. Fig. 3 is an interface display schematic of the search method provided by the present application. As shown in Fig. 3, the user's search question is "How long can a sheep live?", entered by speech on the phone's search interface. After receiving the question, the phone sends it to the server of the search engine, which returns the corresponding answer: "The average lifespan of a sheep is 10-15 years." Using the preset identification strategy, the object entity requiring person conversion is determined to be "sheep", and the characters corresponding to that entity in the answer text are replaced with first-person characters, yielding the information to be played: "The average lifespan of us sheep is 10-15 years." Furthermore, according to the object entity "sheep", the labels of the animated characters in the preset animation library are queried, and the matching label is found: "Pleasant Goat" (Xi Yangyang). Pleasant Goat is therefore used as the animated character associated with the information to be played, which improves the interest of the display.
Further, using a pre-trained speech-synthesis model, the information to be played is rendered as speech in the voice of the Pleasant Goat character; a video of the character is generated from the speech and the Pleasant Goat image; and when the video's thumbnail is displayed on the search-result page, the video plays directly, i.e., the answer is announced in Pleasant Goat's voice: "The average lifespan of us sheep is 10-15 years." In the video, the facial movements of the Pleasant Goat character match the facial movements corresponding to the synchronously played syllables in the speech. As shown in Fig. 3, a summary text of the information to be played may also be displayed on the video's interface during playback.
According to the search method, a search question is acquired and sent to the server; information to be played, phrased in the first person, is generated from the search answer acquired from the server; the animated character associated with the information is determined according to the object entity converted into the first-person description; and the information is displayed through that character. Displaying the answer through a vivid animated character makes the presentation of search answers more engaging, increases the interaction frequency, and yields a good user experience. Meanwhile, summary text is displayed while the video plays, which adds a display channel, gives the user more ways to obtain the search result, brings convenience, and improves the user experience.
In order to implement the above embodiments, the present invention further provides a search apparatus.
Fig. 4 is a schematic structural diagram of a search apparatus according to an embodiment of the present invention.
As shown in fig. 4, the apparatus includes: an acquisition module 41, a sending module 42, a generation module 43 and a presentation module 44.
An obtaining module 41, configured to obtain a search question.
A sending module 42, configured to send the search question to the server.
A generating module 43, configured to generate information to be played, phrased in the first person, according to the search answer acquired from the server for answering the search question.
A display module 44, configured to display the information to be played through the animated character associated with the information to be played.
It should be noted that the foregoing explanation of the method embodiment is also applicable to the apparatus of this embodiment, and is not repeated herein.
In the search apparatus of the embodiment of the present invention, the acquisition module acquires a search question; the sending module sends the search question to the server; the generating module generates information to be played, phrased in the first person, from the search answer acquired from the server for answering the search question; and the display module displays the information to be played through an animated character associated with it. Displaying the answer through a vivid animated character makes the presentation of search answers more engaging, increases the interaction frequency, and yields a good user experience.
Based on the foregoing embodiment, the embodiment of the present invention further provides a possible implementation manner of a search apparatus, fig. 5 is a schematic structural diagram of another search apparatus provided in the embodiment of the present invention, and on the basis of the foregoing embodiment, as shown in fig. 5, the apparatus further includes: a determination module 45.
A determining module 45, configured to query the label of each animated character in a preset animation library according to the object entity converted into the first-person description in the search answer, and to use the animated character whose label matches the object entity as the character associated with the information to be played.
As a possible implementation manner of this embodiment, the generating module 43 may further include: a determination unit 431 and a conversion unit 432.
The determining unit 431 is configured to determine the object entities involved in the search answer from the object entities recorded in a preset knowledge base.
The conversion unit 432 is configured to perform grammatical-person conversion on the search answer according to the object entities involved in it, so as to obtain information to be played that is phrased in the first person.
As a possible implementation manner, the conversion unit 432 is specifically configured to:
identifying each object entity contained in the text of the search answer, determining the object entity needing person-to-person conversion according to a preset identification strategy, and replacing characters corresponding to the object entity with characters corresponding to the first person in the text of the search answer. Wherein, presetting the identification strategy comprises: the object entity to be subjected to the person name conversion belongs to a living body and is a subject or a fixed language part of the subject in the text.
As a possible implementation, the display module 44 is specifically configured to: generate speech corresponding to the information to be played; generate a video of the animated character from the speech and the character's image, where the facial movements of the character in the video match the facial movements corresponding to the synchronously played syllables in the speech, the facial movements including mouth shape and/or expression; display a thumbnail of the video on the search-result page; and, when an operation to play the video is detected, play the video within that page.
As another possible implementation, the display module 44 may further be configured to: and displaying the abstract text of the information to be played in the process of playing the video.
It should be noted that the foregoing explanation of the method embodiment is also applicable to the apparatus of this embodiment, and is not repeated herein.
In the search apparatus of this embodiment of the invention, the acquisition module acquires a search question, the sending module sends the search question to the server, the generating module generates, from the search answer acquired from the server for answering the search question, information to be played described in the first person, and the display module displays the information to be played using an animation image associated with it. Displaying the search answer through an associated, vivid animation image makes the presentation of search answers more engaging, increases the frequency of interaction, and improves the user experience.
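A minimal end-to-end sketch of the four modules summarized above (acquisition, sending, generation, display). The server stub and method names are assumptions that mirror the description, not a real API; the two helper methods are placeholders for the entity-aware steps detailed elsewhere in this document:

```python
class SearchDevice:
    """Toy counterpart of the search apparatus: one method per stage."""

    def __init__(self, server):
        self.server = server  # any object with an .answer(question) method

    def run(self, question: str) -> dict:
        # sending module: forward the acquired question to the server
        answer = self.server.answer(question)
        # generation module: first-person rewrite (stubbed here; the
        # real apparatus performs knowledge-base person conversion)
        to_play = self._to_first_person(answer)
        # display module: pair the text with an associated animation image
        return {"text": to_play, "character": self._pick_character(answer)}

    def _to_first_person(self, answer: str) -> str:
        return answer  # placeholder for the person-conversion step

    def _pick_character(self, answer: str) -> str:
        return "default"  # placeholder for the label lookup in the library
```

Wiring a stub server through `run` exercises all four stages without any network or animation backend.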
To implement the foregoing embodiments, the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the search method described in the foregoing method embodiments is implemented.
To implement the foregoing embodiments, the present invention further provides a non-transitory computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the search method described in the foregoing method embodiments is implemented.
To implement the foregoing embodiments, the present invention further provides a computer program product; when the instructions in the computer program product are executed by a processor, the search method described in the foregoing method embodiments is implemented.
FIG. 6 illustrates a block diagram of an exemplary computer device suitable for implementing embodiments of the present application. The computer device 12 shown in FIG. 6 is only an example and should not impose any limitation on the function and scope of use of the embodiments of the present application.
As shown in FIG. 6, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, and commonly referred to as a "hard drive"). Although not shown in FIG. 6, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc Read-Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described herein.
Computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with computer device 12, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Moreover, computer device 12 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public Network such as the Internet) via Network adapter 20. As shown, network adapter 20 communicates with the other modules of computer device 12 via bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing, for example, implementing the methods mentioned in the foregoing embodiments, by executing programs stored in the system memory 28.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber device, and a portable Compact Disc Read-Only Memory (CD-ROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; variations, modifications, substitutions, and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A search method, comprising the steps of:
acquiring a search question;
sending the search question to a server;
generating information to be played described by a first person according to a search answer which is acquired from the server and used for answering the search question;
displaying the information to be played by using an animation image associated with the information to be played, wherein corresponding voice is generated for the information to be played, the animation image associated with the information to be played is determined according to an object entity corresponding to the information to be played, a video of the animation image is generated according to the voice and the animation image, and a thumbnail of the video of the animation image is displayed on a search result display page.
2. The search method according to claim 1, wherein generating the information to be played, described in a first person, according to the search answer acquired from the server for answering the search question comprises:
determining object entities related to the search answers from object entities recorded in a preset knowledge base;
and performing person conversion on the search answer according to the object entities involved in the search answer, to obtain the information to be played, described in a first person.
3. The method according to claim 2, wherein before displaying the information to be played by using the animation image associated with the information to be played, the method further comprises:
querying labels of the animation images in a preset animation library according to the object entity that has been converted into the first-person description in the search answer;
and taking the animation image matched with the label and the object entity as the animation image associated with the information to be played.
4. The method according to claim 3, wherein after querying the labels of the animation images in the preset animation library, the method further comprises:
if no animation image whose label matches the object entity is found, requesting acquisition of an animation image selected by the user.
5. The search method according to claim 2, wherein performing person conversion on the search answer according to the object entities involved in the search answer comprises:
identifying each object entity contained in the text of the search answer;
determining the object entities requiring person conversion according to a preset identification strategy, wherein the preset identification strategy specifies that an object entity subject to person conversion belongs to a living body and serves, in the text, as the subject or as an attributive modifying the subject;
and in the text of the search answer, replacing the characters corresponding to the object entity with the characters corresponding to the first person.
6. The searching method according to any one of claims 1-5, wherein said displaying the information to be played by using an animated character associated with the information to be played comprises:
generating voice corresponding to the information to be played;
generating a video of the animation image according to the voice and the picture of the animation image, wherein the facial action of the animation image in the video matches the facial action corresponding to the syllable played synchronously in the voice, the facial action comprising mouth shape and/or expression;
displaying a thumbnail of the video on a search result display page;
and when the operation of playing the video is detected, playing the video in the search result display page.
7. The method of searching as claimed in claim 6, further comprising:
and displaying the abstract text of the information to be played in the process of playing the video.
8. A search apparatus, comprising:
the acquisition module is used for acquiring a search question;
the sending module is used for sending the search question to a server;
the generating module is used for generating information to be played, which is described by a first person, according to the search answers which are acquired from the server and used for answering the search questions;
the display module is used for displaying the information to be played by using an animation image associated with the information to be played, wherein corresponding voice is generated for the information to be played, the animation image associated with the information to be played is determined according to an object entity corresponding to the information to be played, a video of the animation image is generated according to the voice and the animation image, and a thumbnail of the video of the animation image is displayed on a search result display page.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the search method of any one of claims 1 to 7 when executing the program.
10. A non-transitory computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the search method of any one of claims 1-7.
CN201810149957.0A 2018-02-13 2018-02-13 Searching method and device Active CN108415995B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810149957.0A CN108415995B (en) 2018-02-13 2018-02-13 Searching method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810149957.0A CN108415995B (en) 2018-02-13 2018-02-13 Searching method and device

Publications (2)

Publication Number Publication Date
CN108415995A CN108415995A (en) 2018-08-17
CN108415995B true CN108415995B (en) 2022-04-22

Family

ID=63128802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810149957.0A Active CN108415995B (en) 2018-02-13 2018-02-13 Searching method and device

Country Status (1)

Country Link
CN (1) CN108415995B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109947911B (en) * 2019-01-14 2023-06-16 达闼机器人股份有限公司 Man-machine interaction method and device, computing equipment and computer storage medium
CN111914563A (en) * 2019-04-23 2020-11-10 广东小天才科技有限公司 Intention recognition method and device combined with voice

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825253A (en) * 2016-03-29 2016-08-03 腾讯科技(深圳)有限公司 Target object display method, device and system
CN106653050A (en) * 2017-02-08 2017-05-10 康梅 Method for matching animation mouth shapes with voice in real time
CN107340859A (en) * 2017-06-14 2017-11-10 北京光年无限科技有限公司 The multi-modal exchange method and system of multi-modal virtual robot
CN107369209A (en) * 2017-07-07 2017-11-21 深圳市华琥技术有限公司 A kind of data processing method
CN107480766A (en) * 2017-07-18 2017-12-15 北京光年无限科技有限公司 The method and system of the content generation of multi-modal virtual robot
CN207817795U (en) * 2017-11-21 2018-09-04 江西服装学院 A kind of manufacturing system of three-dimensional animation

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7610556B2 (en) * 2001-12-28 2009-10-27 Microsoft Corporation Dialog manager for interactive dialog with computer user
US10237082B2 (en) * 2012-08-31 2019-03-19 Avaya Inc. System and method for multimodal interaction aids
CN104461525B (en) * 2014-11-27 2018-01-23 韩慧健 A kind of intelligent consulting platform generation system that can customize
CN106200886A (en) * 2015-04-30 2016-12-07 包伯瑜 A kind of intelligent movable toy manipulated alternately based on language and toy using method
US20170318013A1 (en) * 2016-04-29 2017-11-02 Yen4Ken, Inc. Method and system for voice-based user authentication and content evaluation
CN106202165B (en) * 2016-06-24 2020-03-17 北京小米移动软件有限公司 Intelligent learning method and device for man-machine interaction
CN107294837A (en) * 2017-05-22 2017-10-24 北京光年无限科技有限公司 Engaged in the dialogue interactive method and system using virtual robot
CN107300970B (en) * 2017-06-05 2020-12-11 百度在线网络技术(北京)有限公司 Virtual reality interaction method and device

Also Published As

Publication number Publication date
CN108415995A (en) 2018-08-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant