CN109145310B - Searching method, device and equipment


Info

Publication number: CN109145310B
Application number: CN201710465411.1A
Authority: CN (China)
Prior art keywords: translated, interpretation information, user, information, presenting
Legal status: Active (the listed status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN109145310A (en)
Inventor: 侯柏岑
Current Assignee: Beijing Sogou Technology Development Co Ltd
Original Assignee: Beijing Sogou Technology Development Co Ltd
Application filed by Beijing Sogou Technology Development Co Ltd
Priority to CN201710465411.1A
Publication of CN109145310A
Application granted; publication of CN109145310B

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F40/00 Handling natural language data
                    • G06F40/40 Processing or translation of natural language
                        • G06F40/55 Rule-based translation
                        • G06F40/58 Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation

Landscapes

  • Engineering & Computer Science
  • Theoretical Computer Science
  • Health & Medical Sciences
  • Artificial Intelligence
  • Audiology, Speech & Language Pathology
  • Computational Linguistics
  • General Health & Medical Sciences
  • Physics & Mathematics
  • General Engineering & Computer Science
  • General Physics & Mathematics
  • User Interface Of Digital Computer
  • Machine Translation

Abstract

The invention discloses a search method. The method comprises the following steps: in response to a user operation of inputting objects to be translated, identifying a first object to be translated and a second object to be translated from the input content under the user operation according to the separation mode of the objects to be translated under the user operation; searching the first object to be translated and the second object to be translated to obtain first interpretation information and second interpretation information, wherein the first interpretation information is the interpretation information of the first object to be translated and the second interpretation information is the interpretation information of the second object to be translated; and presenting the first interpretation information in correspondence with the first object to be translated, and the second interpretation information in correspondence with the second object to be translated. With the method provided by the embodiments of the invention, a user can search for and view the interpretation information of a plurality of objects to be translated at the same time, which simplifies the user's query operation. In addition, the invention also discloses a search apparatus.

Description

Searching method, device and equipment
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a search method, apparatus, and device.
Background
Currently, many applications provide text translation functionality. Such an application can receive a word to be translated from a user, look up the explanation information corresponding to that word, and present it to the user. Existing applications, however, can only search for the explanation information of one word at a time. In many cases the user needs to query the explanation information of several different words, in particular to compare and distinguish the explanations of several similar words. Because the application can search for only one word at a time, the query operation in these cases is very cumbersome. For example, if the user needs the explanation information of word A and word B, the user must first input word A so that the application searches for the explanation information of word A, then delete word A and input word B so that the application searches for the explanation information of word B. If word A and word B are two different words with similar meanings, the user may want to check the explanation information of word A again after the application has searched for word B, in order to compare and distinguish the two words; at that point the user must delete word B and re-enter word A so that the application searches for the explanation information of word A once more.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a search method, apparatus and device with which a user can query the interpretation information of a plurality of objects to be translated in a single operation, thereby simplifying the user's query operation.
In a first aspect, an embodiment of the present invention provides a search method, including:
responding to a user operation of inputting an object to be translated, and identifying a first object to be translated and a second object to be translated from input contents under the user operation according to a separation mode of the object to be translated under the user operation;
searching the first object to be translated and the second object to be translated to obtain first interpretation information and second interpretation information, wherein the first interpretation information is interpretation information corresponding to the first object to be translated, and the second interpretation information is interpretation information corresponding to the second object to be translated;
and correspondingly presenting the first interpretation information and the first object to be translated and presenting the second interpretation information and the second object to be translated.
Optionally, the method further includes:
identifying a third object to be translated from the input content operated by the user;
searching the third object to be translated to obtain third interpretation information, wherein the third interpretation information is interpretation information corresponding to the third object to be translated;
and correspondingly presenting the third interpretation information and the third object to be translated.
Optionally, the user operation is an operation of inputting a text by a user, and the input content under the user operation is the text input by the user.
Optionally, the separation mode is as follows: in the input content under the user operation, a separation symbol is arranged between the first object to be translated and the second object to be translated.
Optionally, the user operation is an operation of shooting an image by a user, and the input content under the user operation is the image shot by the user.
Optionally, the separation mode is as follows: in the input content operated by the user, the first object to be translated is marked by a first mark symbol, and the second object to be translated is marked by a second mark symbol.
Optionally, the correspondingly presenting the first interpretation information with the first object to be translated and the second interpretation information with the second object to be translated includes:
presenting the first object to be translated and the first interpretation information in a first display area;
presenting the second object to be translated and the second interpretation information in a second display area;
the first display area and the second display area are located in the same presentation interface.
Optionally, the first interpretation information is brief interpretation information of the first object to be translated, and the second interpretation information is brief interpretation information of the second object to be translated.
Optionally, the method further includes:
in response to a triggering operation of viewing detailed interpretation information for a first object to be translated, expanding the first display area and presenting the detailed interpretation information of the first object to be translated in the expanded first display area;
and/or,
and in response to a trigger operation of viewing detailed explanation information for a second object to be translated, expanding the second display area and presenting the detailed explanation information of the second object to be translated in the expanded second display area.
Optionally, the correspondingly presenting the first interpretation information with the first object to be translated and the second interpretation information with the second object to be translated includes:
presenting the first object to be translated and the second object to be translated;
presenting the first interpretation information in response to a trigger operation of viewing interpretation information for the first object to be translated;
and presenting the second interpretation information in response to a trigger operation of viewing the interpretation information for the second object to be translated.
Optionally, the method further includes:
preferentially presenting paraphrases matched with user-related information in a plurality of paraphrases of the first object to be translated in the first interpretation information;
and preferentially presenting paraphrases matched with the user-related information in the plurality of paraphrases of the second object to be translated in the second interpretation information.
In a second aspect, an embodiment of the present invention provides a search apparatus, including:
the translation device comprises a recognition unit, a translation unit and a translation unit, wherein the recognition unit is used for responding to a user operation of inputting an object to be translated, and recognizing a first object to be translated and a second object to be translated from input contents under the user operation according to a separation mode of the object to be translated under the user operation;
the searching unit is used for searching the first object to be translated and the second object to be translated to obtain first interpretation information and second interpretation information, wherein the first interpretation information is interpretation information of the first object to be translated, and the second interpretation information is interpretation information of the second object to be translated;
and the presentation unit is used for correspondingly presenting the first interpretation information and the first object to be translated and presenting the second interpretation information and the second object to be translated.
Optionally, the apparatus further comprises:
the first identification unit is used for identifying a third object to be translated from the input content operated by the user;
the first searching unit is used for searching the third object to be translated to obtain third interpretation information, wherein the third interpretation information is interpretation information of the third object to be translated;
and the first presentation unit is used for correspondingly presenting the third interpretation information and the third object to be translated.
Optionally, the user operation is an operation of inputting a text by a user, and the input content under the user operation is the text input by the user.
Optionally, the separation mode is as follows: in the input content under the user operation, a separation symbol is arranged between the first object to be translated and the second object to be translated.
Optionally, the user operation is an operation of a user to capture an image, and the input content under the user operation is an image captured by the user.
Optionally, the separation mode is as follows: in the input content operated by the user, the first object to be translated is marked by a first mark symbol, and the second object to be translated is marked by a second mark symbol.
Optionally, the presenting unit includes:
the first presentation subunit is used for presenting the first to-be-translated object and the first interpretation information in a first display area;
the second presentation subunit is used for presenting the second object to be translated and the second interpretation information in a second display area;
the first display area and the second display area are located in the same presentation interface.
Optionally, the first interpretation information is specifically brief interpretation information of the first object to be translated, and the second interpretation information is specifically brief interpretation information of the second object to be translated.
Optionally, the apparatus further comprises:
the first expansion unit is used for responding to the triggering operation of viewing the detailed explanation information of the first object to be translated, expanding the first display area and presenting the detailed explanation information of the first object to be translated in the expanded first display area;
and/or,
and the second expansion unit is used for expanding the second display area and presenting the detailed interpretation information of the second object to be translated in the expanded second display area in response to the triggering operation of viewing the detailed interpretation information aiming at the second object to be translated.
Optionally, the presenting unit includes:
the third presentation subunit is used for presenting the first object to be translated and the second object to be translated;
a fourth presentation subunit, configured to present the first interpretation information in response to a trigger operation of viewing the interpretation information for the first object to be translated;
and the fifth presentation subunit is used for presenting the second interpretation information in response to the trigger operation of viewing the interpretation information aiming at the second object to be translated.
Optionally, the apparatus further comprises:
a first priority presentation unit, configured to preferentially present paraphrases matching the user-related information among the plurality of paraphrases of the first object to be translated in the first interpretation information;
and the second priority presentation unit is used for preferentially presenting paraphrases matched with the user related information in the plurality of paraphrases of the second object to be translated in the second interpretation information.
In a third aspect, an embodiment of the present invention provides a search device, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, and include instructions for:
responding to user operation of inputting an object to be translated, and identifying a first object to be translated and a second object to be translated from input contents under the user operation according to a separation mode of the object to be translated under the user operation;
searching the first object to be translated and the second object to be translated to obtain first interpretation information and second interpretation information, wherein the first interpretation information is interpretation information of the first object to be translated, and the second interpretation information is interpretation information of the second object to be translated;
and correspondingly presenting the first interpretation information and the first object to be translated and presenting the second interpretation information and the second object to be translated.
Optionally, the processor is further configured to execute the one or more programs including instructions for:
identifying a third object to be translated from the input content operated by the user;
searching the third object to be translated to obtain third interpretation information, wherein the third interpretation information is interpretation information of the third object to be translated;
and correspondingly presenting the third interpretation information and the third object to be translated.
Optionally, the user operation is an operation of inputting a text by a user, and the input content under the user operation is the text input by the user.
Optionally, the separation mode is as follows: in the input content under the user operation, a separation symbol is arranged between the first object to be translated and the second object to be translated.
Optionally, the user operation is an operation of shooting an image by a user, and the input content under the user operation is the image shot by the user.
Optionally, the separation mode is as follows: in the input content operated by the user, the first object to be translated is marked by a first mark symbol, and the second object to be translated is marked by a second mark symbol.
Optionally, the processor is further configured to execute the one or more programs including instructions for:
presenting the first object to be translated and the first interpretation information in a first display area;
presenting the second object to be translated and the second interpretation information in a second display area;
the first display area and the second display area are located in the same presentation interface.
Optionally, the first interpretation information is specifically brief interpretation information of the first object to be translated, and the second interpretation information is specifically brief interpretation information of the second object to be translated.
Optionally, the processor is further configured to execute the one or more programs including instructions for:
in response to a trigger operation of viewing detailed interpretation information for a first object to be translated, expanding the first display area and presenting the detailed interpretation information of the first object to be translated in the expanded first display area;
and/or,
and in response to a trigger operation of viewing detailed explanation information for a second object to be translated, expanding the second display area and presenting the detailed explanation information of the second object to be translated in the expanded second display area.
Optionally, the processor is further configured to execute the one or more programs including instructions for:
presenting the first object to be translated and the second object to be translated;
presenting the first interpretation information in response to a trigger operation of viewing interpretation information for the first object to be translated;
and presenting the second interpretation information in response to a trigger operation of viewing the interpretation information for the second object to be translated.
Optionally, the processor is further configured to execute the one or more programs including instructions for:
preferentially presenting paraphrases matched with user-related information in a plurality of paraphrases of the first object to be translated in the first interpretation information;
and preferentially presenting paraphrases matched with the user related information in the plurality of paraphrases of the second object to be translated in the second interpretation information.
In a fourth aspect, an embodiment of the present invention further provides a non-transitory computer-readable storage medium, where instructions of the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform a search method, where the method includes:
responding to user operation of inputting an object to be translated, and identifying a first object to be translated and a second object to be translated from input contents under the user operation according to a separation mode of the object to be translated under the user operation;
searching the first object to be translated and the second object to be translated to obtain first interpretation information and second interpretation information, wherein the first interpretation information is interpretation information of the first object to be translated, and the second interpretation information is interpretation information of the second object to be translated;
and correspondingly presenting the first interpretation information and the first object to be translated and presenting the second interpretation information and the second object to be translated.
Optionally, the method further includes:
identifying a third object to be translated from the input content operated by the user;
searching the third object to be translated to obtain third interpretation information, wherein the third interpretation information is interpretation information corresponding to the third object to be translated;
and correspondingly presenting the third interpretation information and the third object to be translated.
Optionally, the user operation is an operation of inputting a text by a user, and the input content under the user operation is the text input by the user.
Optionally, the separation mode is as follows: in the input content under the user operation, a separation symbol is arranged between the first object to be translated and the second object to be translated.
Optionally, the user operation is an operation of shooting an image by a user, and the input content under the user operation is the image shot by the user.
Optionally, the separation mode is as follows: in the input content operated by the user, the first object to be translated is marked by a first mark symbol, and the second object to be translated is marked by a second mark symbol.
Optionally, the correspondingly presenting the first interpretation information with the first object to be translated and the second interpretation information with the second object to be translated includes:
presenting the first object to be translated and the first interpretation information in a first display area;
presenting the second object to be translated and the second interpretation information in a second display area;
the first display area and the second display area are located in the same presentation interface.
Optionally, the first interpretation information is brief interpretation information of the first object to be translated, and the second interpretation information is brief interpretation information of the second object to be translated.
Optionally, the method further includes:
in response to a triggering operation of viewing detailed interpretation information for a first object to be translated, expanding the first display area and presenting the detailed interpretation information of the first object to be translated in the expanded first display area;
and/or,
and in response to a trigger operation of viewing detailed explanation information for a second object to be translated, expanding the second display area and presenting the detailed explanation information of the second object to be translated in the expanded second display area.
Optionally, the correspondingly presenting the first interpretation information with the first object to be translated and the second interpretation information with the second object to be translated includes:
presenting the first object to be translated and the second object to be translated;
presenting the first interpretation information in response to a trigger operation of viewing interpretation information for the first object to be translated;
and presenting the second interpretation information in response to a trigger operation of viewing the interpretation information for the second object to be translated.
Optionally, the method further includes:
preferentially presenting paraphrases matched with user-related information in a plurality of paraphrases of the first object to be translated in the first interpretation information;
and preferentially presenting paraphrases matched with the user related information in the plurality of paraphrases of the second object to be translated in the second interpretation information.
Compared with the prior art, the embodiment of the invention has the following advantages:
according to the method provided by the embodiment of the invention, for the user operation for triggering the search once, the user can input a plurality of objects to be translated, and the application can search the interpretation information of the plurality of objects to be translated and present the interpretation information to the user. For example, if a user needs to query the interpretation information of the first object to be translated and the second object to be translated, the first object to be translated and the second object to be translated can be input into the application together, and the application can search the interpretation information of the first object to be translated and the second object to be translated at the same time and present the same to the user. Therefore, the user can search and view the interpretation information of the objects to be translated at the same time, and does not need to input and view the interpretation information for each object to be translated respectively, so that the query operation of the user is simplified.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a block diagram of an exemplary application scenario in an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a searching method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a card-type presentation according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a searching method according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a search apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The inventor has found that in the prior art, the application can only search the explanation information of one word for the user at a time. But users sometimes need to query for explanatory information for a number of different words. For example, if the user needs to query the explanatory information of the word a and the word B, the user needs to input the word a first so that the application searches the explanatory information of the word a, then delete the word a and input the word B so that the application searches the explanatory information of the word B. Therefore, in the prior art, when the interpretation information of a plurality of different words needs to be queried, the query operation of the user is very complicated.
In order to solve the above problem, in the embodiment of the present invention, a user may input a plurality of objects to be translated in a certain separation manner, and an application may recognize the plurality of objects to be translated input by the user in the separation manner of the objects to be translated under user operation, so that the application may search for interpretation information of the plurality of objects to be translated and present the interpretation information to the user. For example, if a user needs to query the interpretation information of the first object to be translated and the second object to be translated, the first object to be translated and the second object to be translated can be input into the application together, and the application can search the interpretation information of the first object to be translated and the second object to be translated at the same time and present the same to the user. Therefore, the user can search and view the interpretation information of a plurality of objects to be translated at the same time, and does not need to input and view the interpretation information for each object to be translated respectively, so that the query operation of the user is simplified.
For example, embodiments of the present invention may be applied to a scenario as shown in FIG. 1. In this scenario, the user terminal 101 and the server 103 may interact with each other via the network 102. First, in response to a user operation of inputting objects to be translated, the user terminal 101 may identify a first object to be translated and a second object to be translated from the input content under the user operation according to the separation mode of the objects to be translated under the user operation. Then, the user terminal 101 can upload the first object to be translated and the second object to be translated to the server 103 through the network 102. The server 103 may query the interpretation information of the first and second objects to be translated and return the query results to the user terminal 101, where they are presented to the user.
It is understood that the user terminal 101 may be an existing, developing or future developed user device with camera functionality capable of interacting with the server 103 through any form of wired and/or wireless connection (e.g., Wi-Fi, LAN, cellular, coaxial cable, etc.), including but not limited to: existing, developing, or future developed smartphones, non-smartphones, tablet computers, and the like.
Further, the server 103 is only one example of an existing, developing, or future developed device capable of querying translation or interpretation information of an object to be translated and presenting the result to a user. The embodiments of the invention are not limited in any way in this respect.
It is to be appreciated that in the application scenarios described above, while the actions of the embodiments of the present invention are described as being performed in part by the user terminal 101 and in part by the server 103, the actions may be performed entirely by the user terminal 101 or entirely by the server 103. The invention is not limited in its implementation to the details of execution, provided that the acts disclosed in the embodiments of the invention are performed.
It should be noted that the above application scenarios are only presented to facilitate understanding of the present invention, and the embodiments of the present invention are not limited in any way in this respect. Rather, embodiments of the present invention may be applied to any scenario where applicable.
Various non-limiting embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Exemplary method
Referring to fig. 2, a flowchart of a searching method in the embodiment of the present invention is shown. In this embodiment, the method may include, for example, the steps of:
s201, responding to a user operation of inputting an object to be translated, and identifying a first object to be translated and a second object to be translated from input contents under the user operation according to a separation mode of the object to be translated under the user operation.
In specific implementation, a user inputs a first object to be translated and a second object to be translated in an input box according to a certain separation mode, and an application can recognize the first object to be translated and the second object to be translated which are input by the user from the content input by the user in the input box.
The application may be an APP, such as the Baidu Translate APP or the Youdao Translate APP. The user opens the APP and inputs a first object to be translated and a second object to be translated in the input box of the APP, and the APP can identify the first and second objects to be translated from the content input in the input box.
The application may also be a web page, such as the Sogou Search web page or the Baidu Search web page. The user opens the web page and inputs the first object to be translated and the second object to be translated in the input box of the web page, and the web page can identify the first and second objects to be translated from the content input in the input box.
The method for inputting the object to be translated by the user can be manually inputting a text of the object to be translated, or photographing the object to be translated. That is, the user operation may be an operation of the user inputting a text or an operation of the user capturing an image.
When the user operation is an operation of inputting a text by a user, the input content under the user operation may be the text input by the user. In this case, a plurality of separation methods may be used to separate the first object to be translated and the second object to be translated from the text input by the user.
As an example, the separation manner may be that, in the input content under the user operation, a separation symbol is placed between the first object to be translated and the second object to be translated. The separation symbol may be a comma, a Chinese enumeration comma (、), or a similar separator input by the user.
For example, the user may need to query the explanation information of the words "smile" and "laugh", where "smile" may be the first object to be translated and "laugh" the second object to be translated. To input the two words, the user can type "smile" in the input box, then type an enumeration comma "、", and then type "laugh"; the content of the input box is then "smile、laugh". The application can recognize "smile" and "laugh" from this input content and query the two recognized objects to be translated simultaneously in the next step.
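As a minimal sketch of this separator-based identification (the patent does not prescribe an implementation; the function name, the exact separator set and the use of Python here are assumptions for illustration):

import re

# Separator symbols between objects to be translated; the set below
# (comma, Chinese enumeration comma, semicolon) is an assumed example.
SEPARATORS = r"[,、;]"

def identify_objects_to_translate(input_text):
    # Split the input-box content on separator symbols and drop empty parts.
    parts = re.split(SEPARATORS, input_text)
    return [p.strip() for p in parts if p.strip()]

print(identify_objects_to_translate("smile、laugh"))  # ['smile', 'laugh']
print(identify_objects_to_translate("book, door"))    # ['book', 'door']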
As another example, the separation manner may be that the user presses a "+" shortcut key each time the input of one object to be translated is finished.
For example, when the user enters two words, "smile" and "laugh," he may also first enter "smile" in the input box and then click on the "+" shortcut key to cause the application to add the word "smile" to the search list and delete "smile" in the input box. The user then enters "laugh" in the input box and clicks on the "+" shortcut to add the word "laugh" to the search list by the application. In this way, the application can recognize two objects to be translated, namely "smile" and "laugh", so that the two objects to be translated added to the search list can be simultaneously queried in the next step.
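A minimal model of this "+"-shortcut behavior (the class and method names are hypothetical; the patent only describes the interaction):

class SearchList:
    """Each press of the '+' shortcut moves the current input-box content
    into the pending search list and clears the input box."""

    def __init__(self):
        self.pending = []
        self.input_box = ""

    def type_text(self, text):
        self.input_box = text

    def press_plus(self):
        word = self.input_box.strip()
        if word:
            self.pending.append(word)  # add the word to the search list
            self.input_box = ""        # and delete it from the input box

ui = SearchList()
ui.type_text("smile"); ui.press_plus()
ui.type_text("laugh"); ui.press_plus()
print(ui.pending)  # ['smile', 'laugh'] -> both are searched together later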
When the user operation is an operation of a user to capture an image, the input content under the user operation may be an image captured by the user. In this case, a plurality of separation means may be used to separate the first object to be translated and the second object to be translated from the image taken by the user.
As an example, the separation manner may also be that, when capturing words with a word-capture box, the user presses the "+" shortcut key each time the capture of one object to be translated is finished.
Take "smile" and "laugh" as examples, where "smile" may be a first object to be translated and "laugh" may be a second object to be translated. The user aligns the word taking frame with smile, shoots an image of the smile, and then clicks a '+' shortcut key, so that the smile is added into a list to be searched; and then moving the mobile phone to make the word-taking frame aligned with the 'laugh', shooting an image of the 'laugh', and clicking the '+' shortcut key to add the 'laugh' to the list to be searched, so that the application can identify two objects to be encountered, namely 'smile' and 'laugh', and simultaneously inquire the two objects to be translated in the search list.
As an example, the separation manner may be that, in the input content under the user operation, the first object to be translated is marked with a first mark symbol and the second object to be translated is marked with a second mark symbol. The first and second mark symbols may be the same or different.
A user can mark each object to be translated that the user wants to query with a mark symbol. The object to be translated may be a word, such as "take"; a phrase, such as "take care of" or "take the change"; or a sentence, such as "We need to take care of our books".
The mark symbol may be an underline, a circle, a highlight, or the like. For example, if the object to be translated is the word "take" and the mark symbol is an underline, the marked result is the word "take" with an underline beneath it. The input content under the user operation then contains the underlined "take"; the application can recognize the underline mark in the captured image, and thereby recognize the object to be translated "take".
If a user needs to query the interpretation information of several words at the same time, each word can be marked with its own mark symbol, that is, one mark symbol marks one object to be translated. The user then captures a single image containing all the marked objects to be translated, and the application can identify the mark symbols and the corresponding objects to be translated for subsequent querying and presentation. As an example, the words the user wants to query may be "smile" and "laugh"; the user underlines each of the two words, and the captured image contains the underlined "smile" and the underlined "laugh". Here "smile" may be the first object to be translated and its underline the first mark symbol, while "laugh" may be the second object to be translated and its underline the second mark symbol. The application can recognize the two underlines and thereby identify the two objects to be translated, "smile" and "laugh".
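As an illustrative sketch of this marker-based identification, assume an upstream OCR/vision step has already produced tokens annotated with whether a mark symbol was detected under each one (the token structure and function name below are hypothetical assumptions, not an API from the patent):

# Tokens a hypothetical OCR step might return for the captured image.
ocr_tokens = [
    {"text": "We",    "underlined": False},
    {"text": "smile", "underlined": True},
    {"text": "and",   "underlined": False},
    {"text": "laugh", "underlined": True},
]

def extract_marked_objects(tokens):
    # Each marked (underlined) token is one object to be translated.
    return [t["text"] for t in tokens if t["underlined"]]

print(extract_marked_objects(ocr_tokens))  # ['smile', 'laugh']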
S202, searching the first object to be translated and the second object to be translated to obtain first interpretation information and second interpretation information, wherein the first interpretation information is interpretation information corresponding to the first object to be translated, and the second interpretation information is interpretation information corresponding to the second object to be translated.
In a specific implementation, for the identified first object to be translated and second object to be translated, the user can click a search key to trigger the application to search for the two objects simultaneously and obtain the interpretation information corresponding to each of them. The interpretation information can include the phonetic symbols, parts of speech, word senses corresponding to each part of speech, example sentences and other related interpretation information of the object to be translated.
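A minimal sketch of this one-trigger search step (the toy dictionary below stands in for whatever lookup backend the application uses; the patent does not specify one):

# Toy lookup backend; a real application would query a dictionary service.
DICTIONARY = {
    "smile": {"phonetic": "/smaɪl/", "senses": ["v. to smile", "n. a smile"]},
    "laugh": {"phonetic": "/lɑːf/",  "senses": ["v. to laugh", "n. a laugh"]},
}

def search_all(objects_to_translate):
    # One search trigger returns interpretation info for every object at once.
    return {obj: DICTIONARY.get(obj, {"senses": ["<not found>"]})
            for obj in objects_to_translate}

for word, info in search_all(["smile", "laugh"]).items():
    print(word, info)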
S203, correspondingly presenting the first interpretation information and the first object to be translated, and correspondingly presenting the second interpretation information and the second object to be translated.
In order to enable a user to see the interpretation information of the first object to be translated and the interpretation information of the second object to be translated, and to distinguish the interpretation information of the first object to be translated from the interpretation information of the second object to be translated, in this embodiment, the application may present the first interpretation information in correspondence with the first object to be translated, and present the second interpretation information in correspondence with the second object to be translated.
As an example, the application may present interpretation information of a plurality of objects to be translated in the same interface, and the presentation manner may be referred to as a card-type presentation manner, for example. Referring to fig. 3, a schematic diagram of a presentation manner of the card type in the present embodiment is shown. The card-type presentation mode may specifically include: presenting the first object to be translated and the first interpretation information in a first display area; presenting the second object to be translated and the second interpretation information in a second display area; the first display area and the second display area are located in the same presentation interface.
Fig. 3 takes the objects to be translated "book" and "door" as examples, where "book" may be the first object to be translated and "door" the second object to be translated. The presentation interface includes a user input text box 301, a first display area 302 and a second display area 303. In the user input text box 301, the user can separate different objects to be translated by inputting commas between them. The first display area 302 presents the first object to be translated, "book", and its first interpretation information, which may include the phonetic symbol of "book", its parts of speech and the paraphrases corresponding to each part of speech. The second display area 303 presents the second object to be translated, "door", and its second interpretation information, which may include the phonetic symbol of "door", its parts of speech, the paraphrases corresponding to each part of speech, network paraphrases and the plural form of "door".
The card-type presentation manner described in this embodiment can present the plurality of objects to be translated and the interpretation information respectively corresponding to the plurality of objects to be translated to the user in the same presentation interface at the same time. For a plurality of words with similar meanings, the user can clearly see the explanation information corresponding to each word through a card type presentation mode, so that the user can distinguish and compare the words with similar meanings, and the user operation is simplified.
It can be understood that, because the card-type presentation mode presents the interpretation information of a plurality of objects to be translated on the same interface at the same time, the display area for each object's interpretation information is relatively limited, and sometimes only part of the interpretation information can be presented. For this reason, in some embodiments, the card-type presentation mode may present only brief explanation information of an object to be translated in its display area. For example, the first interpretation information may be brief interpretation information of the first object to be translated, and the second interpretation information may be brief interpretation information of the second object to be translated. The brief information can be, for example, the main paraphrases of the object to be translated.
It can be understood that, in the card-type presentation manner, in the case that the display area presents only brief explanation information of the object to be translated, if the user needs to view detailed explanation information of the object to be translated, the embodiment may provide an operation of viewing detailed explanation information of the object to be translated. Specifically, the card-type presentation manner may further include: in response to a triggering operation of viewing detailed interpretation information for a first object to be translated, expanding the first display area and presenting the detailed interpretation information of the first object to be translated in the expanded first display area; and/or expanding the second display area and presenting the detailed interpretation information of the second object to be translated in the expanded second display area in response to a trigger operation for viewing the detailed interpretation information of the second object to be translated.
By presenting only the brief explanation information of an object to be translated and hiding the rest until it is expanded, the card-type presentation mode lets the user view the detailed explanation information on demand while reducing the space of the presentation interface occupied by the explanation information.
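A minimal model of a card with brief information and an expand trigger (the class and method names are hypothetical; the patent only describes the behavior):

class Card:
    """One display area in the card-type presentation: shows brief
    interpretation info until the user triggers 'view details'."""

    def __init__(self, word, brief, detailed):
        self.word, self.brief, self.detailed = word, brief, detailed
        self.expanded = False

    def on_view_details(self):
        # Trigger operation for viewing detailed interpretation information.
        self.expanded = True

    def render(self):
        return f"[{self.word}] {self.detailed if self.expanded else self.brief}"

book = Card("book", "n. book", "/bʊk/ n. book; v. to reserve, e.g. 'book a room'")
door = Card("door", "n. door", "/dɔː/ n. door; plural: doors, e.g. 'open the door'")
print(book.render()); print(door.render())  # two brief cards, one interface
book.on_view_details()
print(book.render())                        # the first card, now expanded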
As an example, the application may further provide a directory of objects to be translated for the user, and the user may select an object to be translated in the directory to obtain interpretation information of the selected object to be translated, where the presentation manner may be referred to as a list-directory type presentation manner, for example. The list-directory presentation manner may specifically include: presenting the first object to be translated and the second object to be translated; presenting the first interpretation information in response to a trigger operation of viewing interpretation information for the first object to be translated; and presenting the second interpretation information in response to the trigger operation of viewing the interpretation information aiming at the second object to be translated.
As an example of the list-directory presentation manner, the application may present the interpretation information to the user in a pop-up manner. Specifically, "book" may be the first object to be translated and "door" may be the second object to be translated. After the application completes the search for "book" and "door", it presents the two objects to the user in a list directory. When the user clicks "book", the application pops up a new interface presenting the interpretation information of "book"; this information can serve as the first interpretation information and can be the detailed interpretation information of the first object to be translated. When the user clicks "door", the application pops up a new interface presenting the interpretation information of "door"; this information can serve as the second interpretation information and can be the detailed interpretation information of the second object to be translated.
As an example, the list-directory type presentation mode may be that the application presents the interpretation information to the user in a blank area of the interface presenting the object to be translated. Specifically, after the application finishes searching for "book" and "door", the application presents a first object "book" to be translated and a second object "door" to be translated to the user, clicks "book", and presents the interpretation information of "book" to the user in a blank area of the current interface; and clicking the "door", and presenting the interpretation information of the "door" to the user by the application in the blank area of the current interface.
In some implementations of this embodiment, in order to save the space occupied by the interpretation information, the application may present the user with a list of the objects to be translated together with part of the interpretation information corresponding to each object, and the user can click a pull-down menu button to obtain the detailed interpretation information of an object to be translated. For example, the application presents "book" with part of its interpretation information, which may be parts of speech and paraphrases, and "door" with part of its interpretation information. If the user needs to check the detailed explanation information of "book" and "door", clicking the pull-down menu button of "book" yields the detailed explanation information of "book", and clicking the pull-down menu button of "door" yields the detailed explanation information of "door".
In the embodiment, a list directory type presentation mode is adopted, a plurality of objects to be translated are presented at the same time, and each object to be translated occupies a small space, so that the plurality of objects to be translated are more easily presented to a user at one time. And then, by means of popup or pull-down menus, the purpose of simultaneously presenting the detailed explanation information respectively corresponding to the multiple objects to be translated to the user can be achieved.
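A minimal model of the list-directory presentation (the names are hypothetical assumptions; only the interaction pattern comes from the description above):

class ListDirectory:
    """All objects to be translated are listed first; the interpretation
    information of one object is presented only when it is clicked."""

    def __init__(self, results):   # results: object -> interpretation info
        self.results = results

    def render_directory(self):
        return list(self.results)  # only the object list is shown at first

    def on_click(self, word):
        # Trigger operation for viewing one object's interpretation info,
        # e.g. via a pop-up or a blank area of the current interface.
        return self.results[word]

ui = ListDirectory({"book": "n. book; v. to reserve", "door": "n. door"})
print(ui.render_directory())  # ['book', 'door']
print(ui.on_click("book"))    # interpretation info presented on demand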
It can be understood that, when the application presents the interpretation information of an object to be translated, the object may have many paraphrases, and the user may have to hunt for the needed paraphrase among a large amount of interpretation information, which is inconvenient. Therefore, this embodiment can preferentially present the paraphrases that meet the user's needs, according to the user's paraphrase requirements.
Specifically, S203 may specifically be: preferentially presenting, in the first interpretation information, the paraphrases of the first object to be translated that match the user-related information; and preferentially presenting, in the second interpretation information, the paraphrases of the second object to be translated that match the user-related information. The mentioned user-related information is information that can reflect the user's paraphrase requirements; for example, it may be the user's grade or the user's research area. The user-related information may be input by the user in advance when using the application, or may be determined by the application from the paragraph in which the input object to be translated is located. Preferential presentation may mean presenting to the user only the paraphrases that match the user-related information, or presenting the matching paraphrases ahead of the other interpretation information.
As an example, the information the user entered in advance when using the application indicates a first-year junior middle school student. The user inputs "book" and "door" in the input box of the application, and the application determines from the pre-entered user-related information that the user is a first-year junior middle school student. The application can then preferentially present the paraphrases and related example sentences of "book" and "door" that such a student is likely to need, sparing the user from having to pick the required interpretation information out of a large amount of it.
As an example, the objects to be translated may be the words "interference" and "reflection" in the optical domain. When the user inputs the object to be translated by shooting the image, the content in the shot image may include the paragraph where the object to be translated is located. The application can judge that the object to be translated belongs to the optical field according to the paragraph where the object to be translated is located. Then, the application can preferentially present interpretation information of the object to be translated in the optical field, such as paraphrases and related example sentences, to the user, so that the problem that the user has difficulty in obtaining correct interpretation information of the word from a large amount of interpretation information is avoided.
Further, for the first object to be translated, several paraphrases in its first interpretation information may match the user-related information. For multiple paraphrases that match the user-related information, the presentation order may be determined by the degree of matching. That is, S203 may also specifically be: preferentially presenting, in the first interpretation information, the paraphrases of the first object to be translated with the highest degree of matching to the user-related information; and preferentially presenting, in the second interpretation information, the paraphrases of the second object to be translated with the highest degree of matching to the user-related information.
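A sketch of this matching-degree ranking (the tag scheme and the scoring function are assumptions for illustration; the patent does not define how the matching degree is computed):

def rank_paraphrases(paraphrases, user_info):
    # Matching degree here is simply the number of shared tags; any
    # other scoring function could be substituted.
    def match_degree(p):
        return len(set(p["tags"]) & set(user_info))
    return sorted(paraphrases, key=match_degree, reverse=True)

interference = [
    {"sense": "meddling in others' affairs", "tags": ["general"]},
    {"sense": "superposition of waves",      "tags": ["optics", "physics"]},
]
# If the surrounding paragraph indicates the optical field, that sense wins.
for p in rank_paraphrases(interference, user_info=["optics"]):
    print(p["sense"])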
It should be noted that, in the case that the first object to be translated and the second object to be translated are identified, in addition to presenting the interpretation information of the first object to be translated and the interpretation information of the second object to be translated, only the interpretation information of the first object to be translated or only the interpretation information of the second object to be translated may be presented. Specifically, after the application identifies a plurality of objects to be translated, the application may select one of the objects to be translated or a part of the objects to be translated according to a preset rule to search and present the searched interpretation information.
As an example, the image shot by the user may contain "smile laugh", where "smile" may be the first object to be translated and "laugh" the second. The application can recognize the two objects to be translated, "smile" and "laugh". If, however, the user only needs the interpretation information of "laugh", then before the search the user can set "laugh" as the target search object through the target-search-object setting control on the search interface, so that only "laugh" is searched, its interpretation information is obtained, and that interpretation information is presented to the user.
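A sketch of this target-search-object selection; the function name and the representation of the setting are assumptions for illustration only:

```python
def select_search_objects(identified, target=None):
    """Return only the objects the user asked to search.
    `target` models the target search object set on the search interface;
    when no target is set, all identified objects are searched."""
    if target is None:
        return identified
    return [obj for obj in identified if obj == target]

print(select_search_objects(["smile", "laugh"], target="laugh"))  # ['laugh']
print(select_search_objects(["smile", "laugh"]))                  # both objects
```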
It should be noted that the method described in this embodiment may be applied not only to two objects to be translated but also to more, for example, three. When there are three objects to be translated, the method of this embodiment may further include: identifying a third object to be translated from the input content under the user operation; searching the third object to be translated to obtain third interpretation information, where the third interpretation information is the interpretation information of the third object to be translated; and correspondingly presenting the third interpretation information and the third object to be translated.
In addition, beyond the embodiments involving the first object to be translated, the second object to be translated, and the third object to be translated, the method described in this embodiment may be applied to searching for and presenting any number of objects to be translated. For example, in response to a user operation of inputting objects to be translated, the application may identify at least one object to be translated from the input content under the user operation according to the separation mode of the objects, search to obtain the interpretation information corresponding to each object to be translated, and then correspondingly present each object to be translated together with its interpretation information.
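As a minimal sketch of this generalized flow for text input, assuming a semicolon as the separation symbol and a placeholder lookup function standing in for the real dictionary search backend:

```python
SEPARATOR = ";"  # assumed separation symbol between objects to be translated

def lookup(obj):
    # Placeholder for the real search backend returning interpretation info.
    return f"<interpretation information of {obj!r}>"

def search_and_present(input_text):
    # Identify at least one object to be translated by the separation mode.
    objects = [w.strip() for w in input_text.split(SEPARATOR) if w.strip()]
    # Search each object, then present it together with its interpretation.
    for obj in objects:
        print(obj, "->", lookup(obj))

search_and_present("book; door; window")  # three objects, one search operation
```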
With the method provided by this embodiment, a single user operation that triggers a search can input a plurality of objects to be translated, and the application searches the interpretation information of all of them and presents it to the user. For example, if a user needs to query the interpretation information of the first object to be translated and the second object to be translated, the two can be input into the application together; the application then searches the interpretation information of both at the same time and presents it to the user. The user can therefore search for and view the interpretation information of multiple objects to be translated at once, without inputting and viewing them one by one, which simplifies the query operation.
A specific scenario is taken as an example below to introduce the search method. In this scenario, the words "look", "see", and "watch" all carry the meaning of "to see"; the user underlines "look", "see", and "watch" respectively, and needs to input the three words and query their interpretation information simultaneously so as to compare and distinguish them.
Referring to fig. 4, a flow chart of a search method in an embodiment of the present invention is shown, where the method may include, for example, the following steps:
S401, the user inputs the objects to be translated by shooting an image of "look see watch".
S402, the application identifies, from "look see watch", the first object to be translated "look", the second object to be translated "see", and the third object to be translated "watch".
S403, the application searches "look", "see", and "watch" to obtain first interpretation information, second interpretation information, and third interpretation information, where the first interpretation information is the interpretation information of "look", the second is that of "see", and the third is that of "watch".
S404, the application presents the first object to be translated "look" and the detailed interpretation information of "look" in a first display area; presents the second object to be translated "see" and the detailed interpretation information of "see" in a second display area; and presents the third object to be translated "watch" and the detailed interpretation information of "watch" in a third display area. The first display area, the second display area, and the third display area are located in the same presentation interface.
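The flow S401 to S404 can be sketched as follows; the ocr_with_marks helper, which would return recognized words together with an underline flag, is hypothetical, as is the mapping of objects to display areas:

```python
def ocr_with_marks(image):
    # Hypothetical OCR step: returns (word, is_underlined) pairs for the
    # shot image. Here the scenario's result is hard-coded.
    return [("look", True), ("see", True), ("watch", True)]

def run_scenario(image):
    # S402: the underline is the mark symbol separating the objects.
    objects = [word for word, marked in ocr_with_marks(image) if marked]
    # S403-S404: search each object and present it in its own display area
    # of the same presentation interface.
    for area, obj in enumerate(objects, start=1):
        print(f"display area {area}: {obj} -> <detailed interpretation of {obj!r}>")

run_scenario("photo-of-look-see-watch.png")
```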
In this embodiment of the invention, the object to be translated is input by shooting an image, which simplifies the user operation and allows the application to identify the object to be translated more accurately. Meanwhile, the application can search the interpretation information of multiple objects to be translated at the same time and present it to the user, so the user does not need to input and view the interpretation information for each object separately, which makes it convenient for the user to distinguish and compare the meanings of the objects to be translated.
Exemplary device
Referring to fig. 5, a schematic structural diagram of a search apparatus in an embodiment of the present invention is shown. In this embodiment, the apparatus may specifically include: an identification unit 501, a search unit 502 and a presentation unit 503.
The identifying unit 501 is configured to identify, in response to a user operation of inputting an object to be translated, a first object to be translated and a second object to be translated from the input content under the user operation according to the separation mode of the objects to be translated under the user operation;
the searching unit 502 is configured to search the first object to be translated and the second object to be translated to obtain first interpretation information and second interpretation information, where the first interpretation information is interpretation information of the first object to be translated, and the second interpretation information is interpretation information of the second object to be translated;
the presenting unit 503 is configured to present the first interpretation information and the first object to be translated correspondingly, and present the second interpretation information and the second object to be translated correspondingly.
Optionally, the apparatus further comprises:
the first identification unit is used for identifying a third object to be translated from the input content operated by the user;
the first searching unit is used for searching the third object to be translated to obtain third interpretation information, wherein the third interpretation information is interpretation information of the third object to be translated;
and the first presentation unit is used for correspondingly presenting the third interpretation information and the third object to be translated.
Optionally, the user operation is an operation of inputting a text by a user, and the input content under the user operation is the text input by the user.
Optionally, the separation mode is as follows: in the input content under the user operation, a separation symbol is arranged between the first object to be translated and the second object to be translated.
Optionally, the user operation is an operation of shooting an image by a user, and the input content under the user operation is the image shot by the user.
Optionally, the separation mode is as follows: in the input content operated by the user, the first object to be translated is marked by a first mark symbol, and the second object to be translated is marked by a second mark symbol.
Optionally, the presenting unit 503 includes:
the first presentation subunit is used for presenting the first to-be-translated object and the first interpretation information in a first display area;
the second presentation subunit is used for presenting the second object to be translated and the second interpretation information in a second display area;
the first display area and the second display area are located in the same presentation interface.
Optionally, the first interpretation information is specifically brief interpretation information of the first object to be translated, and the second interpretation information is specifically brief interpretation information of the second object to be translated.
Optionally, the apparatus further comprises:
the first expansion unit is used for responding to the triggering operation of viewing the detailed explanation information of the first object to be translated, expanding the first display area and presenting the detailed explanation information of the first object to be translated in the expanded first display area;
and/or,
and the second expansion unit is used for expanding the second display area and presenting the detailed interpretation information of the second object to be translated in the expanded second display area in response to the triggering operation of viewing the detailed interpretation information aiming at the second object to be translated.
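A sketch of the card-style behavior these expansion units describe, assuming brief and detailed interpretation strings per object and modeling the view-details trigger as a method call; all names are illustrative:

```python
class Card:
    """One display area: shows brief interpretation information until the
    user triggers viewing of the detailed interpretation information."""
    def __init__(self, obj, brief, detailed):
        self.obj, self.brief, self.detailed = obj, brief, detailed
        self.expanded = False

    def expand(self):
        # Triggered by e.g. tapping the card's expand control.
        self.expanded = True

    def render(self):
        body = self.detailed if self.expanded else self.brief
        return f"[{self.obj}] {body}"

card = Card("look", "v. to direct one's eyes",
            "v. to direct one's eyes; look at / look after / look for ...")
print(card.render())   # brief information; other interpretation info hidden
card.expand()
print(card.render())   # expanded display area with the detailed information
```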
Optionally, the presenting unit 503 includes:
the third presentation subunit is used for presenting the first object to be translated and the second object to be translated;
a fourth presentation subunit, configured to present the first interpretation information in response to a trigger operation of viewing the interpretation information for the first object to be translated;
and the fifth presentation subunit is used for responding to the trigger operation of viewing the interpretation information aiming at the second object to be translated and presenting the second interpretation information.
Optionally, the apparatus further comprises:
a first priority presentation unit, configured to preferentially present paraphrases matching the user-related information among the plurality of paraphrases of the first object to be translated in the first interpretation information;
and the second priority presentation unit is used for preferentially presenting paraphrases matched with the user related information in the plurality of paraphrases of the second object to be translated in the second interpretation information.
With the apparatus provided by this embodiment, a single user operation that triggers a search can input a plurality of objects to be translated, and the application searches the interpretation information of all of them and presents it to the user. For example, if a user needs to query the interpretation information of the first object to be translated and the second object to be translated, the two can be input into the application together; the application then searches the interpretation information of both at the same time and presents it to the user. The user can therefore search for and view the interpretation information of multiple objects to be translated at once, without inputting and viewing them one by one, which simplifies the query operation.
Referring to fig. 6, the apparatus 1800 may include one or more of the following components: a processing component 1802, a memory 1804, a power component 1806, a multimedia component 1808, an audio component 1810, an input/output (I/O) interface 1812, a sensor component 1814, and a communication component 1816.
The processing component 1802 generally controls the overall operation of the device 1800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1802 may include one or more processors 1820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1802 may include one or more modules that facilitate interaction between the processing component 1802 and other components. For example, the processing component 1802 can include a multimedia module to facilitate interaction between the multimedia component 1808 and the processing component 1802.
The memory 1804 is configured to store various types of data to support operation at the device 1800. Examples of such data include instructions for any application or method operating on the device 1800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1804 may be implemented by any type or combination of volatile or non-volatile storage devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A power supply component 1806 provides power to the various components of the device 1800. The power components 1806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 1800.
The multimedia component 1808 includes a screen providing an output interface between the apparatus 1800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1808 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 1800 is in an operational mode, such as a shooting mode or a video mode. Each front-facing camera and rear-facing camera may be a fixed optical lens system or have focus and optical zoom capability.
Audio component 1810 is configured to output and/or input audio signals. For example, audio component 1810 may include a Microphone (MIC) configured to receive external audio signals when apparatus 1800 is in an operational mode, such as a call mode, a record mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1804 or transmitted via the communication component 1816. In some embodiments, audio component 1810 also includes a speaker for outputting audio signals.
I/O interface 1812 provides an interface between processing component 1802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 1814 includes one or more sensors for providing status assessments of various aspects of the apparatus 1800. For example, the sensor component 1814 can detect the open/closed state of the apparatus 1800 and the relative positioning of components, such as the display and keypad of the apparatus 1800; it can also detect a change in position of the apparatus 1800 or of a component of the apparatus 1800, the presence or absence of user contact with the apparatus 1800, the orientation or acceleration/deceleration of the apparatus 1800, and a change in temperature of the apparatus 1800. The sensor component 1814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. It may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1816 is configured to facilitate wired or wireless communication between the apparatus 1800 and other devices. The apparatus 1800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1816 receives a broadcast signal or broadcast-associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1816 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
Specifically, an embodiment of the present invention provides a search device, which includes a memory and one or more programs, where the one or more programs are stored in the memory and are configured to be executed by one or more processors, the one or more programs including instructions for:
responding to user operation of inputting an object to be translated, and identifying a first object to be translated and a second object to be translated from input contents under the user operation according to a separation mode of the object to be translated under the user operation;
searching the first object to be translated and the second object to be translated to obtain first interpretation information and second interpretation information, wherein the first interpretation information is interpretation information of the first object to be translated, and the second interpretation information is interpretation information of the second object to be translated;
and correspondingly presenting the first interpretation information and the first object to be translated and presenting the second interpretation information and the second object to be translated.
Optionally, the processor is further configured to execute the one or more programs including instructions for:
identifying a third object to be translated from the input content operated by the user;
searching the third object to be translated to obtain third interpretation information, wherein the third interpretation information is interpretation information of the third object to be translated;
and correspondingly presenting the third interpretation information and the third object to be translated.
Optionally, the user operation is an operation of inputting a text by a user, and the input content under the user operation is the text input by the user.
Optionally, the separation mode is as follows: in the input content under the user operation, a separation symbol is arranged between the first object to be translated and the second object to be translated.
Optionally, the user operation is an operation of a user to capture an image, and the input content under the user operation is an image captured by the user.
Optionally, the separation mode is as follows: in the input content operated by the user, the first object to be translated is marked by a first mark symbol, and the second object to be translated is marked by a second mark symbol.
Optionally, the processor is further configured to execute the one or more programs including instructions for:
presenting the first object to be translated and the first interpretation information in a first display area;
presenting the second object to be translated and the second interpretation information in a second display area;
the first display area and the second display area are located in the same presentation interface.
Optionally, the first interpretation information is specifically brief interpretation information of the first object to be translated, and the second interpretation information is specifically brief interpretation information of the second object to be translated.
Optionally, the processor is further configured to execute the one or more programs including instructions for:
in response to a triggering operation of viewing detailed interpretation information for a first object to be translated, expanding the first display area and presenting the detailed interpretation information of the first object to be translated in the expanded first display area;
and/or,
and in response to a trigger operation of viewing detailed explanation information for a second object to be translated, expanding the second display area and presenting the detailed explanation information of the second object to be translated in the expanded second display area.
Optionally, the processor is further configured to execute the one or more programs including instructions for:
presenting the first object to be translated and the second object to be translated;
presenting the first interpretation information in response to a trigger operation of viewing interpretation information for the first object to be translated;
and presenting the second interpretation information in response to a trigger operation of viewing the interpretation information for the second object to be translated.
Optionally, the processor is further configured to execute the one or more programs including instructions for:
preferentially presenting paraphrases matched with user-related information in a plurality of paraphrases of the first object to be translated in the first interpretation information;
and preferentially presenting paraphrases matched with the user related information in the plurality of paraphrases of the second object to be translated in the second interpretation information.
Embodiments of the invention also provide a non-transitory computer-readable storage medium, such as the memory 1804 including instructions executable by the processor 1820 of the device 1800 to perform the above-described method, or the storage medium 1930 including instructions executable by the central processor 1922 of the server 1900 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium having instructions therein which, when executed by a processor of an electronic device, enable the electronic device to perform a search method, the method comprising:
responding to user operation of inputting an object to be translated, and identifying a first object to be translated and a second object to be translated from input contents under the user operation according to a separation mode of the object to be translated under the user operation;
searching the first object to be translated and the second object to be translated to obtain first interpretation information and second interpretation information, wherein the first interpretation information is interpretation information of the first object to be translated, and the second interpretation information is interpretation information of the second object to be translated;
and correspondingly presenting the first interpretation information and the first object to be translated and presenting the second interpretation information and the second object to be translated.
Fig. 7 is a schematic structural diagram of a server in an embodiment of the present invention. The server 1900, which may vary considerably in configuration or performance, may include one or more Central Processing Units (CPUs) 1922 (e.g., one or more processors) and memory 1932, one or more storage media 1930 (e.g., one or more mass storage devices) storing applications 1942 or data 1944. Memory 1932 and storage medium 1930 can be, among other things, transient or persistent storage. The program stored in the storage medium 1930 may include one or more modules (not shown), each of which may include a series of instructions operating on a server. Still further, a central processor 1922 may be provided in communication with the storage medium 1930 to execute a series of instruction operations in the storage medium 1930 on the server 1900.
The server 1900 may also include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input-output interfaces 1958, one or more keyboards 1956, and/or one or more operating systems 1941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. The invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the invention is limited only by the appended claims.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (36)

1. A method of searching, comprising:
responding to user operation of inputting an object to be translated, and identifying a first object to be translated and a second object to be translated from input contents under the user operation according to a separation mode of the object to be translated under the user operation;
searching the first object to be translated and the second object to be translated to obtain first interpretation information and second interpretation information, wherein the first interpretation information is interpretation information corresponding to the first object to be translated, and the second interpretation information is interpretation information corresponding to the second object to be translated;
presenting the first object to be translated and the first interpretation information in a first display area;
presenting the second object to be translated and the second interpretation information in a second display area;
the first display area and the second display area are located on the same presentation interface, the first interpretation information is brief interpretation information of the first object to be translated, the second interpretation information is brief interpretation information of the second object to be translated, the brief interpretation information of the object to be translated is presented in a card type presentation mode, and other interpretation information is hidden.
2. The method of claim 1, further comprising:
identifying a third object to be translated from the input content operated by the user;
searching the third object to be translated to obtain third interpretation information, wherein the third interpretation information is interpretation information corresponding to the third object to be translated;
and correspondingly presenting the third interpretation information and the third object to be translated.
3. The method according to claim 1, wherein the user operation is an operation of inputting text by a user, and the input content under the user operation is the text input by the user.
4. A method according to claim 3, wherein the separation is by: in the input content under the user operation, a separation symbol is arranged between the first object to be translated and the second object to be translated.
5. The method according to claim 1, wherein the user operation is an operation of a user to capture an image, and the input content under the user operation is an image captured by the user.
6. The method of claim 5, wherein the separating is by: in the input content operated by the user, the first object to be translated is marked by a first mark symbol, and the second object to be translated is marked by a second mark symbol.
7. The method of claim 1, further comprising:
in response to a triggering operation of viewing detailed interpretation information for a first object to be translated, expanding the first display area and presenting the detailed interpretation information of the first object to be translated in the expanded first display area;
and/or,
and in response to a trigger operation of viewing detailed explanation information for a second object to be translated, expanding the second display area and presenting the detailed explanation information of the second object to be translated in the expanded second display area.
8. The method of claim 1, wherein presenting the interpretation information of the first object to be translated in correspondence with the first object to be translated and presenting the interpretation information of the second object to be translated in correspondence with the second object to be translated comprises:
presenting the first object to be translated and the second object to be translated;
presenting the first interpretation information in response to a trigger operation of viewing interpretation information for the first object to be translated;
and presenting the second interpretation information in response to the trigger operation of viewing the interpretation information aiming at the second object to be translated.
9. The method of claim 1,
preferentially presenting paraphrases matched with user-related information in a plurality of paraphrases of the first object to be translated in the first interpretation information;
and preferentially presenting paraphrases matched with the user-related information in the plurality of paraphrases of the second object to be translated in the second interpretation information.
10. A search apparatus, comprising:
the translation device comprises a recognition unit, a translation unit and a translation unit, wherein the recognition unit is used for responding to a user operation of inputting an object to be translated, and recognizing a first object to be translated and a second object to be translated from input contents under the user operation according to a separation mode of the object to be translated under the user operation;
the searching unit is used for searching the first object to be translated and the second object to be translated to obtain first interpretation information and second interpretation information, wherein the first interpretation information is interpretation information of the first object to be translated, and the second interpretation information is interpretation information of the second object to be translated;
a presentation unit configured to present the first object to be translated and the first interpretation information in a first display area; presenting the second object to be translated and the second interpretation information in a second display area; the first display area and the second display area are located on the same presentation interface, the first interpretation information is brief interpretation information of the first object to be translated, the second interpretation information is brief interpretation information of the second object to be translated, the brief interpretation information of the object to be translated is presented in a card type presentation mode, and other interpretation information is hidden.
11. The apparatus of claim 10, further comprising:
the first identification unit is used for identifying a third object to be translated from the input content operated by the user;
the first searching unit is used for searching the third object to be translated to obtain third interpretation information, wherein the third interpretation information is interpretation information of the third object to be translated;
and the first presentation unit is used for correspondingly presenting the third interpretation information and the third object to be translated.
12. The apparatus according to claim 10, wherein the user operation is an operation of inputting text by a user, and the input content by the user operation is the text input by the user.
13. The apparatus of claim 12, wherein the separation is: in the input content under the user operation, a separation symbol is arranged between the first object to be translated and the second object to be translated.
14. The apparatus according to claim 10, wherein the user operation is an operation of a user to capture an image, and the input content under the user operation is an image captured by the user.
15. The apparatus of claim 14, wherein the separation is: in the input content operated by the user, the first object to be translated is marked by a first mark symbol, and the second object to be translated is marked by a second mark symbol.
16. The apparatus of claim 10, further comprising:
the first expansion unit is used for responding to the triggering operation of viewing the detailed explanation information of the first object to be translated, expanding the first display area and presenting the detailed explanation information of the first object to be translated in the expanded first display area;
and/or,
and the second expansion unit is used for expanding the second display area and presenting the detailed interpretation information of the second object to be translated in the expanded second display area in response to the triggering operation of viewing the detailed interpretation information aiming at the second object to be translated.
17. The apparatus of claim 10, wherein the presentation unit comprises:
the third presentation subunit is used for presenting the first object to be translated and the second object to be translated;
a fourth presentation subunit, configured to present the first interpretation information in response to a trigger operation of viewing the interpretation information for the first object to be translated;
and the fifth presentation subunit is used for presenting the second interpretation information in response to the trigger operation of viewing the interpretation information aiming at the second object to be translated.
18. The apparatus of claim 10, further comprising:
a first priority presentation unit, configured to preferentially present paraphrases matching the user-related information among the plurality of paraphrases of the first object to be translated in the first interpretation information;
and the second priority presentation unit is used for preferentially presenting paraphrases matched with the user related information in the plurality of paraphrases of the second object to be translated in the second interpretation information.
19. A search apparatus comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for:
responding to user operation of inputting an object to be translated, and identifying a first object to be translated and a second object to be translated from input contents under the user operation according to a separation mode of the object to be translated under the user operation;
searching the first object to be translated and the second object to be translated to obtain first interpretation information and second interpretation information, wherein the first interpretation information is interpretation information of the first object to be translated, and the second interpretation information is interpretation information of the second object to be translated;
presenting the first object to be translated and the first interpretation information in a first display area;
presenting the second object to be translated and the second interpretation information in a second display area;
the first display area and the second display area are located on the same presentation interface, the first interpretation information is brief interpretation information of the first object to be translated, the second interpretation information is brief interpretation information of the second object to be translated, the brief interpretation information of the object to be translated is presented in a card type presentation mode, and other interpretation information is hidden.
20. The apparatus of claim 19, wherein the processor is further configured to execute the one or more programs including instructions for:
identifying a third object to be translated from the input content operated by the user;
searching the third object to be translated to obtain third interpretation information, wherein the third interpretation information is interpretation information of the third object to be translated;
and correspondingly presenting the third interpretation information and the third object to be translated.
21. The apparatus according to claim 19, wherein the user operation is an operation of inputting text by a user, and the input content under the user operation is the text input by the user.
22. The apparatus of claim 21, wherein the separation is: in the input content under the user operation, a separation symbol is arranged between the first object to be translated and the second object to be translated.
23. The apparatus according to claim 19, wherein the user operation is an operation of a user to capture an image, and the input content under the user operation is an image captured by the user.
24. The apparatus of claim 23, wherein the separation is: in the input content operated by the user, the first object to be translated is marked by a first mark symbol, and the second object to be translated is marked by a second mark symbol.
25. The apparatus of claim 19, wherein the processor is further configured to execute the one or more programs including instructions for:
in response to a triggering operation of viewing detailed interpretation information for a first object to be translated, expanding the first display area and presenting the detailed interpretation information of the first object to be translated in the expanded first display area;
and/or,
and in response to a trigger operation of viewing detailed explanation information for a second object to be translated, expanding the second display area and presenting the detailed explanation information of the second object to be translated in the expanded second display area.
26. The apparatus of claim 19, wherein the processor is further configured to execute the one or more programs including instructions for:
presenting the first object to be translated and the second object to be translated;
presenting the first interpretation information in response to a trigger operation of viewing the interpretation information for the first object to be translated;
and presenting the second interpretation information in response to a trigger operation of viewing the interpretation information for the second object to be translated.
27. The apparatus of claim 19, wherein the processor is further configured to execute the one or more programs including instructions for:
preferentially presenting paraphrases matched with user-related information in a plurality of paraphrases of the first object to be translated in the first interpretation information;
and preferentially presenting paraphrases matched with the user related information in the plurality of paraphrases of the second object to be translated in the second interpretation information.
28. A non-transitory computer readable storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform a search method, the method comprising:
responding to user operation of inputting an object to be translated, and identifying a first object to be translated and a second object to be translated from input contents under the user operation according to a separation mode of the object to be translated under the user operation;
searching the first object to be translated and the second object to be translated to obtain first interpretation information and second interpretation information, wherein the first interpretation information is interpretation information of the first object to be translated, and the second interpretation information is interpretation information of the second object to be translated;
presenting the first object to be translated and the first interpretation information in a first display area;
presenting the second object to be translated and the second interpretation information in a second display area;
the first display area and the second display area are located on the same presentation interface, the first interpretation information is brief interpretation information of the first object to be translated, the second interpretation information is brief interpretation information of the second object to be translated, the brief interpretation information of the object to be translated is presented in a card type presentation mode, and other interpretation information is hidden.
29. The storage medium of claim 28, wherein the method further comprises:
identifying a third object to be translated from the input content operated by the user;
searching the third object to be translated to obtain third interpretation information, wherein the third interpretation information is interpretation information corresponding to the third object to be translated;
and correspondingly presenting the third interpretation information and the third object to be translated.
30. The storage medium of claim 28, wherein the user operation is an operation of inputting text by a user, and the input content under the user operation is the text input by the user.
31. The storage medium of claim 30, wherein the separation is: in the input content under the user operation, a separation symbol is arranged between the first object to be translated and the second object to be translated.
32. The storage medium according to claim 28, wherein the user operation is an operation of a user to capture an image, and the input content under the user operation is an image captured by the user.
33. The storage medium of claim 32, wherein the separation is: in the input content operated by the user, the first object to be translated is marked by a first mark symbol, and the second object to be translated is marked by a second mark symbol.
34. The storage medium of claim 28, further comprising:
in response to a trigger operation of viewing detailed interpretation information for a first object to be translated, expanding the first display area and presenting the detailed interpretation information of the first object to be translated in the expanded first display area;
and/or,
and in response to a trigger operation of viewing detailed explanation information for a second object to be translated, expanding the second display area and presenting the detailed explanation information of the second object to be translated in the expanded second display area.
35. The storage medium of claim 28, wherein presenting the interpretation information of the first object to be translated in correspondence with the first object to be translated and presenting the interpretation information of the second object to be translated in correspondence with the second object to be translated comprises:
presenting the first object to be translated and the second object to be translated;
presenting the first interpretation information in response to a trigger operation of viewing the interpretation information for the first object to be translated;
and presenting the second interpretation information in response to a trigger operation of viewing the interpretation information for the second object to be translated.
36. The storage medium of claim 28, wherein the method further comprises:
preferentially presenting paraphrases matched with user-related information in a plurality of paraphrases of the first object to be translated in the first interpretation information;
and preferentially presenting paraphrases matched with the user-related information in the plurality of paraphrases of the second object to be translated in the second interpretation information.
CN201710465411.1A 2017-06-19 2017-06-19 Searching method, device and equipment Active CN109145310B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710465411.1A CN109145310B (en) 2017-06-19 2017-06-19 Searching method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710465411.1A CN109145310B (en) 2017-06-19 2017-06-19 Searching method, device and equipment

Publications (2)

Publication Number Publication Date
CN109145310A CN109145310A (en) 2019-01-04
CN109145310B true CN109145310B (en) 2022-09-23

Family

ID=64804330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710465411.1A Active CN109145310B (en) 2017-06-19 2017-06-19 Searching method, device and equipment

Country Status (1)

Country Link
CN (1) CN109145310B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105786804A (en) * 2016-02-26 2016-07-20 维沃移动通信有限公司 Translation method and mobile terminal
CN106776585A (en) * 2016-11-29 2017-05-31 维沃移动通信有限公司 Instant translation method and mobile terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080154853A1 (en) * 2006-12-22 2008-06-26 International Business Machines Corporation English-language translation of exact interpretations of keyword queries

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105786804A (en) * 2016-02-26 2016-07-20 维沃移动通信有限公司 Translation method and mobile terminal
CN106776585A (en) * 2016-11-29 2017-05-31 维沃移动通信有限公司 Instant translation method and mobile terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Clever Use of Software Makes Translation Simple; Ben Lang (笨狼); Computer Knowledge and Technology (Experience and Skills) (电脑知识与技术(经验技巧)); 2010-09-05 (Issue 09); full text *

Also Published As

Publication number Publication date
CN109145310A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
CN108829686B (en) Translation information display method, device, equipment and storage medium
WO2017092121A1 (en) Information processing method and device
CN105447109A (en) Key word searching method and apparatus
CN111160047A (en) Data processing method and device and data processing device
CN106919642B (en) Cross-language search method and device for cross-language search
CN107943317B (en) Input method and device
CN108628461B (en) Input method and device and method and device for updating word stock
KR20140133153A (en) Mobile terminal and method for controlling of the same
CN113033163A (en) Data processing method and device and electronic equipment
CN111414766B (en) Translation method and device
CN112199032A (en) Expression recommendation method and device and electronic equipment
CN109799916B (en) Candidate item association method and device
CN109145310B (en) Searching method, device and equipment
CN112329480A (en) Area adjustment method and device and electronic equipment
CN108108356B (en) Character translation method, device and equipment
WO2017035985A1 (en) String storing method and device
CN111092971A (en) Display method and device for displaying
CN109388328B (en) Input method, device and medium
CN112015281A (en) Cloud association method and related device
CN110851624A (en) Information query method and related device
CN112199033B (en) Voice input method and device and electronic equipment
CN111722726B (en) Method and device for determining pigment and text
CN111880696B (en) Encyclopedic-based data processing method and device
CN111381688B (en) Method and device for real-time transcription and storage medium
CN109408623B (en) Information processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant