CN111814492A - Translation method, terminal and computer storage medium - Google Patents

Info

Publication number: CN111814492A
Authority: CN (China)
Prior art keywords: translation, target language, translated, terminal, content
Legal status: Pending (the listed status is an assumption and is not a legal conclusion)
Application number: CN202010543789.0A
Other languages: Chinese (zh)
Inventors: 杨里, 徐巍
Current Assignee: Shanghai Chuanying Information Technology Co Ltd
Original Assignee: Shanghai Chuanying Information Technology Co Ltd
Priority date: 2020-06-15
Filing date: 2020-06-15
Publication date: 2020-10-23
Application filed by Shanghai Chuanying Information Technology Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/40: Processing or translation of natural language
    • G06F 40/58: Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G06F 40/55: Rule-based translation
    • G06F 40/56: Natural language generation

Abstract

The application discloses a translation method, a terminal and a computer storage medium. The translation method comprises: identifying a usage scenario; translating the content to be translated according to the usage scenario to obtain a corresponding target language translation result; and displaying the target language translation result. With the translation method, the terminal and the computer storage medium, the terminal automatically translates the content to be translated according to the user's usage scenario, so the translation operation is convenient, the translation efficiency is improved, and the user experience is improved.

Description

Translation method, terminal and computer storage medium
Technical Field
The present application relates to the field of terminals, and in particular, to a translation method, a terminal, and a computer storage medium.
Background
With the growing globalization and integration of the world economy, people increasingly need to learn about foreign technologies, communicate with people abroad, and so on, but language differences bring great inconvenience to such learning and communication. For example, most Chinese users speak Chinese and most British users speak English, so the language barrier becomes the biggest obstacle when Chinese and British users communicate. Although a great deal of translation software, and some software with built-in translation functions, is now available, the translation scope of most translation software is limited to the application itself, and software with translation functions usually requires manual operation by the user before it translates. The existing translation modes therefore involve complicated translation operations, which affects translation efficiency and the user experience.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
The application aims to provide a translation method, a terminal and a computer storage medium in which the terminal automatically translates the content to be translated according to the user's usage scenario, so that the translation operation is convenient, the translation efficiency is improved, and the user experience is improved.
To achieve this purpose, the technical solution of the application is realized as follows:
In a first aspect, an embodiment of the present application provides a translation method applied to a terminal, including:
identifying a usage scenario;
translating the content to be translated according to the use scene to obtain a corresponding target language translation result;
and displaying the target language translation result.
As an embodiment, the identifying the usage scenario includes:
acquiring content displayed on a display screen of the terminal;
and extracting scene characteristic values of the content displayed on the display screen of the terminal, and determining a use scene based on the extracted scene characteristic values.
As an implementation manner, the translating the content to be translated according to the usage scenario to obtain a corresponding target language translation result includes:
determining a translation service function corresponding to the use scene according to the use scene;
and translating the content to be translated based on the translation service function to obtain a corresponding target language translation result.
As an implementation manner, the translating contents to be translated based on the translation service function to obtain a corresponding target language translation result includes:
determining content to be translated based on the translation service function;
and translating the content to be translated to obtain a corresponding target language translation result.
As an implementation manner, before translating the content to be translated according to the usage scenario and obtaining the corresponding target language translation result, the method further includes:
receiving an input language selection instruction, and determining a language selected by the language selection instruction as a target language;
or, determining the system language of the terminal as a target language;
or determining the target language according to the use scene.
As one of the embodiments, the usage scenario includes a chat scenario; the determining the target language according to the use scenario comprises:
acquiring characteristic information of a chat object in a chat scene;
and determining a target language according to the characteristic information of the chat object.
As an embodiment, before the identifying the usage scenario, the method further includes:
detecting whether the chat object in the chat scene has the same preset characteristics with the current user;
if not, the intelligent translation function is started to identify the use scene.
As one embodiment, the content to be translated includes voice information to be translated; the translating the content to be translated according to the user usage scenario to obtain a corresponding target language translation result includes:
and converting the voice information to be translated into text information, and translating the text information into target text information of a target language.
In a second aspect, an embodiment of the present application provides a terminal, where the terminal includes a processor and a storage device for storing a program; when the program is executed by the processor, the processor is caused to implement the translation method of the first aspect.
In a third aspect, an embodiment of the present application provides a computer storage medium storing a computer program, where the computer program, when executed by a processor, implements the translation method according to the first aspect.
The translation method, terminal and computer storage medium provided by the embodiments of the application involve: identifying a usage scenario; translating the content to be translated according to the usage scenario to obtain a corresponding target language translation result; and displaying the target language translation result. In this way, the terminal automatically translates the content to be translated according to the user's usage scenario without manual operation by the user, and can adaptively translate the content to be translated into the required language; the translation operation is convenient and fast, the translation efficiency is improved, and the user experience is improved.
Drawings
FIG. 1 is a schematic interface diagram of a prior art translation process;
FIG. 2 is a schematic flowchart of a translation method according to an embodiment of the present application;
FIG. 3 is a diagram illustrating a translation effect in an embodiment of the present application;
FIG. 4 is a schematic flowchart of a translation method according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of another translation method according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The technical solution of the present application is further described in detail with reference to the drawings and specific embodiments of the specification. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to fig. 2, the translation method provided in this embodiment of the present application is applicable to situations where displayed content needs to be translated. The translation method may be executed by a translation apparatus provided in this embodiment of the present application; the translation apparatus may be implemented in software and/or hardware, and in a specific application it may be a terminal such as a smart phone, a personal digital assistant or a tablet computer. In this embodiment, taking a terminal as the execution subject of the translation method as an example, the translation method includes the following steps:
step S101: identifying a usage scenario;
Here, the usage scenario may refer to the scenario displayed by the terminal when the user needs to use the translation function, and may include an entertainment scenario, a chat scenario, a learning scenario, and the like; the usage scenario may specifically be "listening to a song", "watching a movie", "chatting", "searching a web page", "playing a game", and so on. Since the usage scenario is reflected by the content displayed on the terminal display screen, it can be identified from that content. In one embodiment, the identifying the usage scenario may include: acquiring content displayed on a display screen of the terminal; and extracting scene characteristic values of the content displayed on the display screen of the terminal, and determining a usage scenario based on the extracted scene characteristic values. Here, the terminal may acquire the displayed content by scanning or the like, may obtain page layout information from that content, may then extract scene characteristic values from the content, and finally determines a usage scenario based on the extracted scene characteristic values. It can be understood that the terminal may be preset with a correspondence between scene characteristic values and usage scenarios; for example, the usage scenario corresponding to the scene characteristic value "lyrics" is listening to a song. In addition, the terminal may also determine the usage scenario from information such as the application program it is running; for example, when a smartphone runs a game application, the usage scenario is playing a game. In this way, the usage scenario can be acquired accurately and quickly.
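A minimal sketch of this correspondence is given below, in Python. The feature keywords, scenario labels and lookup table are illustrative assumptions introduced for the example and are not taken from the disclosure; a real terminal would extract scene characteristic values from the rendered screen rather than from a text string.

```python
from typing import Optional, Set

# Hypothetical correspondence between scene characteristic values and usage scenarios.
SCENE_FEATURES = {
    "lyrics": "listening to a song",
    "subtitle": "watching a movie",
    "chat bubble": "chatting",
    "search bar": "searching a web page",
    "health bar": "playing a game",
}

def extract_scene_feature_values(screen_text: str) -> Set[str]:
    """Return the known scene characteristic values found in the displayed content."""
    lowered = screen_text.lower()
    return {feature for feature in SCENE_FEATURES if feature in lowered}

def identify_usage_scenario(screen_text: str) -> Optional[str]:
    """Map the first extracted scene characteristic value to a usage scenario."""
    for feature in extract_scene_feature_values(screen_text):
        return SCENE_FEATURES[feature]
    return None  # unknown scenario: automatic translation is not triggered

print(identify_usage_scenario("now playing - lyrics: hello from the other side"))
```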
Step S102: translating the content to be translated according to the use scene to obtain a corresponding target language translation result;
It should be noted that the usage scenario may indicate which language the content needs to be translated into, i.e. the target language; for example, if the usage scenario is a chat with an English speaker, the target language may be determined to be English. The usage scenario may also indicate the translation service function required by the user, such as full web page translation, subtitle translation, or picture and character translation. In an embodiment, the translating the content to be translated according to the usage scenario to obtain a corresponding target language translation result includes: determining a translation service function corresponding to the usage scenario according to the usage scenario; and translating the content to be translated based on the translation service function to obtain a corresponding target language translation result. It is understood that, in different scenarios, the translation service functions required by the user may differ: when browsing a web page, the user may need global translation, that is, real-time translation; when listening to a song or watching a movie, the user may need subtitle translation; when shooting an image, the user may need the characters in the image translated; and when playing a game, the user may need the entered text translated. After determining the translation service function corresponding to the usage scenario, the terminal can translate the content to be translated based on that function, so as to obtain a corresponding target language translation result. Compared with the existing approach in which the user manually selects the translation service function according to actual needs, automatically determining the translation service function from the user's usage scenario is more convenient and further improves the efficiency of the translation operation.
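The scenario-to-service correspondence described above can be pictured as a simple lookup table. The sketch below is illustrative only; the scenario names, service names and fallback choice are assumptions, not part of the disclosure.

```python
# Hypothetical table mapping a usage scenario to the translation service function it needs.
SERVICE_BY_SCENARIO = {
    "searching a web page": "full-page translation",
    "listening to a song": "subtitle translation",
    "watching a movie": "subtitle translation",
    "shooting an image": "picture and character translation",
    "playing a game": "input text translation",
}

def translation_service_for(scenario: str) -> str:
    # Fall back to full-page translation when the scenario is not in the table.
    return SERVICE_BY_SCENARIO.get(scenario, "full-page translation")

print(translation_service_for("watching a movie"))  # subtitle translation
```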
In an embodiment, the translating the content to be translated based on the translation service function to obtain a corresponding target language translation result includes: determining content to be translated based on the translation service function; and translating the content to be translated to obtain a corresponding target language translation result. Here, the terminal may determine the position of the content to be translated based on the translation service function, obtain the content to be translated from the content displayed on the display screen of the terminal through optical character recognition or an auxiliary service according to that position, and then translate it to obtain a corresponding target language translation result. Taking subtitle translation as the translation service function as an example, the terminal can determine that the position of the content to be translated is the subtitle area, and can then acquire the content displayed in the subtitle area from the content displayed on the display screen as the content to be translated.
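A sketch of this step is shown below. The region table, the fake screen contents and the function names stand in for the real screen layout and the OCR or auxiliary-service read, and are assumptions made for illustration.

```python
# Hypothetical positions of the content to be translated for each translation service function.
SCREEN_REGION_BY_SERVICE = {
    "subtitle translation": "subtitle area",
    "full-page translation": "whole page",
    "picture and character translation": "image area",
}

# Stand-in for reading a screen region through OCR or an auxiliary service.
FAKE_SCREEN_CONTENT = {
    "subtitle area": "Hello, nice to meet you",
    "whole page": "An entire page of text ...",
    "image area": "EXIT",
}

def content_to_translate(service_function: str) -> str:
    # Determine where the content to be translated is located, then read that region.
    region = SCREEN_REGION_BY_SERVICE.get(service_function, "whole page")
    return FAKE_SCREEN_CONTENT.get(region, "")

print(content_to_translate("subtitle translation"))  # Hello, nice to meet you
```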
Step S103: and displaying the target language translation result.
Specifically, after translating the content to be translated to obtain a target language translation result, the terminal displays the target language translation result. Here, the target language translation result may be displayed in an area above the position of the content to be translated; specifically, the size of the required display area may be determined according to the target language translation result, and the target language translation result is then displayed in the area above the position of the content to be translated based on that size.
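One possible way to size and position such a display area is sketched below; the character metrics and the top-left, y-grows-downwards coordinate convention are assumptions made for the example.

```python
from typing import Tuple

def display_area_above(original_box: Tuple[int, int, int, int],
                       translation: str,
                       char_width: int = 12,
                       line_height: int = 20) -> Tuple[int, int, int, int]:
    """Return (x, y, width, height) of a box placed directly above original_box.

    original_box is (x, y, width, height) of the content to be translated,
    with y growing downwards as on a typical screen.
    """
    x, y, width, _ = original_box
    chars_per_line = max(1, width // char_width)
    lines = -(-len(translation) // chars_per_line)  # ceiling division
    height = lines * line_height
    return (x, y - height, width, height)

print(display_area_above((100, 600, 240, 40), "Can I Help You"))
```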
It should be noted that, in the translation method of the present application, a usage scenario is identified, the content to be translated is translated according to the usage scenario to obtain a corresponding target language translation result, and the target language translation result is displayed. On the one hand, such a function may be named an intelligent translation function, or it may be given another name; on the other hand, such a function may always be in an operating state as a system function, like the touch panel of a notebook computer, or may be operated only when a turn-on instruction is received. Therefore, the name "intelligent translation function" does not limit the translation method of the present application; it merely summarizes the functional effect of the translation method.
It should be noted that the terminal may be implemented in various forms, and the terminal described in this embodiment may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palm computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like. Here, the terminal is provided with an intelligent translation function, and when the intelligent translation function is turned on, the terminal can recognize a use scene, translate the content to be translated according to the use scene, obtain a corresponding target language translation result, and display the target language translation result, thereby implementing automatic translation. It should be noted that the terminal may be provided with a switch button for the intelligent translation function, and the intelligent translation function may be turned on or off by performing a switching operation on the switch button. Here, after the intelligent translation function is started, the terminal may display a status identifier for indicating that the intelligent translation function is started on a display interface, so that a user can operate the terminal conveniently.
In summary, in the translation method provided by the embodiment, the terminal automatically translates the content to be translated according to the user usage scenario, without manual operation of the user, and can adaptively translate the content to be translated into the required language, so that the translation operation is convenient and fast, the translation efficiency is improved, and the user experience is improved.
In an embodiment, before translating the content to be translated according to the usage scenario and obtaining the corresponding target language translation result, the method further includes: receiving a language selection instruction input by a user, and determining the language selected by the language selection instruction as the target language; or determining the system language of the terminal as the target language; or determining the target language according to the usage scenario. Specifically, the terminal may display a language list, and the user may select a language from the list as the target language by clicking it; the terminal then receives the language selection instruction input by the user and determines the selected language as the target language. For example, assume the terminal provides a language list containing Chinese, English and French; when the user selects French, the terminal takes French as the target language. Alternatively, since the user mostly needs characters, speech and the like expressed in other languages translated into a language familiar to the user, and the familiar language is generally set as the system language of the terminal, the system language of the terminal can be directly determined as the target language. Alternatively, the usage scenario can also, to a certain extent, indicate the language into which the user needs the content translated; for example, when the user chats with a Japanese user, the Chinese characters input by the user need to be converted into Japanese to achieve normal communication, and Japanese is then the target language. In one embodiment, the usage scenario includes a chat scenario, and the determining the target language according to the usage scenario includes: acquiring characteristic information of a chat object in the chat scenario; and determining the target language according to the characteristic information of the chat object. It is understood that the characteristic information of the chat object may include the history of chat records with the chat object, region information of the chat object, tag information of the chat object, and the like. For example, the terminal may learn, from the history of chat records with the chat object, the language used when chatting with that object and set that language as the target language; or the terminal can take the corresponding language as the target language according to the nationality of the chat object; or the terminal may take the corresponding language as the target language according to the nationality, work address and the like marked by the user for the chat object. In this way, the target language is determined from the characteristic information of the chat object, which is accurate and fast.
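The three ways of fixing the target language described above can be combined as in the following sketch. The field names (history_language, nationality) and the nationality-to-language table are illustrative assumptions; a real terminal would read this characteristic information from its own chat records and contact data.

```python
from typing import Optional

# Illustrative nationality-to-language table; real mappings would come from terminal data.
LANGUAGE_BY_NATIONALITY = {"United Kingdom": "English", "France": "French",
                           "Japan": "Japanese", "China": "Chinese"}

def determine_target_language(selected_language: Optional[str],
                              system_language: str,
                              chat_object: Optional[dict] = None) -> str:
    if selected_language:                      # language chosen from the language list
        return selected_language
    if chat_object:                            # infer from chat-object characteristic information
        if chat_object.get("history_language"):        # language used in past chats
            return chat_object["history_language"]
        nationality = chat_object.get("nationality")   # region / tag information
        if nationality in LANGUAGE_BY_NATIONALITY:
            return LANGUAGE_BY_NATIONALITY[nationality]
    return system_language                     # fall back to the terminal's system language

print(determine_target_language(None, "Chinese", {"nationality": "United Kingdom"}))  # English
```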
In an embodiment, before the identifying the usage scenario, the method further includes: detecting whether the chat object in the chat scenario has the same preset characteristics as the current user; and if not, starting the intelligent translation function to identify the usage scenario. Specifically, the terminal detects whether the chat object in the chat scenario and the current user of the terminal have the same preset characteristics. If they do, the chat object and the current user use the same language, so the intelligent translation function does not need to be started; if they do not, the language used by the chat object and the language used by the current user are different, and the intelligent translation function needs to be started. It can be understood that when a user chats with another user who uses the same language, the chat content does not need to be translated; it needs to be translated only when the two users do not use the same language. For example, when a Chinese user chats with a British user, the chat content usually needs to be translated for the two users to communicate smoothly. Here, detecting whether the chat object has the same preset characteristics as the current user may be detecting whether the nationality of the chat object, or the language it uses, is the same as the nationality or language of the current user. If the nationality of the chat object differs from that of the current user and/or the language used by the chat object differs from that used by the current user, translation is needed when the chat object communicates with the current user. In this way, the intelligent translation function is started only when it is detected that the chat object and the current user do not have the same preset characteristics, so that the conversation content is translated; this improves the flexibility of the translation operation and further improves the user experience.
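A minimal sketch of this check is given below, assuming that the preset characteristics are nationality and preferred language; the field names are illustrative only.

```python
def needs_intelligent_translation(current_user: dict, chat_object: dict) -> bool:
    """Return True when the chat object and the current user differ in the preset characteristics."""
    same_nationality = current_user.get("nationality") == chat_object.get("nationality")
    same_language = current_user.get("language") == chat_object.get("language")
    return not (same_nationality and same_language)

current_user = {"nationality": "China", "language": "Chinese"}
chat_object = {"nationality": "United Kingdom", "language": "English"}

if needs_intelligent_translation(current_user, chat_object):
    print("start the intelligent translation function and identify the usage scenario")
```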
In one embodiment, the content to be translated includes voice information to be translated; the translating the content to be translated according to the usage scenario to obtain a corresponding target language translation result includes: converting the voice information to be translated into text information, and translating the text information into target text information in the target language. It is understood that in a chat scenario the users may chat by sending voice messages, and the voice needs to be converted into text before it is translated. For example, when a Chinese user chats with a British user, if the Chinese user inputs speech in Chinese, the speech is first converted into Chinese text, the Chinese text is translated into English text, and the English text is sent to the British user's client; the target language here is English. If the Chinese user then receives a piece of speech sent by the British user, that speech is converted into English text and then translated into Chinese text for display; the target language here is Chinese. Taking a mobile phone as the terminal, fig. 3 is a schematic diagram of a translation effect provided in the embodiment of the present application: when the mobile phone detects that the chat object of a Chinese user is a British person, and the Chinese user inputs a voice message in Chinese to the mobile phone, the mobile phone converts the voice into text, translates the text into the English "Can I Help You", and displays the English in an area above the voice message. In addition, a key may be displayed above the voice message, and the English is displayed only when the user clicks the key.
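The voice path, speech recognition followed by translation, can be sketched as follows. Both the recognizer and the translation engine are stubs standing in for the terminal's own speech-recognition and translation services; the strings they return are placeholders, not part of the disclosure.

```python
def recognize_speech(voice_message: bytes) -> str:
    # Placeholder recognizer: pretend the recorded Chinese speech was converted to text.
    return "recognized Chinese text"

def translate(text: str, target_language: str) -> str:
    # Placeholder translation engine; a real terminal would call its own engine.
    fake_engine = {("recognized Chinese text", "English"): "Can I Help You"}
    return fake_engine.get((text, target_language), text)

voice_message = b"...recorded audio bytes..."
chinese_text = recognize_speech(voice_message)
english_text = translate(chinese_text, "English")
print(english_text)  # displayed in an area above the voice message
```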
Based on the same inventive concept of the foregoing embodiments, the present embodiment describes in detail the technical solutions of the foregoing embodiments by a specific example. Referring to fig. 4, a specific flowchart of the translation method provided in the embodiment of the present application is schematically illustrated, and includes the following steps:
step S201: setting a translation target language;
Here, after the terminal starts the translation service, the user can set the target language to be translated into. If the user has not set one manually, the system language of the terminal can be used as the target language.
Step S202: acquiring a scene where a user is located according to the page characteristic value;
Here, the terminal may extract the page feature value and determine the scene where the user is located according to the page feature value, such as whether the user is playing a game, listening to a song, using a social application, or shopping.
Step S203: performing algorithm matching according to the scene where the user is located to provide a corresponding translation function;
here, the terminal may perform algorithm matching according to a scene where the user is located to provide a corresponding translation function. For example, when the scene of the user is browsing a webpage, a full-text translation function can be provided; when the scene of the user is listening to the song, a subtitle translation function and the like can be provided.
Step S204: extracting text information and translating the text information into a target language;
here, the terminal may extract text information to be translated according to a scene where the user is located, and translate the text information into a target language through the translation engine.
Step S205: and displaying the translation result above the original text area.
Here, the terminal may calculate, from the translation result, how large a rectangular frame is required to display the translation result completely, and then display the translation result over the original text, that is, above the text information.
In summary, after the translation service of the terminal is started, the terminal automatically identifies the scene where the user is located and performs algorithm matching according to the current feature value; it then identifies the text content in the scene, and the corresponding position information, through OCR or an auxiliary service, translates the identified text content into the preset target language, and displays the translation result at the position of the original text content. In this way, by acquiring the text information on the terminal and calling the translation engine to translate it, different display schemes are provided in different scenarios; this solves the problem of complicated text translation operations in the prior art, greatly facilitates text recognition and translation for the user, and improves the user's operation efficiency.
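An end-to-end sketch of steps S201 to S205 is given below. Every helper is a stub standing in for the terminal services mentioned in the description (feature-value extraction, algorithm matching, OCR or auxiliary service, translation engine); the names, rules and return values are assumptions made for illustration.

```python
def set_target_language(user_choice=None, system_language="Chinese"):
    # S201: a manually chosen language wins; otherwise use the system language.
    return user_choice or system_language

def identify_scene(page_feature_values):
    # S202: classify the scene from page feature values (illustrative rule).
    return "browsing a web page" if "search bar" in page_feature_values else "unknown"

def match_translation_function(scene):
    # S203: algorithm matching of scene to translation function (illustrative table).
    return {"browsing a web page": "full-text translation"}.get(scene, "full-text translation")

def extract_and_translate(text, target_language):
    # S204: stand-in for OCR extraction plus the translation engine.
    return f"[{target_language}] {text}"

def display_above_original(translation, original_position):
    # S205: show the result in a rectangle above the original text area.
    print(f"display {translation!r} above {original_position}")

target = set_target_language()
scene = identify_scene({"search bar"})
function = match_translation_function(scene)
result = extract_and_translate("Hello world", target)
display_above_original(result, (120, 480))
```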
Based on the same inventive concept of the foregoing embodiments, the present embodiment describes in detail the technical solutions of the foregoing embodiments by using another specific example. Referring to fig. 5, a specific flowchart of the translation method provided in the embodiment of the present application is schematically illustrated, and includes the following steps:
step S301: starting an intelligent dialogue translation function and displaying the intelligent dialogue translation function on the terminal screen;
Here, the terminal may start the intelligent dialogue translation function according to a start instruction input by the user, and display an identifier of the started intelligent dialogue translation function at the top of the terminal screen. Of course, the terminal may also detect whether the intelligent dialogue translation function needs to be started according to the chat object of the current dialogue; for example, the function is started when the language used by the chat object differs from the language used by the user.
Step S302: identifying a scene where a user is located;
Here, the terminal can recognize whether it needs to translate the system voice inside the terminal or to turn on the microphone and record sound outside the terminal.
Step S303: monitoring a user receiving voice information event;
Here, the terminal may activate a listener to monitor events in which the user receives voice information.
Step S304: converting the voice into characters, and translating the characters into a translation result expressed by a preset target language;
Specifically, the speech recognition module of the terminal can first perform noise elimination on the recorded voice information and filter out redundant information, convert the effective information into text while recognizing the language, and input the text into a translation engine for translation.
Step S305: and displaying the translation result.
Here, after the text information is translated into a translation result expressed in a preset target language, the translation result is displayed above the original text region.
In conclusion, when the user has a voice translation requirement, an intelligent scene judgment is carried out, and the scene is classified as an external translation scene or an internal translation scene. After automatic identification, a scene selection box pops up; once the user confirms, translation is performed in real time as needed without jumping to a translation application, so voice conversation can be translated at any time and in any place, and the user faces no language barrier in face-to-face or video conversations abroad. The user can therefore hold real-time cross-language conversations while socializing, without jumping back and forth between the social application and a translation application, and can communicate with the other party in his or her own native language without worrying about language barriers during the conversation.
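A compact sketch of the dialogue flow of steps S301 to S305 is given below, distinguishing the internal scene (system voice inside the terminal) from the external scene (sound recorded through the microphone). The scene names and the stubbed recognition and translation calls are assumptions made for illustration.

```python
def choose_translation_scene(context: str) -> str:
    # Internal scene: translate system voice inside the terminal (e.g. voice messages).
    # External scene: open the microphone and translate sound around the terminal.
    return "internal" if context == "in-app dialogue" else "external"

def on_voice_received(audio: bytes, target_language: str) -> str:
    # Noise elimination plus speech recognition stub, followed by the translation-engine stub.
    recognized_text = "denoised transcript of the received speech"
    return f"[{target_language}] {recognized_text}"  # shown above the original text region

scene = choose_translation_scene("face-to-face conversation")
print(scene, "->", on_voice_received(b"...audio...", "Chinese"))
```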
Based on the same inventive concept as the foregoing embodiments, an embodiment of the present application provides a terminal, as shown in fig. 6, including: a processor 110 and a memory 111 for storing a computer program capable of running on the processor 110. The single processor 110 illustrated in fig. 6 does not mean that there is only one processor; it merely indicates the position of the processor 110 relative to the other components, and in practical applications the number of processors 110 may be one or more. The same applies to the memory 111 illustrated in fig. 6: it merely indicates the position of the memory 111 relative to the other components, and in practical applications the number of memories 111 may be one or more. The processor 110 is configured to implement any one of the translation methods described above when running the computer program.
The terminal may further include: at least one network interface 112. The various components in the terminal are coupled together by a bus system 113. It will be appreciated that the bus system 113 is used to enable communications among the components. The bus system 113 includes a power bus, a control bus, and a status signal bus in addition to the data bus. For clarity of illustration, however, the various buses are labeled as bus system 113 in FIG. 6.
The memory 111 may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories. The nonvolatile memory may be a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disk, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 111 described in the embodiments herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The memory 111 in the embodiment of the present application is used to store various types of data to support the operation of the terminal. Examples of such data include: any computer program to be run on the terminal, such as an operating system and application programs; contact data; phonebook data; messages; pictures; videos; and so on. The operating system includes various system programs, such as a framework layer, a core library layer and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs may include various applications, such as a Media Player and a Browser, for implementing various application services. A program implementing the method of the embodiments of the present application may be included in an application program.
Based on the same inventive concept of the foregoing embodiments, this embodiment further provides a computer storage medium, where a computer program is stored in the computer storage medium, where the computer storage medium may be a Memory such as a magnetic random access Memory (FRAM), a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash Memory (flash Memory), a magnetic surface Memory, an optical Disc, or a Compact Disc Read Only Memory (CD-ROM), and the like; or may be a variety of devices including one or any combination of the above memories, such as a mobile phone, computer, tablet device, personal digital assistant, etc. When executed by a processor, the computer program stored in the computer storage medium implements any of the translation methods described above. Please refer to the description of the embodiment shown in fig. 2 for a specific step flow realized when the computer program is executed by the processor, which is not described herein again.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method as described in the above various possible embodiments.
An embodiment of the present application further provides a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method described in the above various possible embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the recitation of an element by the phrase "comprising an … …" does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element, and further, where similarly-named elements, features, or elements in different embodiments of the disclosure may have the same meaning, or may have different meanings, that particular meaning should be determined by their interpretation in the embodiment or further by context with the embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, there is no strict restriction on the order in which these steps are executed, and they may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; nor do they need to be performed sequentially, and they may be performed in turns or alternately with other steps or with at least part of the sub-steps or stages of other steps.
The above description covers only specific embodiments of the present application, but the scope of the present application is not limited thereto; any changes or substitutions that a person skilled in the art can easily conceive within the technical scope of the present application shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A translation method applied to a terminal is characterized by comprising the following steps:
identifying a usage scenario;
translating the content to be translated according to the use scene to obtain a corresponding target language translation result;
and displaying the target language translation result.
2. The method of claim 1, wherein the identifying a usage scenario comprises:
acquiring content displayed on a display screen of the terminal;
and extracting scene characteristic values of the content displayed on the display screen of the terminal, and determining a use scene based on the extracted scene characteristic values.
3. The method according to claim 1 or 2, wherein translating the content to be translated according to the usage scenario to obtain a corresponding target language translation result comprises:
determining a translation service function corresponding to the use scene according to the use scene;
and translating the content to be translated based on the translation service function to obtain a corresponding target language translation result.
4. The method according to claim 3, wherein translating the content to be translated based on the translation service function to obtain the corresponding target language translation result comprises:
determining content to be translated based on the translation service function;
and translating the content to be translated to obtain a corresponding target language translation result.
5. The method according to claim 1, wherein before translating the content to be translated according to the usage scenario and obtaining the corresponding target language translation result, the method further comprises:
receiving an input language selection instruction, and determining a language selected by the language selection instruction as a target language;
or, determining the system language of the terminal as a target language;
or determining the target language according to the use scene.
6. The method of claim 5, wherein the usage scenario comprises a chat scenario; the determining the target language according to the use scenario comprises:
acquiring characteristic information of a chat object in a chat scene;
and determining a target language according to the characteristic information of the chat object.
7. The method of claim 6, wherein prior to identifying the usage scenario, further comprising:
detecting whether the chat object in the chat scene has the same preset characteristics with the current user;
if not, the intelligent translation function is started to identify the use scene.
8. The method of claim 1, wherein the content to be translated comprises speech information to be translated; the translating the content to be translated according to the use scene to obtain a corresponding target language translation result comprises the following steps:
and converting the voice information to be translated into text information, and translating the text information into target text information of a target language.
9. A terminal, comprising: a processor and a memory for storing a computer program capable of running on the processor,
wherein the processor, when running the computer program, implements the translation method of any of claims 1 to 8.
10. A computer storage medium, characterized in that a computer program is stored which, when executed by a processor, implements the translation method according to any one of claims 1 to 8.
CN202010543789.0A, filed 2020-06-15 (priority date 2020-06-15): Translation method, terminal and computer storage medium. Status: Pending. Published as CN111814492A.

Priority Applications (1)

CN202010543789.0A (priority date 2020-06-15, filing date 2020-06-15): Translation method, terminal and computer storage medium, published as CN111814492A

Publications (1)

Publication number: CN111814492A, published 2020-10-23

Family ID: 72845186

Family Applications (1)

CN202010543789.0A (priority date 2020-06-15, filing date 2020-06-15), pending: Translation method, terminal and computer storage medium

Country Status (1)

CN: CN111814492A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113190308A (en) * 2021-04-20 2021-07-30 北京异乡旅行网络科技有限公司 Method, device and equipment for determining translation text of overseas renting house application
CN116227504A (en) * 2023-02-08 2023-06-06 广州数字未来文化科技有限公司 Communication method, system, equipment and storage medium for simultaneous translation
CN116227504B (en) * 2023-02-08 2024-01-23 广州数字未来文化科技有限公司 Communication method, system, equipment and storage medium for simultaneous translation


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination