CN114580447A - Translation method, translation device, storage medium and electronic equipment


Info

Publication number
CN114580447A
Authority
CN
China
Prior art keywords
translation
control
determining
target
display
Prior art date
Legal status
Pending
Application number
CN202210259874.3A
Other languages
Chinese (zh)
Inventor
陈旭程
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210259874.3A
Publication of CN114580447A

Classifications

    • G06F40/58: Handling natural language data; Processing or translation of natural language; Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G06F3/0414: Input arrangements for interaction between user and computer; Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using force sensing means to determine a position
    • G06F9/451: Arrangements for program control; Execution arrangements for user interfaces
    • G06N3/045: Computing arrangements based on biological models; Neural networks; Combinations of networks
    • G06N3/08: Computing arrangements based on biological models; Neural networks; Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a translation method, a translation apparatus, a storage medium, and an electronic device. The method comprises: acquiring a first position selected in a page to be translated; determining, based on the first position, a second position for a selection mark and displaying the selection mark at the second position, the first position being different from the second position; and determining a target area corresponding to the selection mark and determining a translation result of a translation object in the target area. The embodiments of the present application improve the convenience of translation.

Description

Translation method, translation device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a translation method, an apparatus, a storage medium, and an electronic device.
Background
In recent years, the improvement and popularization of translation software and translation tools on terminals have broken down language barriers for users. When users browse information on a page displayed on a terminal and encounter content they do not understand or find difficult to understand, they usually need to translate that content.
Disclosure of Invention
The embodiments of the present application provide a translation method, a translation apparatus, a storage medium, and an electronic device. The technical solutions are as follows:
in a first aspect, an embodiment of the present application provides a translation method, where the method includes:
acquiring a first position selected in a page to be translated;
determining a second position for a selection mark based on the first position, and displaying the selection mark at the second position, the first position being different from the second position;
and determining a target area corresponding to the selection mark, and determining a translation result of a translation object in the target area.
In a second aspect, an embodiment of the present application provides a translation apparatus, including:
a mark display module, configured to acquire a first position selected in a page to be translated, determine a second position for a selection mark based on the first position, and display the selection mark at the second position, the first position being different from the second position;
and an object translation module, configured to determine a target area corresponding to the selection mark and determine a translation result of a translation object in the target area.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides an electronic device, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The technical solutions provided by some embodiments of the present application bring at least the following beneficial effects:
in one or more embodiments of the present application, a first position selected by a user in a page to be translated is acquired, a second position for a selection mark is determined based on the first position, the selection mark is then displayed at the second position, which differs from the first position, and a target area corresponding to the selection mark is determined, so that a translation object in the target area can be quickly translated and its translation result determined. By displaying the selection mark at a second position other than the first position selected by the user, the user can be quickly assisted in locating the translation object based on the selection mark, and the translation object is prevented from being occluded. Throughout the translation process, the operation path is shortened, the translation flow is optimized, and the convenience of translation is improved.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram of a translation method provided by an embodiment of the present application;
fig. 2 is a scene schematic diagram of a display page related to a translation method provided in an embodiment of the present application;
fig. 3 is a scene schematic diagram of a translation tool related to a translation method provided in an embodiment of the present application;
fig. 4 is a scene schematic diagram of a selection mark related to a translation method provided in an embodiment of the present application;
FIG. 5 is a schematic flow chart diagram of another translation method provided by embodiments of the present application;
fig. 6 is a scene schematic diagram of a translation control trigger related to a translation method provided in an embodiment of the present application;
FIG. 7 is a schematic flow chart diagram of another translation method provided by embodiments of the present application;
fig. 8 is a scene schematic diagram of a display range of a mark related to a translation method provided in an embodiment of the present application;
fig. 9 is a schematic view of a scene in which a selection mark is displayed according to a translation method provided in an embodiment of the present application;
fig. 10 is an interface schematic diagram of a translation control involved in the translation method provided in the embodiment of the present application;
fig. 11 is an interface diagram of a terminal displaying a translation result according to a translation method provided in an embodiment of the present application;
fig. 12 is an interface diagram of another terminal related to the translation method provided in the embodiment of the present application for displaying a translation result;
FIG. 13 is a schematic structural diagram of a translation apparatus according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a logo display module according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a position determination unit provided in an embodiment of the present application;
fig. 16 is a schematic structural diagram of an object translation module according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 18 is a block diagram of an operating system and a user space provided in an embodiment of the present application;
FIG. 19 is an architectural diagram of the android operating system of FIG. 18;
FIG. 20 is an architectural diagram of the iOS operating system of FIG. 18.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. It should also be noted that, unless explicitly stated or limited otherwise, "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may include other steps or elements not listed or inherent to such process, method, article, or apparatus. Those of ordinary skill in the art can understand the specific meaning of the above terms in this application according to the specific situation. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" describes an association relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The present application will be described in detail with reference to specific examples.
In one embodiment, as shown in fig. 1, a translation method is proposed. The method can be implemented by means of a computer program and can run on a translation apparatus based on the von Neumann architecture. The computer program may be integrated into an application or may run as a standalone tool application. The translation apparatus may be a terminal device, including but not limited to: personal computers, tablet computers, handheld devices, in-vehicle devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and the like. Terminal devices in different networks may be called different names, for example: user equipment, access terminal, subscriber unit, subscriber station, mobile station, remote terminal, mobile device, user terminal, wireless communication device, user agent or user equipment, cellular telephone, cordless telephone, terminal equipment in a 5G network or a future evolved network, and the like.
Specifically, the translation method comprises the following steps:
s101: acquiring a first position selected in a page to be translated, and determining a second position aiming at a selection mark based on the first position;
the first position can be used for understanding an operation position corresponding to human-computer interaction operation when a user inputs the human-computer interaction operation (such as translation operation) in an interface to be translated; illustratively, in the whole process that a user intends to translate a translation object in a to-be-translated interface, the user may input a touch operation to touch a to-be-translated page of the terminal first, and the touch operation may be used to trigger an operation of a translation control on the to-be-translated page of the terminal, for example, the translation control is activated by inputting the touch operation and the translation control is dragged to move towards a position where the user desires to translate, and a touch position corresponding to the touch operation in the process is also referred to as a first position.
Optionally, the terminal may monitor a target operation (e.g., a touch operation, a press operation, a click operation, etc.) input by the user on the page to be translated, and when the target operation is detected, take the operation position corresponding to the target operation as the position selected by the user on the page to be translated. It can be understood that, if the user continues to input the target operation for a period of time (for example, the target operation may be a drag operation), the first position changes continuously.
In a specific implementation scenario, translating the translation object may be generally implemented based on a translation control, and a user may trigger the translation control to complete the entire translation process. In some implementations, the user-triggered translation control can be referred to as a first translation control.
Illustratively, a translation control (e.g., the first translation control) can be understood as a control for recognizing and translating a specified entry, or text in a passage or an image. In one or more embodiments, the first translation control is generally used to start the translation recognition function. In practical applications, the first translation control may be displayed on the page to be translated in a specified shape (e.g., a circle or a triangle), and its recognition function may be triggered when the user inputs a corresponding target operation on the terminal, such as double-clicking, single-clicking, or long-pressing the first translation control. Further, after the user triggers the first translation control, the terminal may control the first translation control to move based on the user's target operation (that is, the control follows the operation, updating its display position in real time as the operation position indicated by the target operation moves), so that during the movement the user can bring the first translation control close to the area that the user desires to translate. In the foregoing process, the operation position corresponding to the target operation input by the user on the page to be translated can be regarded as the first position selected by the user.
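As a minimal illustration of how a terminal might track the first position and make a floating translation control follow a drag, the following framework-agnostic Kotlin sketch models touch events and operation following. All names (TouchEvent, ControlFollower, and so on) are illustrative assumptions and are not taken from the patent.

```kotlin
// Hypothetical sketch: a floating control that follows the drag position.

data class Position(val x: Float, val y: Float)

sealed class TouchEvent {
    data class Down(val pos: Position) : TouchEvent()   // user touches the control
    data class Move(val pos: Position) : TouchEvent()   // user drags without lifting
    data class Up(val pos: Position) : TouchEvent()     // user lifts the finger
}

class ControlFollower {
    var firstPosition: Position? = null   // latest operation position ("first position")
        private set
    var dragging = false
        private set

    fun onTouchEvent(event: TouchEvent) {
        when (event) {
            is TouchEvent.Down -> { dragging = true; firstPosition = event.pos }
            is TouchEvent.Move -> if (dragging) firstPosition = event.pos
            is TouchEvent.Up   -> { dragging = false; firstPosition = event.pos }
        }
    }
}

fun main() {
    val follower = ControlFollower()
    follower.onTouchEvent(TouchEvent.Down(Position(120f, 800f)))
    follower.onTouchEvent(TouchEvent.Move(Position(200f, 620f)))
    println(follower.firstPosition) // Position(x=200.0, y=620.0), where the control would be drawn
}
```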
S102: determining a second position for the selection mark based on the first position, and displaying the selection mark at the second position, the first position being different from the second position;
The selection mark may be displayed on the page to be translated in the form of an icon or a pattern; in some embodiments the selection mark may itself be a control. The selection mark is used to predict, based on the selected first position, a reference translation position (i.e., the second position) that the user is likely to select, and it is displayed at that reference translation position in the form of a mark.
In some embodiments, the selection mark may be used together with a translation control (e.g., the first translation control). While the user inputs a corresponding target operation to trigger the first translation control and moves it through that operation, the display position of the first translation control generally matches (e.g., coincides with) the first position corresponding to the input target operation. In some embodiments, the selection mark may be displayed at a position above the translation control.
In some embodiments, the selection mark may be a pattern displayed on the page to be translated that prompts the user whether the second position is selected for translation processing. The display position of the selection mark is usually different from the display position of the first translation control: the first translation control is usually displayed at the operation touch position (i.e., the first position), while the selection mark is displayed a certain distance away (e.g., above the first position). For example, when the user inputs a target operation by finger touch to move the first translation control toward the area or position the user desires to translate, the moving position of the first translation control, that is, the first position corresponding to the target operation, is used to determine where to display the selection mark; the second position at which the selection mark is displayed usually differs from the first position by a certain distance.
In some embodiments, the user moves the first translation control through a corresponding target operation, and the terminal can determine the second position in real time based on the first position selected by the movement of the first translation control and display the selection mark at the second position. The selection mark prompts the user whether the second position, or the area in which it lies, is selected so that the translation object there can be translated. Throughout the translation process, prompting in the form of a selection mark allows the translation object to be located quickly, which shortens the user's translation operation path and improves translation efficiency. In addition, a user usually occludes the display elements at the operation position when performing the target operation; for example, a finger touching a position covers the text or image at that position. Determining the second position based on the first position and displaying the selection mark at the second position therefore also prevents the object to be translated from being occluded.
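One simple way to realize this anti-occlusion behavior is to place the mark a fixed offset above the touch point and clamp it to the visible page. The following Kotlin sketch illustrates the idea; the offset value and function names are assumptions, not values from the patent.

```kotlin
// Hypothetical sketch: derive the second position (selection mark) from the
// first position (touch point) so the mark is not hidden under the finger.

data class Position(val x: Float, val y: Float)
data class PageBounds(val width: Float, val height: Float)

fun markPosition(
    first: Position,
    bounds: PageBounds,
    offsetY: Float = 120f   // assumed vertical offset above the finger
): Position {
    val x = first.x.coerceIn(0f, bounds.width)
    // Place the mark above the touch point; if that would leave the page,
    // fall back to placing it below the touch point instead.
    val y = if (first.y - offsetY >= 0f) first.y - offsetY else first.y + offsetY
    return Position(x, y.coerceIn(0f, bounds.height))
}

fun main() {
    println(markPosition(Position(200f, 620f), PageBounds(1080f, 2340f)))
    // Position(x=200.0, y=500.0): the mark sits 120 px above the finger
}
```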
It can be understood that when the user inputs a target operation (such as a movement operation) on the terminal for the first translation control, the translation recognition function of the control can be triggered, for example by double-clicking, single-clicking, or long-pressing the first translation control.
In a specific implementation scenario, the terminal may receive a movement operation input for the first translation control on the displayed page (the operation may take any form that triggers the first translation control). In response to the movement operation for the first translation control on the page to be translated, the terminal may control the first translation control to move based on the first position corresponding to the movement operation. During the movement of the first translation control, each first position corresponding to the movement operation on the page to be translated is acquired in real time, the second position is determined in real time based on each first position, and the selection mark is displayed at the second position.
it can be understood that the moving operation may be an operation of moving a dragging control input for the first translation control, the user continuously inputs the first translation operation for the first translation control in a dragging control manner, and in the process of dragging the first translation control to move, the terminal may acquire a moving position of the user for the first translation control, specifically, may acquire a first position corresponding to the moving operation as a corresponding moving position.
In a specific implementation scenario, the first translation control may be displayed at a certain position on the page to be translated; for example, it may be displayed after the user triggers a specific function. As shown in fig. 2, in the reading page of fig. 2 the first translation control is a circular button control 10. When the user reads an English text on the page to be translated shown in fig. 2 and encounters unfamiliar words and phrases, the user usually needs to translate the text on the reading page. The user may input a movement operation for the button control 10 on the currently displayed page, and the movement operation may move the control 10 toward, or close to, the position the user desires to translate. Illustratively, the movement operation may be a drag operation in which the finger does not leave the screen after the user selects the translation control on the current page to be translated. When the terminal recognizes the user's drag operation on the button control 10, the terminal may, in response to the drag operation and in real time or periodically, take the drag position (i.e., the first position) as the moving position of the first translation control 10, determine a second position based on the first position (for example, above the first position), and display a selection mark at the second position to prompt the user whether that position, or the area where the selection mark lies, is selected so that the translation object there can be translated. As shown in fig. 4, which is a scene schematic diagram of the selection mark, reference numeral 20 is the selection mark and reference numeral 10 marks a first position during the current movement of the first translation control. The position at which the selection mark 20 is displayed is usually outside the moving position and no longer on the first translation control, so as to avoid visual occlusion; based on the second position of the selection mark 20, or the area in which it lies, the user can be intuitively prompted whether that content is selected for translation.
It can be understood that the second position corresponding to the selection mark is a certain distance away from the first position (for example, displayed above the operation touch position), so the prompt in the form of a selection mark does not block the corresponding text content. Through the selection mark, the user can intuitively locate the translation object at the mark while moving, and the terminal can also quickly acquire the located translation object, which shortens the user's translation operation path and improves translation efficiency.
In a possible implementation, the terminal may have a translation tool that integrates at least one translation function. The translation tool can be understood as a control, and the translation tool control may include at least the first translation control. The translation function corresponding to the first translation control generally requires the user to move the control to a specified area to translate the corresponding translation object; for example, the user translates the text at a specified area by moving the first translation control to that area. In one or more embodiments, the translation tool control may further include a third translation control having a global translation function; after the translation function corresponding to the third translation control is triggered, the terminal automatically translates all content objects of the current page.
Illustratively, the terminal may display the translation tool in the page to be translated, and may automatically display it on the current display page when the user opens a target page, a target mode, a target function, and so on; for example, the translation tool may be displayed in a floating manner. The translation tool can be understood as a set of at least one translation control, each with a corresponding translation function. In some embodiments the translation tool may be presented in the form of a translation panel. As shown in fig. 3, which is a scene schematic diagram of a translation tool according to an embodiment of the present application, when the user browses a reading page, the terminal displays a translation tool, the translation panel 40, on the current reading page; the translation panel includes at least a first translation control option (e.g., the free translation control shown in fig. 3) and a third translation control option (e.g., the full-screen translation control shown in fig. 3).
In a specific implementation scenario, the terminal may perform full-screen translation processing on the page to be translated in response to a target operation for the third translation control. For example, the user may select the third translation control (such as the full-screen translation control option shown in fig. 3) in the translation panel 40 of fig. 3 to trigger the full-screen translation function and perform full-screen translation processing on the page to be translated. For instance, clicking "full screen translation" on the translation panel in fig. 3 enters the full-screen translation mode and retracts the translation panel; the terminal then enters the full-screen translation mode, plays a top-to-bottom light-sweep animation, and starts translating the screen content.
Illustratively, as shown in fig. 3, the user may select the first translation control option (e.g., the free translation control shown in fig. 3) in the translation panel 40. After the first translation control option is selected, the translation panel 40 is hidden and, as shown in fig. 2, the terminal displays the first translation control 10 in the current page to be translated. The user may then input a movement operation for the first translation control in the translation tool (e.g., a drag operation for the first translation control), and the movement operation corresponds to a first position. In response to the movement operation for the first translation control, the terminal controls the first translation control to move based on the first position corresponding to the movement operation. During the movement of the first translation control 10, the terminal acquires, periodically or in real time, each successive first position corresponding to the movement operation as the moving position, determines a second position based on each first position, and displays the selection mark at the second position. As shown in fig. 4, reference numeral 10 marks one first position during the user's movement of the first translation control, and reference numeral 20 is the selection mark, which prompts the user whether the second position, or the area in which it lies, is selected so that the corresponding translation object can be translated.
It can be understood that, after the first translation control is triggered, the control image of the first translation control may block characters on the page to be translated. To reduce this blocking, the terminal may perform hidden display processing on the first translation control, take the user's operation position for the first translation control as the moving position, and load the selection mark based on that moving position.
Illustratively, at time t0 the user triggers the first translation control through an input movement operation; the operation may be a movement operation in which the finger does not leave the screen for a period of time starting from the touch at time t0 (e.g., dragging the first translation control). The terminal detects that the first translation control is triggered at time t0 while the user continues the movement operation to move the first translation control toward the position the user desires to translate, and the terminal controls the first translation control to move based on each first position corresponding to the movement operation. At, or shortly after, time t0 the terminal may take the acquired first position corresponding to the movement operation as the moving position of the first translation control, determine the second position based on that first position, and display the selection mark at the second position.
Further, after the first translation control is triggered, the terminal may perform hidden display processing on it, for example by reducing a control image parameter (such as image brightness) so that the control is effectively hidden, while the selection mark loaded at the second position prompts the user to select the translation object. It can be understood that with hidden display processing the pattern of the first translation control is not presented visually, that is, it is not displayed, and the selection mark replaces the display of the first translation control; the selection mark thus moves the translation-object selection forward and shortens the translation path.
It can be understood that the display position of the selection mark is derived from the first position selected by the user's movement operation for the first translation control: the second position is determined from the first position, and the selection mark is displayed at the second position. As the selection mark moves while the user operates the first translation control, the position the user desires to select can be predicted; when the position of the selection mark matches the position the user desires to select, the user can end the movement operation, the translation object indicated by the selection mark is determined for translation, and the content to be translated is located quickly.
S103: determining a target area corresponding to the selection mark, and determining a translation result of a translation object in the target area.
It can be understood that the user moves the first translation control through a corresponding operation, and the terminal may display the selection mark at the second position in real time based on the moving position of the first translation control. That is, the selection mark changes with the first position that the user selects by operating the first translation control, and its display position, the second position, changes accordingly. For example, the user may input a drag operation for the first translation control; as the drag position (i.e., the first position) changes over time, the selection mark displayed at the second position derived from the drag position moves with it.
It can be understood that, after the selection mark displayed based on the moving position stops moving, the terminal may acquire the target area corresponding to the position where the selection mark stopped, and then translate the translation object in that target area.
It can be understood that the terminal may obtain the position of the selection mark after it stops moving, determine the target area based on the text content around that position, and then translate the translation object in the target area. Here, translation means converting the text of the translation object from one language into another; for example, an English text may be translated into a Chinese text, and a Chinese text may likewise be translated into an English text.
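A hedged Kotlin sketch of this step is given below: it resolves a target area from the mark's stop position and hands the contained text to a translation engine. The nearest-block strategy, the TranslationEngine interface, and the fake engine are illustrative assumptions, not the patent's concrete implementation.

```kotlin
// Hypothetical sketch: resolve the target area from the stop position of the
// selection mark and translate the text inside it.

data class Position(val x: Float, val y: Float)
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    fun center() = Position((left + right) / 2, (top + bottom) / 2)
}
data class TextBlock(val bounds: Rect, val text: String)

interface TranslationEngine {
    fun translate(text: String, source: String, target: String): String
}

fun resolveTargetArea(markPos: Position, blocks: List<TextBlock>): TextBlock? =
    blocks.minByOrNull { block ->
        val c = block.bounds.center()
        val dx = c.x - markPos.x
        val dy = c.y - markPos.y
        dx * dx + dy * dy        // squared distance from the mark to the block centre
    }

fun translateAtMark(markPos: Position, blocks: List<TextBlock>, engine: TranslationEngine): String? =
    resolveTargetArea(markPos, blocks)?.let { engine.translate(it.text, "en", "zh") }

fun main() {
    val fakeEngine = object : TranslationEngine {
        override fun translate(text: String, source: String, target: String) = "[$target] $text"
    }
    val blocks = listOf(TextBlock(Rect(0f, 480f, 1080f, 560f), "The quick brown fox"))
    println(translateAtMark(Position(200f, 500f), blocks, fakeEngine))  // [zh] The quick brown fox
}
```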
Further, after the translation result is obtained, it may be output. In the embodiments of the present application, the translation result may be output in various ways.
Illustratively, the translation result may be displayed. The purpose of the user triggering translation in the interface is usually to help the user better understand the text content. For example, if the text in the interface is English text and the user is not familiar with English, the user may wish to convert the text in the interface into familiar Chinese; the translation result can then be displayed once it is obtained.
Illustratively, the translation result can be displayed through a translation result page. The display position of the translation result page can be determined according to the actual situation. For example, if the translation object corresponding to the target area is located in the upper part of the interface, the translation result page may be displayed in the lower part of the interface; correspondingly, if the translation object corresponding to the target area is located in the lower part of the interface, the translation result page may be displayed higher up in the interface. In this way, the translation result page and the translation object corresponding to the target area are prevented from competing for the same display position. Moreover, if the target area occupies most of the display screen, the translation result page can be displayed in a floating manner in the middle of the display screen.
Illustratively, the translation result can be overlaid directly on the translation object in the target area so that the user can conveniently read the translated content; further, the layout of the translation result can be kept substantially consistent with that of the translation object.
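The placement choices described above can be sketched as a small decision function. The rules, thresholds, and names in the Kotlin sketch below are illustrative assumptions rather than values specified by the patent.

```kotlin
// Hypothetical sketch: choose where to show the translation result relative to
// the target area.

data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    val height get() = bottom - top
}

enum class ResultPlacement { BELOW_TARGET, ABOVE_TARGET, FLOATING_CENTER, OVERLAY }

fun chooseResultPlacement(target: Rect, screenHeight: Float, overlayMode: Boolean = false): ResultPlacement =
    when {
        overlayMode -> ResultPlacement.OVERLAY                                   // cover the original text directly
        target.height > 0.7f * screenHeight -> ResultPlacement.FLOATING_CENTER  // target fills most of the screen
        (target.top + target.bottom) / 2 < screenHeight / 2 -> ResultPlacement.BELOW_TARGET  // target in the upper half
        else -> ResultPlacement.ABOVE_TARGET                                     // target in the lower half
    }

fun main() {
    println(chooseResultPlacement(Rect(0f, 300f, 1080f, 500f), screenHeight = 2340f))
    // BELOW_TARGET: the target sits in the upper half, so the result goes below it
}
```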
In the embodiments of the present application, a first position selected by the user in the page to be translated is acquired, a second position for the selection mark is determined based on the first position, the selection mark is then displayed at the second position, which differs from the first position, and the target area corresponding to the selection mark is determined, so that the translation object in the target area can be translated quickly and its translation result determined. Displaying the selection mark at a second position other than the first position selected by the user quickly assists the user in locating the translation object based on the selection mark and prevents the translation object from being occluded. Throughout the translation process, the operation path is shortened, the translation flow is optimized, and the convenience of translation is improved. In addition, the user does not need to frame the recognition region step by step during the translation process, which shortens the region operation path and saves region selection time.
Referring to fig. 5, fig. 5 is a schematic flowchart of another embodiment of a translation method proposed in the present application. Specifically, the method comprises the following steps:
s201: receiving a moving operation aiming at a first translation control in a page to be translated, and acquiring a first position corresponding to the moving operation.
In one or more embodiments, the first translation control may be understood as a control for recognizing and translating a specified entry, word, text passage, or text in an image.
Illustratively, the first translation control is generally used to start the translation recognition function. In practical applications, the first translation control may be displayed on the page to be translated in a specified shape (e.g., a circle or a triangle), and its information recognition function can be triggered when the user inputs a movement operation on the terminal, such as double-clicking, single-clicking, or long-pressing the control. Further, the user moves the control after triggering it through the movement operation, and the terminal may control the first translation control to move based on the first position corresponding to the user's movement operation (during the movement, the first position is also referred to as the operation position of the movement operation). This can be understood as operation following: the first translation control updates its display position in real time as each first position indicated by the movement operation changes, so that during the movement the user can bring the first translation control close to the area the user desires to translate. In the foregoing process, the operation position corresponding to the movement operation input by the user on the page to be translated can be regarded as the first position selected by the user.
S202: acquiring a target angle and a target distance for the selection mark, and determining, with the first position as a reference, the second position indicated by the target angle and the target distance.
The second position is the display position of the selection mark on the page to be translated.
The target angle can be understood as the angle between the first position and the second position.
The target distance can be understood as the distance between the first position and the second position.
In one possible implementation, the target angle and the target distance may be preset. Illustratively, after the first position is determined, the second position can be determined from the preset target angle and target distance with the first position as the reference.
In a possible implementation, the target distance or the target angle may be determined based on an operation parameter of the movement operation for the first translation control (e.g., operation force, touch contact area, movement speed, or movement angle). In other words, the target distance and the target angle need not be fixed values and may change accordingly; for example, during the movement the operation parameters of the movement operation are obtained in real time or periodically, and the target angle and/or the target distance are determined based on changes of those parameters. For example, the greater the operation force, the greater the target distance or the target angle; the target angle may equal the movement angle and change with it; the faster the movement speed, the greater the target distance; and so on.
Optionally, a first mapping relationship between at least one reference operation parameter and a reference distance, and a second mapping relationship between at least one reference operation parameter and a reference angle, may be established in advance; the mapping relationships may be represented as a mapping set, a mapping list, a mapping array, and the like. The operation parameters of the movement operation are obtained in real time or periodically, the target distance corresponding to the operation parameters is determined based on the first mapping relationship and/or the target angle corresponding to the operation parameters is determined based on the second mapping relationship, and the second position indicated by the target angle and/or the target distance is then determined with the first position as the reference.
optionally, the numerical sign of the target angle or the target distance may be changed based on the operation position of the moving operation, for example, when the moving position falls into the reference region corresponding to the lower left corner of the display boundary, the numerical sign of the target angle or the target distance (i.e., a positive value sign and a negative value sign) may be changed, so that the selection mark is displayed at the upper right corner of the moving position. That is, a numerical value symbol mapping relationship between at least one reference area and a numerical value symbol (i.e., a positive value symbol or a negative value symbol) is preset, and then it is determined whether the moving position falls into a target area in the at least one reference area, if so, the numerical value symbol corresponding to the target area is obtained, and the positive and negative of the target angle or the target distance are adjusted based on the current numerical value symbol, so that when the moving position is close to the peripheral side of the display interface, the display position of the selection mark can be adaptively adjusted, and the second position corresponding to the selection mark is prevented from being out of the visual range of the user.
In a possible implementation, the selection mark generally changes as the first position changes, and during the movement of the selection mark the current target distance or target angle may be fine-tuned based on operation parameters of the movement operation (e.g., touch force or touch contact area). A mapping relationship between at least one reference operation parameter and a fine-tuning factor may be established in advance, represented as a mapping set, a mapping list, a mapping array, and the like. The operation parameters of the movement operation are obtained in real time or periodically, a target fine-tuning factor corresponding to the operation is determined based on the mapping relationship, the target angle and/or the target distance are numerically fine-tuned based on the target fine-tuning factor, and the second position indicated by the fine-tuned target angle and target distance is then determined with the first position as the reference. For example, a difference or a product of the target fine-tuning factor and the target angle and/or the target distance may be calculated, and so on.
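The geometric relationship described in S202 can be sketched in a few lines of Kotlin: the second position is the first position offset by the target distance along the target angle. The mapping from operation force to distance, the boundary rule, and all names are assumptions for illustration, not values taken from the patent.

```kotlin
import kotlin.math.cos
import kotlin.math.sin

// Hypothetical sketch: second position = first position offset by a target
// distance along a target angle; the distance grows with operation force and
// the offset direction mirrors near the right edge so the mark stays visible.

data class Position(val x: Float, val y: Float)

fun targetDistance(force: Float): Float = 80f + 60f * force.coerceIn(0f, 1f)  // assumed mapping

fun secondPosition(
    first: Position,
    angleRad: Double,           // target angle, measured from the positive x-axis
    force: Float,               // normalized operation force in [0, 1]
    fineTuneFactor: Float = 1f, // assumed multiplicative fine-tuning of the distance
    screenWidth: Float
): Position {
    val distance = targetDistance(force) * fineTuneFactor
    var angle = angleRad
    // If the touch point is close to the right edge, mirror the angle across the
    // vertical axis so the mark is placed toward the upper left and stays on screen.
    if (first.x > 0.85f * screenWidth) angle = Math.PI - angle
    val dx = (distance * cos(angle)).toFloat()
    val dy = (distance * sin(angle)).toFloat()
    return Position(first.x + dx, first.y - dy)   // screen y grows downward, so subtract dy
}

fun main() {
    val second = secondPosition(Position(200f, 800f), Math.toRadians(60.0), force = 0.5f, screenWidth = 1080f)
    println(second)  // roughly Position(x=255.0, y=704.7)
}
```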
S203: determining a reference area based on the second position, acquiring at least one reference position corresponding to the reference area, and determining the second position for the selection mark from the at least one reference position.
The reference area is used to take the text content around the first position into consideration; the whole area covering the text content taken into consideration is referred to as the reference area. In some embodiments, the size of the reference area is generally smaller than that of the page to be translated.
A reference translation object that the user may select can exist in the reference area. Based on the text content of the reference area, the reference translation objects (such as keywords, key sentences, and key passages) that the user may select in the page to be translated are predicted, and the position of each reference translation object is a reference position.
In one possible implementation, the reference area may be determined based on a reference area specification with the first position as a reference, for example by determining the reference area indicated by the specification with the first position as the reference point.
Optionally, the reference area specification may be set in advance. That is, an area specification is preset; after the first position is determined, the corresponding reference area is determined based on that specification, and text content is extracted from the reference area (including text contained in pictures, animations, tables, and the like) to obtain the text information in the reference area. The text information in the reference area is then used as the basis for determining the reference positions corresponding to the reference translation objects that the user may select within the reference area.
Optionally, the reference area specification may also be determined dynamically. Operation displacement features of the movement operation are obtained, the operation displacement features including at least one of a first-position feature, a touch-force feature, a touch-direction feature, a touch-duration feature, a touch-point-count feature, and the like. The reference area of text content can be predicted accurately based on the operation displacement features: if the touch-force feature indicates a large touch force, more of the surrounding text content can be taken into consideration with the current first position as the reference point, thereby determining the target display area; similarly, if the number of touch points is large, more of the surrounding text content can be taken into consideration with the current first position as the reference point. This can be understood as determining a target area specification.
In a possible implementation, an area specification model may be trained. The operation displacement features of the movement operation are obtained and input into the trained area specification model, which outputs the target area specification corresponding to those features; the target area specification includes the area size, the area shape, and so on. The target display area indicated by the target area specification is then determined with the first position as the reference point. It can be understood that the first position serves as a reference point, and the target area specification indicates a target display area relative to that reference point; for example, with the reference point as the area center, the reference area corresponding to the target area specification is determined.
It can be understood that the area specification model may be implemented by fitting one or more deep-learning-based or machine-learning algorithms, such as a Convolutional Neural Network (CNN) model, a Deep Neural Network (DNN) model, a Recurrent Neural Network (RNN) model, an embedding model, a Gradient Boosting Decision Tree (GBDT) model, a Logistic Regression (LR) model, and the like.
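As a concrete illustration of how an area specification could be turned into a reference area around the reference point, the following Kotlin sketch uses a simple hand-written heuristic in place of the trained model described above; the scaling rules and names are assumptions.

```kotlin
// Hypothetical sketch: derive a reference area rectangle, centred on the
// reference point, from an area specification. A trivial heuristic stands in
// for the trained area specification model; it is not the patent's model.

data class Position(val x: Float, val y: Float)
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float)
data class AreaSpec(val width: Float, val height: Float)

// Heuristic stand-in: stronger touches and more touch points widen the area.
fun areaSpecFor(touchForce: Float, touchPoints: Int): AreaSpec {
    val scale = 1f + 0.5f * touchForce.coerceIn(0f, 1f) + 0.2f * (touchPoints - 1)
    return AreaSpec(width = 600f * scale, height = 240f * scale)
}

fun referenceArea(center: Position, spec: AreaSpec, pageWidth: Float, pageHeight: Float): Rect =
    Rect(
        left = (center.x - spec.width / 2).coerceAtLeast(0f),
        top = (center.y - spec.height / 2).coerceAtLeast(0f),
        right = (center.x + spec.width / 2).coerceAtMost(pageWidth),
        bottom = (center.y + spec.height / 2).coerceAtMost(pageHeight)
    )

fun main() {
    val spec = areaSpecFor(touchForce = 0.8f, touchPoints = 1)
    println(referenceArea(Position(540f, 900f), spec, pageWidth = 1080f, pageHeight = 2340f))
}
```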
In one or more embodiments, at least one reference position corresponding to the reference area may be obtained. In a specific implementation, the text content in the reference area is obtained, at least one reference translation object is determined based on the text content, the reference position corresponding to each reference translation object is obtained, and the second position for the selection mark is then determined from the at least one reference position.
Generally, the reference translation objects determined from the reference area cover, with high probability, the content the user expects to have recognized. Displaying the selection mark at the reference position of a reference translation object predicts the translation object the user wants to select, locates the content to be translated quickly, and improves translation efficiency.
In a possible implementation, the terminal determines the at least one reference translation object based on the text content as follows: semantic recognition is performed on the text content, at least one key text in the text content is determined, and the at least one key text is taken as a reference translation object.
It can be understood that semantic recognition processing is performed on the text content, and after that processing at least one key text (such as a key field, a key word segment, or a key sentence segment) in the text content is determined; the key text is taken as a reference translation object, and the second position is determined from the reference position corresponding to that reference translation object. By performing semantic extraction and semantic understanding on the text content, the terminal determines the region where the key character information (that is, the key text) is located. The display position of a key text can serve as a predicted reference translation object, there may be multiple reference translation objects, and the position of a key text is usually, with high probability, the position the user expects to select for recognition. The terminal can therefore quickly assist the user in determining the translation object in advance, which saves the user's operation path during translation and improves translation efficiency.
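A toy Kotlin sketch of the key-text idea is shown below. A simple length-based heuristic stands in for the semantic recognition described above, purely to illustrate the flow; the scoring rule and names are assumptions and do not reflect the patent's semantic model.

```kotlin
// Hypothetical sketch: pick candidate key texts from the reference area text.
// A crude heuristic (prefer longer, non-stopword words) stands in for real
// semantic recognition.

data class KeyText(val text: String, val start: Int)

fun extractKeyTexts(referenceText: String, maxResults: Int = 3): List<KeyText> {
    val common = setOf("the", "a", "an", "and", "or", "of", "to", "in", "is", "are")
    return Regex("[A-Za-z]+").findAll(referenceText)
        .filter { it.value.length > 4 && it.value.lowercase() !in common }
        .sortedByDescending { it.value.length }          // longer words score higher here
        .take(maxResults)
        .map { KeyText(it.value, it.range.first) }
        .toList()
}

fun main() {
    val text = "The translation control moves toward the paragraph the user wants translated"
    println(extractKeyTexts(text))  // e.g. translation, translated, paragraph
}
```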
In another possible implementation, the terminal determines the at least one reference translation object based on the text content as follows: a historical translation record corresponding to the text content is acquired, and at least one reference translation object is obtained from the historical translation record.
The historical translation record comprises translation data produced by at least one user terminal on a historical display page. In some embodiments, the displayed page (i.e., the page to be translated) currently browsed by the user of the terminal may correspond to a historical display page that other users have browsed, or to a historical display page of the same type as the displayed page. While browsing the historical display page, those users may have translated the historical text content of certain areas; based on this, the historical text content translated by at least one user terminal on the historical display page can be used as a reference when determining the reference translation object. For example, multiple user terminals may have translated the historical text content of multiple areas on the historical display page; these user terminals then generally correspond to multiple pieces of historical text content (such as historically translated words, sentences, and paragraphs), and the terminal may take that historical text content as reference translation objects.
It can be understood that the terminal can use key semantic features (e.g., subject words, titles, and the like) as a content identifier of the text content, and can request from the server, based on those key semantic features, the historical translation records generated when at least one user accessed a historical display page; the content of the historical display page and the current display page can be the same or of the same type. In some embodiments the historical display page and the current display page show the same content, the only difference lying in the time dimension: the historical display page is an interface showing the content at an earlier time, while the page to be translated is the interface at the current time.
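The lookup of historical translation records could be modelled as a simple query keyed by the content identifier, as in the hypothetical Kotlin sketch below; the data shapes and the in-memory store are assumptions standing in for a server-side request.

```kotlin
// Hypothetical sketch: look up reference translation objects from historical
// translation records, keyed by a content identifier derived from key semantic
// features. An in-memory map stands in for the server-side record store.

data class HistoricalRecord(val contentId: String, val translatedSpans: List<String>)

class TranslationHistoryStore(private val records: List<HistoricalRecord>) {
    // Returns the historically translated spans (words, sentences, paragraphs)
    // for pages with the same content identifier; these become candidate
    // reference translation objects.
    fun referenceObjectsFor(contentId: String): List<String> =
        records.filter { it.contentId == contentId }.flatMap { it.translatedSpans }.distinct()
}

fun main() {
    val store = TranslationHistoryStore(
        listOf(
            HistoricalRecord("article-42", listOf("quantum entanglement", "wave function")),
            HistoricalRecord("article-42", listOf("wave function", "superposition"))
        )
    )
    println(store.referenceObjectsFor("article-42"))
    // [quantum entanglement, wave function, superposition]
}
```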
S204: displaying the selection mark at the second position, the second position being different from the first position.
Reference may be made in detail to method steps of one or more embodiments of the present application, which are not described herein in detail.
S205: and acquiring a control touch state aiming at the first translation control.
S206: and if the control touch state is a stop touch state, acquiring a target position corresponding to the selection mark, and determining a target area corresponding to the target position.
The touch state can include at least a touch moving state and a touch stopping state. The terminal can acquire, in real time or periodically, the touch state of the user for the first translation control and judge whether the touch state is the touch stopping state; when the touch state is the touch stopping state, the terminal can determine the target position currently corresponding to the selection mark.
It can be understood that the user performs a moving operation on the first translation control in the page to be translated; based on the moving operation, the first translation control is controlled to move toward or close to the translation object to be recognized, and during the movement of the first translation control the selection mark is displayed to prompt the user whether to select the translation object corresponding to the selection mark. Typically, while the first translation control is moving, its control touch state is the touch moving state. For example, the user continuously inputs a dragging operation for the first translation control to control it to move, and the selection mark is displayed based on the first position; when the position of the selection mark covers the translation object the user desires to select, the user stops inputting the dragging operation, and the touch state of the first translation control monitored by the terminal at this time is the stop touch state. The terminal then responds to the stop touch state and determines the current target position of the selection mark.
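A minimal, framework-free Kotlin sketch of this touch-state handling follows; the class and field names, and the fixed offset used to place the selection mark above the first position, are assumptions for the example only.

```kotlin
// Sketch: while the drag continues, the selection mark follows the first position at an
// offset; when the touch stops, the mark's current position is frozen as the target position.
data class Point(val x: Int, val y: Int)

enum class TouchState { MOVING, STOPPED }

class SelectionTracker(private val offsetY: Int = -120) {
    var markPosition: Point? = null
        private set
    var targetPosition: Point? = null
        private set

    fun onTouch(state: TouchState, firstPosition: Point) {
        when (state) {
            TouchState.MOVING -> markPosition = Point(firstPosition.x, firstPosition.y + offsetY)
            TouchState.STOPPED -> targetPosition = markPosition   // freeze as the target position
        }
    }
}

fun main() {
    val tracker = SelectionTracker()
    tracker.onTouch(TouchState.MOVING, Point(300, 900))
    tracker.onTouch(TouchState.STOPPED, Point(300, 900))
    println(tracker.targetPosition)   // position used to derive the target area
}
```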
Illustratively, at time t0 the user may continuously input a moving operation, namely a dragging operation, on the icon of the first translation control by finger touch, as shown in fig. 6, which is a scene schematic diagram of triggering the translation control. The dragging operation may be, for example, an operation in which the user drags the display object, namely the first translation control 10, on the current page to be translated in fig. 6 without the finger leaving the screen; during the dragging, the dragged first translation control 10 can move as the finger moves. Meanwhile, to avoid blocking the text content of the page to be translated while the first translation control 10 moves, the terminal may perform hidden display processing on the first translation control 10 after monitoring the dragging operation at time t0, for example by controlling an image parameter (such as image brightness) of the first translation control 10 to gradually decrease; as another example, the first translation control 10 may be controlled to move with the finger without displaying its control image. As shown in fig. 6, after the user controls the first translation control 10 to move to point A, the control image shown before the first translation control was triggered (such as that shown in fig. 2) is no longer displayed on the current page to be translated once the hidden display processing is performed. It can be understood that when it is monitored at time t0 that the first translation control 10 is triggered, the terminal obtains the first position of the first translation control 10 after time t0, namely point A (there may be a certain operation delay from triggering the first translation control 10 to starting to move), and displays the selection mark 20 based on the first position point A (displayed outside point A) to prompt the user whether to select the translation object corresponding to the selection mark. Generally, during the continuous movement after the first translation control is triggered, the touch state of the user for the first translation control is the touch moving state; while the dragging operation is continuously input, a second position is determined based on the first position corresponding to the dragging operation, and the selection mark is displayed at the second position. When the position of the selection mark covers the translation object the user desires to select, for example when the continuously input dragging operation has moved the first position to point B shown in fig. 6 and the position of the selection mark 20 covers the desired translation object, the user stops inputting the dragging operation; the touch state of the first translation control monitored by the terminal at this time is the stop touch state. The terminal determines the current target position of the selection mark in response to the stop touch state; it can be understood that the target position at this time is position C, corresponding to the selection mark displayed based on point B as the first position.
It should be noted that the movement operation for controlling the translation control to move, such as a drag operation, a click operation, etc., may be performed by an external device, such as a mouse, a laser pointer, etc.
The target area may be the area where the text content corresponding to the target position is located, for example, the area corresponding to a key text in some embodiments, the area corresponding to the sentence at the target position, the area corresponding to the paragraph at the target position, the area corresponding to the character at the target position, and so on.
The content displayed in the target area is a translation object, and the translation object may include at least one character.
It can be understood that, after determining the target position corresponding to the selection mark, the terminal may acquire the target area corresponding to the target position. For example, a target region where text content (such as a key text (keyword, key sentence), sentence, paragraph, character) at a target position is located is acquired to acquire region content (such as text content) in the target region as a translation object.
Optionally, based on the target position, the area of a whole line or multiple lines of reference characters at the target position is determined as the target area, or a preset range of the line at the target position is determined as the target area, so as to obtain the text content in the target area as the translation object and translate the translation object in the target area.
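The line-snapping idea can be sketched as follows; the line height, page width and the rule of taking whole lines are illustrative assumptions rather than prescribed values of the application.

```kotlin
// Sketch: derive a target area from the target position by taking the whole text line
// (or several lines) that contains the position.
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int)

fun targetArea(targetY: Int, pageWidth: Int, lineHeight: Int, lines: Int = 1): Rect {
    val lineTop = (targetY / lineHeight) * lineHeight          // snap to the containing line
    return Rect(0, lineTop, pageWidth, lineTop + lines * lineHeight)
}

fun main() {
    // One whole line of reference characters at the target position becomes the target area.
    println(targetArea(targetY = 935, pageWidth = 1080, lineHeight = 48))
}
```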
S207: and determining a translation result of the translation object in the target area.
In a possible implementation manner, the terminal may perform optical character recognition on the display elements (such as characters, images and icons) in the target area through a text recognition method based on Optical Character Recognition (OCR). The recognition process selects a certain area at the target position as the target area and obtains the display image indicated in the target area (which may be a screenshot of the target area); the dark and bright patterns of the display image are detected to determine the character shapes, and the shapes are then translated into computer characters by a character recognition method. That is, for the characters of the page to be translated, the characters are optically converted into an image file of a black-and-white dot matrix, and the characters in the image are converted into a text format (for example, a character string) by the OCR-based text recognition method, so as to obtain the recognized translation object.
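A simplified sketch of this recognition pipeline is shown below; recognizeCharacters is a placeholder standing in for an actual OCR engine, and the image type is a toy structure, so none of these names correspond to a real library API.

```kotlin
// Sketch: crop/capture of the target area is assumed done; binarize it (black-and-white
// dot matrix), then hand it to an OCR step to obtain the translation object as text.
class GrayImage(val width: Int, val height: Int, val pixels: IntArray)

fun binarize(img: GrayImage, threshold: Int = 128): GrayImage =
    GrayImage(img.width, img.height, IntArray(img.pixels.size) { i ->
        if (img.pixels[i] < threshold) 0 else 255          // dark/bright pattern for shape detection
    })

fun recognizeCharacters(binary: GrayImage): String {
    // Placeholder: a real implementation would match character shapes against a trained model.
    return "recognized text (${binary.width}x${binary.height} region)"
}

fun extractTranslationObject(regionScreenshot: GrayImage): String =
    recognizeCharacters(binarize(regionScreenshot))

fun main() {
    val region = GrayImage(4, 1, intArrayOf(10, 200, 30, 240))
    println(extractTranslationObject(region))
}
```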
In one possible embodiment, the terminal translates the translation object. The text content of the translation object may be translated from the current language into another language, such as translating English text content into Chinese content. The terminal calls a set translation software interface to perform language translation on the acquired character content of the translation object. The corresponding translation software interface may belong to set translation software; the translation software may be a translation application installed locally on the terminal, such as Kingsoft PowerWord, Youdao Translate or Baidu Translate, or may be a translation network interface of a translation service provided over the network, such as Google web translation. The translation result can be displayed by popping up a text box or a specific display area on the current page to be translated of the terminal.
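One possible way to abstract such a configurable translation interface is sketched below; the interface name, the local-dictionary stand-in and the language codes are assumptions for the example and do not refer to any specific product API.

```kotlin
// Sketch: the terminal dispatches the recognized translation object to whatever
// translation service is configured, then shows the result in a pop-up area.
interface TranslationService {
    fun translate(text: String, from: String, to: String): String
}

class LocalDictionaryService : TranslationService {
    private val dict = mapOf("hello" to "你好")               // stand-in for a local translation app
    override fun translate(text: String, from: String, to: String): String =
        dict[text.lowercase()] ?: text
}

fun showResult(service: TranslationService, translationObject: String) {
    val result = service.translate(translationObject, from = "en", to = "zh")
    println("result box: $result")                            // e.g. a text box popped up on the page
}

fun main() = showResult(LocalDictionaryService(), "Hello")
```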
In the embodiment of the application, a first position selected by a user in a page to be translated is acquired, a second position for the selection mark is determined based on the first position, the selection mark is then displayed at the second position, which is different from the first position, and a target area corresponding to the selection mark is determined, so that the translation object in the target area can be quickly translated and the translation result of the translation object in the target area is determined. By displaying the selection mark at a second position other than the first position selected by the user, the user can be quickly assisted in positioning the translation object based on the selection mark, and the translation object is prevented from being blocked. In the whole translation process, the translation operation path is shortened, the translation process is optimized, and the convenience of translation is improved. The second position is determined based on the first position and does not coincide with the first position, so that the selection mark is displayed accurately and the position the user expects to select can be accurately predicted. Moreover, the whole translation process does not require the user to frame the recognition region step by step, so the region operation path in the translation process can be shortened and the region selection time saved.
Referring to fig. 7, fig. 7 is a schematic flowchart of another embodiment of a translation method proposed in the present application. Specifically, the method comprises the following steps:
s301: receiving a moving operation aiming at a first translation control in a page to be translated, and acquiring a first position corresponding to the moving operation.
Reference may be made in detail to method steps of one or more embodiments of the present application, which are not described herein in detail.
S302: acquiring a mark display range for a selection mark, and determining that the first position is within the mark display range;
s303: determining a second position for a selection marker based on the first position, the selection marker being displayed at the second position, the second position being different from the second position.
The mark display range is used for judging whether the first position of the first translation control falls within the mark display range during the movement of the first translation control; if so, the selection mark is loaded and displayed. It is to be appreciated that when the first position of the first translation control is outside the mark display range, the terminal does not display the selection mark.
The mark display range can be a preset area range, or an area range determined by performing text detection on the page to be translated.
In one or more embodiments, the selection mark may be understood as a control used in conjunction with the first translation control;
In one or more embodiments, the selection mark can be another image form of the first translation control in a specific state. For example, outside the mark display range the first translation control is displayed with a default display icon, and within the mark display range the first translation control is displayed in the form of the selection mark. It should be noted that when the first translation control is displayed with the default display icon outside the mark display range, the default display image is shown at the first position of the first translation control; the first position of the first translation control generally matches the operation position corresponding to the user's touch operation, and that operation position can be taken as the first position. When the first translation control is displayed in the form of the selection mark within the mark display range, the selection mark is not displayed at the first position of the first translation control; instead, the second position is determined based on the first position, and the selection mark is displayed at the second position. In addition, the first position is highly related to the operation position corresponding to the trigger operation input by the user while the translation control moves; rather than working directly on the operation position, which would not improve translation efficiency, a second position is predicted on the basis of the operation position as the position the user is highly likely to select for the translation object, so the intelligence and efficiency of the translation process can be greatly improved. The display position of the selection mark is generally different from and spaced apart from the operation position. In one or more embodiments, the first translation control may also be a control that is independent of the selection mark.
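As a sketch of this display-form switch (all type names and the mark offset are assumptions, and the range is modeled as a simple rectangle):

```kotlin
// Sketch: inside the mark display range the control is shown as the selection mark at a
// second position; outside the range it keeps its default icon at the first position.
data class Point(val x: Int, val y: Int)
data class Range(val left: Int, val top: Int, val right: Int, val bottom: Int) {
    operator fun contains(p: Point) = p.x in left..right && p.y in top..bottom
}

sealed interface ControlAppearance
data class DefaultIcon(val at: Point) : ControlAppearance
data class SelectionMark(val at: Point) : ControlAppearance

fun appearanceFor(firstPosition: Point, displayRange: Range, markOffsetY: Int = -120): ControlAppearance =
    if (firstPosition in displayRange)
        SelectionMark(Point(firstPosition.x, firstPosition.y + markOffsetY))  // second position
    else
        DefaultIcon(firstPosition)

fun main() {
    val range = Range(0, 200, 1080, 1800)
    println(appearanceFor(Point(500, 100), range))   // outside: default icon, no selection mark
    println(appearanceFor(Point(500, 900), range))   // inside: selection mark at the second position
}
```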
Schematically, as shown in fig. 8, fig. 8 is a scene schematic diagram of the mark display range; in fig. 8 the mark display range may be the area shown by the dashed box. The area outside the dashed box is the non-mark display range. Within the non-mark display range, the user may input a moving operation to control the first translation control 10 to move, and the selection mark is not displayed at this time; for example, the selection mark is not displayed at point C corresponding to the first position of the first translation control shown in fig. 8. In some implementation scenarios, the user drags the first translation control within the non-mark display range without triggering display of the selection mark, which effectively prevents the user from mistakenly triggering the translation function and avoids the situation in which directly displaying the selection mark interferes with the user's normal page browsing. It can be understood that the user continuously inputs a moving operation for the first translation control starting from the non-mark display range to control the first translation control 10 to keep moving; at the same time the terminal continuously monitors whether the first position falls within the mark display range, that is, whether the first translation control 10 has moved from the non-mark display range into the mark display range, and if the first position at a certain time t is monitored to be within the mark display range, the terminal displays the selection mark based on the first position.
Illustratively, as shown in fig. 9, fig. 9 is a schematic view of a scenario in which the selection mark is displayed. The user continuously inputs a moving operation for the first translation control starting from the non-mark display range to control the first translation control 10 to keep moving; for example, starting from point C shown in fig. 8, the operation position of the moving operation is taken as the first position of the first translation control 10 as it continues to move. When the first position of the first translation control 10 enters the mark display range corresponding to the dashed line shown in fig. 9, the terminal monitors that the current first position of the first translation control is point D and that point D falls within the mark display range, and the terminal loads and displays the selection mark based on the first position, point D. It should be noted that the second position at which the selection mark is displayed does not coincide with the first position, point D. The specific determination of the second position may refer to other embodiments of this application.
Illustratively, during the movement of the first translation control, the first position corresponding to the first translation control enters the mark display range from the non-mark display range. When the first position falls within the mark display range, the terminal may control the first translation control to undergo hidden display processing, which may mean controlling the first translation control to gradually disappear from the page to be translated, and then determine the second position based on the first position so as to display the selection mark. For example, the first translation control is switched to be displayed as the selection mark. Throughout the translation process, the moving operation input by the user for the first translation control does not end with the hidden display processing, and each first position corresponding to the moving operation may be continuously acquired in real time or periodically; typically the first position corresponding to the first translation control is the operation position corresponding to the continuously input moving operation. While the hidden display processing is performed, the selection mark is controlled to be displayed away from the first position, for example gradually appearing at a position above the first translation control, such as above the position of the finger inputting the moving operation shown in fig. 9.
in one or more embodiments, after the terminal determines that the first position is within the display range of the mark, the method further includes: and acquiring a third position (which can be understood as a next first position of the current first position, namely a next operation position of the moving operation) after the first position, and if the third position is out of the mark display range, displaying the first translation control in the interface to be translated, and performing display cancellation processing on the selection mark.
It can be understood that the next first position of the moving operation for the first translation control, that is, the third position, may be continuously monitored; in other words, the touch operation position of the moving operation is continuously acquired as the third position after the first position. If the third position is outside the mark display range, the first translation control is displayed in the page to be translated and the selection mark is cancelled. It is understood that the first translation control is displayed with its default control image in the non-mark display range, and the display cancellation processing is performed on the selection mark when the third position is outside the mark display range.
In one or more embodiments related to the present application, the moving operation may be a sliding operation, a continuous clicking operation, a continuous triggering operation, or the like. In this process, as the touch operation position of the moving operation is constantly changed, the first position for the moving operation and a third position (which may also be understood as a next first position) after the first position are constantly changed, and after the selection mark is displayed, the second position where the selection mark is located is also constantly changed.
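The continuous monitoring of operation positions, including the case where a later third position leaves the mark display range, could look roughly like the following sketch; the monitor class and its callbacks are illustrative assumptions.

```kotlin
// Sketch: entering the mark display range hides the control and loads the selection mark;
// leaving it (the "third position" case) restores the control and cancels the mark.
data class Point(val x: Int, val y: Int)

class MarkDisplayMonitor(
    private val inRange: (Point) -> Boolean,
    private val onEnter: () -> Unit,
    private val onLeave: () -> Unit
) {
    private var markShown = false

    fun onPosition(p: Point) {
        val inside = inRange(p)
        if (inside && !markShown) { markShown = true; onEnter() }       // hide control, show mark
        if (!inside && markShown) { markShown = false; onLeave() }      // restore control, cancel mark
    }
}

fun main() {
    val monitor = MarkDisplayMonitor(
        inRange = { it.y in 200..1800 },
        onEnter = { println("hide first translation control, display selection mark") },
        onLeave = { println("restore first translation control, cancel selection mark") }
    )
    listOf(Point(100, 100), Point(100, 500), Point(100, 1900)).forEach(monitor::onPosition)
}
```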
S304: and acquiring a control touch state for the first translation control, and if the control touch state is a stop touch state, acquiring a target position corresponding to the selection mark and determining a target area corresponding to the target position.
Reference may be made in detail to method steps of one or more embodiments of the present application, which are not described herein in detail.
S305: loading a second translation control, controlling a control display area of the second translation control to cover the target area, and selecting and displaying a translation object in the target area;
the second translation control is used for selecting the target area corresponding to the selection mark so as to display the currently selected area to be identified to the user, and in some implementation scenarios, the control display area of the second translation control is used for the user to determine whether the area to be identified is accurately covered. Typically the first translation control is associated with the second translation control. In some embodiments, the second translation control is configured to perform a selected display process on the translation object in the target region in the control display region. The selected display processing may display the translation object in a preset display format, for example, add a background to the translation object, highlight the translation object, and the like.
Schematically, as shown in fig. 10, fig. 10 is an interface schematic diagram of a translation control according to the present application, in the interface shown in fig. 10, a second translation control 30 is displayed, a terminal controls a control display area of the second translation control 30 to cover a target area, so as to perform selection display processing on a translation object in the target area, and after the translation object in the target area is subjected to the selection display processing, a frame of the control display area is presented in a solid line form.
In a specific implementation scenario, the terminal loads the second translation control to select and display the target region, so as to select and display the translation object in the target region, where the display region range of the second translation control is larger than that of the first translation control. For example, the control display region of the second translation control may be the solid box region shown in fig. 10. The second translation control is loaded mainly to better show the user the region to be recognized (that is, the target region) predicted by the terminal based on the target position of the selection mark, which spares the user the tedious operations of first choosing a suitable position and then framing a recognition region to cover the translation object, thereby shortening the operation and recognition path.
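A minimal sketch of sizing the control display area from the target area follows; the margin value is an assumption chosen only to make the example concrete.

```kotlin
// Sketch: the control display area of the second translation control covers the target
// area plus a small margin, so the predicted region is shown to the user as selected.
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int)

fun controlDisplayArea(targetArea: Rect, margin: Int = 8) = Rect(
    targetArea.left - margin, targetArea.top - margin,
    targetArea.right + margin, targetArea.bottom + margin
)

fun main() {
    println(controlDisplayArea(Rect(0, 912, 1080, 960)))   // solid-line frame over the translation object
}
```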
In a feasible implementation manner, after loading the second translation control and selecting and displaying the translation object in the target region, the user can further determine whether the control display region of the second translation control is the finally selected region to be identified; in some embodiments, the user can input a corresponding operation to fine-tune the control display area, so that the control display area accurately covers the translation object desired to be selected by the user.
In one or more embodiments, the terminal may detect a target operation for the second translation control. The target operation may be understood as the user's confirmation of the control display area predicted and displayed by the terminal, that is, confirmation that the control display area accurately covers the translation object and is the area the user desires to recognize, after which the terminal determines to translate the translation object. In some embodiments, the detection of the target operation may be as follows: when the terminal detects that the user inputs no human-computer interaction operation within a preset time, it can be understood that the user accepts by default that the current control display area is the area to be recognized and that this area accurately covers the translation object, without the user needing to input any operation.
According to some embodiments, the target operation may be understood as a region modification operation, by the user, on the control display region predicted and displayed by the terminal, for example modifying the specification of the region on the basis of the control display region, such as expanding or reducing the recognition range of the control display region. As shown in fig. 10, the current user of the terminal may further reduce the recognition range on the basis of the solid-line frame region shown in fig. 10. The region modification operation can generally be understood as fine adjustment of the current control display region; the control display region covering the target region usually already covers, with high probability, the recognition content the user desires as the translation object. Compared with having the user determine the region to be recognized by inputting everything from the recognition start position to the recognition end position, the recognition operation path is greatly shortened and translation efficiency is improved.
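The two forms of target operation, silent confirmation after a preset time and fine adjustment of the frame, can be sketched as follows; the type names and the idea of modeling them as a small sealed hierarchy are assumptions for the example.

```kotlin
// Sketch: no input within the preset time confirms the predicted area; a resize
// operation replaces it with the user's adjusted frame before translation starts.
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int)

sealed interface TargetOperation
object NoInput : TargetOperation                               // silence within the preset time
data class Resize(val newArea: Rect) : TargetOperation         // user fine-tunes the frame

fun resolveArea(current: Rect, op: TargetOperation): Rect = when (op) {
    is NoInput -> current          // default: predicted area confirmed, translation proceeds
    is Resize -> op.newArea
}

fun main() {
    val predicted = Rect(0, 904, 1088, 968)
    println(resolveArea(predicted, NoInput))
    println(resolveArea(predicted, Resize(Rect(120, 904, 900, 968))))
}
```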
S306: translating the translation object in the control display area to obtain a translation result, and displaying the translation result;
as can be understood, after the control display area is determined, the translation object in the control display area may be translated to obtain a translation result;
in a feasible implementation manner, the terminal obtains a translation object in the display area of the current control for translation, and obtains a translation result.
Further, the translation object in the control display area may be acquired by a text recognition method based on Optical Character Recognition (OCR), which performs optical character recognition on the display element information (such as characters, images and icons) in the control display area, that is, the character set to be recognized in the control display area, to extract the translation object. The recognition process acquires the display image indicated at the control display area, detects the dark and bright patterns of the display image to determine the character shapes, and then translates the shapes into computer text by a character recognition method. Specifically, the characters in the area to be recognized are optically converted into an image file of a black-and-white dot matrix, and the characters in the image are converted into a text format (for example, a character string) by the OCR-based text recognition method, so that the recognized character set is obtained as the translation object, and the translation object is translated.
As can be appreciated, after the translation result is obtained, the result of the translation is presented. For example, as shown in fig. 11, fig. 11 is an interface schematic diagram of a terminal displaying a translation result: the terminal obtains the translation result of the English text in the target area; when the translation is completed, the terminal cancels the display of the icon corresponding to the translation currently being loaded, loads a translation result display frame 40 at the second translation control 30, and displays the translation result corresponding to the translation object in the translation result display frame 40.
Illustratively, the translation result can be displayed through a result display page (such as a translation result display box). The display position of the result display page can be determined according to the actual situation. For example, if the translation object corresponding to the target region is in an upper position in the interface, the result display page may be shown in a lower position in the interface; correspondingly, if the translation object corresponding to the target region is in a lower position in the interface, the result display page may be shown in an upper position, thereby preventing the result display page and the translation object corresponding to the target region from competing for the same display position. Furthermore, if the target area itself occupies most of the display screen, the result display page may be displayed in a floating manner in the middle of the display screen.
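A rough sketch of this placement rule, with the half-screen threshold chosen only for illustration:

```kotlin
// Sketch: place the result display page so it does not cover the translation object:
// below the target area when the object sits in the upper half of the screen, above it
// otherwise, and floating in the middle when the target area occupies most of the screen.
data class Area(val top: Int, val bottom: Int)

enum class ResultPlacement { BELOW_TARGET, ABOVE_TARGET, FLOATING_CENTER }

fun placeResult(target: Area, screenHeight: Int): ResultPlacement {
    val height = target.bottom - target.top
    return when {
        height > screenHeight / 2 -> ResultPlacement.FLOATING_CENTER
        (target.top + target.bottom) / 2 < screenHeight / 2 -> ResultPlacement.BELOW_TARGET
        else -> ResultPlacement.ABOVE_TARGET
    }
}

fun main() {
    println(placeResult(Area(top = 200, bottom = 260), screenHeight = 1920))   // BELOW_TARGET
    println(placeResult(Area(top = 1500, bottom = 1560), screenHeight = 1920)) // ABOVE_TARGET
}
```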
Further, the translation result may be displayed in an overlay manner at the translation object in the control display area. The translation result can be directly covered on a translation object of the target area, and a user can conveniently read translated contents; further, the typesetting of the translation result can be controlled to be relatively consistent with the translation object.
Illustratively, in the process of performing OCR recognition on the area image of the control display area with an OCR algorithm module, the original character blocks (that is, the translation objects) and the background portion of the control display area to be translated may be obtained, together with the text characters in each original character block and the position and size of each original character block in the original image. An original character block refers to a text character block in the area image of the control display area obtained through OCR recognition. Then, a target character block is generated according to the translation result corresponding to each original character block and the position and size of each original character block in the area image. Next, the background and the target character blocks are synthesized according to the positions of the target character blocks corresponding to the original character blocks, to generate a translation result image. Finally, the translation result image is displayed in an overlaid manner at the position of the control display area. As shown in fig. 12, fig. 12 is an interface schematic diagram of the terminal displaying the translation result: the terminal obtains the translation result of the translation object, cancels the display of the icon corresponding to the translation currently being loaded, and displays the translation result in the solid-line area of the second translation control 30. With this display mode, in which the translation result covers the translation object, the typesetting of the original display page is not affected, the reading experience is not interrupted by leaving the original display page, and the content can be read in an immersive manner.
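The block-replacement idea can be sketched without any imaging code as follows; the CharBlock type and the plain text-for-text substitution are simplifying assumptions, whereas a real implementation would also re-render the background and the translated text as an image.

```kotlin
// Sketch: each original character block found by OCR keeps its position and size; a
// target block holding the translated text reuses the same geometry, so layout stays
// consistent when the result is composited over the background.
data class CharBlock(val text: String, val x: Int, val y: Int, val width: Int, val height: Int)

fun composeResultBlocks(originalBlocks: List<CharBlock>, translations: Map<String, String>): List<CharBlock> =
    originalBlocks.map { block ->
        block.copy(text = translations[block.text] ?: block.text)   // translated text, original geometry
    }

fun main() {
    val blocks = listOf(CharBlock("Hello world", x = 40, y = 912, width = 400, height = 48))
    println(composeResultBlocks(blocks, mapOf("Hello world" to "你好，世界")))
}
```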
In the embodiment of the application, in the above manner the translation object can be quickly positioned based on the selection mark displayed during the movement of the first translation control, the translation operation path is greatly shortened over the whole translation process, the translation process is optimized, and the convenience of translation is improved; the second position is determined based on the first position (the moving position) and does not coincide with it, so that the selection mark is displayed accurately and the position the user expects to select can be accurately predicted; throughout the translation process the user does not need to frame the recognition region step by step, so the region operation path in the translation process can be shortened and the region selection time saved; the translation result can cover the original translation object with a typesetting effect basically consistent with the visual effect of the original content, which improves readability after translation; performing small-area OCR recognition on the target area can shorten the translation time; and setting the mark display range can avoid false triggering, thereby improving the intelligence of translation.
The following describes the translation apparatus provided in the embodiment of the present application in detail with reference to fig. 13. It should be noted that, the translation apparatus shown in fig. 13 is used for executing the method of the embodiment shown in fig. 1 to fig. 12 of the present application, and for convenience of description, only the portion related to the embodiment of the present application is shown, and details of the specific technology are not disclosed, please refer to the embodiment shown in fig. 1 to fig. 12 of the present application.
Please refer to fig. 13, which illustrates a schematic structural diagram of a translation apparatus according to an embodiment of the present application. The translation apparatus 1 may be implemented as all or a part of a user terminal by software, hardware, or a combination of both. According to some embodiments, the translation apparatus 1 comprises a mark display module 11 and an object translation module 12, which are specifically configured as follows:
the mark display module 11 is configured to acquire a first position selected in a page to be translated, determine a second position for a selection mark based on the first position, and display the selection mark at the second position, where the first position is different from the second position;
and the object translation module 12 is configured to determine a target region corresponding to the selection flag, and determine a translation result of a translation object in the target region.
Optionally, as shown in fig. 14, the mark display module 11 includes:
a position determining unit 111, configured to obtain a first position selected in a page to be translated, and determine a second position for a selection flag based on the first position;
a mark display unit 112, configured to display the selection mark at the second position, the first position being different from the second position.
Optionally, the position determining unit 111 is specifically configured to:
acquiring a target angle and a target distance for the selection mark, and determining a second position indicated by the target angle and the target distance by taking the first position as a reference; or,
determining a reference area based on the first position, acquiring at least one reference position corresponding to the reference area, and determining a second position for the selection mark from the at least one reference position.
Optionally, as shown in fig. 15, the position determining unit 111 includes:
a content acquisition subunit 1111 configured to acquire text content in the reference region;
a position determining subunit 1112, configured to determine at least one reference translation object based on the text content, and determine the second position based on a reference position of the at least one reference translation object.
Optionally, the position determining subunit 1112 is specifically configured to:
performing semantic recognition on the text content, determining at least one key text in the text content, and taking the key text as a reference translation object; or,
and acquiring a historical translation record corresponding to the text content, and acquiring at least one reference translation object from the historical translation record.
Optionally, the position determining unit 111 is specifically configured to: receiving a moving operation aiming at a first translation control in a page to be translated, and acquiring a first position corresponding to the moving operation.
Optionally, as shown in fig. 16, the object translation module 12 includes:
a state obtaining unit 121, configured to obtain a control touch state for the first translation control;
an area obtaining unit 122, configured to obtain a target position corresponding to the selection flag if the control touch state is a stop touch state, and determine a target area corresponding to the target position.
Optionally, the apparatus 1 is specifically configured to:
loading a second translation control, controlling a control display area of the second translation control to cover the target area, and selecting and displaying a translation object in the target area;
the determining a translation result of a translation object in the target region includes:
and translating the translation object in the control display area to obtain a translation result, and displaying the translation result.
Optionally, the apparatus 1 is specifically configured to:
and displaying the translation result in the control display area.
Optionally, the apparatus 1 is specifically configured to:
acquiring a mark display range for the selection mark;
determining that the first position is within the marker display range, determining a second position for a selection marker based on the first position.
Optionally, the apparatus 1 is specifically configured to:
performing hidden display processing on the first translation control in the page to be translated;
optionally, the apparatus 1 is specifically configured to:
and acquiring a third position after the first position; if the third position is outside the mark display range, displaying the first translation control in the page to be translated, and performing display cancellation processing on the selection mark.
It should be noted that, when the translation apparatus provided in the foregoing embodiment executes the translation method, only the division of the functional modules is illustrated, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the translation apparatus and the translation method provided by the above embodiments belong to the same concept, and details of implementation processes thereof are referred to in the method embodiments and are not described herein again.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the embodiment of the application, in the above manner the translation object can be quickly positioned based on the selection mark displayed during the movement of the first translation control, the translation operation path is greatly shortened over the whole translation process, the translation process is optimized, and the convenience of translation is improved; the second position is determined based on the first position (the moving position) and does not coincide with it, so that the selection mark is displayed accurately and the position the user expects to select can be accurately predicted; throughout the translation process the user does not need to frame the recognition region step by step, so the region operation path in the translation process can be shortened and the region selection time saved; the translation result can cover the original translation object with a typesetting effect basically consistent with the visual effect of the original content, which improves readability after translation; performing small-area OCR recognition on the target area can shorten the translation time; and setting the mark display range can avoid false triggering, thereby improving the intelligence of translation.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and executing the translation method according to the embodiment shown in fig. 1 to 12, and a specific execution process may refer to specific descriptions of the embodiment shown in fig. 1 to 12, which is not described herein again.
The present application further provides a computer program product, where at least one instruction is stored, and the at least one instruction is loaded by the processor and executes the translation method according to the embodiment shown in fig. 1 to 12, where a specific execution process may refer to specific descriptions of the embodiment shown in fig. 1 to 12, and is not described herein again.
Referring to fig. 17, a block diagram of an electronic device according to an exemplary embodiment of the present application is shown. The electronic device in the present application may comprise one or more of the following components: a processor 110, a memory 120, an input device 130, an output device 140, and a bus 150. The processor 110, memory 120, input device 130, and output device 140 may be connected by a bus 150.
Processor 110 may include one or more processing cores. The processor 110 connects various parts within the overall electronic device using various interfaces and lines, and performs the various functions of the electronic device 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and calling the data stored in the memory 120. Alternatively, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is used for rendering and drawing text content; the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 110 and may instead be implemented by a communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, and the like), instructions for implementing the various method embodiments described below, and so on; the operating system may be an Android system, including systems developed in depth on the basis of the Android system, an IOS system developed by Apple, including systems developed in depth on the basis of the IOS system, or other systems. The data storage area may also store data created by the electronic device during use, such as phone books, audio and video data, chat log data, and the like.
Referring to fig. 18, the memory 120 may be divided into an operating system space, where an operating system is run, and a user space, where native and third-party applications are run. In order to ensure that different third-party application programs can achieve a better operation effect, the operating system allocates corresponding system resources for the different third-party application programs. However, the requirements of different application scenarios in the same third-party application program on system resources also differ, for example, in a local resource loading scenario, the third-party application program has a higher requirement on the disk reading speed; in an animation rendering scene, the third-party application program has a high requirement on the performance of the GPU. The operating system and the third-party application program are independent from each other, and the operating system cannot sense the current application scene of the third-party application program in time, so that the operating system cannot perform targeted system resource adaptation according to the specific application scene of the third-party application program.
In order to enable the operating system to distinguish a specific application scenario of the third-party application program, data communication between the third-party application program and the operating system needs to be opened, so that the operating system can acquire current scenario information of the third-party application program at any time, and further perform targeted system resource adaptation based on the current scenario.
Taking an operating system as an Android system as an example, programs and data stored in the memory 120 are as shown in fig. 19, and a Linux kernel layer 320, a system runtime library layer 340, an application framework layer 360, and an application layer 380 may be stored in the memory 120, where the Linux kernel layer 320, the system runtime library layer 340, and the application framework layer 360 belong to an operating system space, and the application layer 380 belongs to a user space. The Linux kernel layer 320 provides underlying drivers for various hardware of the electronic device, such as a display driver, an audio driver, a camera driver, a bluetooth driver, a Wi-Fi driver, power management, and the like. The system runtime library layer 340 provides a main feature support for the Android system through some C/C + + libraries. For example, the SQLite library provides support for a database, the OpenGL/ES library provides support for 3D drawing, the Webkit library provides support for a browser kernel, and the like. Also provided in the system runtime library layer 340 is an Android runtime library (Android runtime), which mainly provides some core libraries that can allow developers to write Android applications using the Java language. The application framework layer 360 provides various APIs that may be used in building an application, and developers may build their own applications by using these APIs, such as activity management, window management, view management, notification management, content provider, package management, session management, resource management, and location management. At least one application program runs in the application layer 380, and the application programs may be native application programs carried by the operating system, such as a contact program, a short message program, a clock program, a camera application, and the like; or a third-party application developed by a third-party developer, such as a game application, an instant messaging program, a photo beautification program, and the like.
Taking an operating system as an IOS system as an example, programs and data stored in the memory 120 are as shown in fig. 20, and the IOS system includes: a Core operating system Layer 420(Core OS Layer), a Core Services Layer 440(Core Services Layer), a Media Layer 460(Media Layer), and a touchable Layer 480(Cocoa Touch Layer). The kernel operating system layer 420 includes an operating system kernel, drivers, and underlying program frameworks that provide functionality closer to hardware for use by program frameworks located in the core services layer 440. The core services layer 440 provides system services and/or program frameworks, such as a Foundation framework, an account framework, an advertisement framework, a data storage framework, a network connection framework, a geographic location framework, a motion framework, and so forth, as required by the application. The media layer 460 provides audiovisual related interfaces for applications, such as graphics image related interfaces, audio technology related interfaces, video technology related interfaces, audio video transmission technology wireless playback (AirPlay) interfaces, and the like. Touchable layer 480 provides various common interface-related frameworks for application development, and touchable layer 480 is responsible for user touch interaction operations on the electronic device. Such as a local notification service, a remote push service, an advertising framework, a game tool framework, a messaging User Interface (UI) framework, a User Interface UIKit framework, a map framework, and so forth.
In the framework illustrated in FIG. 20, the framework associated with most applications includes, but is not limited to: a base framework in the core services layer 440 and a UIKit framework in the touchable layer 480. The base framework provides many basic object classes and data types, provides the most basic system services for all applications, and is UI independent. While the class provided by the UIKit framework is a basic library of UI classes for creating touch-based user interfaces, iOS applications can provide UIs based on the UIKit framework, so it provides an infrastructure for applications for building user interfaces, drawing, processing and user interaction events, responding to gestures, and the like.
For the manner and principle of implementing data communication between the third-party application program and the operating system in the IOS system, reference may be made to the Android system, and details are not repeated herein.
The input device 130 is used for receiving input commands or data, and the input device 130 includes, but is not limited to, a keyboard, a mouse, a camera, a microphone, or a touch device. The output device 140 is used for outputting instructions or data, and the output device 140 includes, but is not limited to, a display device, a speaker, and the like. In one example, the input device 130 and the output device 140 may be combined, and the input device 130 and the output device 140 are touch display screens for receiving touch operations of a user on or near the touch display screens by using any suitable object such as a finger, a touch pen, and the like, and displaying user interfaces of various applications. Touch displays are typically provided on the front panel of an electronic device. The touch display screen may be designed as a full-face screen, a curved screen, or a profiled screen. The touch display screen can also be designed to be a combination of a full-face screen and a curved-face screen, and a combination of a special-shaped screen and a curved-face screen, which is not limited in the embodiment of the present application.
In addition, those skilled in the art will appreciate that the configurations of the electronic devices illustrated in the above-described figures do not constitute limitations on the electronic devices, which may include more or fewer components than illustrated, or some components may be combined, or a different arrangement of components. For example, the electronic device further includes a radio frequency circuit, an input unit, a sensor, an audio circuit, a wireless fidelity (WiFi) module, a power supply, a bluetooth module, and other components, which are not described herein again.
In the embodiment of the present application, the main body of execution of each step may be the electronic device described above. Optionally, the execution subject of each step is an operating system of the electronic device. The operating system may be an android system, an IOS system, or another operating system, which is not limited in this embodiment of the present application.
The electronic device of the embodiment of the application can also be provided with a display device, and the display device can be any of various devices capable of realizing a display function, for example: a cathode ray tube display (CRT), a light-emitting diode display (LED), an electronic ink screen, a Liquid Crystal Display (LCD), a Plasma Display Panel (PDP), and the like. A user may utilize the display device on the electronic device 101 to view information such as displayed text, images, video, and the like. The electronic device may be a smartphone, a tablet computer, a gaming device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, or a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace or electronic clothing.
In the electronic device shown in fig. 17, where the electronic device may be a terminal, the processor 110 may be configured to call an application program stored in the memory 120, and specifically perform the following operations:
acquiring a first position selected in a page to be translated;
determining a second position for a selection marker based on the first position, the selection marker being displayed at the second position, the first position being different from the second position;
and determining a target area corresponding to the selection mark, and determining a translation result of a translation object in the target area.
In one embodiment, the processor 101 specifically performs the following operations when performing the determining of the second position for the selection flag based on the first position:
acquiring a target angle and a target distance for the selection mark, and determining a second position indicated by the target angle and the target distance by taking the first position as a reference; or,
determining a reference area based on the first position, acquiring at least one reference position corresponding to the reference area, and determining a second position for the selection mark from the at least one reference position.
In an embodiment, when the processor 101 performs the acquiring of the at least one reference position corresponding to the reference area, specifically, the following operations are performed:
acquiring text content in the reference region;
determining at least one reference translation object based on the text content, and determining the second position based on a reference position of the at least one reference translation object.
In one embodiment, the processor 101, when executing the determining of the at least one reference translation object based on the text content, specifically performs the following operations:
performing semantic recognition on the text content, determining at least one key text in the text content, and taking the key text as a reference translation object; or,
and acquiring a historical translation record corresponding to the text content, and acquiring at least one reference translation object from the historical translation record.
In an embodiment, when the processor 101 executes the obtaining of the selected first position in the page to be translated, the following operation is specifically executed:
receiving a moving operation aiming at a first translation control in a page to be translated, and acquiring a first position corresponding to the moving operation.
In one embodiment, the processor 101, in performing the determining the target area corresponding to the selection flag, includes:
acquiring a control touch state for the first translation control;
and if the control touch state is a stop touch state, acquiring a target position corresponding to the selection mark, and determining a target area corresponding to the target position.
In one embodiment, after the determining the target area corresponding to the selection flag is performed, the processor 101 further performs the following operations: loading a second translation control, controlling a control display area of the second translation control to cover the target area, and selecting and displaying a translation object in the target area;
the determining a translation result of a translation object in the target region includes:
and translating the translation object in the control display area to obtain a translation result, and displaying the translation result.
In one embodiment, when the processor 101 executes the displaying of the translation result, the following operations are specifically performed:
and displaying the translation result in the control display area.
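A simplified sketch of the second translation control as an overlay is given below; showTranslationOverlay and the translate parameter are assumptions, and a concrete control would additionally handle line breaking and styling of the result:

```kotlin
import android.content.Context
import android.graphics.RectF
import android.widget.FrameLayout
import android.widget.TextView

// Covers the target area with a second translation control and shows the translation
// result inside its display area, roughly preserving the position of the original text.
fun showTranslationOverlay(
    context: Context,
    root: FrameLayout,
    targetArea: RectF,
    translationObject: String,
    translate: (String) -> String
) {
    val control = TextView(context).apply {
        text = translate(translationObject)      // translation result replaces the original text
        x = targetArea.left
        y = targetArea.top
        layoutParams = FrameLayout.LayoutParams(
            targetArea.width().toInt(),          // control display area covers the target area
            targetArea.height().toInt()
        )
    }
    root.addView(control)
}
```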
In one embodiment, when performing the determining of the second position for the selection mark based on the first position, the processor 101 specifically performs the following steps:
acquiring a mark display range for the selection mark;
and if the first position is within the mark display range, determining the second position for the selection mark based on the first position.
In one embodiment, after performing the determining that the first position is within the mark display range, the processor 101 further performs the following:
and carrying out hidden display processing on the first translation control in the page to be translated.
In one embodiment, after performing the determining that the first position is within the mark display range, the processor 101 further performs the following:
and acquiring a third position after the first position; if the third position is outside the mark display range, displaying the first translation control in the interface to be translated and cancelling the display of the selection mark.
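The mark display range logic of the last three embodiments could be sketched, under the assumption of a rectangular range and the hypothetical name MarkRangeController, as follows:

```kotlin
import android.graphics.PointF
import android.graphics.RectF

// Tracks whether positions reported while moving the first translation control fall inside
// the mark display range: inside it the first translation control is hidden and the selection
// mark is shown; once a later position leaves the range, the control is shown again and the
// display of the selection mark is cancelled.
class MarkRangeController(private val markDisplayRange: RectF) {
    var firstControlVisible = true
        private set
    var selectionMarkVisible = false
        private set

    fun onPosition(position: PointF) {
        val inside = markDisplayRange.contains(position.x, position.y)
        firstControlVisible = !inside
        selectionMarkVisible = inside
    }
}
```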
In the embodiments of the application, with the above manner, the translation object can be quickly located based on the displayed selection mark while the first translation control is being moved, which greatly shortens the translation operation path, optimizes the translation process and improves translation convenience. Because the position of the selection mark is determined from the moving position and does not overlap it, the mark is displayed clearly and the position the user intends to select can be predicted accurately. The user does not need to frame the recognition region step by step during translation, which shortens the region operation path and saves region selection time. The translation result can be overlaid on the original translation object, so the typesetting effect is basically consistent with the visual effect of the original content, improving readability after translation. Performing small-area OCR recognition only on the target area shortens the translation time, and setting the mark display range avoids false triggering, improving the intelligence of translation.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory or a random access memory.
The above disclosure merely illustrates preferred embodiments of the present application and is not intended to limit its scope; the present application is not limited thereto, and equivalent variations and modifications remain within the scope of the present application.

Claims (14)

1. A method of translation, the method comprising:
acquiring a first position selected in a page to be translated;
determining a second position for a selection mark based on the first position, the selection mark being displayed at the second position, the first position being different from the second position;
and determining a target area corresponding to the selection mark, and determining a translation result of a translation object in the target area.
2. The method of claim 1, wherein the determining a second position for the selection mark based on the first position comprises:
acquiring a target angle and a target distance for a selection mark, and determining a second position indicated by the target angle and the target distance by taking the first position as a reference; or
determining a reference area based on the first position, acquiring at least one reference position corresponding to the reference area, and determining a second position for the selection mark from the at least one reference position.
3. The method of claim 2, wherein the obtaining at least one reference position corresponding to the reference area comprises:
acquiring text content in the reference area;
determining at least one reference translation object based on the text content, and determining the at least one reference position based on a position of the at least one reference translation object.
4. The method of claim 3, wherein the determining at least one reference translation object based on the text content comprises:
performing semantic recognition on the text content, determining at least one key text in the text content, and taking the key text as a reference translation object; or
and acquiring a historical translation record corresponding to the text content, and acquiring at least one reference translation object from the historical translation record.
5. The method according to claim 1, wherein the obtaining a first position selected in the page to be translated comprises:
receiving a moving operation aiming at a first translation control in a page to be translated, and acquiring a first position corresponding to the moving operation.
6. The method of claim 5, wherein the determining a target area corresponding to the selection mark comprises:
acquiring a control touch state for the first translation control;
and if the control touch state is a stop touch state, acquiring a target position corresponding to the selection mark, and determining a target area corresponding to the target position.
7. The method of claim 1, wherein after the determining a target area corresponding to the selection mark, the method further comprises:
loading a second translation control, controlling a control display area of the second translation control to cover the target area, and selecting and displaying a translation object in the target area;
the determining a translation result of a translation object in the target area comprises:
and translating the translation object in the control display area to obtain a translation result, and displaying the translation result.
8. The method of claim 7, wherein said displaying the translation result comprises:
and displaying the translation result in the control display area.
9. The method of claim 1, wherein the determining a second position for the selection mark based on the first position comprises:
acquiring a mark display range for the selection mark;
and if the first position is within the mark display range, determining the second position for the selection mark based on the first position.
10. The method of claim 9, wherein after the determining that the first position is within the mark display range, the method further comprises:
and carrying out hidden display processing on the first translation control in the interface to be translated.
11. The method of claim 9, wherein after the determining that the first position is within the mark display range, the method further comprises:
and acquiring a third position after the first position; if the third position is outside the mark display range, displaying the first translation control in the interface to be translated and cancelling the display of the selection mark.
12. A translation apparatus, the apparatus comprising:
a mark display module, used for acquiring a first position selected in a page to be translated, determining a second position for a selection mark based on the first position, and displaying the selection mark at the second position, the first position being different from the second position;
and an object translation module, used for determining a target area corresponding to the selection mark and determining a translation result of a translation object in the target area.
13. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to carry out the method steps according to any one of claims 1 to 11.
14. An electronic device, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1 to 11.
CN202210259874.3A 2022-03-16 2022-03-16 Translation method, translation device, storage medium and electronic equipment Pending CN114580447A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210259874.3A CN114580447A (en) 2022-03-16 2022-03-16 Translation method, translation device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210259874.3A CN114580447A (en) 2022-03-16 2022-03-16 Translation method, translation device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN114580447A true CN114580447A (en) 2022-06-03

Family

ID=81774646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210259874.3A Pending CN114580447A (en) 2022-03-16 2022-03-16 Translation method, translation device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114580447A (en)

Similar Documents

Publication Publication Date Title
US11543928B2 (en) Method for displaying input method interface of improved accuracy of input, device, and terminal
US20230152940A1 (en) Device, method, and graphical user interface for managing folders
US11301131B2 (en) Method for split-screen display, terminal, and non-transitory computer readable storage medium
CN108089786B (en) User interface display method, device, equipment and storage medium
JP6126255B2 (en) Device, method and graphical user interface for operating a soft keyboard
EP4155920A1 (en) Application screen splitting method and apparatus, storage medium and electric device
WO2019047738A1 (en) Message display method, device, mobile terminal and storage medium
US20230035047A1 (en) Remote assistance method, device, storage medium, and terminal
TW201606631A (en) Context menu utilizing a context indicator and floating menu bar
US20230117213A1 (en) Page display method and electronic device
CN109388309B (en) Menu display method, device, terminal and storage medium
WO2017113624A1 (en) System and method for operating system of mobile device
EP4170476A1 (en) Translation method and electronic device
CN111127469A (en) Thumbnail display method, device, storage medium and terminal
CN111401323A (en) Character translation method, device, storage medium and electronic equipment
CN112995562A (en) Camera calling method and device, storage medium and terminal
CN114580447A (en) Translation method, translation device, storage medium and electronic equipment
CN111859999A (en) Message translation method, device, storage medium and electronic equipment
CN115877939A (en) Input method, electronic device and storage medium
CN115495002A (en) Control method and electronic equipment
CN114895730A (en) Device control method, device, storage medium and electronic device
CN116149530A (en) Translation method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination