CN111144141A - Translation method based on photographing function - Google Patents

Translation method based on photographing function

Info

Publication number
CN111144141A
Authority
CN
China
Prior art keywords
translation
text
image
translated
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911356637.3A
Other languages
Chinese (zh)
Inventor
代晓炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Meizu Technology Co Ltd
Original Assignee
Meizu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Meizu Technology Co Ltd filed Critical Meizu Technology Co Ltd
Priority to CN201911356637.3A
Publication of CN111144141A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72436User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)
  • Telephone Function (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a translation method based on a photographing function. The method comprises the following steps: providing at least two candidate functions on an image shooting interface, the candidate functions comprising a photographing function and a translation function; when an instruction to execute the translation function is detected, acquiring image data collected from the image shooting interface; extracting text content in the image from the image data; translating the text content into translated text; and displaying the translated text on the image shooting interface. The invention solves the technical problems in the prior art that photographing-based translation can only be realized through an application program installed on an intelligent mobile device, that the unnecessary photo files introduced by such translation accumulate in the mobile phone storage, and that the operation process becomes cumbersome and may even require waiting for the lens to focus, which wastes time and lowers translation efficiency.

Description

Translation method based on photographing function
Technical Field
The invention relates to the field of mobile communication, in particular to a translation method based on a photographing function.
Background
With the continuous development of intelligent mobile communication, intelligent mobile devices such as smart phones, tablet computers and intelligent translation devices have become indispensable tools that people rely on in work and life. For the translation function of such devices, most manufacturers offer it in forms such as in-app voice translation or photographing-based translation.
At present, most manufacturers of smart mobile devices, such as smart phones, do not combine the camera and the translation function directly; instead, both are realized through application programs installed on the device. The application calls the camera function and uses the camera to implement translation, performing character recognition on a photo and then translating it. In such a smart phone that combines photographing and translation, the user has to open a downloaded application (e.g. translation software) every time translation is needed, and photographing-based translation introduces unnecessary photo files that accumulate in the mobile phone storage. The operation process therefore becomes cumbersome, and the user may even have to wait for the lens to focus, which wastes time and reduces translation efficiency.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a translation method based on a photographing function, which at least solves the technical problems in the prior art that photographing-based translation can only be realized through an application program installed on an intelligent mobile device, that unnecessary photo files introduced by such translation accumulate in the mobile phone storage, and that the operation process becomes cumbersome and may even require waiting for the lens to focus, which wastes time and lowers translation efficiency.
According to an aspect of the embodiments of the present invention, there is provided a translation method based on a photographing function, applied to a mobile communication device, including: providing at least two candidate functions at an image capture interface, the candidate functions comprising: a photographing function and a translation function; when an instruction for executing the translation function is detected, acquiring image data collected from the image shooting interface; extracting text content in the image according to the image data; translating the text content into a translation text; and displaying the translated text on the image shooting interface.
Optionally, the image capturing interface is a capturing interface of the mobile communication device, and the image data is dynamic image data acquired in real time by a capturing device in the mobile communication device.
Optionally, the step of displaying the translation text on the image capturing interface includes: and marking and displaying the translated text on the image shooting interface.
Optionally, before the step of providing a plurality of candidate functions on the image capturing interface, the method further includes: and presetting a translation language for the translation function.
Optionally, after the step of translating the text content into a translation text, the method further includes: and reading the translated text.
Optionally, after the displaying the translated text on the image capturing interface, the method further includes: and when a click on the translation text is received, displaying the content associated with the translation text.
Optionally, the content associated with the translation text includes: paraphrasing the translation text, or webpage information related to the translation text.
Optionally, after the step of displaying the content associated with the translated text, the method further includes: and processing the content associated with the translated text.
Optionally, the step of processing the content associated with the translated text includes at least one of: collecting, sharing, sending to another terminal, saving locally, exporting, and converting into voice.
Optionally, the webpage information associated with the translation text includes webpage information obtained by querying the translation text with a search engine.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including: a processor; and a memory having a computer readable program stored therein, the electronic device performing the method when the computer readable program is executed by the processor.
In the embodiment of the invention, the text information to be translated is acquired through the image shooting interface, extracted, translated and displayed on the interactive interface. The image data is translated and displayed directly through the mobile phone's photographing function, so the text is translated in real time without storing any photos. This solves the technical problems in the prior art that photographing-based translation can only be realized through an application program installed on an intelligent mobile device, that unnecessary photo files accumulate in the mobile phone storage, and that the operation process becomes cumbersome and may even require waiting for the lens to focus, which wastes time and lowers translation efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a mobile phone photo translation process according to the prior art;
FIG. 2 is a diagram of a mobile phone photo translation according to the prior art;
FIG. 3 is a flowchart of a translation method based on a photographing function according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, there is provided an embodiment of a translation method based on a photographing function. It is noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that given herein.
Fig. 1 is a schematic diagram of a photographing translation process of an intelligent mobile device, such as a user's mobile phone. As shown in fig. 1, in the prior art, when the user holds the phone and needs to translate a target text, the user must photograph the text and store the resulting image in the local storage space of the phone; the text is then collected and translated from the stored image, and the translated text is displayed to the user as the translation result. In this process, in order to translate the text, the user has to invoke the camera through an application program, take a photo with the phone and store the image, which occupies part of the phone's storage space. When there is a lot of content to translate, the storage consumption of the phone becomes extremely high, and the user can only keep deleting the useless photos left over from translation to ensure that enough storage space remains, which greatly degrades the user experience.
Fig. 2 is a schematic diagram of mobile phone photographing and translation. As shown in fig. 2, the user aims the mobile phone camera at the text to be translated, the text is displayed on the interactive interface, and the phone then processes the image to obtain the translated text the user needs. As can be seen from fig. 1, in the prior art an application installed on the intelligent mobile device is required to perform photographing-based translation through the camera, the unnecessary photo files introduced by this process accumulate in the phone, the operation becomes cumbersome, and the user may even have to wait for the lens to focus, which wastes time and lowers translation efficiency.
To solve these problems, an embodiment of the present invention provides a translation method based on a photographing function, applied to a mobile communication device. Fig. 3 is a flowchart of the translation method based on a photographing function according to an embodiment of the present invention. As shown in fig. 3, the method includes the following steps:
step S302, at least two candidate functions are provided on an image shooting interface, and the candidate functions comprise: a photographing function and a translation function.
Specifically, the image capturing interface may be the human-computer interaction interface of the photographing function of the user's mobile phone, where the human-computer interaction interface refers to the touchable screen of the phone. The image shooting interface provides two triggerable function buttons: one button photographs the image currently acquired by the phone's camera, and the other translates the text to be translated in the image currently acquired by the camera into the language the user requires.
It should be noted that, in addition to the photographing function and the translation function, the image capturing interface may also provide further candidate functions, such as a video recording function or an illumination function, all of which are implemented through the camera of the user's mobile phone.
For example, when the user opens the image shooting interface of the mobile phone, three function options appear at the upper right of the interface: photographing, video recording and translation. When the user needs to translate the text in front of them, the user taps the translation button; the phone immediately translates the text in the image data collected by the camera into the preset language and displays the relevant result to the user, satisfying the user's text translation requirement.
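To make the candidate-function selection concrete, the following is a minimal Python sketch of how a capture interface might dispatch the button the user taps; the function names and returned strings are hypothetical and only illustrate that the translation path operates on the live frame without saving a photo.

```python
from enum import Enum, auto

class CandidateFunction(Enum):
    PHOTOGRAPH = auto()
    VIDEO = auto()
    TRANSLATE = auto()

def on_function_selected(function: CandidateFunction, frame: bytes) -> str:
    """Dispatch the button tapped on the image shooting interface (illustrative)."""
    if function is CandidateFunction.PHOTOGRAPH:
        return "photo written to local storage"
    if function is CandidateFunction.TRANSLATE:
        # Translation path: the live frame goes straight to text extraction,
        # no photo file is written to storage.
        return "frame handed to text extraction and translation"
    return "video recording started"

print(on_function_selected(CandidateFunction.TRANSLATE, b""))
```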
Optionally, before the step of providing a plurality of candidate functions on the image capturing interface, the method further includes: and presetting a translation language for the translation function.
Specifically, before translating the text, the user needs to set the translation language. This includes the target language of the translated text and may also include the language of the original text, i.e. from which language to which language the translation is performed.
It should be noted that, instead of presetting the language of the original text, the language to be translated may be recognized automatically during image data collection, so that the user does not need to judge and set the source language separately; this saves user operations and increases translation efficiency.
For example, the user sets the target translation language on the mobile phone. When the translation language is set to English, the text extracted from the image data collected by the phone's camera is translated into English by the phone's processor according to the preset translation language, for example yielding "What", and the translated result is displayed to the user as the final translation result.
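As a sketch of how the preset translation language and the optional automatic source-language recognition might be represented, the following Python dataclass is purely illustrative; the field names and defaults are assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class TranslationSettings:
    source_language: str = "auto"  # "auto": let the recognizer detect the original language
    target_language: str = "en"    # target language preset by the user before translating

settings = TranslationSettings(target_language="en")
print(f"translate {settings.source_language} -> {settings.target_language}")
```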
Optionally, after the step of translating the text content into a translation text, the method may further include:
and reading the translated text.
For example, in one usage scenario, after the step of translating the text content into translated text, the mobile phone may directly read an English text aloud using TTS text-to-speech technology. For instance, the user photographs an apple, the translated text displayed on the shooting interface of the phone is "apple", and the phone then directly reads the word "apple" aloud using TTS, so that the user can conveniently and directly obtain the related information.
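As a rough illustration of reading the translated text aloud, the sketch below uses the pyttsx3 library as a stand-in for the phone's built-in TTS engine; the patent does not name a specific engine, so this choice is an assumption for demonstration only.

```python
import pyttsx3  # assumed stand-in for the device's text-to-speech engine

def read_aloud(translated_text: str) -> None:
    """Speak the translated text so the user gets the result without reading."""
    engine = pyttsx3.init()      # initialise the local TTS engine
    engine.say(translated_text)  # queue the translated text for playback
    engine.runAndWait()          # block until the speech has finished

read_aloud("apple")
```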
In step S304, when an instruction to execute the translation function is detected, image data collected from the image capturing interface is acquired.
Specifically, when the user needs to use the mobile phone for translation, the phone acquires the text to be translated through the camera. When the processor receives the image data collected by the camera, it extracts the text in the image data, translates it into the target language according to the existing translation rules, and returns the result to the image shooting interface for the user to use.
Optionally, the image capturing interface is a capturing interface of the mobile communication device, and the image data is dynamic image data acquired in real time by a capturing device in the mobile communication device.
Specifically, according to the embodiment of the present invention, the image capturing interface is a functional interface of the user's mobile phone for taking pictures and shooting video, and these functions are driven by a camera electrically connected to the phone's main board. Through the photographing and video functions, the camera in the phone acquires the external image and presents it on the shooting interface in the form of image data for the user to operate on.
It should be noted that dynamic image data means that external image data is obtained in real time purely through the camera device of the phone, without taking photos or recording video. During real-time acquisition, the processor analyzes each frame transmitted to it by the camera and passes the result of processing the dynamic image data to the subsequent step.
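A minimal sketch, assuming OpenCV is available, of obtaining dynamic image data frame by frame without writing any photo or video file; the camera index and loop structure are illustrative, not taken from the patent.

```python
import cv2  # assumed: OpenCV provides access to the camera device

def stream_frames(camera_index: int = 0):
    """Yield camera frames in real time; nothing is written to storage."""
    capture = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            yield frame  # each frame is handed directly to the processing step
    finally:
        capture.release()
```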
Step S306, extracting text content in the image according to the image data.
Specifically, after the dynamic real-time data is acquired through the camera device of the user's mobile phone, the processor processes each frame of image data and performs text processing on the image so that the text in the image is in a state from which it can be extracted; the processor then extracts it as independent text information for subsequent translation.
For example, when the processor of the user's mobile phone receives a piece of image data, it performs binarization on the image data. Binarization selects an appropriate threshold on a gray-level image with 256 brightness levels to obtain a binarized image that still reflects the overall and local features of the original image. Binary images play a very important role in digital image processing: binarizing the image facilitates further processing, makes the image simpler, reduces the amount of data, and highlights the outline of the target of interest. Processing and analysis of the binary image then proceed, the first step being to binarize the gray-level image to obtain the binarized image. The processor can quickly recognize the text content in the binarized image data, and the text content is stored for translation.
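The binarization and text extraction described above could look roughly like the following sketch, assuming OpenCV for Otsu thresholding and the Tesseract engine (via pytesseract) as a stand-in for whatever character recognizer the device actually uses.

```python
import cv2
import pytesseract  # assumed stand-in for the device's character-recognition module

def extract_text(frame) -> str:
    """Binarise one frame and extract its text content for translation."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # 256-level gray image
    _, binary = cv2.threshold(
        gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU  # automatic threshold selection
    )
    return pytesseract.image_to_string(binary)  # recognised text, kept for translation
```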
Step S308, the text content is translated into a translation text.
Specifically, after the processor extracts and collects the image text, the user's mobile phone translates the text content according to the existing translation rules, for example by querying a local or remote database, and returns the translated result to the image shooting interface or another display interface for the user to view and use. The translation rules may be set according to a pre-installed translation plug-in, for example according to the Oxford dictionary and related translation rule program code, so that when text to be translated is received it is translated according to those rules. Alternatively, the translation rules may use an online translation function over the network: the text content is sent to a fixed, stable translation address, and the response from that address is obtained within a short time, thereby completing the translation work.
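The translation-rule step might be sketched as below; the local dictionary entries are invented placeholders, and the online branch is only indicated because the patent does not name a specific translation service or API.

```python
# Illustrative local "translation plug-in": a tiny dictionary standing in for a
# pre-installed rule set such as a bundled bilingual dictionary.
LOCAL_DICTIONARY = {"苹果": "apple", "你好": "hello"}  # placeholder entries

def translate(text: str, target_language: str = "en") -> str:
    """Translate extracted text using local rules, falling back to an online service."""
    key = text.strip()
    if target_language == "en" and key in LOCAL_DICTIONARY:
        return LOCAL_DICTIONARY[key]  # local rule-based lookup
    # Fallback: send the text to a fixed online translation endpoint (not shown here,
    # since the patent does not specify the service or its interface).
    return key
```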
And step S310, displaying the translation text on the image shooting interface.
Specifically, the translated text is displayed on an image shooting interface of the mobile phone of the user as a translation result, so that the user can view the translation result.
Optionally, the step of displaying the translation text on the image capturing interface includes: and marking and displaying the translated text on the image shooting interface.
Specifically, after the user's mobile phone finishes the translation work, the translated text is displayed on the image shooting interface as output data. It may be displayed around the original text to be translated, with its content shown as a marked annotation, so that the user can see at a glance what the translation of the original text is, which improves the user experience.
It should be noted that the translated text may also be displayed on top of the text to be translated, covering the original text content, with fonts of different colors so that the user can see intuitively which sentence has been translated. Covering the original content may mean automatically removing the original text and displaying the translated text at the original text's position, which makes the result more intuitive and improves the user experience.
For example, when the user holds the phone and aims it at the text to be translated, the processor extracts and translates the text and displays the translated text, for example "Who am I", below the original text on the image shooting interface, marked in a red font. From the position of the red font, the user can intuitively tell that "Who am I" is the translation of the original text.
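A minimal sketch, again assuming OpenCV, of marking the translated text in red just below the original text region on the preview frame; the coordinates and font choices are illustrative only.

```python
import cv2

def annotate_frame(frame, translated_text: str, origin: tuple):
    """Draw the translated text in red just below the original text region."""
    x, y = origin
    cv2.putText(
        frame, translated_text, (x, y + 30),
        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2,  # (0, 0, 255) is red in BGR
    )
    return frame
```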
Optionally, after the displaying the translated text on the image capturing interface, the method further includes: and when a click on the translation text is received, displaying the content associated with the translation text.
Specifically, once the user's mobile phone has finished the translation work and displayed the translated text on the image shooting interface, it can further allow the user to click on the translated text. After seeing the translated content, the user often has questions about related content and needs a convenient way to look it up or have it explained. The embodiment of the invention may therefore also include the following function: when the user clicks the translated text, content associated with the translated text is displayed for the user to further understand and operate on.
Optionally, the content associated with the translation text includes: paraphrasing the translation text, or webpage information related to the translation text.
Specifically, in the above embodiment of the present invention, when the user clicks the translated text, the mobile phone displays content associated with it. The associated content may be a paraphrase of the translated text, or related information data found for the translated text on a web page.
For example, when the user clicks the translated text "neural network model", the image capturing interface displays an explanation of "neural network model" and associates it with the machine learning model: "A neural network model is described on the basis of a mathematical model of neurons. An artificial neural network is a description of the first-order characteristics of the human brain system. Briefly, it is a mathematical model. A neural network model is represented by its network topology, node characteristics, and learning rules." In this process, therefore, the user not only obtains the content of the translated text "neural network model" itself, but also a deeper explanation of it, so that the user understands what a neural network model is, and even what uses it has, for the user's subsequent processing and use.
For another example, when the user clicks the translated text "neural network model", the image capturing interface displays web page information about "neural network model", so that the user obtains not only the translated text itself but also related information available on the network.
It should be noted that the associated content of the translated text may be displayed directly around the translated text and marked with fonts of different colors, or in a small local dialog box that pops up after the user clicks the translated text and is dedicated to showing the associated content. The specific display mode depends on the specific application environment and is not limited here.
Optionally, the webpage information associated with the translation text includes webpage information obtained by querying the translation text with a search engine.
Specifically, the content associated with the translated text mentioned in the above embodiment may be web page information. Using the phone's Internet access, a designated search engine such as Baidu or Google is connected, the content related to the translated text is searched through the search engine, and, according to certain extraction rules, network information useful to the user is extracted, fed back and displayed to the user.
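As an illustration of querying a search engine for the translated text, the sketch below only composes the query URL; result extraction and ranking are omitted, and the two base URLs are ordinary public search endpoints rather than anything specified by the patent.

```python
from urllib.parse import quote_plus

SEARCH_BASES = {
    "baidu": "https://www.baidu.com/s?wd=",
    "google": "https://www.google.com/search?q=",
}

def build_search_url(translated_text: str, engine: str = "baidu") -> str:
    """Compose a search-engine query URL for the translated text."""
    return SEARCH_BASES[engine] + quote_plus(translated_text)

print(build_search_url("neural network model"))
```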
It should be noted that the network information is screened and selected according to certain rules. A neural network model can be used to learn and train on the user's network usage habits, so that the information extracted from the network has the greatest value and usefulness to the user.
For example, a neural network learning model learns from the search records of the browser on the user's mobile phone, classifies those records and extracts the search types the user uses most often. After these search types have been combined, when the associated content of the translated text needs to be searched, results matching the user's most common search types are actively extracted, so as to fit the user's information acquisition habits and improve the user experience.
Optionally, after the step of displaying the content associated with the translated text, the method further includes: and processing the content associated with the translated text.
Optionally, the step of processing the content associated with the translated text includes at least one of: collecting, sharing, sending to another terminal, saving locally, exporting, and converting into voice.
Specifically, according to the above embodiment, after the user clicks the translated text and obtains the related content, the user may further operate on the related content: it may be collected, shared, sent to another terminal, saved locally, exported, or converted into voice, so that the user can directly process and use the translated text and its associated content.
For example, the user photographs the text "Neural Network"; the translated text displayed on the shooting interface of the phone is "neural network model", and the content associated with the translated text is "a neural network model is described on the basis of a mathematical model of neurons …" and so on. Using TTS text-to-speech technology, the phone reads this associated content aloud directly, so that the user can conveniently and directly obtain the related information.
It should be noted that processing option buttons for the content associated with the translated text may be provided on the local pop-up window of the associated content, so that the user can click to select the desired processing mode.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including: a processor; and a memory having a computer readable program stored therein, the electronic device performing the method when the computer readable program is executed by the processor.
Specifically, the method executed by the electronic device may be clearly obtained according to the description of the translation method based on the photographing function in the embodiment of the present invention, and details are not repeated here.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A translation method based on a photographing function is applied to a mobile communication device, and the method comprises the following steps:
providing at least two candidate functions at an image capture interface, the candidate functions comprising: a photographing function and a translation function;
when an instruction for executing a translation function is detected, acquiring image data acquired from the image shooting interface;
extracting text content in the image according to the image data;
translating the text content into a translation text;
and displaying the translated text on the image shooting interface.
2. The method of claim 1, wherein the image capturing interface is a capturing interface of the mobile communication device, and the image data is dynamic image data captured in real time by a capturing device in the mobile communication device.
3. The method of claim 1, wherein the step of displaying the translation text on the image capture interface comprises:
and marking and displaying the translated text on the image shooting interface.
4. The method of claim 1, wherein after the step of translating the text content into translation text, the method further comprises:
and reading the translated text.
5. The method of claim 1, wherein after displaying the translated text on the image capture interface, further comprising:
and when a click on the translation text is received, displaying the content associated with the translation text.
6. The method of claim 5, wherein the translation text associated content comprises: paraphrasing the translation text, or webpage information related to the translation text.
7. The method of claim 6, wherein after the step of displaying the content associated with the translated text, the method further comprises:
and processing the content associated with the translated text.
8. The method of claim 7, wherein the step of processing the content associated with the translated text comprises:
at least one of: collecting, sharing, sending to another terminal, saving locally, exporting, and converting into voice.
9. The method of claim 6, wherein the web page information associated with the translated text comprises web page information obtained by querying the translated text with a search engine.
10. An electronic device, comprising:
a processor; and
memory having stored therein a computer readable program which, when executed by the processor, the electronic device performs the method of any of claims 1-9.
CN201911356637.3A 2019-12-25 2019-12-25 Translation method based on photographing function Pending CN111144141A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911356637.3A CN111144141A (en) 2019-12-25 2019-12-25 Translation method based on photographing function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911356637.3A CN111144141A (en) 2019-12-25 2019-12-25 Translation method based on photographing function

Publications (1)

Publication Number Publication Date
CN111144141A true CN111144141A (en) 2020-05-12

Family

ID=70519981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911356637.3A Pending CN111144141A (en) 2019-12-25 2019-12-25 Translation method based on photographing function

Country Status (1)

Country Link
CN (1) CN111144141A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112135054A (en) * 2020-09-27 2020-12-25 广东小天才科技有限公司 Method and system for realizing photographing translation, smart watch and storage medium
CN112989846A (en) * 2021-03-10 2021-06-18 深圳创维-Rgb电子有限公司 Character translation method, character translation device, character translation apparatus, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination