US20180113859A1 - System and method for real time translation - Google Patents
- Publication number
- US20180113859A1 US20180113859A1 US15/692,707 US201715692707A US2018113859A1 US 20180113859 A1 US20180113859 A1 US 20180113859A1 US 201715692707 A US201715692707 A US 201715692707A US 2018113859 A1 US2018113859 A1 US 2018113859A1
- Authority
- US
- United States
- Prior art keywords
- touchscreen
- image
- scan
- file
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G06F17/289—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/0035—User-machine interface; Control console
- H04N1/00405—Output means
- H04N1/00408—Display of information to the user, e.g. menus
- H04N1/00411—Display of information to the user, e.g. menus the display also being used for user input, e.g. touch screen
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
- G06F40/58—Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/0035—User-machine interface; Control console
- H04N1/00405—Output means
- H04N1/00408—Display of information to the user, e.g. menus
- H04N1/00413—Display of information to the user, e.g. menus using menus, i.e. presenting the user with a plurality of selectable options
- H04N1/00416—Multi-level menus
- H04N1/00419—Arrangements for navigating between pages or parts of the menu
- H04N1/00427—Arrangements for navigating between pages or parts of the menu using a menu list
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/0035—User-machine interface; Control console
- H04N1/00405—Output means
- H04N1/00408—Display of information to the user, e.g. menus
- H04N1/00413—Display of information to the user, e.g. menus using menus, i.e. presenting the user with a plurality of selectable options
- H04N1/00437—Intelligent menus, e.g. anticipating user selections
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/0035—User-machine interface; Control console
- H04N1/00405—Output means
- H04N1/00408—Display of information to the user, e.g. menus
- H04N1/00472—Display of information to the user, e.g. menus using a pop-up window
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2201/00—Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
- H04N2201/0077—Types of the still picture apparatus
- H04N2201/0094—Multifunctional device, i.e. a device capable of all of reading, reproducing, copying, facsimile transception, file transception
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Machine Translation (AREA)
- Facsimiles In General (AREA)
- User Interface Of Digital Computer (AREA)
Description
- This application claims the benefit of U.S. Provisional Application No. 62/410,520, filed Oct. 20, 2016, which is incorporated herein by reference.
- This application relates generally to a multifunction peripheral with integrated machine translation. The application relates more particularly to translation of specified areas of a scanned document selected from a preview image displayed on a device touchscreen.
- Document processing devices include printers, copiers, scanners and e-mail gateways. More recently, devices employing two or more of these functions are found in office environments. These devices are referred to as multifunction peripherals (MFPs) or multifunction devices (MFDs). As used herein, MFPs are understood to comprise printers, alone or in combination with other of the afore-noted functions. It is further understood that any suitable document processing device can be used.
- MFPs are becoming increasingly capable and increasingly complex. User control and interaction is typically made via a touchscreen working in concert with an intelligent controller comprised of an on board computer system.
- MFPs may have integrated scanners that will take a physical document and generate a corresponding electronic image scan file. Many MFPs have an ability to convert a scan file to various file formats, such as portable document format (PDF), Joint Photographic Experts Group (JPEG), Graphics Interchange Format (GIF), Tagged Image File Format (TIFF), bitmap (BMP), Portable Network Graphics (PNG), or the like.
- In accordance with an example embodiment of the subject application, a system and method for real time translation of scanned documents includes a multifunction peripheral having an intelligent controller including a processor and associated memory. A scan engine generates an electronic image of a tangible document and a touchscreen user interface receives a user scan instruction. The scan operation generates a scan file which provides an image of the document on the touchscreen. A user selects an area of the generated image via the touchscreen. An optical character recognition operation generates a character based file which is then translated from a source language to a target language. The resultant translation is then displayed on the touchscreen.
- Various embodiments will become better understood with regard to the following description, appended claims and accompanying drawings wherein:
-
FIG. 1 is an example embodiment of a real time scan translation system; -
FIG. 2 is an example embodiment of a graphical rendering of a device control screen; -
FIG. 3 is an example embodiment of a graphical rendering of a device control screen; -
FIG. 4 is an example embodiment of a graphical rendering of a device control screen; -
FIG. 5 is an example embodiment of a graphical rendering of a device control screen; -
FIG. 6 is an example embodiment of a networked digital device; and -
FIG. 7 is a flowchart of an example embodiment of a system for real time translation of a selected document scan area. - The systems and methods disclosed herein are described in detail by way of examples and with reference to the figures. It will be appreciated that modifications to disclosed and described examples, arrangements, configurations, components, elements, apparatuses, devices, methods, systems, etc. can suitably be made and may be desired for a specific application. In this disclosure, any identification of specific techniques, arrangements, etc. is either related to a specific example presented or is merely a general description of such a technique, arrangement, etc. Identifications of specific details or examples are not intended to be, and should not be, construed as mandatory or limiting unless specifically designated as such.
- Once a file has been scanned on an MFP into an electronic format as noted above, it is possible to run an application on a computer or workstation which is able to extract character information from graphical files in a process called optical character recognition (OCR). A user loads an image file and the application generates a character-inclusive file that can be printed or edited with a standard word processing application. Characters may be encoded in any suitable format such as ASCII, EBCDIC, ISO 8859, Unicode, JIS or the like. Characters may include phonetic characters, such as those used in Western Europe or the Americas, or ideographs, such as those used in Asian countries.
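By way of a non-limiting illustration (not part of the disclosed method), the encoding point above can be made concrete with Python's standard codecs: the same ideographic text occupies different byte counts under Unicode (UTF-8) and Shift JIS, a JIS-based encoding, while ASCII cannot represent it at all.

```python
# Illustrative only: the same characters occupy different byte counts
# under encodings of the kinds named above (ASCII, Unicode/UTF-8, and
# Shift JIS, a JIS-based encoding). Uses only standard Python codecs.

phonetic = "scan"     # Latin (phonetic) characters
ideographic = "翻訳"  # "translation" written in Japanese ideographs

# ASCII covers the phonetic text but rejects the ideographs.
assert phonetic.encode("ascii") == b"scan"
try:
    ideographic.encode("ascii")
    ascii_ok = True
except UnicodeEncodeError:
    ascii_ok = False

utf8_len = len(ideographic.encode("utf-8"))      # 3 bytes per ideograph
sjis_len = len(ideographic.encode("shift_jis"))  # 2 bytes per ideograph
print(ascii_ok, utf8_len, sjis_len)  # False 6 4
```

This is why an OCR output format must carry an encoding capable of the source script before any translation step can run.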
- Another application may take a character encoded file and perform a machine translation between languages. This can also be done online, such as by use of GOOGLE translate, a service that instantly translates words, phrases, and web pages between English and over 100 other languages.
- From the foregoing, if one desires to scan a document for translation, it can be a time consuming process. First, a scan has to be made, such as by an MFP. A resulting scan file must be captured and loaded into an OCR program for conversion. The resultant OCR file must then be loaded into a machine translation application for conversion. If a printout is then desired, the machine translated file must be brought to or sent to an MFP for printing. In such an instance, the entire translated document will be printed, unless the user determines which page or pages are needed and manually selects only these pages for printing.
- This process is a time consuming and labor intensive mechanism for securing a printout of a machine translation of a scanned document. In addition, it requires more processing power than may be needed if only a portion of a scanned document is desired for translation, as there is generally no mechanism to translate only a portion of a page. This can lead to problems, such as when a document includes multiple languages, since translation programs need to identify a source file language and a target file language to work. In some instances, a document may contain quotes or excerpts in one language and a text body in another language. A user may only need to translate selected portions of a document. Example embodiments herein provide for real time scanning of documents with machine translation of document portions selected by a user working from a preview image displayed on a device touchscreen. The translated portion is suitably superimposed on the preview image and a user may print the resultant, hybridized page. This may all be completed in a single operation.
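The partial-translation idea above can be sketched minimally as follows. The OCR and translation functions here are hypothetical stand-ins (a toy phrase dictionary, not a real engine or service), but the structure shows how restricting translation to a selected region sidesteps the mixed-language problem.

```python
# Hypothetical stand-ins: a real system would call an OCR engine and a
# machine translation service here. Only the selected region's text is
# ever handed to the translator.

def ocr_region(page, region):
    """Toy OCR: return the text recorded for one region of the page."""
    return page[region]

def translate(text, source, target):
    """Toy translator backed by a tiny phrase dictionary."""
    phrases = {("en", "ja"): {"hello": "こんにちは"}}
    return phrases[(source, target)].get(text, text)

# A page whose body is English but whose footnote is French.
page = {"body": "hello", "footnote": "bonjour"}

# Translating only the selected region leaves the French footnote
# untouched, so no second source language ever needs to be identified.
result = translate(ocr_region(page, "body"), "en", "ja")
print(result)  # こんにちは
```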
- In accordance with the subject application,
FIG. 1 illustrates an example embodiment of a real time scan translation system 100 running on MFP 104. MFP 104 includes a user interface 108 including a touchscreen 112 configured to generate graphical or text images, and suitably programmable to generate a soft keyboard for numeric or text entry. In the illustrated example embodiment, real time scan translation on the MFP 104 is accomplished through a series of user interface renderings 116, 120, 124 and 128, additional detail for which is found in FIGS. 2-7, respectively. -
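The control screens described next allow a source language to be selected by the user or auto-detected. As one hedged sketch of such auto-detection (a production device would use a statistical language identifier, not this heuristic), the source script can be approximated by inspecting Unicode code-point ranges:

```python
def guess_source_language(text: str) -> str:
    """Crude source-language hint from Unicode ranges. Illustrative
    only; real systems use statistical language identification."""
    for ch in text:
        cp = ord(ch)
        # Hiragana/katakana or common CJK ideographs suggest Japanese.
        if 0x3040 <= cp <= 0x30FF or 0x4E00 <= cp <= 0x9FFF:
            return "ja"
    return "en"  # fall back to Latin-script English

print(guess_source_language("Scan this page"))  # en
print(guess_source_language("この文書を翻訳"))  # ja
```

A guess of this kind would then be confirmed or overridden by the user on the touchscreen, as the embodiment below describes.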
Interface rendering 116, with added reference to FIG. 2, includes a graphical rendering of a device control screen 204 for conversion of a selected portion of a paper document in a source language to a displayed or printed document in a target language. Source and target documents may be selected by a user, or specific applications for a particular translation may be launched. As noted above, it may also be possible to auto-detect a source language, which auto-detection may suitably be confirmed by a user via touchscreen interface 112. In the illustrated example embodiment, a translation from English to Japanese is selected, and a scan of a paper document is commenced by depressing soft scan key 208. - Next, with added reference to
FIG. 3, interface rendering 120 depicts a preview or thumbnail image 304 from a scan initiated by the pressing of the scan key 208 of FIG. 2. With added reference to FIG. 4, a graphical rendering of device screen 404 shows a prompt 408 for commencing a document area selection on preview image 304, suitably commenced by pressing soft OK button 412. - With added reference to
FIG. 5, interface rendering 504 shows preview image 304 with an area 308 selected for translation. Such selection is suitably made by any touchscreen gesture on the device screen 504, such as touching boundaries such as edges or corners, dragging a box or boundaries by dragging a finger across the touchscreen, or dragging and dropping area templates. A user can also select the entire preview image, for example by double tapping the preview image. In the illustrated example, a translation of selected area 308 occurs as soon as the area selection has been made such that translated text is superimposed over the selected area 308. From that point, the user can print or transmit the hybridized image such as by selection of soft keys for emailing 508, saving to folder 512, printing 516 or faxing 520. - Turning now to
FIG. 6, illustrated is an example embodiment of a networked digital device comprised of document rendering system 600 suitably comprised within an MFP, such as with MFP 104 of FIG. 1. Included in controller 601 are one or more processors, such as that illustrated by processor 602. Each processor is suitably associated with non-volatile memory, such as ROM 604, and random access memory (RAM) 606, via a data bus 612. -
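Returning briefly to the area selection of FIG. 5, the drag gesture described there reduces to normalizing two touch points into a rectangle clamped to the preview image. A minimal sketch follows, assuming preview-pixel coordinates; the function name and coordinate convention are illustrative, not taken from the disclosure.

```python
def selection_area(p1, p2, width, height):
    """Turn the press and release points of a drag gesture into a
    (left, top, right, bottom) rectangle clamped to the preview image."""
    (x1, y1), (x2, y2) = p1, p2
    left, right = sorted((x1, x2))   # drag direction does not matter
    top, bottom = sorted((y1, y2))
    # Clamp so a drag that strays past the preview edge still yields
    # a valid selection area.
    return (max(0, left), max(0, top),
            min(width, right), min(height, bottom))

# An upward-left drag and an out-of-bounds drag on a 600x800 preview.
print(selection_area((120, 300), (40, 80), 600, 800))   # (40, 80, 120, 300)
print(selection_area((-10, -5), (700, 900), 600, 800))  # (0, 0, 600, 800)
```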
Processor 602 is also in data communication with a storage interface 608 for reading or writing to a storage 616, suitably comprised of a hard disk, optical disk, solid-state disk, cloud-based storage, or any other suitable data storage as will be appreciated by one of ordinary skill in the art. -
Processor 602 is also in data communication with a network interface 610 which provides an interface to a network interface controller (NIC) 614, which in turn provides a data path to any suitable wired or physical network connection 620, or to a wireless data connection via wireless network interface 618. Example wireless connections include cellular, Wi-Fi, Bluetooth, NFC, wireless universal serial bus (wireless USB), satellite, and the like. Example wired interfaces include Ethernet, USB, IEEE 1394 (FireWire), Lightning, telephone line, or the like. -
Processor 602 can also be in data communication with any suitable user input/output (I/O) interface 619 which provides data communication with user peripherals, such as displays, keyboards, mice, track balls, touchscreens, or the like. - Also in data communication with
data bus 612 is a document processor interface 622 suitable for data communication with MFP functional units 650. In the illustrated example, these units include copy hardware 640, scan hardware 642, print hardware 644 and fax hardware 546 which together comprise MFP functional hardware 650. Hardware monitors suitably provide device event data, working in concert with suitable monitoring systems. By way of further example, monitoring systems may include page counters, sensor output, such as consumable level sensors, temperature sensors, power quality sensors, device error sensors, door open sensors, and the like. It will be understood that functional units are suitably comprised of intelligent units, including any suitable hardware or software platform. - Referring next to
FIG. 7, illustrated is a flowchart 700 of an example embodiment of a system for real time translation of a selected document scan area. The process commences at block 704 and a document is scanned at block 708. A source language is established at block 712, either by auto-detection or user specification as noted above. Next, a target language is selected by a user or by virtue of a previously chosen application at block 716. An optical character recognition operation is performed at block 720, and a document preview generated on a touchscreen at block 724. A user is prompted to select a scan area at block 728, which area is received in block 732 and a translation of characters in the selected area completed at block 736, resulting in a display of the translation in block 740, suitably superimposing an image of the translation over the selected area. Alternative embodiments may comprise completion of an OCR operation and a translation for an entire document while revealing only a portion of the translated document associated with a selected area. In other embodiments, an OCR may be made of an entire document and a translation made only for the selected area. In still another embodiment, an OCR and translation operation may occur solely on characters in the selected area. - Next, a user may choose to print a document from the displayed, hybrid image at block 744, or perform any other desired action such as saving, e-mailing or faxing the image. Processing then ends at block 752. - While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the spirit and scope of the inventions.
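The flow of FIG. 7 described above (blocks 704 through 752) can be summarized as a short pipeline. The OCR and translation stages below are toy stand-ins, and all names are illustrative rather than taken from the disclosure; the point is the ordering of scan, OCR, area selection, translation, and superimposition.

```python
# Illustrative pipeline for the FIG. 7 flow, with toy stand-ins for the
# OCR and translation engines.

def real_time_translate(scan, source, target, area, ocr, translate):
    """Scan -> OCR (block 720) -> area selection (blocks 728-732)
    -> translate (block 736) -> superimpose for display (block 740)."""
    recognized = ocr(scan)                 # character-based file
    selected_text = recognized[area]       # user-selected area only
    translated = translate(selected_text, source, target)
    return {**recognized, area: translated}  # hybridized page

ocr = lambda scan: dict(scan)                            # toy OCR stand-in
translate = lambda text, s, t: {"hello": "こんにちは"}[text]  # toy translator

hybrid = real_time_translate(
    {"header": "memo", "body": "hello"}, "en", "ja", "body", ocr, translate)
print(hybrid)  # {'header': 'memo', 'body': 'こんにちは'}
```

The hybridized result maps directly onto the print, save, e-mail, or fax options offered at block 744.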
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/692,707 US20180113859A1 (en) | 2016-10-20 | 2017-08-31 | System and method for real time translation |
US16/539,460 US10528679B2 (en) | 2016-10-20 | 2019-08-13 | System and method for real time translation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662410520P | 2016-10-20 | 2016-10-20 | |
US15/692,707 US20180113859A1 (en) | 2016-10-20 | 2017-08-31 | System and method for real time translation |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/539,460 Continuation US10528679B2 (en) | 2016-10-20 | 2019-08-13 | System and method for real time translation |
Publications (1)
Publication Number | Publication Date |
---|---|
US20180113859A1 true US20180113859A1 (en) | 2018-04-26 |
Family
ID=61971512
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/692,707 Abandoned US20180113859A1 (en) | 2016-10-20 | 2017-08-31 | System and method for real time translation |
US16/539,460 Active US10528679B2 (en) | 2016-10-20 | 2019-08-13 | System and method for real time translation |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/539,460 Active US10528679B2 (en) | 2016-10-20 | 2019-08-13 | System and method for real time translation |
Country Status (1)
Country | Link |
---|---|
US (2) | US20180113859A1 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180115663A1 (en) * | 2016-10-20 | 2018-04-26 | Kabushiki Kaisha Toshiba | System and method for device gamification during job processing |
US10839272B2 (en) * | 2018-12-28 | 2020-11-17 | Kyocera Document Solutions Inc. | Image forming apparatus that prints image forming data including sentences in plurality of languages, on recording medium |
CN112269467A (en) * | 2020-08-04 | 2021-01-26 | 深圳市弘祥光电科技有限公司 | Translation method based on AR and AR equipment |
US20210256210A1 (en) * | 2020-02-19 | 2021-08-19 | Intuit Inc. | Financial document text conversion to computer readable operations |
WO2022116523A1 (en) * | 2020-12-04 | 2022-06-09 | 北京搜狗科技发展有限公司 | Image processing method, image recognition apparatus, electronic device, and medium |
US11385916B2 (en) * | 2020-03-16 | 2022-07-12 | Servicenow, Inc. | Dynamic translation of graphical user interfaces |
US11574134B2 (en) * | 2018-12-20 | 2023-02-07 | Lexmark International, Inc. | Systems and methods of processing a document in an imaging device |
US11580312B2 (en) | 2020-03-16 | 2023-02-14 | Servicenow, Inc. | Machine translation of chat sessions |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6852666B2 (en) * | 2017-12-26 | 2021-03-31 | 京セラドキュメントソリューションズ株式会社 | Image forming device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050122537A1 (en) * | 2003-12-05 | 2005-06-09 | Shin Dong-Hyup | Combination machine having an image data conversion function and image data conversion method therefor |
US20140081619A1 (en) * | 2012-09-18 | 2014-03-20 | Abbyy Software Ltd. | Photography Recognition Translation |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050197825A1 (en) * | 2004-03-05 | 2005-09-08 | Lucent Technologies Inc. | Personal digital assistant with text scanner and language translator |
WO2015186312A1 (en) * | 2014-06-06 | 2015-12-10 | Toppan Printing Co., Ltd. | Image reading device |
- 2017-08-31: US application US15/692,707 published as US20180113859A1 (status: Abandoned)
- 2019-08-13: US application US16/539,460 granted as US10528679B2 (status: Active)
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180115663A1 (en) * | 2016-10-20 | 2018-04-26 | Kabushiki Kaisha Toshiba | System and method for device gamification during job processing |
US10237429B2 (en) * | 2016-10-20 | 2019-03-19 | Kabushiki Kaisha Toshiba | System and method for device gamification during job processing |
US11574134B2 (en) * | 2018-12-20 | 2023-02-07 | Lexmark International, Inc. | Systems and methods of processing a document in an imaging device |
US10839272B2 (en) * | 2018-12-28 | 2020-11-17 | Kyocera Document Solutions Inc. | Image forming apparatus that prints image forming data including sentences in plurality of languages, on recording medium |
US20210256210A1 (en) * | 2020-02-19 | 2021-08-19 | Intuit Inc. | Financial document text conversion to computer readable operations |
US11783128B2 (en) * | 2020-02-19 | 2023-10-10 | Intuit Inc. | Financial document text conversion to computer readable operations |
US11385916B2 (en) * | 2020-03-16 | 2022-07-12 | Servicenow, Inc. | Dynamic translation of graphical user interfaces |
US11580312B2 (en) | 2020-03-16 | 2023-02-14 | Servicenow, Inc. | Machine translation of chat sessions |
US11836456B2 (en) | 2020-03-16 | 2023-12-05 | Servicenow, Inc. | Machine translation of chat sessions |
CN112269467A (en) * | 2020-08-04 | 2021-01-26 | Shenzhen Hongxiang Optoelectronic Technology Co., Ltd. | Translation method based on AR and AR equipment |
WO2022116523A1 (en) * | 2020-12-04 | 2022-06-09 | Beijing Sogou Technology Development Co., Ltd. | Image processing method, image recognition apparatus, electronic device, and medium |
Also Published As
Publication number | Publication date |
---|---|
US10528679B2 (en) | 2020-01-07 |
US20190370339A1 (en) | 2019-12-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10528679B2 (en) | System and method for real time translation | |
JP6953230B2 (en) | A device for setting a file name, etc. on a scanned image, its control method, and a program. | |
JP6891073B2 (en) | A device for setting a file name, etc. on a scanned image, its control method, and a program. | |
JP6968647B2 (en) | A device for setting a file name for a scanned image, its control method, and a program. | |
US8610929B2 (en) | Image processing apparatus, control method therefor, and program | |
US20130050743A1 (en) | System and Method of Print Job Retrieval from the Cloud | |
JP7062388B2 (en) | A device for setting a file name, etc. on a scanned image, its control method, and a program. | |
JP6720795B2 (en) | Equipment, information processing device, information processing system, information processing method, and program | |
US8831351B2 (en) | Data processing apparatus, method for controlling data processing apparatus, and non-transitory computer readable storage medium | |
US20200174637A1 (en) | Device, method, and storage medium | |
JP4983610B2 (en) | Image processing device | |
US20170004147A1 (en) | Retrieval device, retrieval method, and computer-readable storage medium for computer program | |
US11350011B2 (en) | Device, process execution system, process execution method, and non-transitory recording medium | |
KR20200115263A (en) | Image processing apparatus, method for controlling the same, and storage medium | |
US8984623B2 (en) | Image processing system, image processing apparatus and computer-readable recording medium | |
JP6558240B2 (en) | program | |
US9413841B2 (en) | Image processing system, image processing method, and medium | |
US10306095B2 (en) | Image processing apparatus and method | |
US20120092683A1 (en) | Image forming apparatus and document editing method | |
US11928171B2 (en) | Providing shortened URL and information related contents corresponding to original URL | |
US20150124294A1 (en) | Image forming apparatus and method for producing e-book contents | |
JP2018007085A (en) | Information processing device, image processing device, and program | |
JP2017004455A (en) | Image processing apparatus, image processing system, image processing method, and program | |
JP6827839B2 (en) | Image forming apparatus, control method of image forming apparatus, and program | |
JP6540122B2 (en) | INFORMATION PROCESSING APPARATUS, RECORDING SYSTEM, AND PROGRAM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: TOSHIBA TEC KABUSHIKI KAISHA, JAPAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KODIMER, MARIANNE;REEL/FRAME:043469/0077
Effective date: 20170824
Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KODIMER, MARIANNE;REEL/FRAME:043469/0077
Effective date: 20170824
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |