WO2013033910A1 - User interface for translation webpage - Google Patents

User interface for translation webpage

Info

Publication number
WO2013033910A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
source
target
language selection
potential
Prior art date
Application number
PCT/CN2011/079504
Other languages
French (fr)
Inventor
Chao TIAN
Awaneesh Verma
Joshua James ESTELLE
Yung-Fong Frank Tang
Original Assignee
Google Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google Inc.
Priority to KR1020147009070A (KR101891765B1)
Priority to PCT/CN2011/079504 (WO2013033910A1)
Priority to EP11871935.0A (EP2774053A4)
Priority to JP2014528825A (JP6050362B2)
Priority to CN201180073336.5A (CN104025079A)
Priority to US13/305,895 (US20130067307A1)
Publication of WO2013033910A1

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
                    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
                        • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
                            • G06F 3/0481 based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
                                • G06F 3/04817 using icons
                                • G06F 3/0483 Interaction with page-structured environments, e.g. book metaphor
                            • G06F 3/0484 for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
                                • G06F 3/04845 for image manipulation, e.g. dragging, rotation, expansion or change of colour
                • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
                    • G06F 16/30 of unstructured textual data
                        • G06F 16/33 Querying
                            • G06F 16/332 Query formulation
                                • G06F 16/3325 Reformulation based on results of preceding query
                                    • G06F 16/3326 using relevance feedback from the user, e.g. relevance feedback on documents, documents sets, document terms or passages
                    • G06F 16/90 Details of database functions independent of the retrieved data types
                        • G06F 16/95 Retrieval from the web
                            • G06F 16/957 Browsing optimisation, e.g. caching or content distillation
                                • G06F 16/9577 Optimising the visualization of content, e.g. distillation of HTML documents
                            • G06F 16/958 Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
                • G06F 40/00 Handling natural language data
                    • G06F 40/20 Natural language analysis
                        • G06F 40/263 Language identification
                    • G06F 40/40 Processing or translation of natural language
                        • G06F 40/55 Rule-based translation
                            • G06F 40/56 Natural language generation
                        • G06F 40/58 Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
            • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
                • G06Q 30/00 Commerce

Abstract

A computer-implemented technique includes receiving a request for a translation webpage and generating a user interface webpage for the translation webpage. The user interface webpage includes a text input portion, a translated text output portion, a source language selection portion, and a target language selection portion. The source language selection portion includes a quick source language selection icon identifying a potential source language, and a source language selection list including a plurality of potential source languages. The target language selection portion includes a quick target language selection icon identifying a potential target language, and a target language selection list including a plurality of potential target languages. The potential source language and the potential target language are determined based on a stored history of a user, which includes at least one of preferences of the user, source languages previously selected by the user, and target languages previously selected by the user.

Description

USER INTERFACE FOR TRANSLATION WEBPAGE
FIELD
[0001] The present disclosure relates to a user interface for a translation webpage.
BACKGROUND
[0002] The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
[0003] A user may access a website from a computing device via a network such as the Internet. The website may display a webpage to the user via a web browser executing on the computing device. The webpage may include images, videos, text, or a combination thereof, to be displayed to the user on a display associated with the computing device. The webpage may provide a user interface through which the user interacts with the network and the computing devices connected thereto (servers, routers, etc.). Accordingly, the user interface provided by a webpage may provide a simple mechanism for the user to accomplish whatever tasks the user wishes to perform.
SUMMARY
[0004] This section provides a general summary of the disclosure, and is not a comprehensive disclosure of its full scope or all of its features.
[0005] In various embodiments of the present disclosure, a computer-implemented technique is disclosed. The technique can include receiving, at a server, a request for a translation webpage from a user interacting with a user device to initiate a user session. The technique can also include generating, at the server, a user interface webpage for the translation webpage, where the user interface webpage includes: (i) a text input portion, (ii) a translated text output portion, (iii) a source language selection portion, and (iv) a target language selection portion. The source language selection portion may include: (a) a quick source language selection icon identifying a first potential source language, and (b) a source language selection list including a plurality of potential source languages. Similarly, the target language selection portion may include: (a) a quick target language selection icon identifying a first potential target language, and (b) a target language selection list including a plurality of potential target languages. The technique may further include determining the potential source language and the potential target language based on a stored history of the user. The stored history may include at least one of: (i) preferences of the user, (ii) source languages previously selected by the user, and (iii) target languages previously selected by the user. Additionally, the technique may include providing, from the server, the user interface webpage to the user device and receiving, at the server, a translation request from the user interacting with the user interface webpage displayed at the user device. The translation request can include a text portion in a source language, a source language identification that identifies the source language, and a target language identification that identifies a target language in which the user desires to have the text portion translated. The technique may also include providing a translated text output to the user device based on the translation request. The translated text output may correspond to a translation of the text portion from the source language to the target language. Finally, the technique may include updating the stored history based on the source language identification and the target language identification such that the source language selection portion and the target language selection portion dynamically update during the user session.
[0006] In various embodiments of the present disclosure, a computer-implemented technique is disclosed. The technique can include receiving, at a server, a request for a translation webpage from a user interacting with a user device. The technique can further include generating, at the server, a user interface webpage for the translation webpage, the user interface webpage including: (i) a text input portion, (ii) a translated text output portion, (iii) a source language selection portion, and (iv) a target language selection portion. The source language selection portion may include: (a) a quick source language selection icon identifying a potential source language, and (b) a source language selection list including a plurality of potential source languages. Similarly, the target language selection portion may include: (a) a quick target language selection icon identifying a potential target language, and (b) a target language selection list including a plurality of potential target languages. The technique can also include determining the potential source language and the potential target language based on a stored history of the user. The stored history of the user can include at least one of: (i) preferences of the user, (ii) source languages previously selected by the user, and (iii) target languages previously selected by the user.
[0007] In various embodiments of the present disclosure, a computer-implemented technique may utilize a server that includes a communication module, a user interface module and a datastore. The communication module may receive a request for a translation webpage from a user interacting with a user device. The user interface module may be in communication with the communication module and may generate a user interface webpage for the translation webpage. The user interface webpage can include: (i) a text input portion, (ii) a translated text output portion, (iii) a source language selection portion, and (iv) a target language selection portion. The source language selection portion may include: (a) a quick source language selection icon identifying a potential source language, and (b) a source language selection list including a plurality of potential source languages. Similarly, the target language selection portion can include: (a) a quick target language selection icon identifying a potential target language, and (b) a target language selection list including a plurality of potential target languages. The datastore may be in communication with the user interface module and may store a stored history of the user including at least one of: (i) preferences of the user, (ii) source languages previously selected by the user, and (iii) target languages previously selected by the user. The user interface module may determine the potential source language and the potential target language based on the stored history of the user.
[0008] Further areas of applicability of the present disclosure will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The present disclosure will become more fully understood from the detailed description and the accompanying drawings, wherein:
[0010] FIG. 1 is a schematic diagram of an example server according to some embodiments of the present disclosure and an example environment in which techniques according to some embodiments of the present disclosure can be utilized;
[0011] FIG. 2 is a schematic block diagram of the example server of FIG. 1;
[0012] FIG. 3 is a representation of an example user interface according to some embodiments of the present disclosure;
[0013] FIG. 4 is a representation of the example user interface of FIG. 3 in an expanded state;
[0014] FIG. 5 is a flow diagram of an example of a technique according to some embodiments of the present disclosure; and
[0015] FIG. 6 is a flow diagram of the example technique for generating a user interface webpage of FIG. 5.
DETAILED DESCRIPTION
[0016] Referring now to FIG. 1, an environment in which the techniques according to some embodiments of the present disclosure can be utilized is illustrated. A user 10 can interact with a user device 20, for example, to access a network 30. Examples of the network 30 include, but are not limited to, the Internet, a wide area network, a local area network, and a private network. A server 100 connected to the network 30 may also be accessed by the user 10 via the user device 20. Further, in some embodiments of the present disclosure, a translation engine 40 may be connected to the network 30 and/or connected to the server 100 through a separate communication connection 50. One skilled in the art will appreciate that the environment shown in FIG. 1 is merely illustrative, and different environments (such as those that include more or fewer components, those that include additional connections, and/or those that are arranged in a different configuration) may be utilized with the present disclosure. For example only, while the translation engine 40 is illustrated in FIG. 1 as being separate from the server 100, one will appreciate that the translation engine 40 may be included as a module, engine, etc. of the server 100.
[0017] A block diagram of an example server 100 according to some embodiments of the present disclosure is illustrated in FIG. 2. The server 100 includes a communication module 120 in communication with a user interface module 140, as well as a datastore 160 in communication with the user interface module 140. The communication module 120 can provide the communication interface between the server 100 and the user 10 and user device 20 via network 30, as well as between the server 100 and the translation engine 40 via either the network 30 or separate communication connection 50.
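For concreteness, the arrangement of FIG. 2 might be modeled as in the following sketch. This is an editorial illustration only: the interface names, fields, and method signatures below are assumptions, not part of the disclosure.

    // Sketch of the server arrangement of FIG. 2 (communication module 120,
    // user interface module 140, datastore 160). All interface and field
    // names here are hypothetical.
    interface StoredHistory {
      preferences: string[];            // preferences of the user 10
      recentSourceLanguages: string[];  // source languages previously selected
      recentTargetLanguages: string[];  // target languages previously selected
    }

    interface Datastore {
      getHistory(userId: string): StoredHistory | undefined;
      putHistory(userId: string, history: StoredHistory): void;
    }

    interface UserInterfaceModule {
      // Generates the user interface webpage (user interface 200),
      // consulting the stored history of the user 10.
      generatePage(userId: string): string;
    }

    interface CommunicationModule {
      // Receives webpage and translation requests from the user device 20
      // and routes them to the user interface module 140 or the
      // translation engine 40.
      handleRequest(userId: string, path: string, body?: unknown): Promise<string>;
    }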
[0018] In some embodiments, the communication module 120 may receive a request for a translation webpage from the user 10 interacting with the user device 20 via network 30. A translation webpage includes, for example, a webpage that provides a user interface through which the user 10 interacts with a component (such as translation engine 40) that provides a translation service. The user interface for the translation webpage may be generated by the user interface module 140, e.g., according to the techniques described below.
[0019] An example of a user interface 200 according to some embodiments of the present disclosure is shown in FIGS. 3 and 4. The user interface 200 can include a text input portion 210, a translated text output portion 220, a source language selection portion 230 and a target language selection portion 240. The text input portion 210 may be selected by the user 10, such as by being "clicked" by the user 10 interacting with a web browser on the user device 20. A text portion to be translated may be entered into the text input portion 210 by the user 10 in any known manner.
[0020] Further, the user 10 can select a source language (that is, the original language of the text portion) and a target language (that is, the language in which the user 10 desires the text portion to be translated) via the source language selection portion 230 and the target language selection portion 240, respectively. Upon receipt of a translation command (such as by the user 10 selecting a translate command icon 250), a translated text output can be generated (e.g., by translation engine 40) and provided to the user 10 by being displayed in the translated text output portion 220 of the user interface 200. The translated text output may correspond to a translation (machine or otherwise) of the text portion from the source language to the target language.
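A minimal sketch of how the translate command flow of this paragraph could be wired on the client side is shown below. The element IDs and the /translate endpoint are assumptions for illustration; the disclosure does not specify how the request is transported.

    // Hypothetical client-side wiring for the translate command icon 250.
    async function onTranslateClicked(): Promise<void> {
      const text = (document.getElementById("text-input") as HTMLTextAreaElement).value;
      const source = (document.getElementById("source-language") as HTMLSelectElement).value;
      const target = (document.getElementById("target-language") as HTMLSelectElement).value;

      // Send the text portion plus the source and target language
      // identifications to the server.
      const response = await fetch("/translate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ text, source, target }),
      });
      const { translatedText } = await response.json();

      // Display the result in the translated text output portion 220.
      const output = document.getElementById("translated-output");
      if (output) output.textContent = translatedText;
    }

    document.getElementById("translate-command")?.addEventListener("click", () => {
      void onTranslateClicked();
    });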
[0021] The source language selection portion 230 can include one or more quick source language selection icons 232A, 232B and 232C. Each of the quick source language selection icons 232A, 232B and 232C identifies a potential source language. Further, the source language selection portion 230 can include a source language selection list 234 that includes a plurality of potential source languages. Similarly, the target language selection portion 240 can include one or more quick target language selection icons 242A, 242B and 242C (each of which identifies a potential target language) and a target language selection list 244 that includes a plurality of potential target languages. In various embodiments, the quick source and target language selection icons 232, 242 may be click buttons, radio buttons, selectable tabs on the text input portion and translated text output portion, respectively, or a combination thereof.
[0022] Each of the source language selection list 234 and the target language selection list 244 can be individually displayed in the user interface 200 in a collapsed state (FIG. 3) or an expanded state (FIG. 4). These lists may be toggled between the collapsed and expanded state by the user 10, e.g., by clicking on the appropriate list. In the collapsed state (FIG. 3) the source and target language selection lists 234, 244 may display only a selected source or target language, respectively, while in the expanded state (FIG. 4) the source and target language selection lists 234, 244 may display a plurality of potential source and target languages, respectively.
[0023] The specific potential source and target languages identified by the quick source language selection icons 232A, 232B and 232C and the quick target language selection icons 242A, 242B and 242C can be determined in many ways. In some embodiments, the specific potential source and target languages identified by the quick source language selection icons 232 and the quick target language selection icons 242 can be determined based on a stored history of the user 10, for example, stored in the datastore 160. The datastore 160 may include, e.g., a database, a hard disk drive, flash memory, server memory or any other type of electronic storage medium.
[0024] The stored history of the user 10 can include: (1) preferences of the user 10 (previously selected by the user 10, determined from previous interactions with the server 100, or a combination of both), (2) one or more source languages previously selected by the user 10, and/or (3) one or more target languages previously selected by the user 10. In various embodiments of the present disclosure, the stored history of the user 10 may also include N source languages most recently selected by the user 10 and M target languages most recently selected by the user 10, where M and N are integers greater than zero. In this manner, the user interface 200 may include N quick source language selection icons 232 (each identifying one of the N source languages most recently selected by the user 10) and M quick target language selection icons 242 (each identifying one of the M target languages most recently selected by the user 10). For example only, and as shown in FIGS. 3 and 4, the integers N and M may be equal to three such that there are three quick source language selection icons 232A, 232B and 232C and three quick target language selection icons 242A, 242B, and 242C.
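The most-recently-selected behavior described above might be maintained as in the following sketch, with N = M = 3 as in FIGS. 3 and 4; the function name and list representation are illustrative assumptions.

    // Sketch of a most-recently-used update for the stored history.
    const N = 3; // quick source language selection icons 232A, 232B, 232C
    const M = 3; // quick target language selection icons 242A, 242B, 242C

    function pushRecent(recent: string[], language: string, limit: number): string[] {
      // Move (or insert) the language at the front, drop any duplicate,
      // and cap the list so it never exceeds the number of quick icons.
      return [language, ...recent.filter((l) => l !== language)].slice(0, limit);
    }

    // Example: after a Spanish source is selected, Spanish surfaces first.
    let recentSources = ["fr", "de", "zh"];
    recentSources = pushRecent(recentSources, "es", N); // ["es", "fr", "de"]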
[0025] The stored history of the user 10 may also include a ranking of how frequently the user 10 has selected each source language and/or each target language, such that the specific potential source and target languages identified by the quick source language selection icons 232 and the quick target language selection icons 242 can be determined based on these frequencies. In addition to the stored history of the user 10, in some embodiments the specific potential source and target languages identified by the quick source language selection icons 232 and the quick target language selection icons 242 can be determined based on a location of the user 10 and/or a web browser language setting on the user device 20. The location of the user 10 can be determined in any known manner, such as through the use of geo-location or a Global Positioning System signal.
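A frequency-based ranking with a browser-language fallback could look like the following sketch. All names are illustrative, and the fallback ordering is an assumption rather than a requirement of the disclosure.

    // Sketch of ranking quick-icon languages by frequency of use.
    function rankByFrequency(selections: string[], count: number): string[] {
      const freq = new Map<string, number>();
      for (const lang of selections) {
        freq.set(lang, (freq.get(lang) ?? 0) + 1);
      }
      return [...freq.entries()]
        .sort((a, b) => b[1] - a[1]) // most frequently used first
        .slice(0, count)
        .map(([lang]) => lang);
    }

    function quickIconLanguages(pastSelections: string[], browserLanguage: string, count: number): string[] {
      const ranked = rankByFrequency(pastSelections, count);
      // Fall back to the web browser language setting when the stored
      // history does not fill all of the quick icons.
      if (ranked.length < count && !ranked.includes(browserLanguage)) {
        ranked.push(browserLanguage);
      }
      return ranked;
    }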
[0026] Referring now to FIG. 5, a flow chart describing an example technique (or method) 300 according to some embodiments of the present disclosure is shown. At step 310, a request for a translation webpage is received from the user 10 interacting with the user device 20 to initiate a user session. For example only, the server 100 (or more specifically, the communication module 120) may receive this request via the network 30. At step 320, a user interface webpage for the translation webpage is generated, e.g., by the server 100 (or more specifically, the user interface module 140). The user interface webpage may include, for example, the user interface 200 described above, and be provided to the user 10 at step 330.
[0027] A translation request is received from the user 10 at step 340, e.g., via the user interface 200 and user interface webpage and at the server 100 (or more specifically, the communication module 120). In some embodiments, the translation request includes: (1) a text portion in a source language, (2) a source language identification that identifies the source language of the text portion, and (3) a target language identification that identifies a target language in which the user 10 desires to have the text portion translated. At step 350, a translated text output is provided to the user 10/user device 20 based on the translation request. In some embodiments, the translated text output corresponds to a translation of the text portion from the identified source language to the identified target language.
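The three-part translation request of this paragraph, and the corresponding handling at steps 340 and 350, might be expressed as follows. The wire format is an assumption, and translateText is a stub standing in for the translation engine 40, not an API from the disclosure.

    // Sketch of server-side handling of steps 340 and 350.
    interface TranslationRequest {
      text: string;           // (1) text portion in the source language
      sourceLanguage: string; // (2) source language identification
      targetLanguage: string; // (3) target language identification
    }

    async function translateText(text: string, source: string, target: string): Promise<string> {
      // Placeholder; a real implementation would invoke a machine
      // translation service such as the translation engine 40.
      return `[${source}->${target}] ${text}`;
    }

    async function handleTranslationRequest(req: TranslationRequest): Promise<{ translatedText: string }> {
      const translatedText = await translateText(req.text, req.sourceLanguage, req.targetLanguage);
      return { translatedText }; // the translated text output of step 350
    }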
[0028] At step 360, the stored history of the user 10, which can be utilized to generate the user interface 200 as described herein, is updated at the server 100, e.g., at the user interface module 140 and the datastore 160. The stored history may, for example, be updated based on the source and target language identifications in the translation request. Further, the stored history may be updated and utilized to dynamically update the source language selection portion (the quick source language selection icons 232, etc.) and/or the target language selection portion (the quick target language selection icons 242, etc.) during the user session, e.g., without the user 10 reloading the user interface webpage at the web browser on the user device 20. This may be accomplished through the use of JavaScript or a similar mechanism.
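One way the described dynamic update could be realized without a page reload is sketched below; the /quick-languages endpoint and the container IDs are hypothetical.

    // Sketch of re-rendering the quick language selection icons in place
    // after a translation request, without reloading the webpage.
    async function refreshQuickIcons(): Promise<void> {
      const response = await fetch("/quick-languages");
      const { sources, targets } = await response.json();
      renderIcons("quick-source-icons", sources); // icons 232A, 232B, 232C
      renderIcons("quick-target-icons", targets); // icons 242A, 242B, 242C
    }

    function renderIcons(containerId: string, languages: string[]): void {
      const container = document.getElementById(containerId);
      if (!container) return;
      container.textContent = ""; // clear the previous icons
      for (const language of languages) {
        const button = document.createElement("button");
        button.textContent = language;
        container.appendChild(button);
      }
    }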
[0029] Referring now to FIG. 6, a flow chart describing an example technique (or method) for generating a user interface webpage (such as that described above in accordance with step 320) according to some embodiments of the present disclosure is shown. At step 322, the stored history of the user 10 is retrieved, e.g., by the user interface module 140. As discussed above, the stored history may be stored on the datastore 160 and utilized to generate the user interface 200.
[0030] At step 324, a potential source language and a potential target language are determined based on the stored history. At step 326, one or more quick source language selection icons 232 that each identifies one potential source language are included in the user interface 200. Similarly, at step 328, one or more quick target language selection icons 242 that each identifies one potential target language are included in the user interface 200. As described above, the specific potential source and target languages identified by the quick source and target language selection icons 232 and 242, respectively, can be determined in many ways. In some embodiments, the specific potential source and target languages identified by the quick source language selection icons 232 and the quick target language selection icons 242 can be determined based on a stored history of the user 10, for example, stored in the datastore 160.
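Steps 322 through 328 might compose the quick selection icons into the generated page as in the following sketch, which reuses the hypothetical Datastore and StoredHistory interfaces from the FIG. 2 sketch above; the generated markup is a stand-in, as the disclosure does not prescribe any.

    // Sketch of generating the user interface webpage from stored history.
    function generateUserInterfacePage(datastore: Datastore, userId: string): string {
      const history = datastore.getHistory(userId);          // step 322
      const sources = history?.recentSourceLanguages ?? [];  // step 324
      const targets = history?.recentTargetLanguages ?? [];

      const sourceIcons = sources.map((l) => `<button>${l}</button>`).join(""); // step 326
      const targetIcons = targets.map((l) => `<button>${l}</button>`).join(""); // step 328

      return `<div id="source-language-portion">${sourceIcons}</div>\n` +
             `<div id="target-language-portion">${targetIcons}</div>`;
    }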
[0031] Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth, such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms, and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known procedures, well-known device structures, and well-known technologies are not described in detail.
[0032] The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms "a," "an," and "the" may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The term "and/or" includes any and all combinations of one or more of the associated listed items. The terms "comprises," "comprising," "including," and "having," are inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed.
[0033] Although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region, layer or section from another region, layer or section. Terms such as "first," "second," and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the example embodiments.
[0034] As used herein, the term module may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that executes code, or a process executed by a distributed network of processors and storage in networked clusters or datacenters; other suitable components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip. The term module may include memory (shared, dedicated, or group) that stores code executed by the one or more processors.
[0035] The term code, as used above, may include software, firmware, byte-code and/or microcode, and may refer to programs, routines, functions, classes, and/or objects. The term shared, as used above, means that some or all code from multiple modules may be executed using a single (shared) processor. In addition, some or all code from multiple modules may be stored by a single (shared) memory. The term group, as used above, means that some or all code from a single module may be executed using a group of processors. In addition, some or all code from a single module may be stored using a group of memories.
[0036] The techniques described herein may be implemented by one or more computer programs executed by one or more processors. The computer programs include processor-executable instructions that are stored on a non-transitory tangible computer readable medium. The computer programs may also include stored data. Non-limiting examples of the non-transitory tangible computer readable medium are nonvolatile memory, magnetic storage, and optical storage.
[0037] Some portions of the above description present the techniques described herein in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. These operations, while described functionally or logically, are understood to be implemented by computer programs. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules or by functional names, without loss of generality.
[0038] Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[0039] Certain aspects of the described techniques include process steps and instructions described herein in the form of an algorithm. It should be noted that the described process steps and instructions could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real-time network operating systems.
[0040] The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored on a computer readable medium that can be accessed by the computer. Such a computer program may be stored in a tangible computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
[0041] The algorithms and operations presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatuses to perform the required method steps. The required structure for a variety of these systems will be apparent to those of skill in the art, along with equivalent variations. In addition, the present disclosure is not described with reference to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein, and any references to specific languages are provided for disclosure of enablement and best mode of the present disclosure.
[0042] The present disclosure is well suited to a wide variety of computer network systems over numerous topologies. Within this field, the configuration and management of large networks involves storage devices and computers that are communicatively coupled to dissimilar computers and storage devices over a network, such as the Internet.
[0043] The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.

Claims

What is claimed is:
1. A computer-implemented method comprising:
receiving, at a server, a request for a translation webpage from a user interacting with a user device to initiate a user session;
generating, at the server, a user interface webpage for the translation webpage, the user interface webpage including: (i) a text input portion, (ii) a translated text output portion, (iii) a source language selection portion, and (iv) a target language selection portion,
wherein the source language selection portion includes: (a) a quick source language selection icon identifying a first potential source language, and (b) a source language selection list including a plurality of potential source languages, and
wherein the target language selection portion includes: (a) a quick target language selection icon identifying a first potential target language, and (b) a target language selection list including a plurality of potential target languages;
determining the first potential source language and the first potential target language based on a stored history of the user, the stored history including at least one of: (i) preferences of the user, (ii) source languages previously selected by the user, and (iii) target languages previously selected by the user;
providing, from the server, the user interface webpage to the user device;

receiving, at the server, a translation request from the user interacting with the user interface webpage displayed at the user device, the translation request including a text portion in a source language, a source language identification that identifies the source language, and a target language identification that identifies a target language into which the user desires to have the text portion translated;
providing a translated text output to the user device based on the translation request, the translated text output corresponding to a translation of the text portion from the source language to the target language; and
updating the stored history based on the source language identification and the target language identification such that the source language selection portion and the target language selection portion dynamically update during the user session.
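By way of illustration only, the flow recited in claim 1 can be pictured as a small server-side handler. The following is a minimal sketch, assuming a hypothetical in-memory history store and a stub translate() backend; every name below (HISTORY, build_page, handle_translation_request) is invented for this sketch and is not drawn from the application itself.

```python
from collections import defaultdict

# Hypothetical in-memory store mapping a user id to the ordered
# (source, target) pairs selected during the user's sessions.
HISTORY = defaultdict(list)

def translate(text, source, target):
    # Stub standing in for the actual translation backend.
    return "[%s->%s] %s" % (source, target, text)

def build_page(user_id):
    """Assemble the four recited portions, pre-selecting the quick
    icons from the user's stored history (with arbitrary fallbacks)."""
    history = HISTORY[user_id]
    source, target = history[-1] if history else ("en", "es")
    return {
        "text_input": "",
        "translated_output": "",
        "quick_source_icon": source,
        "source_list": ["en", "es", "fr", "de", "zh"],
        "quick_target_icon": target,
        "target_list": ["en", "es", "fr", "de", "zh"],
    }

def handle_translation_request(user_id, text, source, target):
    """Translate, then record the pair so the next render of the
    selection portions reflects it within the same session."""
    HISTORY[user_id].append((source, target))
    return translate(text, source, target)

print(handle_translation_request("u1", "hello", "en", "fr"))  # [en->fr] hello
print(build_page("u1")["quick_source_icon"])                  # en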
2. A computer-implemented method comprising:
receiving, at a server, a request for a translation webpage from a user interacting with a user device;
generating, at the server, a user interface webpage for the translation webpage, the user interface webpage including: (i) a text input portion, (ii) a translated text output portion, (iii) a source language selection portion, and (iv) a target language selection portion,
wherein the source language selection portion includes: (a) a quick source language selection icon identifying a potential source language, and (b) a source language selection list including a plurality of potential source languages, and
wherein the target language selection portion includes: (a) a quick target language selection icon identifying a potential target language, and (b) a target language selection list including a plurality of potential target languages; and
determining the potential source language and the potential target language based on a stored history of the user, the stored history including at least one of: (i) preferences of the user, (ii) source languages previously selected by the user, and (iii) target languages previously selected by the user.
3. The computer-implemented method of claim 2, further comprising:

providing, from the server, the user interface webpage to the user device;
receiving, at the server, a translation request from the user interacting with the user interface webpage displayed at the user device, the translation request including a text portion in a source language, a source language identification that identifies the source language, and a target language identification that identifies a target language into which the user desires to have the text portion translated; and
providing a translated text output to the user device based on the translation request, the translated text output corresponding to a translation of the text portion from the source language to the target language.
4. The computer-implemented method of claim 2, further comprising updating the stored history such that the source language selection portion and the target language selection portion dynamically update during a user session.
5. The computer-implemented method of claim 2, wherein the quick source language selection icon and the quick target language selection icon comprise click buttons.
6. The computer-implemented method of claim 2, wherein the quick source language selection icon and the quick target language selection icon comprise radio buttons.
7. The computer-implemented method of claim 2, wherein the quick source language selection icon and the quick target language selection icon comprise selectable tabs on the text input portion and translated text output portion, respectively.
8. The computer-implemented method of claim 2, wherein (i) the stored history includes N source languages most recently selected by the user and M target languages most recently selected by the user, N and M being integers greater than zero, (ii) the source language selection portion includes N quick source language selection icons each corresponding to one of the N source languages most recently selected by the user, and (iii) the target language selection portion includes M quick target language selection icons each corresponding to one of the M target languages most recently selected by the user.
9. The computer-implemented method of claim 8, wherein N and M are equal to three.
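Claims 8 and 9 amount to keeping a bounded, most-recently-used list of languages per direction. A minimal sketch under that reading, where the invented helper push_recent() uses `limit` in the role of N (or M):

```python
def push_recent(recent, language, limit=3):
    """Keep the `limit` most recently selected languages, most recent
    first and without duplicates; `limit` plays the role of N or M."""
    recent = [language] + [lang for lang in recent if lang != language]
    return recent[:limit]

recents = []
for choice in ["en", "fr", "en", "de", "zh"]:
    recents = push_recent(recents, choice)
print(recents)  # ['zh', 'de', 'en'] -> three quick selection icons
```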
10. The computer-implemented method of claim 2, wherein the stored history includes a first ranking of frequency of use of the source languages previously selected by the user, and a second ranking of frequency of use of the target languages previously selected by the user, wherein determining the potential source language and the potential target language is further based on the first and second rankings.
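The frequency rankings of claim 10 can be maintained with a simple counter. The sketch below is illustrative only; it assumes ties are broken by first use, a choice the claim does not dictate.

```python
from collections import Counter

def rank_by_frequency(selections):
    """Order languages by how often the user has picked them;
    Counter.most_common breaks ties in first-encountered order."""
    return [lang for lang, _ in Counter(selections).most_common()]

print(rank_by_frequency(["en", "fr", "fr", "de", "fr", "en"]))  # ['fr', 'en', 'de']
```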
11. The computer-implemented method of claim 2, wherein the source language selection list displays only one of the plurality of potential source languages in a collapsed state and displays the plurality of potential source languages in an expanded state, and wherein the target language selection list displays only one of the plurality of potential target languages in the collapsed state and displays the plurality of potential target languages in the expanded state.
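One possible rendering of the collapsed and expanded states of claim 11 is sketched below; the markup and class names are hypothetical and chosen only to make the two states concrete.

```python
def render_language_list(languages, selected, expanded):
    """Show only the selected language when collapsed and the full
    list when expanded, mirroring the two recited states."""
    visible = languages if expanded else [selected]
    items = "".join(
        '<li%s>%s</li>' % (' class="selected"' if lang == selected else "", lang)
        for lang in visible
    )
    state = "expanded" if expanded else "collapsed"
    return '<ul class="lang-list %s">%s</ul>' % (state, items)

print(render_language_list(["English", "French", "German"], "French", False))
print(render_language_list(["English", "French", "German"], "French", True))
```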
12. The computer-implemented method of claim 2, wherein determining the potential source language and the potential target language is further based on a location of the user.
13. The computer-implemented method of claim 2, wherein determining the potential source language and the potential target language is further based on a web browser language setting at the user device.
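Claims 12 and 13 suggest falling back on the user's location and the browser language setting when no useful history exists. A sketch under those assumptions follows; the country table and the Accept-Language parsing are illustrative, not part of the application.

```python
def guess_languages(accept_language_header, country_code):
    """Pick a default source from the browser's Accept-Language header
    and a default target from a per-country guess."""
    # e.g. "fr-CH, fr;q=0.9, en;q=0.8" -> "fr"
    first = accept_language_header.split(",")[0].strip()
    source = first.split("-")[0] if first else "en"
    country_to_language = {"CN": "zh", "FR": "fr", "DE": "de", "US": "en"}
    target = country_to_language.get(country_code, "en")
    return source, target

print(guess_languages("fr-CH, fr;q=0.9, en;q=0.8", "CN"))  # ('fr', 'zh')
```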
14. A system for generating a user interface webpage for a translation webpage, comprising:
a communication module in a server that receives a request for a translation webpage from a user interacting with a user device;
a user interface module in the server and in communication with the communication module that generates a user interface webpage for the translation webpage, the user interface webpage including: (i) a text input portion, (ii) a translated text output portion, (iii) a source language selection portion, and (iv) a target language selection portion,
wherein the source language selection portion includes: (a) a quick source language selection icon identifying a potential source language, and (b) a source language selection list including a plurality of potential source languages, and
wherein the target language selection portion includes: (a) a quick target language selection icon identifying a potential target language, and (b) a target language selection list including a plurality of potential target languages; and
a datastore in the server and in communication with the user interface module, the datastore storing a stored history of the user including at least one of: (i) preferences of the user, (ii) source languages previously selected by the user, and (iii) target languages previously selected by the user, wherein the user interface module determines the potential source language and the potential target language based on the stored history of the user.
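The division recited in claim 14 can be mirrored as three cooperating classes. This is a sketch only; the class and method names below are invented for illustration and do not come from the application.

```python
class Datastore:
    """Holds the stored history of each user."""
    def __init__(self):
        self._history = {}

    def get(self, user_id):
        return self._history.get(user_id, [])

    def append(self, user_id, source, target):
        self._history.setdefault(user_id, []).append((source, target))


class UserInterfaceModule:
    """Builds the page, asking the datastore for the user's history."""
    def __init__(self, datastore):
        self.datastore = datastore

    def generate_page(self, user_id):
        history = self.datastore.get(user_id)
        source, target = history[-1] if history else ("en", "es")
        return {"quick_source_icon": source, "quick_target_icon": target}


class CommunicationModule:
    """Receives requests and delegates to the user interface module."""
    def __init__(self, ui_module):
        self.ui_module = ui_module

    def handle_request(self, user_id):
        return self.ui_module.generate_page(user_id)


store = Datastore()
store.append("u1", "de", "en")
server = CommunicationModule(UserInterfaceModule(store))
print(server.handle_request("u1"))  # {'quick_source_icon': 'de', 'quick_target_icon': 'en'}
```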
15. The system of claim 14, wherein the user interface module updates the stored history such that the source language selection portion and the target language selection portion dynamically update during a user session.
16. The system of claim 14, wherein (i) the stored history includes N source languages most recently selected by the user and M target languages most recently selected by the user, N and M being integers greater than zero, (ii) the source language selection portion includes N quick source language selection icons each corresponding to one of the N source languages most recently selected by the user, and (iii) the target language selection portion includes M quick target language selection icons each corresponding to one of the M target languages most recently selected by the user.
17. The system of claim 14, wherein the stored history includes a first ranking of frequency of use of the source languages previously selected by the user, and a second ranking of frequency of use of the target languages previously selected by the user, and wherein the user interface module determines the potential source language and the potential target language further based on the first and second rankings.
18. The system of claim 14, wherein the source language selection list displays only one of the plurality of potential source languages in a collapsed state and displays the plurality of potential source languages in an expanded state, and wherein the target language selection list displays only one of the plurality of potential target languages in the collapsed state and displays the plurality of potential target languages in the expanded state.
19. The system of claim 14, wherein the user interface module further determines the potential source language and the potential target language based on a location of the user.
20. The system of claim 14, wherein the user interface module further determines the potential source language and the potential target language based on a web browser language setting at the user device.
PCT/CN2011/079504 2011-09-09 2011-09-09 User interface for translation webpage WO2013033910A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
KR1020147009070A KR101891765B1 (en) 2011-09-09 2011-09-09 User interface for translation webpage
PCT/CN2011/079504 WO2013033910A1 (en) 2011-09-09 2011-09-09 User interface for translation webpage
EP11871935.0A EP2774053A4 (en) 2011-09-09 2011-09-09 User interface for translation webpage
JP2014528825A JP6050362B2 (en) 2011-09-09 2011-09-09 User interface for translation web pages
CN201180073336.5A CN104025079A (en) 2011-09-09 2011-09-09 User interface for translation webpage
US13/305,895 US20130067307A1 (en) 2011-09-09 2011-11-29 User interface for translation webpage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/079504 WO2013033910A1 (en) 2011-09-09 2011-09-09 User interface for translation webpage

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/305,895 Continuation US20130067307A1 (en) 2011-09-09 2011-11-29 User interface for translation webpage

Publications (1)

Publication Number Publication Date
WO2013033910A1 (en) 2013-03-14

Family

ID=47830963

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/079504 WO2013033910A1 (en) 2011-09-09 2011-09-09 User interface for translation webpage

Country Status (6)

Country Link
US (1) US20130067307A1 (en)
EP (1) EP2774053A4 (en)
JP (1) JP6050362B2 (en)
KR (1) KR101891765B1 (en)
CN (1) CN104025079A (en)
WO (1) WO2013033910A1 (en)

Families Citing this family (157)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US10002189B2 (en) 2007-12-20 2018-06-19 Apple Inc. Method and apparatus for searching using an active ontology
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US8996376B2 (en) 2008-04-05 2015-03-31 Apple Inc. Intelligent text-to-speech conversion
US20100030549A1 (en) 2008-07-31 2010-02-04 Lee Michael M Mobile device having human language translation capability with positional feedback
US8676904B2 (en) 2008-10-02 2014-03-18 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10255566B2 (en) 2011-06-03 2019-04-09 Apple Inc. Generating and processing task items that represent tasks to perform
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) * 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US10417037B2 (en) 2012-05-15 2019-09-17 Apple Inc. Systems and methods for integrating third party services with a digital assistant
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
US9547647B2 (en) 2012-09-19 2017-01-17 Apple Inc. Voice-based media searching
BR112015018905B1 (en) 2013-02-07 2022-02-22 Apple Inc Voice activation feature operation method, computer readable storage media and electronic device
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
US10748529B1 (en) 2013-03-15 2020-08-18 Apple Inc. Voice activated device for use with a voice-based digital assistant
CN110377920A (en) * 2013-05-13 2019-10-25 Tencent Technology (Shenzhen) Co., Ltd. Method and system for implementing language interpretation in the browser of a mobile terminal
WO2014197334A2 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
EP3008641A1 (en) 2013-06-09 2016-04-20 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US9547644B2 (en) 2013-11-08 2017-01-17 Google Inc. Presenting translations of text depicted in images
US9239833B2 (en) 2013-11-08 2016-01-19 Google Inc. Presenting translations of text depicted in images
US10296160B2 (en) 2013-12-06 2019-05-21 Apple Inc. Method for extracting salient dialog usage from live data
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
CN106471570B (en) 2014-05-30 2019-10-01 Apple Inc. Multi-command single utterance input method
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10152299B2 (en) 2015-03-06 2018-12-11 Apple Inc. Reducing response latency of intelligent automated assistants
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
CN104780335B (en) * 2015-03-26 2021-06-22 ZTE Corporation WebRTC P2P audio and video call method and device
US10460227B2 (en) 2015-05-15 2019-10-29 Apple Inc. Virtual assistant in a communication session
US10200824B2 (en) 2015-05-27 2019-02-05 Apple Inc. Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US20160378747A1 (en) 2015-06-29 2016-12-29 Apple Inc. Virtual assistant for media playback
CN105138519B (en) * 2015-07-31 2018-04-06 Xiaomi Technology Co., Ltd. Word translation method and apparatus
US10331312B2 (en) 2015-09-08 2019-06-25 Apple Inc. Intelligent automated assistant in a media environment
US10740384B2 (en) 2015-09-08 2020-08-11 Apple Inc. Intelligent automated assistant for media search and playback
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
CN105243057A (en) * 2015-09-30 2016-01-13 Beijing Qihoo Technology Co., Ltd. Method for translating web page contents and electronic device
CN105354187A (en) * 2015-09-30 2016-02-24 Beijing Qihoo Technology Co., Ltd. Method for translating webpage contents and electronic device
CN105183725A (en) * 2015-09-30 2015-12-23 Beijing Qihoo Technology Co., Ltd. Method for translating a word on a web page and electronic device
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10956666B2 (en) 2015-11-09 2021-03-23 Apple Inc. Unconventional virtual assistant interactions
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
CN105912532B (en) * 2016-04-08 2020-11-20 South China Normal University Language translation method and system based on geographic position information
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US11227589B2 (en) 2016-06-06 2022-01-18 Apple Inc. Intelligent list reading
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
DK179309B1 (en) 2016-06-09 2018-04-23 Apple Inc Intelligent automated assistant in a home environment
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179343B1 (en) 2016-06-11 2018-05-14 Apple Inc Intelligent task discovery
DK179049B1 (en) 2016-06-11 2017-09-18 Apple Inc Data driven natural language event detection and classification
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
DK201770383A1 (en) 2017-05-09 2018-12-14 Apple Inc. User interface for correcting recognition errors
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
DK180048B1 (en) 2017-05-11 2020-02-04 Apple Inc. MAINTAINING THE DATA PROTECTION OF PERSONAL INFORMATION
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
DK201770427A1 (en) 2017-05-12 2018-12-20 Apple Inc. Low-latency intelligent automated assistant
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US20180336892A1 (en) 2017-05-16 2018-11-22 Apple Inc. Detecting a trigger of a digital assistant
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
DK179560B1 (en) 2017-05-16 2019-02-18 Apple Inc. Far-field extension for digital assistant services
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
US20180336275A1 (en) 2017-05-16 2018-11-22 Apple Inc. Intelligent automated assistant for media exploration
CN107291703B (en) * 2017-05-17 2021-06-08 Baidu Online Network Technology (Beijing) Co., Ltd. Pronunciation method and device in a translation service application
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
CN107391500A (en) * 2017-08-21 2017-11-24 Alibaba Group Holding Limited Text translation method, device and equipment
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
CN107632983A (en) * 2017-10-27 2018-01-26 Jiang Jun Text translation device and method
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US20190266248A1 (en) * 2018-02-26 2019-08-29 Loveland Co., Ltd. Webpage translation system, webpage translation apparatus, webpage providing apparatus, and webpage translation method
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US20230161975A1 (en) * 2018-05-04 2023-05-25 Telefonaktiebolaget Lm Ericsson (Publ) Methods and apparatus for enriching entities with alternative texts in multiple languages
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
DK180639B1 (en) 2018-06-01 2021-11-04 Apple Inc Attention-aware virtual assistant dismissal
DK201870355A1 (en) 2018-06-01 2019-12-16 Apple Inc. Virtual assistant operation in multi-device environments
DK179822B1 (en) 2018-06-01 2019-07-12 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
US10504518B1 (en) 2018-06-03 2019-12-10 Apple Inc. Accelerated task performance
US11010561B2 (en) 2018-09-27 2021-05-18 Apple Inc. Sentiment prediction from textual data
US10839159B2 (en) 2018-09-28 2020-11-17 Apple Inc. Named entity normalization in a spoken dialog system
US11170166B2 (en) 2018-09-28 2021-11-09 Apple Inc. Neural typographical error modeling via generative adversarial networks
US11462215B2 (en) 2018-09-28 2022-10-04 Apple Inc. Multi-modal inputs for voice commands
US11475898B2 (en) 2018-10-26 2022-10-18 Apple Inc. Low-latency multi-speaker speech recognition
US11638059B2 (en) 2019-01-04 2023-04-25 Apple Inc. Content playback on multiple devices
US11348573B2 (en) 2019-03-18 2022-05-31 Apple Inc. Multimodality in digital assistant systems
US11307752B2 (en) 2019-05-06 2022-04-19 Apple Inc. User configurable task triggers
US11423908B2 (en) 2019-05-06 2022-08-23 Apple Inc. Interpreting spoken requests
US11475884B2 (en) 2019-05-06 2022-10-18 Apple Inc. Reducing digital assistant latency when a language is incorrectly determined
DK201970509A1 (en) 2019-05-06 2021-01-15 Apple Inc Spoken notifications
US11140099B2 (en) 2019-05-21 2021-10-05 Apple Inc. Providing message response suggestions
DK201970511A1 (en) 2019-05-31 2021-02-15 Apple Inc Voice identification in digital assistant systems
US11496600B2 (en) 2019-05-31 2022-11-08 Apple Inc. Remote execution of machine-learned models
DK180129B1 (en) 2019-05-31 2020-06-02 Apple Inc. User activity shortcut suggestions
US11289073B2 (en) 2019-05-31 2022-03-29 Apple Inc. Device text to speech
US11227599B2 (en) 2019-06-01 2022-01-18 Apple Inc. Methods and user interfaces for voice-based control of electronic devices
US11360641B2 (en) 2019-06-01 2022-06-14 Apple Inc. Increasing the relevance of new available information
US11488406B2 (en) 2019-09-25 2022-11-01 Apple Inc. Text detection using global geometry estimators
KR102415923B1 (en) * 2020-03-04 2022-07-04 Kim Kyung-chul Method for managing a translation platform
CN113014986A (en) * 2020-04-30 2021-06-22 Beijing ByteDance Network Technology Co., Ltd. Interactive information processing method, device, equipment and medium
US11061543B1 (en) 2020-05-11 2021-07-13 Apple Inc. Providing relevant data items based on context
US11038934B1 (en) 2020-05-11 2021-06-15 Apple Inc. Digital assistant hardware abstraction
US11755276B2 (en) 2020-05-12 2023-09-12 Apple Inc. Reducing description length based on confidence
US11490204B2 (en) 2020-07-20 2022-11-01 Apple Inc. Multi-device audio adjustment coordination
US11438683B2 (en) 2020-07-21 2022-09-06 Apple Inc. User identification using headphones
CN112069439B (en) * 2020-09-15 2023-10-13 Chengdu Knownsec Information Technology Co., Ltd. Document request processing method and device, document providing server and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0785060A (en) * 1993-09-13 1995-03-31 Matsushita Electric Ind Co Ltd Language converting device
JPH10214171A (en) * 1997-01-29 1998-08-11 Mitsubishi Electric Corp Information processor
JP2000194698A (en) * 1998-12-25 2000-07-14 Sony Corp Information processing device and method and information providing medium
US20020123879A1 (en) * 2001-03-01 2002-09-05 Donald Spector Translation system & method
AUPR360701A0 (en) 2001-03-06 2001-04-05 Worldlingo, Inc Seamless translation system
US7752266B2 (en) * 2001-10-11 2010-07-06 Ebay Inc. System and method to facilitate translation of communications between entities over a network
US7272377B2 (en) * 2002-02-07 2007-09-18 At&T Corp. System and method of ubiquitous language translation for wireless devices
CN101055573A (en) * 2006-04-10 2007-10-17 Li Gang Multiple-language translation system
US7801721B2 (en) * 2006-10-02 2010-09-21 Google Inc. Displaying original text in a user interface with translated text

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101046804A (en) * 2006-03-30 2007-10-03 International Business Machines Corporation Method for searching commands in a file system and related search engine
WO2010055425A2 (en) * 2008-11-12 2010-05-20 Andrzej Bernal Method and system for providing translation services
US20100223048A1 (en) 2009-02-27 2010-09-02 Andrew Nelthropp Lauder Language translation employing a combination of machine and human translations

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2774053A4

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016025243A1 (en) * 2014-08-15 2016-02-18 Google Inc. Techniques for automatically swapping languages and/or content for machine translation
US9524293B2 (en) 2014-08-15 2016-12-20 Google Inc. Techniques for automatically swapping languages and/or content for machine translation
US11030422B2 (en) 2017-02-08 2021-06-08 Panasonic Intellectual Property Management Co., Ltd. Information display device and information display system

Also Published As

Publication number Publication date
CN104025079A (en) 2014-09-03
JP2014526723A (en) 2014-10-06
JP6050362B2 (en) 2016-12-21
EP2774053A1 (en) 2014-09-10
US20130067307A1 (en) 2013-03-14
KR20140069100A (en) 2014-06-09
KR101891765B1 (en) 2018-08-27
EP2774053A4 (en) 2015-11-18

Similar Documents

Publication Publication Date Title
US20130067307A1 (en) User interface for translation webpage
EP2732380B1 (en) Mobile web browser for pre-loading web pages
US10621281B2 (en) Populating values in a spreadsheet using semantic cues
US8386955B1 (en) User-optimized content for web browsing windows
CN106372110B (en) Recommendation method of application program and mobile terminal
US10313283B2 (en) Optimizing E-mail for mobile devices
US20160162148A1 (en) Application launching and switching interface
US20190324608A1 (en) Method and apparatus for homepage cluster system management based on tree structure
WO2017139507A1 (en) Reinforcement learning using advantage estimates
US20140181756A1 (en) Visualization interaction design for cross-platform utilization
US20140089382A1 (en) Techniques for context-based grouping of messages for translation
US10782857B2 (en) Adaptive user interface
US10459745B2 (en) Application help functionality including suggested search
US10769167B1 (en) Federated computational analysis over distributed data
CN109033466B (en) Page sharing method calculates equipment and computer storage medium
EP3265927A1 (en) Coordinated user word selection for translation and obtaining of contextual information for the selected word
EP3685328A1 (en) Iteratively updating a collaboration site or template
US20140289661A1 (en) Methods and Systems for Combined Management of Multiple Servers
CN107111418B (en) Icon displacement with minimal disruption
US20150261880A1 (en) Techniques for translating user interfaces of web-based applications
US9176948B2 (en) Client/server-based statistical phrase distribution display and associated text entry technique
WO2015100455A1 (en) Dynamically sharing intents
US9430466B1 (en) Techniques for crowd sourcing human translations to provide translated versions of web pages with additional content
EP3200057B1 (en) Short cut links in a graphical user interface
EP3158431A1 (en) Subscriber defined dynamic eventing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11871935

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2014528825

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2011871935

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 20147009070

Country of ref document: KR

Kind code of ref document: A