JP2006302091A - Translation device and program thereof

Info

Publication number: JP2006302091A
Application number: JP2005124949A
Authority: JP (Japan)
Legal status: Pending
Prior art keywords: type, language, character, data, translation
Other languages: Japanese (ja)
Inventor: Haruhiko Yoshimeki
Original Assignee: Konica Minolta Photo Imaging Inc
Application filed by: Konica Minolta Photo Imaging Inc
Priority to: JP2005124949A

Abstract

PROBLEM TO BE SOLVED: To accurately identify the language type of the text to be translated and perform translation accordingly in the multilingual translation of a translation device.

SOLUTION: In the translation device, a character string is extracted from video data acquired from an imaging means, and a translation result is displayed on a display device by referring to the character type of the character string and a language dictionary corresponding to a language type that uses that character type. The character type data, word data, and orthographic rule data of the extracted character string, and/or position information of the translation device acquired by GPS together with data on the language types used at that position, are referred to, priorities are assigned to the candidate language types, and the language type is specified. A dictionary corresponding to the specified language type is then referred to in order to translate the text into a predetermined language.

COPYRIGHT: (C)2007, JPO&INPIT

Description

  The present invention relates to a translation apparatus and a program thereof, and more particularly to a translation apparatus and a program thereof that specify the language type of the text to be translated, select the appropriate dictionary from a set of multilingual dictionaries, and perform translation.

  Conventionally, translation apparatuses that perform automatic translation from one predetermined language to another are known. Such apparatuses generally perform bilingual translation: character strings and sentences input from a keyboard or scanner are segmented into words, and each word is translated into the predetermined target language by referring to a dictionary stored in the translation engine.

  There are also multilingual translation devices that translate from multiple languages into a target language. Multilingual translation devices can be roughly divided into two functional types. In one type, dictionaries corresponding to the language types to be translated are stored in the device in the same manner as in a bilingual translation device, and the user selects the language type of the text to be translated; the device then translates it into the target language. In the other type, the apparatus itself automatically determines the language type of the text to be translated and translates it into the target language by referring to the dictionary corresponding to the language type identified by this determination. The former is an extension of the bilingual translation device. The latter type is useful when the user does not know in what language the text to be translated is written. For example, Patent Document 1 discloses a multilingual translation system that includes a language determination unit which translates an input character string using a plurality of dictionaries and determines whether each translation result is correct, and which translates the input character string into the target language based on the determination result. Patent Document 2 discloses a document search device that looks up an input character string in all available dictionaries and, when a match is found, translates it as the language type of that dictionary.

Translation apparatuses are also known in a variety of physical forms. In addition to workstations and personal computers, there are small electronic dictionaries and small portable terminals such as PDAs (Personal Digital Assistants). Among these, a small portable terminal can be used anywhere, so it can serve as a translator or dictionary at a travel destination. For example, Patent Document 3 discloses, as such a small portable terminal, a head mounted display that performs image processing on video data captured by a small camera, extracts a character string contained in the video data, performs character recognition, and provides a dictionary function for translation.
Patent Document 1: JP 2001-92823 A
Patent Document 2: JP 5-165889 A
Patent Document 3: JP 2000-152125 A

  However, some words share the same character type and spelling across a plurality of language types. FIG. 14 shows an outline of the characters used in various countries around the world. In particular, many countries and regions use Latin, Arabic, or Cyrillic script, and it is often impossible to identify the language type of the text to be translated from the character type alone. In addition, there are many cases in which the spelling of words is similar or exactly the same between languages that use the same character type. For example, the word "memoria" exists in both Italian and Spanish, the word "intention" exists in both English and French, and the word "marine" exists in both German and French.
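
To make this ambiguity concrete, the following minimal sketch shows how looking up a single spelling can return several candidate language types. The word lists and the function name are illustrative assumptions, not data or code from the patent.

```python
# Minimal sketch of the ambiguity described above: one spelling, several languages.
# The word lists below are illustrative examples, not data from the patent.
WORDS_BY_LANGUAGE = {
    "Italian": {"memoria", "casa", "libro"},
    "Spanish": {"memoria", "casa", "libro"},
    "English": {"intention", "book"},
    "French":  {"intention", "marine", "livre"},
    "German":  {"Marine", "Buch"},
}

def candidate_languages(word: str) -> list[str]:
    """Return every language whose word data contains the given spelling."""
    w = word.lower()
    return [lang for lang, words in WORDS_BY_LANGUAGE.items()
            if w in {x.lower() for x in words}]

print(candidate_languages("memoria"))    # ['Italian', 'Spanish']
print(candidate_languages("intention"))  # ['English', 'French']
print(candidate_languages("marine"))     # ['French', 'German']
```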

  In such cases, the translation device cannot determine which language type applies; it may translate using an arbitrary one of the applicable language types, insert the original text into the translation as an untranslated word, or fail to perform the translation at all.

  An object of the present invention is to accurately identify the language type of an input character string and perform multilingual translation by referring to the dictionary of the identified language type.

In order to solve the above-mentioned problem, the invention described in claim 1 is a translation device comprising: an imaging means for imaging an object; a character string detection means for detecting a character string contained in video data acquired from the imaging means; a translation means for translating the character string detected by the character string detection means by referring to a language dictionary corresponding to the character type of the character string and a language type that uses that character type; and a display means for displaying the translated text produced by the translation means. The translation device is characterized by further comprising:
a word extraction means for extracting words from the character string detected by the character string detection means;
a storage means for storing word data for each language type to be translated by the translation means;
and a control means that refers to the words extracted by the word extraction means and the word data stored in the storage means, assigns priorities to the candidate language types, and specifies the language type of the character string contained in the video data.

The invention according to claim 2 is the translation device according to claim 1, further comprising:
a character type extraction means for extracting the character type of the character string contained in the video data,
wherein the storage means further stores character type data and orthographic rule data for each language type to be translated by the translation means,
and the control means further refers to the character type extracted by the character type extraction means and the character type data and orthographic rule data stored in the storage means, and assigns priorities to the candidate language types of the character string contained in the video data.

The invention according to claim 3 is the translation device according to claim 1 or 2, further comprising:
a position detection means for receiving a GPS signal and acquiring position information of the translation device,
wherein the storage means further stores language type data associated with the position information,
and the control means further refers to the position information and the language type data, assigns priorities, and specifies the language type of the character string contained in the video data.

The invention according to claim 4 is a translation device comprising: an imaging means for imaging an object; a character string detection means for detecting a character string contained in video data acquired from the imaging means; a translation means for translating the character string detected by the character string detection means into a specific language by referring to a language dictionary corresponding to the character type of the character string and a language type that uses that character type; and a display means for displaying the translated text produced by the translation means. The translation device further comprises:
a position detection means for receiving a GPS signal and acquiring position information of the translation device;
a storage means for storing language type data associated with the position information;
and a control means that refers to the position information and the language type data, assigns priorities to the candidate language types, and specifies the language type of the character string contained in the video data.

The invention according to claim 5 is the translation device according to claim 4, further comprising:
a word extraction means for extracting words from the character string detected by the character string detection means,
wherein the storage means further stores word data for each language type to be translated by the translation means,
and the control means further refers to the words extracted by the word extraction means and the word data, assigns priorities, and specifies the language type of the character string contained in the video data.

The invention according to claim 6 is the translation device according to claim 4, further comprising:
a character type extraction means for extracting the character type of the character string contained in the video data,
wherein the storage means further stores character type data and orthographic rule data for each language type to be translated by the translation means,
and the control means further refers to the character type extracted by the character type extraction means and the character type data and orthographic rule data stored in the storage means, assigns priorities, and specifies the language type of the character string contained in the video data.

The invention according to claim 7 is a translation device comprising: an imaging means for imaging an object; a character string detection means for detecting a character string contained in video data acquired from the imaging means; a translation means for translating the character string detected by the character string detection means by referring to a language dictionary corresponding to the character type of the character string and a language type that uses that character type; and a display means for displaying the translated text produced by the translation means. The translation device further comprises:
a character type extraction means for extracting the character type of the character string contained in the video data;
a storage means for storing character type data and orthographic rule data for the plurality of language types that the translation means can translate;
and a control means that refers to the character type extracted by the character type extraction means and the character type data and orthographic rule data of the plurality of language types stored in the storage means, assigns priorities to the candidate language types, and specifies the language type of the character string contained in the video data.

The invention according to claim 8 is the translation device according to any one of claims 1 to 7,
wherein the translation means translates into a specific language by referring to the language dictionary of a language type selected based on the priority order of the language types specified by the control means.

The invention according to claim 9 is the translation device according to any one of claims 1 to 8,
wherein the language dictionary further stores voice data corresponding to the translated text produced by the translation means,
and the device further includes a voice output means for generating and outputting voice from the voice data corresponding to the translated text.

The invention according to claim 10 is the translation device according to any one of claims 1 to 9,
further comprising a designation means that allows the user to designate, from among the language types prioritized by the control means, the language type to be used for translation,
wherein the translation means translates by referring only to the language dictionary corresponding to the language type designated by the designation means.

The invention according to claim 11 is the translation device according to any one of claims 1 to 10,
further comprising a history storage means for storing a history of the language types that have been translated,
wherein the control means further refers to the history of language types stored in the history storage means, assigns priorities, and specifies the language type of the character string contained in the video data.

The invention according to claim 12 is the translation device according to claim 11,
further comprising a clock means for outputting current time information,
wherein the storage means further stores, in association with predetermined times, related information relating to the language types to be translated by the translation means,
and the control means displays on the display means, based on the current time information, the related information corresponding to the language type history stored in the history storage means.

The invention according to claim 13 is the translation device according to any one of claims 1 to 12,
The display means is a head mounted display.

The invention according to claim 14 is the translation device according to claim 13,
The head mounted display is an external light transmissive type.

The invention according to claim 15 is the translation device according to claim 14,
wherein the head mounted display includes a video display device for generating and projecting video from a video signal,
a prism for collecting and guiding the video projected from the video display device, and a diffraction film for diffracting the video guided by the prism and guiding a hologram image to the pupil.

The invention according to claim 16 is a program for a computer that detects a character string contained in video data acquired from an imaging unit that images an object and translates it into a specific language by referring to a language dictionary corresponding to the character type of the character string and a language type that uses that character type, the program causing the computer to realize:
a word extraction function for extracting words from the character string;
a storage function for storing word data for each language type to be translated;
and a language type specifying function that refers to the words extracted by the word extraction function and the word data for each language type, assigns priorities, and specifies the language type of the character string.

The invention according to claim 17 is the program according to claim 16,
further realizing a character type extraction function for extracting the character type of the character string contained in the video data,
wherein the storage function further stores character type data and orthographic rule data for each language type to be translated,
and the language type specifying function further refers to the character type extracted by the character type extraction function and the character type data and orthographic rule data for each language type, assigns priorities, and specifies the language type of the character string contained in the video data.

The invention according to claim 18 is the program according to claim 16 or 17,
further realizing a position detection function for receiving a GPS signal and acquiring position information,
wherein the storage function further stores language type data associated with the position information,
and the language type specifying function further refers to the position information and the language type data, assigns priorities, and specifies the language type of the character string contained in the video data.

The invention according to claim 19 is a program for a computer that detects a character string contained in video data acquired from an imaging unit that images an object and translates it into a specific language by referring to a language dictionary corresponding to the character type of the character string and a language type that uses that character type, the program causing the computer to realize:
a position detection function for receiving a GPS signal and acquiring position information;
a storage function for storing language type data associated with the position information;
and a language type specifying function that refers to the position information and the language type data, assigns priorities, and specifies the language type of the character string contained in the video data.

The invention according to claim 20 is the program according to claim 19,
further realizing a word extraction function for extracting words from the character string contained in the video data,
wherein the storage function further stores word data for each language type to be translated,
and the language type specifying function further refers to the words extracted by the word extraction function and the word data for each language type, assigns priorities, and specifies the language type of the character string contained in the video data.

The invention according to claim 21 is the program according to claim 19,
further realizing a character type extraction function for extracting the character type of the character string contained in the video data,
wherein the storage function further stores character type data and orthographic rule data for the plurality of language types that can be translated,
and the language type specifying function further refers to the character type extracted by the character type extraction function and the character type data and orthographic rule data for each language type, assigns priorities, and specifies the language type of the character string contained in the video data.

The invention according to claim 22 is a program for a computer that detects a character string contained in video data acquired from an imaging unit that images an object and translates it into a specific language by referring to a language dictionary corresponding to the language type of the character string, the program causing the computer to realize:
a character type extraction function for extracting the character type of the character string contained in the video data;
a storage function for storing character type data and orthographic rule data for each language type to be translated;
and a language type specifying function that refers to the character type extracted by the character type extraction function and the character type data and orthographic rule data of the plurality of language types, assigns priorities, and specifies the language type of the character string contained in the video data.

The invention according to claim 23 is the program according to any one of claims 16 to 22,
further realizing a translation function for translating the character string contained in the video data into a specific language by referring to the language dictionary of a language type selected based on the priority order of the plurality of language types specified by the language type specifying function.

The invention according to claim 24 is the program according to any one of claims 16 to 23,
further realizing a history storage function for storing a history of the language types specified by the language type specifying function,
wherein the language type specifying function further refers to the history of language types stored by the history storage function and specifies the language type of the character string contained in the video data based on the most recently specified language type.

The invention according to claim 25 is the program according to claim 24,
wherein the storage function further stores, in association with predetermined times, related information relating to the language types to be translated,
the program further realizing a clock function for outputting current time information,
and a display function for displaying on the display unit, based on the current time information from the clock function, the related information corresponding to the most recent language type in the history stored by the history storage function.

  According to the invention described in claim 1, when the translation device performs translation, the word extraction means extracts words from the character string, and the language type of the character string contained in the video data acquired from the imaging means can be specified by referring to the extracted words and the word data stored for each language type in the storage means. In multilingual translation, the language type of the translation object often cannot be specified from the character type alone; in the present invention it can be narrowed down further by comparing the extracted words against the word data. Moreover, because some words share the same character type and spelling across languages, a plurality of language types may apply to a given translation object. In this respect, since the control means assigns priorities to the applicable language types and then specifies the language type, the language type can be specified with high accuracy.
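
As a rough illustration of this word-based prioritization, the candidate language types can be ranked by how many of the extracted words appear in each language's word data. This is a sketch under assumptions: the scoring scheme and names are not the claimed implementation.

```python
# Sketch: rank candidate language types by how many extracted words are found
# in each language's stored word data (assumed scoring scheme, for illustration).
def prioritize_by_words(extracted_words, words_by_language):
    scores = {}
    for lang, vocab in words_by_language.items():
        vocab_lower = {w.lower() for w in vocab}
        scores[lang] = sum(1 for w in extracted_words if w.lower() in vocab_lower)
    # Higher score first; languages with no matching words are dropped.
    return [lang for lang, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0]

# Example: both Italian and Spanish word data may match equally, so both remain,
# in which case the other specifying elements decide the final order.
```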

  According to the invention described in claim 2, the translation device according to claim 1 further comprises a character type extraction means for extracting the character type of the character string contained in the video data, and the storage means further stores character type data and orthographic rule data for each language type to be translated. The words, the character type, and the orthographic rules can therefore all be used in the control means' determination of the language type, so more accurate multilingual translation can be performed. In addition, when a plurality of language types remain applicable, they are prioritized and then specified; that is, the language types are ranked according to how well they satisfy the above three criteria, so the language type can be specified with still higher accuracy.

According to the invention described in claim 3, the translation device according to claim 1 or 2 further comprises a position detection means for receiving a GPS signal and acquiring the position information of the translation device, and the storage means further stores language type data associated with the position information. Words, character types, orthographic rules, and position information can therefore all be used in the determination of the language type, so more accurate multilingual translation can be performed. In addition, when a plurality of language types remain applicable, they are prioritized and then specified; that is, the language types are ranked according to how well they satisfy the above four criteria, so multilingual translation can be performed with still higher accuracy.

According to the invention described in claim 4, when the translation device performs translation, the position of the translation device is identified by the position detection means, and the language type of the character string contained in the video data acquired from the imaging means can be specified by referring to the language type data associated with that position information.
There are also cases where the character type is the same but the language type differs. In the present invention, the language type can be specified by comparing the position information with the language type data associated with it.

  According to the invention described in claim 5, the translation device according to claim 4 further comprises a word extraction means for extracting words from the character string contained in the video data, and the storage means further stores word data for each language type to be translated. The words and the position information can therefore be used in the control means' determination of the language type, so multilingual translation can be performed with higher accuracy. In addition, when a plurality of language types remain applicable, they are prioritized and then specified; that is, the language types are ranked according to how well they satisfy the above two criteria, so the language type can be specified with still higher accuracy.

  According to the invention described in claim 6, the translation device according to claim 4 further comprises a character type extraction means for extracting the character type of the character string contained in the video data, and the storage means further stores character type data and orthographic rule data for each language type to be translated. The position information, the character type, and the orthographic rules can therefore be used in the control means' determination of the language type, so more accurate multilingual translation can be performed. In addition, when a plurality of language types remain applicable, they are prioritized and then specified; that is, the language types are ranked according to how well they satisfy the above three criteria, so the language type can be specified with still higher accuracy.

According to the invention described in claim 7, when the translation device performs translation, the character type extraction means extracts the character type of the character string, and the language type of the character string contained in the video data acquired from the imaging means can be specified by referring to the extracted character type and the character type data and orthographic rule data stored for each language type in the storage means.
There are also cases where the character type is the same but the language type differs; in the present invention the language type can be narrowed down by comparing the character type and the orthographic rule data. Furthermore, a plurality of language types may apply to a given translation object. In this respect, since the control means assigns priorities to the applicable language types and then specifies the language type, the language type can be specified with high accuracy.

According to the invention described in claim 8, in the translation device according to any one of claims 1 to 7, when a character string is translated, the translation means can translate it by referring to the language dictionary of a language type selected based on the priority order of the language types specified by the control means.
Depending on the language, there are words with the same character type and the same spelling, so a plurality of language types may apply to a given translation object. In this respect, since the control means assigns priorities to the language types and the translation means translates by referring to the language dictionary according to this priority order, highly accurate multilingual translation can be performed.

  According to the invention described in claim 9, in the translation device according to any one of claims 1 to 8, voice data corresponding to the translated text produced by the translation means is stored in the language dictionary, and the voice output means generates and outputs voice from this voice data, so the translation of the character string can also be obtained aurally.

  According to the invention described in claim 10, in the translation device according to any one of claims 1 to 9, the user can designate a language type by the designation means, and the translation means translates the character string by referring to the language dictionary of the designated language type.

  According to the invention described in claim 11, the translation device according to any one of claims 1 to 10 comprises the history storage means, and by referring to the history when the control means specifies the language type, the language type can be specified and translated with higher accuracy. For example, when traveling, the language used at the travel destination is often translated repeatedly, so the language type translated last time is very likely to match the language type to be translated this time. For this reason, using the history of specified language types as a criterion makes it possible to specify the language type, and to translate, with higher accuracy.

  According to the invention described in claim 12, the translation device according to claim 11 further comprises a clock means that outputs current time information, the storage means stores related information on the language types that can be translated in association with predetermined times, and the related information corresponding to the language type history stored in the history storage means can be displayed based on the current time information. For example, suppose the most recent language type in the history is Italian and the predetermined time is noon. Since noon is lunchtime, the related information associated with noon is restaurant information. At noon, the device can refer to the history storage means, find that the history indicates Italian, and display information on Italian restaurants on the display means.
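
A minimal sketch of this behaviour follows; the data layout, times, and function names are assumptions used only for illustration, not the claimed implementation.

```python
import datetime

# Related information keyed by (language type, hour of day) -- illustrative data only.
RELATED_INFO = {
    ("Italian", 12): "Nearby Italian restaurants",
    ("French", 12): "Nearby French restaurants",
}

def related_info(history, now=None):
    """Return related information for the most recent language type in the history."""
    now = now or datetime.datetime.now()
    if not history:
        return None
    latest_language = history[-1]  # most recently specified language type
    return RELATED_INFO.get((latest_language, now.hour))

# At noon, with Italian as the latest entry in the specification history:
print(related_info(["English", "Italian"],
                   datetime.datetime(2005, 4, 22, 12, 0)))  # Nearby Italian restaurants
```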

  According to the invention described in claim 13, in the translation device according to any one of claims 1 to 12, the display means is a head mounted display, which has the advantage of excellent portability.

  According to the fourteenth aspect of the present invention, in the translation device according to the thirteenth aspect, the translated video can be superimposed and displayed in the normal field of view. For this reason, it is possible to confirm the translation while grasping the translation object in the field of view.

  According to the fifteenth aspect of the present invention, in the translation device according to the fourteenth aspect, the translated video can be displayed superimposed on the normal field of view. For this reason, it is possible to confirm the translation while grasping the translation object in the field of view.

  According to the invention described in claim 16, when the computer performs translation, words are extracted from the character string by the word extraction function, and the language type of the character string contained in the video data acquired from the imaging means can be specified by referring to the extracted words and the word data for the plurality of language types. In multilingual translation, the language type of the translation object often cannot be specified from the character type alone; in the present invention it can be narrowed down further by comparing the extracted words against the word data. Moreover, because some words share the same character type and spelling across languages, a plurality of language types may apply to a given translation object. In this respect, since the language type specifying function assigns priorities to the applicable language types, the language type can be specified with high accuracy.

  According to the invention described in claim 17, the program according to claim 16 further realizes a character type extraction function for extracting the character type of the character string contained in the video data, and the storage function further stores character type data and orthographic rule data for the plurality of language types that can be translated. The words, the character type, and the orthographic rules can therefore be used in the language type specifying function's determination of the language type, so the language type can be specified with higher accuracy. In addition, when a plurality of language types remain applicable, they are prioritized and then specified; that is, the language types are ranked according to how well they satisfy the above three criteria, so the language type can be specified with still higher accuracy.

According to the invention described in claim 18, the program according to claim 16 or 17 further realizes a position detection function for receiving a GPS signal and acquiring position information, and the storage function further stores language type data associated with the position information. Words, character types, orthographic rules, and position information can therefore be used in the determination of the language type, and the language type can be specified with higher accuracy.

According to the invention described in claim 19, when the computer performs translation, its position is detected by the position detection function, and the language type of the character string contained in the video data acquired from the imaging means can be specified by referring to the language type data associated with that position information.
In multilingual translation, the language type of the translation object often cannot be specified from the character type alone; in the present invention, the language type can be specified by comparing the position information with the language type data associated with it.

  According to the invention described in claim 20, in the program according to claim 19, words are extracted from the character string contained in the video data by the word extraction function, and the storage function further stores word data for each language type to be translated, so the words and the position information can be used in the language type specifying function and the language type can be specified with higher accuracy.

  According to the invention described in claim 21, in the program according to claim 19, the character type of the character string contained in the video data is extracted by the character type extraction function, and the storage function further stores character type data and orthographic rule data for each language type to be translated, so the character type, the orthographic rules, and the position information can be used in the language type specifying function and the language type can be specified with higher accuracy.

  According to the invention described in claim 22, when the computer performs translation, the character type of the character string is extracted by the character type extraction function, and the language type of the character string contained in the video data acquired from the imaging means can be specified by the language type specifying function by referring to the extracted character type and the character type data and orthographic rule data stored for each language type by the storage function. In multilingual translation, the language type of the translation object often cannot be specified from the character type alone; in the present invention, the language type can be specified by comparing the character type and the orthographic rule data.

According to the invention described in claim 23, in the program according to any one of claims 16 to 22, a character string can be translated by referring to the language dictionary of a language type selected based on the priority order of the language types specified by the language type specifying function.
Depending on the language, there are words with the same character type and the same spelling, so a plurality of language types may apply to a given translation object. In this respect, since the language type specifying function assigns priorities to the language types and the translation function translates by referring to the language dictionary according to this priority order, highly accurate multilingual translation can be performed.

  According to the invention described in claim 24, in the program according to any one of claims 16 to 23, by referring to the history stored by the history storage function when the language type specifying function specifies the language type, the language type can be specified and translated with higher accuracy. That is, when traveling, the language used at the travel destination is often translated repeatedly, so the language type translated last time is very likely to match the language type of the current translation. For this reason, using the history of each language type as a criterion makes it possible to specify the language type with higher accuracy and perform multilingual translation.

According to the invention described in claim 25, in the program according to claim 24, related information on each language type to be translated is stored by the storage function in association with predetermined times, and the related information corresponding to the language type history stored by the history storage function can be displayed based on the current time information output by the clock function.
For example, suppose the most recent language type in the history is Italian and the predetermined time is noon. Since noon is lunchtime, the related information associated with noon is restaurant information. At noon, the device can refer to the history storage function, find that the history indicates Italian, and display information on Italian restaurants.

Next, an embodiment in which the present invention is applied to a head mounted display will be described with reference to the drawings.
The head mounted display in the present embodiment has a multilingual translation function that captures character strings and sentences in the line of sight as electronic video, performs character recognition by image processing, specifies the language type from various language type specifying elements, translates the text into a predetermined language, and displays the translated text on a display device provided in front of the eyes. It also stores the history of the language types specified in the language type specifying process and, at predetermined times, displays on the display device information related to the most recently specified language type, thereby providing the user with accompanying information related to the language type of the translation object.

FIG. 1 shows the external configuration of the head mounted display 1. In the following description, the direction opposite to the eyepiece direction is referred to as the front.
The head mounted display 1 includes a frame 4 provided with a nose pad 2 and temples 3R and 3L, and lens portions 5R and 5L, as in ordinary glasses. Above the lens portion 5R of the frame 4, a CCD camera 7 that captures video in the direction of the user's line of sight and a display device 6 that displays video from a video signal transmitted from the control box 9 are installed as a unit. Cables are supported at the intermediate portions of the temples 3R and 3L, and earphones 8R and 8L are provided so as to be attachable to the user's ears. The display device 6, the CCD camera 7, and the earphones 8R and 8L are each connected to the control box 9 via cables 14.

  The control box 9 is provided with a switch 10, a cross key 11, a power switch 12, and a battery 39 as a power source. The switch 10 activates the “translation mode” and confirms inputs made by user operation in the language type specifying process described later. The cross key 11 is used for various input operations according to the application while the “translation mode” is active.

  FIG. 2 shows a side sectional view of the display device 6 from the left. The display device 6 includes a video generation unit 15 and an eyepiece optical system 16. Inside a casing 17 serving as the main body of the display device 6 are provided a transmissive liquid crystal display 18 that generates video from the video signal transmitted from the control box 9, a group of LEDs (Light Emitting Diodes) 19 serving as a backlight light source, and an illumination optical system 20 for guiding the light emitted from the LED group 19 onto the entire surface of the liquid crystal display 18. The eyepiece optical system 16 further includes a prism 21 and a hologram element 22. The hologram element 22 is a sheet-like member of substantially the same size as the lower end joint surface between the lens portion 5 and the eyepiece optical system 16, and is provided sandwiched between the prism 21 and the lens portion 5 at that joint surface.

The upper end of the prism 21 has a thick wedge shape toward the front, and is formed so that the image light beam emitted from the liquid crystal display 18 is easily captured. The image light beam captured at the upper end is guided to the hologram element 22 provided at the lower end while being totally reflected inside the prism 21. The hologram element 22 diffracts the light and forms a virtual image on the eyeball E.
The eyepiece optical system 16 is made of a single member having the same refractive index as the lens portion 5R, and is joined so that the surface is continuous without a step. The user can observe the external field of view through the lens portion 5 and the eyepiece optical system 16, and can observe video superimposed on the external field of view through the hologram element 22.

  FIG. 3 shows the functional configuration of the head mounted display 1. The head mounted display 1 includes an imaging unit 25, an image processing unit 26, a control unit 27, a storage unit 28, a translation engine 29, a communication unit 30, a display unit 31, a voice output unit 32, and an operation input unit 33.

  The imaging unit 25 is a camera using a photoelectric conversion element such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) image sensor; in this embodiment, the CCD camera 7 is used. The analog video signal acquired by the imaging unit 25 is subjected to correlated double sampling by a CDS (Correlated Double Sampling) circuit (not shown) to reduce and remove noise, gain adjustment, and conversion into a digital video signal by an A/D (Analog/Digital) conversion circuit, and is then transmitted to the image processing unit 26.

The image processing unit 26 includes a DSP (Digital Signal Processor) and a graphic memory. It performs character string extraction processing that extracts a character string from the video signal transmitted from the imaging unit 25 by pattern matching, and stores the data extracted by this processing (hereinafter simply “extracted character data”) in the storage unit 28.
The character string extraction process is divided into three stages: a correction process, a feature extraction process, and a character recognition process.

  The correction processing is performed as pre-processing and includes noise removal, smoothing, sharpening, two-dimensional filtering, binarization that converts the video into a binary video so that the feature extraction described later is easier, thinning that extracts the skeletal lines of the figure or character to be recognized, and normalization relating to enlargement, reduction, translation, rotation, and density conversion so that the pattern matching performed as image matching at the final stage is more accurate.
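
The binarization step alone, for example, might be sketched as follows; this is a plain NumPy threshold under simplifying assumptions, not the device's actual correction pipeline.

```python
import numpy as np

def binarize(gray_image: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Convert a grayscale image (0-255) to a binary image of 0s and 1s.
    A fixed threshold is assumed here; the device may choose it adaptively."""
    return (gray_image >= threshold).astype(np.uint8)

# Tiny example: a dark character stroke on a light background.
patch = np.array([[250, 250, 30, 250],
                  [250, 30, 30, 250],
                  [250, 250, 30, 250]], dtype=np.uint8)
print(binarize(patch))
```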

  In the feature extraction process, an edge detection process is applied. In this processing, density conversion points are extracted from density values of colors included in the video signal, a discontinuous portion of the video is extracted as an edge, and the video is divided into several continuous areas using the edge as a boundary line. This is performed by Laplacian (secondary differentiation) processing, Hough transform processing, or the like. When edge detection is performed from density images, there are many cases where there are interruptions in some places. In this case, processing for connecting point sequences that are broken by an expansion method, a contraction method, an extended trace method, or the like is performed.
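
As one concrete instance of the Laplacian (second-derivative) processing mentioned above, a small sketch in plain NumPy is shown below; it is illustrative only and not the device's implementation.

```python
import numpy as np

# 3x3 Laplacian kernel (second-derivative operator) used for edge detection.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.int32)

def laplacian_edges(gray: np.ndarray, thresh: int = 60) -> np.ndarray:
    """Mark pixels whose Laplacian response exceeds a threshold as edge pixels."""
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    g = gray.astype(np.int32)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            response = int((LAPLACIAN * g[y - 1:y + 2, x - 1:x + 2]).sum())
            out[y, x] = 1 if abs(response) > thresh else 0
    return out
```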

  The character recognition processing uses a pattern matching method. In the pattern matching method, pattern data for various characters stored in advance in the storage unit 28 (described later) is electronically superimposed on the video data that has been edge-enhanced by the feature extraction processing described above, and the most similar character or other pattern is taken as the recognition result. Specifically, the determination is performed by examining the correlation between the two data at the pixel level (the matching ratio of pixel values).
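
The "matching ratio of pixel values" can be sketched as a simple comparison between a stored character pattern and an equally sized image region. The binary patterns and function names below are assumptions for illustration, not the actual matching engine.

```python
import numpy as np

def match_ratio(pattern: np.ndarray, region: np.ndarray) -> float:
    """Fraction of pixels at which the stored pattern and the image region agree."""
    assert pattern.shape == region.shape
    return float((pattern == region).mean())

def recognize(region: np.ndarray, patterns: dict) -> str:
    """Return the character whose stored pattern is most similar to the region."""
    return max(patterns, key=lambda ch: match_ratio(patterns[ch], region))
```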

  The storage unit 28 is composed of a nonvolatile semiconductor memory and temporarily or permanently stores data used in the various processes of the head mounted display 1. In this embodiment, an EEPROM (Electrically Erasable and Programmable ROM) is used to reduce size and weight. The storage unit 28 stores the extracted character data transmitted from the image processing unit 26 and returns it as appropriate in response to data request instruction signals from the control unit 27. It also stores in advance the map information corresponding to the coordinate data transmitted from the communication unit 30, data on the local languages corresponding to the map information, and the related data used in the “related information display mode” described later, as well as the character type data, word data, and orthographic rule data used in the language type specifying process.

  The control unit 27 includes a CPU (Central Processing Unit) 35, a ROM (Read Only Memory) 36, and a RAM (Random Access Memory) 37. The CPU 35 loads a program stored in advance in the ROM 36 into the RAM 37 and performs overall control of the head mounted display 1 in accordance with the program. When the “translation mode” is activated, the control unit requests the extracted character data stored in the storage unit 28, specifies the language type of the extracted character data by the language type specifying process described later, and transmits the specified language type data and the extracted character data to the storage unit 28. The control unit 27 also has a clock circuit (not shown) that synchronizes the various signal processes, measures and outputs the current time as appropriate, and supplies data such as the recording date when various data are recorded. In addition, a counter 38 is provided for storing in the storage unit 28 the history of the language types specified in the language type specifying process, and the control unit executes the “related information display mode”, described later, which displays on the display device 6 related information stored in advance in the storage unit 28 by referring to this history data, the current time data, and the position information obtained by GPS (Global Positioning System).

  The translation engine 29 translates the extracted character data, according to the language type data, into a specific language set in advance by the user or the like. It may be implemented in hardware using an LSI (Large Scale Integration) or the like, or the control unit 27 may provide the translation function in software. Specifically, the translation engine performs morphological analysis that divides the extracted character data into words while referring to a word dictionary (not shown), syntactic analysis that obtains the syntactic structure from the connection relationships between morphemes, and a translation process that, based on the language type data, selects the dictionary for that language type from language dictionaries stored in advance for various languages (each also holding voice data for the language) and translates the text into a pre-designated language (for example, Japanese). The translated text data and voice data are transmitted to the control unit 27.
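
A greatly simplified, word-by-word sketch of this pipeline follows. The dictionaries and names are illustrative assumptions; real morphological and syntactic analysis is far more involved than a dictionary lookup.

```python
# Per-language dictionaries mapping source words to Japanese -- illustrative only.
DICTIONARIES = {
    "Italian": {"memoria": "記憶", "buona": "良い"},
    "Spanish": {"memoria": "記憶", "casa": "家"},
}

def translate(extracted_words, language_type):
    """Translate word by word using the dictionary selected by the language type.
    Words not found in the dictionary are passed through untranslated."""
    dictionary = DICTIONARIES.get(language_type, {})
    return [dictionary.get(w.lower(), w) for w in extracted_words]

print(translate(["buona", "memoria"], "Italian"))  # ['良い', '記憶']
```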

  The communication unit 30 receives coordinate data transmitted from the satellites S by GPS, identifies the user's current location from the map information stored in advance in the storage unit 28, and transmits the identified current location data to the control unit 27.
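
A sketch of this position-to-language lookup is shown below; the bounding boxes and table contents are invented for illustration, whereas the real device resolves the location against the stored map information.

```python
# (min_lat, max_lat, min_lon, max_lon) -> languages used in that region.
# Bounding boxes and contents are rough illustrative values, not map data from the patent.
LANGUAGES_BY_REGION = [
    ((36.0, 47.5,  6.5, 18.8), ["Italian"]),
    ((36.0, 43.8, -9.5,  3.3), ["Spanish"]),
    ((41.0, 51.5, -5.5,  9.6), ["French"]),
]

def languages_at(lat: float, lon: float) -> list[str]:
    """Return the language types associated with the position, if any."""
    for (lat_min, lat_max, lon_min, lon_max), langs in LANGUAGES_BY_REGION:
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            return langs
    return []

print(languages_at(41.9, 12.5))  # ['Italian']  (a position in Rome falls in the first box)
```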

  The display unit 31 corresponds to the display device 6 and displays a translated sentence translated into a predetermined language by the translation engine 29. The voice output unit 32 outputs the voice translation data transmitted from the translation engine to the earphones 8R and 8L.

  Next, the language type specifying process performed by the control unit 27 will be described. The language type specifying process specifies the language type of the extracted character data from the GPS data provided by the communication unit 30 and from the words, character types, and orthographic rules of the various languages. The history of the language types specified by this process is stored in the storage unit 28, and this history is referred to in the next language type specifying process to raise the accuracy of the specification, because in general the language type specified last time is often the same as the language type of the next translation object.

  In the language type specifying process of the present embodiment, the language type is specified by referring to four kinds of language type specifying elements: GPS data, words, character types, and the orthographic rules of the various languages. The language type can also be specified using only one of these elements; however, in some cases the extracted character data can be narrowed down to a single language type, while in others a plurality of language types remain. For example, the word “memoria” means “memory”, but it exists in both Italian and Spanish, which use the same character type (Latin script). Similarly, “intention” exists in English and French, and “marine” exists in German and French. When the language type cannot be completely specified from the character type or the words alone, the user's position is determined from the GPS data, and the language type is specified by giving the language of the corresponding position (region, country, etc.) the highest priority. Furthermore, orthographic rules peculiar to particular language types are detected in the extracted character data and used to specify (prioritize) the language type, and the language type specification history is referred to as additional reference data. By referring to a plurality of language type specifying elements, the language type can be specified with higher accuracy.
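
A compact sketch of how the four specifying elements and the specification history might be combined into a priority order is given below. The weighting is an assumption made for illustration; the embodiment does not fix a particular scoring formula.

```python
def prioritize_language_types(word_hits, script_hits, rule_hits, gps_langs, history):
    """Combine the language type specifying elements into a priority-ordered list.

    word_hits, script_hits, rule_hits: sets of languages consistent with the
    extracted words, the character type, and the detected orthographic rules.
    gps_langs: languages used at the position obtained via GPS.
    history: previously specified language types, most recent last.
    The weights below are illustrative, not values from the patent.
    """
    candidates = word_hits | script_hits | rule_hits | set(gps_langs) | set(history)

    def score(lang):
        s = 0
        s += 4 if lang in gps_langs else 0   # position is given the highest weight
        s += 3 if lang in word_hits else 0
        s += 2 if lang in rule_hits else 0
        s += 1 if lang in script_hits else 0
        s += 1 if history and history[-1] == lang else 0  # last specified language
        return s

    return sorted(candidates, key=score, reverse=True)
```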

  Here, the “orthographic rules” will be described. An orthographic rule is a rule on the notation of a character string that applies when a language is shown on a display or the like or is printed. Each language type has its own orthographic rules, but typical ones are: [1] accent symbols, [2] ligatures, [3] character shape changes depending on the position where a character appears, [4] concatenated (conjunct) characters, [5] character shape changes depending on the writing direction, [6] the character arrangement direction, [7] character shape changes (kerning) for specific character sequences, [8] justification (how line heads and line ends are aligned), and [9] the relationship between hyphenation and spelling changes.

[1] Accent symbols: accent symbols are added to the characters used in the notation, as in languages written with Latin script, Greek, and Vietnamese. FIG. 4A shows examples of accent symbols in Italian, and FIG. 4B shows examples in Greek. When such an accent symbol is detected in the character string, the priority order of language types for the extracted character data can be determined by judging it comprehensively together with the character type, GPS data, and other elements.

[2] Ligatures
For aesthetic reasons of notation, combined character forms are sometimes used. FIG. 5A shows Latin ligatures: when “f” and “f”, “f” and “i”, or “f” and “l” occur in sequence, they are written with parts of the characters joined. FIG. 5B shows the German eszett; “sz” or “ss” may be written as “ß”.

[3] Character shape change depending on position: depending on the position of a character within a word and on the characters before and after it, the character shape may change from its basic form. This is especially true of languages written in Arabic script. As shown in FIG. 6, in such languages the character shape changes with its position in the word, taking an independent form, initial form, medial form, or final form. Even in languages without such rules, the character shape may change with the surrounding characters depending on the font, as in English cursive fonts.

[4] Concatenated characters: most Indian scripts have a special orthographic rule for concatenated (conjunct) characters. This is a complicated rule similar to the ligatures and position-dependent shape changes described above. FIG. 7 shows an example in Devanagari characters.

[5] Character shape change depending on writing direction: some languages, such as Japanese, Chinese, and Korean, have two writing directions, vertical and horizontal, and some characters change shape with the writing direction. FIG. 8 shows a Japanese example: when the word “start” is written horizontally, the long vowel mark is written as “−”, but when it is written vertically, it is written as “|”.

[6] Character arrangement direction: in all languages, characters are arranged along a specific straight-line direction. In the English word “direction” shown in FIG. 9A, the characters are arranged in a straight line from left to right. In the Japanese example of FIG. 9, the word “sentence” is arranged in a straight line from top to bottom. Although not shown, in Arabic the characters are arranged in a straight line from right to left.

[7] Character shape change (kerning) for specific character sequences
When two specific characters are placed next to each other, there is a so-called kerning rule that finely adjusts the character spacing according to their shapes. FIG. 10 shows the relationship between the Roman letters "A" and "t" and between "o" and "e". In FIG. 10A, when "t" is placed next to "A", the character spacing H is set narrower than for other character pairs. Similarly, in FIG. 10B, when "e" is placed next to "o", the spacing H is narrowed. The rule adjusts the character spacing based on the shapes and sizes of the characters so as to reduce visual discomfort.

[8] Justification (how line heads and line ends are aligned)
When words are arranged in the writing direction to form lines, there are justification rules that, depending on the language, align the beginnings and ends of the lines when the text is displayed or printed.

[9] Relationship between hyphenation and spelling
When a line break occurs in the middle of a word, the positions at which the break is permitted may be limited. Depending on the language, there are also rules, related to justification, under which the spelling itself changes. For example, in English, when a word is broken at a line end, a hyphen is inserted. In German, the spelling may change as well. FIG. 11 shows an example of a hyphenation rule in German (under the traditional orthography): when the word "wecken" is broken at a line end, a hyphen is inserted and the "ck" is written as "k-k", so that "wecken" becomes "wek-ken", with a "k" appearing again at the head of the next line as shown in FIG. 11.
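The following is a minimal sketch, under the assumption of the traditional German "ck" hyphenation just described, of how a line-broken token could be normalized back to its dictionary form before lookup. The function name and the scope of the rule are illustrative only.

```python
def undo_german_ck_hyphenation(first_part, second_part):
    """Rejoin a word broken across lines, undoing the traditional German
    rule that writes 'ck' as 'k-k' at a line break (e.g. 'wek-' + 'ken').

    This is only a sketch covering the single rule discussed above; real
    orthography-rule data would cover many more cases, and genuine 'kk'
    sequences would need additional dictionary checks.
    """
    if first_part.endswith("k") and second_part.startswith("k"):
        # 'wek' + 'ken' -> 'wecken'
        return first_part[:-1] + "ck" + second_part[1:]
    # Default: plain hyphenation, just concatenate the two parts.
    return first_part + second_part

print(undo_german_ck_hyphenation("wek", "ken"))   # wecken
print(undo_german_ck_hyphenation("Zim", "mer"))   # Zimmer
```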

  The data of the above-mentioned orthography rules is stored in the storage unit 28, and the priority order of the candidate languages for the extracted character data transmitted from the image processing unit 26 is determined comprehensively together with other data such as the GPS data.

The operation of the language type identification processing in the "translation mode" of the head mounted display 1 will be described below with reference to the flowchart of FIG. 12. The following processing is controlled by the CPU 35 of the control unit 27.
In response to a signal transmitted when the user operates the switch 10 of the operation input unit 33, the CPU 35 reads a program from the ROM 36, loads it into the RAM 37, and activates the "translation mode" (step S100).

  When the activation is completed, an operation guidance message “Please press the switch 10 while viewing the character string or sentence to be translated” is displayed on the display device 6 (step S101).

  When the user presses the switch 10 while holding the translation object (a character string or a sentence) in the field of view, the CPU 35 receives the transmitted imaging instruction signal and acquires an image of the translation object from the imaging unit 25 (step S102).

  The acquired video is transmitted to the image processing unit 26, image processing (pattern matching, etc.) is performed, character recognition is performed, and extracted character data is generated (step S103). The extracted character data is transmitted to the storage unit 28 and temporarily stored.

  When a signal indicating completion of image processing of the extracted character data transmitted from the image processing unit 26 is received, an extracted character data request signal is transmitted to the storage unit 28, and reading of the extracted character data is started (step S104).

  Here, prior to the individual language type identification processes based on character type data, GPS data, and the like, the language type identification history, that is, the result of previous identifications, is referred to. The storage unit 28 is accessed, and if language type identification history data exists (step S105: YES), the language types are prioritized with reference to that history (step S106).

  Next, the character type of the extracted character data is compared with the character type selection data to prioritize the language types. Specifically, the corresponding language types are prioritized (narrowed down) from among the plurality of language types stored in the character type selection data (step S107). For example, if the character type is determined to be Latin script, the language types associated with that character type in the character type selection data (for example, English, French, Italian, etc.) are given priority over language types associated with other character type data (for example, languages associated with Arabic script data).
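A minimal sketch of this narrowing step is shown below; the character type selection table and the script-detection heuristic are hypothetical stand-ins for the data held in the storage unit 28.

```python
import unicodedata

# Hypothetical character type selection data: script name -> candidate languages.
CHAR_TYPE_SELECTION = {
    "LATIN": ["en", "fr", "it", "es", "de"],
    "ARABIC": ["ar", "fa", "ur"],
    "CYRILLIC": ["ru", "bg", "uk"],
}

def detect_script(text):
    """Very rough script detection based on Unicode character names."""
    for ch in text:
        if ch.isalpha():
            name = unicodedata.name(ch, "")
            for script in CHAR_TYPE_SELECTION:
                if name.startswith(script):
                    return script
    return None

def candidates_by_char_type(text):
    """Return the language types associated with the detected character type."""
    script = detect_script(text)
    return CHAR_TYPE_SELECTION.get(script, [])

print(candidates_by_char_type("memoria"))  # ['en', 'fr', 'it', 'es', 'de']
```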

  Next, the words in the extracted character data are compared with the word selection data to prioritize the language types. Specifically, the corresponding language types are prioritized (narrowed down) from among the plurality of language types stored in the word selection data (step S108). For example, if the translation object is "TO BE TO BE TEN MADE TO BE", English words such as "TO", "BE", "TEN", and "MADE" are recognized, and it becomes possible to rule out the possibility that the text is, for example, Italian or French, which use the same Latin script.

  In addition, even if the extracted character data is "TO BE TO BE TEN MADE TO BE", it may actually be Japanese written in Roman letters ("tobe tobe ten made tobe"). Therefore, the coordinate data, the map data, and the regional language data associated with the map data, transmitted from the communication unit 30, are referred to, and among the languages prioritized in step S108, the language corresponding to the regional language data is ranked higher (step S109). For example, if the user is located in New York (United States), first priority is assigned to English and second priority to Japanese. Further, when the extracted character data is a string such as "memoria", which cannot be determined to be either Italian or Spanish, the coordinate data and the corresponding regional language data make it possible to give priority to one of Italian and Spanish.
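A compact sketch of steps S108 and S109 follows. The word selection data, the region-to-language table, and the reordering strategy are hypothetical; the point is only how regional language data can break a tie left by the word data.

```python
# Hypothetical word selection data: word -> languages in which it occurs.
WORD_SELECTION = {
    "memoria": ["it", "es"],
    "to": ["en"], "be": ["en"], "ten": ["en"], "made": ["en"],
}

# Hypothetical regional language data keyed by a country-level position lookup.
REGIONAL_LANGUAGES = {"IT": ["it"], "ES": ["es"], "US": ["en"], "JP": ["ja"]}

def prioritize_by_words(text):
    """Step S108: collect candidate languages from the recognized words."""
    candidates = []
    for word in text.lower().split():
        for lang in WORD_SELECTION.get(word, []):
            if lang not in candidates:
                candidates.append(lang)
    return candidates

def reorder_by_region(candidates, country_code):
    """Step S109: move languages spoken at the current position to the front."""
    regional = REGIONAL_LANGUAGES.get(country_code, [])
    return sorted(candidates, key=lambda lang: lang not in regional)

cands = prioritize_by_words("memoria")   # ['it', 'es'] - still ambiguous
print(reorder_by_region(cands, "ES"))    # ['es', 'it'] - GPS breaks the tie
```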

  Further, the orthography rules are determined from the extracted character data, and the languages corresponding to the detected orthography rules are given further priority within the language type data prioritized in step S109; the resulting language type data is then transmitted to the storage unit 28 and temporarily stored (step S110). In addition, the specified language type is stored in the storage unit 28 as part of the language type identification history (step S111).

  When the specification of the language type is completed, a translation instruction signal is transmitted to the translation engine 29 (step S112). The translation engine 29 that has received the translation instruction signal transmits a request signal for extracted character data and language type data, and the CPU 35 reads these data from the storage unit 28 and transmits them to the translation engine 29.

  The translation engine 29 performs a process of translating the extracted character data into a predetermined language (for example, Japanese) while referring to the translation dictionary based on the prioritized language type data (step S112).

  The extracted character data translated into a predetermined language in step S112 is output to the voice output unit 32 and the display unit 31, and the translation is displayed on the display device 6 and voice output is performed from the earphones 8R and 8L (step S113).

  Next, the "related information display mode" will be described. In the "related information display mode", information related to the language type that is translated frequently is provided by referring to the language type identification history counted by the counter 38 and stored in the storage unit 28, the position information acquired by GPS, the map information stored in the storage unit 28 together with its associated related data, and the current time data transmitted from the clock circuit. Specifically, the most frequent language type among the five most recent entries of the identification history is determined, and an item related to a country or region that uses this language type is extracted from a table classified by categories associated with the current time. The related data associated with the map information is then searched for the extracted item, and its position is displayed on the display device 6. This processing is performed in the background of the "translation mode" processing.

FIG. 13 shows the processing operation in the “related information display mode”.
Based on the current time data transmitted from the clock circuit, it is determined whether or not the current time coincides with a time at which related information is to be displayed (assumed here to be 12:00 noon) (step S201).

  If the current time coincides with the time at which the related information is displayed (step S201: YES), the category to be searched is determined by referring to a table in which times and categories are associated in advance (step S202). Since 12:00 noon is lunchtime, the corresponding category here is "lunch".

  The language type identification history stored in the storage unit 28 is read, and the most frequent language type among the five most recent entries is detected (step S203). Suppose that the language type detected by this processing is Italian.

  From the category “lunch” determined in step S202, an item “Italian cuisine” related to the language type “Italian” detected in step S203 is searched (step S204).

  Based on the GPS signal and the map information, the current position data and the map information of the area around the current position are acquired, the related data associated with the map information is searched for entries related to "Italian cuisine", and the places where Italian food is served are displayed on the surrounding map image shown on the display device 6 (step S205).
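The sketch below illustrates steps S201 to S204 under assumed data: the time-to-category table, the identification history, and the related-data records are all invented for the example.

```python
from collections import Counter
from datetime import time

# Hypothetical table associating times of day with categories (step S202).
TIME_CATEGORIES = [(time(11, 30), time(13, 30), "lunch")]

# Hypothetical related data: (category, language) -> items tied to map positions.
RELATED_DATA = {("lunch", "it"): ["Italian cuisine"]}

def category_for(now):
    for start, end, category in TIME_CATEGORIES:
        if start <= now <= end:
            return category
    return None

def most_frequent_language(history, window=5):
    """Step S203: most frequent language type among the last `window` entries."""
    recent = history[-window:]
    return Counter(recent).most_common(1)[0][0] if recent else None

history = ["it", "en", "it", "it", "fr", "it"]
now = time(12, 0)
category = category_for(now)                       # 'lunch'
language = most_frequent_language(history)         # 'it'
print(RELATED_DATA.get((category, language), []))  # ['Italian cuisine'] (step S204)
```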

  As described above, according to the head mounted display 1 to which the present invention is applied, the language type of the translation object can be specified from three kinds of data: word data, GPS data, and orthography rule data. Therefore, unlike conventional devices whose translation function fixes the translation object and the target language in advance, or devices that require the user to set the language type of the translation object beforehand, the language type of the translation object can be specified automatically even if the user has no knowledge of the foreign language, and a translation into the desired language can be provided as display and audio output.

  In addition, when the language type is specified, the device does not settle on a single language type but assigns priorities, and the translations for the several highest-ranked language types are displayed and output as speech, so that the most appropriate meaning can be selected. For example, suppose a native English speaker with no knowledge of Japanese or Chinese visits a Chinatown while traveling in Japan, and a signboard on the street is written with the kanji for "bear" and "cat". Read as Japanese, the characters mean "bear, cat", but in Chinese the compound means "panda". In this case, since the GPS position data indicates Japan, the language type of the extracted character data is set to Japanese; from the word data, however, Chinese, which shares many characters as part of the same kanji cultural sphere, is ranked next. Therefore "bear, cat" can be provided to the user as the first translation and "panda" as the second. Even when "bear, cat" does not fit the surrounding situation, the user may easily understand the content of the signboard from the information "panda". By translating into and displaying several candidate language types at the same time in this way, the device can cope with individual situations such as the environment in which it is used.
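As an illustration of presenting translations for several top-ranked candidates, here is a minimal sketch. The tiny dictionaries and the two-candidate ranking mirror the signboard example above but are otherwise invented.

```python
# Hypothetical per-language dictionaries for the signboard example.
DICTIONARIES = {
    "ja": {"熊猫": "bear, cat"},
    "zh": {"熊猫": "panda"},
}

def translate_top_candidates(text, ranked_languages, top_n=2):
    """Return (language, translation) pairs for the top-ranked candidates."""
    results = []
    for lang in ranked_languages[:top_n]:
        entry = DICTIONARIES.get(lang, {}).get(text)
        if entry is not None:
            results.append((lang, entry))
    return results

# GPS puts the user in Japan, so Japanese ranks first and Chinese second.
for lang, translation in translate_top_candidates("熊猫", ["ja", "zh"]):
    print(f"{lang}: {translation}")
# ja: bear, cat
# zh: panda
```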

  Further, by using the orthography rules to determine the language of the translation object, language type data can be generated solely from rules characteristic of a specific language, and candidate language types can be identified accurately. This has the effect of simplifying automatic translation that covers multiple languages.

  Furthermore, information related to the translated language can be provided by using the history of identified language types, the GPS data, and the current time from the clock circuit, which increases the opportunities to obtain, in a timely manner, various information related to a specific language type. For example, suppose a user who is doing business related to Italy uses the head mounted display 1 to translate Italian documents. At lunchtime, a nearby restaurant is displayed, and by having lunch there the user may pick up ideas for the current business while experiencing Italian food culture. Because contact with the specific language type increases in this way, work and learning can be carried out effectively.

The best mode for carrying out the present invention has been described above, but the present invention is not limited to the examples described.
For example, in the language type specifying process in the control unit 27, the language types are prioritized from four kinds of data: character type data, word data, GPS data, and orthography rule data. However, candidate language type data can be generated satisfactorily even when only one of these is used as the discrimination data for prioritizing language types.

  Further, the character selection database used in the language type specifying process in the control unit 27 and the dictionary data used in the translation process in the translation engine 29 do not necessarily have to be stored in advance in the storage unit 28 or the like. A small communication device may be provided so that the apparatus can connect to a network such as the Internet and download the data as needed from a server or the like on the network. This makes large-scale searches of character selection data and the like possible and reduces the load on the head mounted display 1.

Brief Description of the Drawings
FIG. 1 is a schematic diagram showing the external configuration of the head mounted display 1 in the best mode for carrying out the present invention.
FIG. 2 is a sectional view showing the left-side cross section of the display device 6 shown in FIG. 1.
FIG. 3 is a block diagram showing the functional configuration of the head mounted display 1 shown in FIG. 1.
FIG. 4 shows examples of orthography rules: (A) Italian; (B) Greek.
FIG. 5 shows examples of orthography rules: (A) Latin-script ligatures; (B) German.
FIG. 6 shows an example of an Arabic-script orthography rule.
FIG. 7 shows an example of a Devanagari orthography rule.
FIG. 8 shows an example of a Japanese orthography rule.
FIG. 9 shows examples of orthography rules: (A) English; (B) Japanese.
FIG. 10 (A) and (B) show examples of orthography rules (kerning) for Roman letters.
FIG. 11 shows an example of a German orthography rule.
FIG. 12 is a flowchart showing the language type specifying process of the head mounted display 1.
FIG. 13 is a flowchart showing the processing in the "related information display mode" of the head mounted display 1.
FIG. 14 is a schematic diagram showing the character types used in the world and their distribution.

Explanation of symbols

1 Head mounted display
2 Nose pad
3R, 3L Temples
4 Frame
5R, 5L Lenses
6 Display device
7 CCD camera
8R, 8L Earphones
9 Control box
10 Switch
12 Cross button
13 Power switch
14 Cable
15 Image generation unit
16 Eyepiece optical system
17 Housing
18 LCD
19 LED
20 Illumination optical system
21 Prism
22 Hologram element
25 Imaging unit
26 Image processing unit
27 Control unit
28 Storage unit
29 Translation engine
30 Communication unit
31 Display unit
32 Audio output unit
33 Operation input unit
35 CPU
36 ROM
37 RAM
38 Counter
39 Battery

Claims (25)

  1. In a translation device comprising: an imaging means that images an object; a character string detection means that detects a character string included in video data acquired from the imaging means; a translation means that translates into a specific language by referring to a language dictionary corresponding to the character type of the character string detected by the character string detection means and to a language type using that character type; and a display means that displays the translation produced by the translation means,
    Word extraction means for extracting a word from the character string detected by the character string detection means;
    Storage means for storing word data corresponding to each language type to be translated by the translation means;
    and a control means that refers to the word extracted by the word extraction means and to the word data stored in the storage means, and specifies, with priorities, the language type of the character string included in the video data; a translation device characterized by comprising the above means.
  2. The translation device according to claim 1,
    A character type extracting means for extracting a character type of a character string included in the video data;
    The storage means further stores character type data and orthography rule data for each language type to be translated by the translation means,
    The control means further refers to the character type data extracted by the character type extraction means, the character type data and the orthography rule data stored in the storage means, and sets the priority order to the language type of the character string included in the video data. A translation apparatus characterized by attaching and specifying.
  3. The translation device according to claim 1 or 2,
    Position detecting means for receiving a GPS signal and acquiring position information of the translation device;
    Storage means for storing language type data associated with the position information;
    A translation apparatus comprising: control means for further specifying the language type of a character string included in the video data with priority by further referring to the position information and the language type data.
  4. In a translation apparatus comprising: an imaging means that images an object; a character string detection means that detects a character string included in video data acquired from the imaging means; a translation means that translates into a specific language by referring to a language dictionary corresponding to the character type of the character string detected by the character string detection means and to a language type using that character type; and a display means that displays the translation produced by the translation means,
    Position detecting means for receiving a GPS signal and acquiring position information of the translation device;
    Storage means for storing language type data associated with the position information;
    A translation apparatus characterized in that priority is given to a language type of a character string included in the video data with reference to the position information and the language type data.
  5. The translation device according to claim 4,
    A word extracting means for extracting a word from the character string detected by the character string detecting means;
    The storage means further stores word data for each language type to be translated by the translation means,
    The translation device according to claim 1, wherein the control unit further specifies the language type of the character string included in the input data with reference to the word extracted by the word extraction unit and the word data.
  6. The translation device according to claim 4,
    A character type extracting means for extracting a character type of a character string included in the video data;
    The storage means further stores character type data and orthography rule data for each language type to be translated by the translation means,
    and the control means further refers to the character type data extracted by the character type extraction means and to the character type data and orthography rule data stored in the storage means, and specifies, with priorities, the language type of the character string included in the video data; a translation device characterized by the above.
  7. In a translation apparatus comprising: an imaging means that images an object; a character string detection means that detects a character string included in video data acquired from the imaging means; a translation means that translates into a specific language by referring to a language dictionary corresponding to the character type of the character string detected by the character string detection means and to a language type using that character type; and a display means that displays the translation produced by the translation means,
    Character type extracting means for extracting a character type of a character string included in the video data;
    Storage means for storing character type data and orthography rule data for each language type to be translated by the translation means; and
    Control means for referring to the character type data extracted by the character type extraction means and to the character type data and orthography rule data stored in the storage means, and specifying, with priorities, the language type of the character string included in the video data; a translation apparatus characterized by comprising the above means.
  8. In the translation apparatus as described in any one of Claims 1-7,
    The translation device translates into a specific language by referring to the language dictionary of the language type based on the priority order of the language type specified by the control unit.
  9. In the translation apparatus as described in any one of Claims 1-8,
    The language dictionary further stores voice data corresponding to the translated words translated into a specific language by the translation means,
    A translation apparatus, further comprising voice output means for generating and outputting voice from voice data corresponding to the translated word translated by the translation means.
  10. In the translation apparatus as described in any one of Claims 1-9,
    Further comprising a specifying means for enabling a user to specify a language type for translation from a plurality of language types prioritized by the control means;
    The translation apparatus, wherein the translation unit performs translation by referring only to a language dictionary corresponding to the language type designated by the designation unit.
  11. In the translation apparatus as described in any one of Claims 1-10,
    The translation apparatus further comprises a history storage means for storing a history of the language types to be translated,
    The translation device according to claim 1, wherein the control means further refers to the history of the language type stored in the history storage means and prioritizes the language type of the character string included in the video data.
  12. The translation apparatus according to claim 11,
    It further comprises clock means for outputting current time information,
    The storage means is associated with a predetermined time, and further stores related information related to a language type to be translated by the translation means,
    The control means causes the display means to display, based on the current time information, the related information corresponding to the history of language types stored in the history storage means; a translation apparatus characterized by the above.
  13. In the translation apparatus as described in any one of Claims 1-12,
    The translation apparatus, wherein the display means is a head mounted display.
  14. The translation device according to claim 13,
    The translation apparatus according to claim 1, wherein the head mounted display is an external light transmission type.
  15. The translation device according to claim 14, wherein
    The head mounted display includes a video display for generating and projecting video from a video signal;
    A translation apparatus comprising: a prism that samples an image projected from the image display and guides the image to an inside; and a diffraction film that diffracts the image guided by the prism and guides a hologram image to a pupil.
  16. A program for a computer that performs processing of detecting a character string included in video data acquired from an imaging means for imaging an object and translating it into a specific language by referring to a language dictionary corresponding to the character type of the character string and to a language type using that character type, the program causing the computer to realize:
    A word extraction function for extracting words from the character string;
    A storage function for storing word data for each language type to be translated;
    and a language type specifying function for specifying, with priorities, the language type of the character string by referring to the word extracted by the word extraction function and to the word data for each language type.
  17. The program according to claim 16, wherein
    Further realizing a character type extraction function for extracting a character type of a character string included in the video data,
    wherein the storage function further stores character type data and orthography rule data for each language type to be translated, and
    the language type specifying function further refers to the character type data extracted by the character type extraction function, the character type data for each language type, and the orthography rule data, and specifies, with priorities, the language type of the character string included in the video data; a program characterized by the above.
  18. The program according to claim 16 or 17,
    Further realizing a position detection function for receiving GPS signals and acquiring position information,
    The storage function further stores language type data associated with the position information,
    A program characterized in that, with the language type specifying function, the position information and the language type data are further referred to, and the language type of the character string included in the video data is specified with priority.
  19. A program for a computer that performs processing of detecting a character string included in video data acquired from an imaging means for imaging an object and translating it into a specific language by referring to a language dictionary corresponding to the character type of the character string and to a language type using that character type, the program causing the computer to realize:
    A position detection function for receiving GPS signals and acquiring position information;
    A storage function for storing language type data associated with the position information;
    and a language type specifying function for specifying, with priorities, the language type of the character string included in the video data by referring to the position information and the language type data.
  20. The program according to claim 19, wherein
    Further realizing a word extraction function for extracting words from a character string included in the video data,
    The memory function further stores word data for each language type to be translated,
    In the language type specifying function, the word type of the character string included in the input data is prioritized and specified by further referring to the word extracted by the word extraction function and the word data for each language type. A featured program.
  21. The program according to claim 19, wherein
    Further realizing a character type extraction function for extracting a character type of a character string included in the video data,
    wherein the storage function further stores character type data and orthography rule data for each language type to be translated, and
    the language type specifying function further refers to the character type data extracted by the character type extraction function, the character type data for each language type, and the orthography rule data, and specifies, with priorities, the language type of the character string included in the video data; a program characterized by the above.
  22. A program for a computer that detects a character string included in video data acquired from an imaging means for imaging an object and translates it into a specific language by referring to a language dictionary corresponding to the language type of the character string, the program causing the computer to realize:
    A character type extraction function for extracting a character type of a character string included in the video data;
    A storage function for storing character type data and orthographic rule data for each language type to be translated;
    and a language type specifying function for specifying, with priorities, the language type of the character string included in the video data by referring to the character type data extracted by the character type extraction function, the character type data of the plurality of language types, and the orthography rule data.
  23. In the program according to any one of claims 16 to 22,
    A program further realizing a translation function for translating the character string included in the video data into a specific language by referring to the language dictionary of a language type selected based on the priority order of the language types specified by the language type specifying function.
  24. In the program according to any one of claims 16 to 23,
    Further realizing a history storage function for storing a history of the language type specified by the language type specifying function,
    wherein the language type specifying function further refers to the history of language types stored by the history storage function, and specifies the language type of the character string included in the video data based on the language type with the most recent history; a program characterized by the above.
  25. The program according to claim 24,
    The storage function is associated with a predetermined time, and further stores related information related to each language type to be translated.
    A clock function that outputs current time information,
    and further realizing a display function for displaying, on the display unit, the related information corresponding to the most recent language type in the language type history stored by the history storage function, based on the current time information from the clock function; a program characterized by the above.
JP2005124949A 2005-04-22 2005-04-22 Translation device and program thereof Pending JP2006302091A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2005124949A JP2006302091A (en) 2005-04-22 2005-04-22 Translation device and program thereof

Publications (1)

Publication Number Publication Date
JP2006302091A true JP2006302091A (en) 2006-11-02

Family

ID=37470284

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2005124949A Pending JP2006302091A (en) 2005-04-22 2005-04-22 Translation device and program thereof

Country Status (1)

Country Link
JP (1) JP2006302091A (en)

Cited By (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9646614B2 (en) 2000-03-16 2017-05-09 Apple Inc. Fast, language-independent method for user authentication by voice
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
US9626955B2 (en) 2008-04-05 2017-04-18 Apple Inc. Intelligent text-to-speech conversion
US9865248B2 (en) 2008-04-05 2018-01-09 Apple Inc. Intelligent text-to-speech conversion
JP2010034759A (en) * 2008-07-28 2010-02-12 Denso Corp On-vehicle communication apparatus
JP2010109700A (en) * 2008-10-30 2010-05-13 Sony Computer Entertainment Inc Image processor
US10283110B2 (en) 2009-07-02 2019-05-07 Apple Inc. Methods and apparatuses for automatic speech recognition
JP2013508817A (en) * 2009-10-14 2013-03-07 クゥアルコム・インコーポレイテッドQualcomm Incorporated Method and apparatus for automatic predictive selection of input method for web browser
US10049675B2 (en) 2010-02-25 2018-08-14 Apple Inc. User profiling for voice input processing
US9633660B2 (en) 2010-02-25 2017-04-25 Apple Inc. User profiling for voice input processing
US10102359B2 (en) 2011-03-21 2018-10-16 Apple Inc. Device access using voice authentication
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
JP2015520861A (en) * 2012-03-06 2015-07-23 アップル インコーポレイテッド Multilingual content speech synthesis processing
US9953088B2 (en) 2012-05-14 2018-04-24 Apple Inc. Crowd sourcing information to fulfill user requests
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US10346543B2 (en) 2013-02-08 2019-07-09 Mz Ip Holdings, Llc Systems and methods for incentivizing user feedback for translation processing
US10366170B2 (en) 2013-02-08 2019-07-30 Mz Ip Holdings, Llc Systems and methods for multi-user multi-lingual communications
US10417351B2 (en) 2013-02-08 2019-09-17 Mz Ip Holdings, Llc Systems and methods for multi-user mutli-lingual communications
JP2016519797A (en) * 2013-03-15 2016-07-07 トランスレート アブロード,インコーポレイテッド System and method for real-time display of foreign language character sets and their translations on resource-constrained mobile devices
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9620104B2 (en) 2013-06-07 2017-04-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9966060B2 (en) 2013-06-07 2018-05-08 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9966068B2 (en) 2013-06-08 2018-05-08 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US10185542B2 (en) 2013-06-09 2019-01-22 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
KR101634972B1 (en) 2014-02-20 2016-07-01 주식회사 보형 System, method and computer readable recording medium for informing emergency state
KR20150098493A (en) * 2014-02-20 2015-08-28 권형석 System, method and computer readable recording medium for informing emergency state
KR20150098930A (en) * 2014-02-21 2015-08-31 권형석 System for rescue request during emergency and method therefor
KR101634976B1 (en) 2014-02-21 2016-07-01 주식회사 보형 System for rescue request during emergency and method therefor
KR101614691B1 (en) 2014-02-24 2016-05-12 주식회사 안심동행 System for rescue request using portable electronic apparatus and method for rescue request therefor
US10169329B2 (en) 2014-05-30 2019-01-01 Apple Inc. Exemplar-based natural language processing
US9668024B2 (en) 2014-06-30 2017-05-30 Apple Inc. Intelligent automated assistant for TV user interactions
US9986419B2 (en) 2014-09-30 2018-05-29 Apple Inc. Social reminders
JP2017532684A (en) * 2014-10-17 2017-11-02 マシーン・ゾーン・インコーポレイテッドMachine Zone, Inc. System and method for language detection
US10162811B2 (en) 2014-10-17 2018-12-25 Mz Ip Holdings, Llc Systems and methods for language detection
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
JP2018503147A (en) * 2015-11-27 2018-02-01 小米科技有限責任公司Xiaomi Inc. Interface display method and apparatus
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
CN105843944A (en) * 2016-04-08 2016-08-10 华南师范大学 Geographical location information based language acquisition method and system
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US10482874B2 (en) 2017-08-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
