WO2016133091A2 - Content creation device, content playback device, program, and content creation and playback system - Google Patents


Info

Publication number
WO2016133091A2
WO2016133091A2 · PCT/JP2016/054448 · JP2016054448W
Authority
WO
WIPO (PCT)
Prior art keywords
embedding
content
embedded
language
data
Prior art date
Application number
PCT/JP2016/054448
Other languages
French (fr)
Japanese (ja)
Other versions
WO2016133091A3 (en)
Inventor
重昭 白鳥
Original Assignee
ギズモモバイル株式会社
Priority date
Filing date
Publication date
Application filed by ギズモモバイル株式会社
Publication of WO2016133091A2
Publication of WO2016133091A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387 Composing, repositioning or otherwise geometrically modifying originals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/91 Television signal processing therefor

Definitions

  • the present invention relates to a content creation device, a content reproduction device, a program, and a content distribution system.
  • the present invention has been made in view of such a situation, and an object of the present invention is to enable easy creation of content, comprising images and language data, whose language can be switched among multiple languages.
  • a content creation device according to one aspect of the present invention includes: selection means for selecting one or more embedding locations for embedding language data from image data serving as a content element; embedding means for executing, as an embedding process, a process of associating language data of each of two or more embedding target languages with each of the one or more embedding locations; embedding correspondence information generating means for generating embedding correspondence information indicating a result of the embedding process; and content generating means for generating content including the image data, the language data of each of the two or more embedding target languages, and the embedding correspondence information.
  • the first program of one aspect of the present invention is a program corresponding to the above-described content creation device of one aspect of the present invention.
  • a content reproduction device according to one aspect of the present invention includes: content acquisition means for acquiring content including image data including one or more embedding locations for embedding language data, language data of two or more languages embedded in each of the one or more embedding locations, and embedding correspondence information indicating a correspondence relationship between each of the one or more embedding locations and the language data of the two or more embedded languages; specifying means for specifying a reproduction target language; extracting means for extracting the language data of the reproduction target language from the language data of the two or more embedded languages based on the embedding correspondence information; and reproduction control means for reproducing the image data in a state in which the language data of the reproduction target language is embedded in the embedding locations.
  • the second program according to one aspect of the present invention is a program corresponding to the content reproduction device according to one aspect of the present invention described above.
  • a content creation and playback system according to one aspect of the present invention includes a content creation device and a content playback device.
  • the content creation device includes: selection means for selecting one or more embedding locations for embedding language data from image data serving as a content element; embedding means for executing, as an embedding process, a process of associating language data of each of two or more embedding target languages with each of the one or more embedding locations; embedding correspondence information generating means for generating embedding correspondence information indicating a result of the embedding process; and content generating means for generating content including the image data, the language data of each of the two or more embedding target languages, and the embedding correspondence information. The content playback device includes: content acquisition means for acquiring the content generated by the content generating means; specifying means for specifying a reproduction target language; extracting means for extracting the language data of the reproduction target language from the language data of the two or more embedded languages based on the embedding correspondence information; and reproduction control means for reproducing the image data in a state in which the language data of the reproduction target language is embedded in the embedding locations.
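The claimed means on both sides amount to plain data transformations. A minimal Python sketch, in which all names (`Content`, `create_content`, `play`) are hypothetical and not part of the patent, of how the embedding correspondence information links each embedding location to its per-language data:

```python
from dataclasses import dataclass

@dataclass
class Content:
    image_data: dict      # page number -> image data (opaque here)
    language_data: dict   # (location_id, language) -> text or audio link
    correspondence: dict  # location_id -> {language: key into language_data}

def create_content(image_data, locations, translations):
    # Creation side: the selection means yields `locations`; the embedding
    # means associates each location with data in two or more languages;
    # the correspondence info records the result of that association.
    language_data, correspondence = {}, {}
    for loc in locations:
        correspondence[loc] = {}
        for language, data in translations[loc].items():
            language_data[(loc, language)] = data
            correspondence[loc][language] = (loc, language)
    return Content(image_data, language_data, correspondence)

def play(content, location_id, target_language):
    # Playback side: extract the data of the reproduction target language
    # for one embedding location via the embedding correspondence info.
    key = content.correspondence[location_id][target_language]
    return content.language_data[key]

content = create_content(
    {1: "page1-image"}, ["P1-A1"],
    {"P1-A1": {"ja": "My name is A.", "en": "My Name is A."}})
assert play(content, "P1-A1", "en") == "My Name is A."
```

Reproducing the image with the extracted data superimposed is left abstract; the point is that switching languages touches only the lookup, never the image data.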
  • FIG. 4 is a schematic diagram for explaining the function of embedding multilingual text data and voice data in one embedding location as a function of the editor device of FIG. 2. FIG. 5 is a diagram showing an example of the structure of the embedding correspondence table produced by the editor device of FIG. 2.
  • FIG. 12 is an image diagram showing a difference from a conventional guide book different from those shown in FIGS. 10 and 11 when a multilingual map is used as a guide book.
  • FIG. 1 is a block diagram showing a configuration of an information processing system according to an embodiment of the content creation and playback system of the present invention.
  • the information processing system shown in FIG. 1 includes an editor device 1, an authoring device 2, a distribution device 3, and a viewer device 4.
  • the editor device 1 is an embodiment of a content creation device to which the present invention is applied, and creates content (for example, comic content in the present embodiment) as electronic data. Specifically, the editor device 1 creates electronic data of comic content whose language portions, such as speech, can be switched among multiple languages (hereinafter referred to as “multilingual switching content”).
  • the term “content” refers to electronic data of the content.
  • the authoring device 2 reduces the multilingual switching content and provides the reduced multilingual switching content (hereinafter referred to as “reduced content”) to the distribution device 3.
  • the reduced content creation method (content reduction method) is not particularly limited, and any method can be employed.
  • for example, the authoring device 2 executes an image conversion process, such as encoding the images of the multilingual switching content, to reduce the data amount by a method that simulates multiple exposure.
  • the image subjected to the image conversion process is hereinafter referred to as a “converted image”.
  • the multilingual switching content is restored by executing a restoration process (hereinafter referred to as “image reverse conversion process”) on the converted image.
  • the data amount of the converted image is smaller than that of the original image. This is clear when one considers that the more copies of the same image are shifted and overlapped, the more the image is averaged, until finally it converges to a single color.
  • the image conversion process and the image reverse conversion process employed in the present embodiment are processes using means that are mathematically guaranteed to reduce the amount of image data without impairing the authenticity of the data as an image. For this reason, it is virtually impossible for a third party to intercept or tamper with a confidential image.
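The averaging effect described above can be checked numerically. A minimal sketch, using a random sequence as a stand-in for one image row (no real image format or the patented conversion itself is assumed): averaging many shifted copies flattens the data toward a single value.

```python
import random
import statistics

random.seed(0)
row = [random.random() for _ in range(256)]  # stand-in for one image row

def shifted(seq, s):
    """Cyclically shift a sequence by s positions."""
    return seq[s:] + seq[:s]

# Average 32 shifted copies of the same row, element by element.
n = 32
avg = [sum(vals) / n
       for vals in zip(*(shifted(row, s) for s in range(1, n + 1)))]

# The averaged row has a much smaller spread than the original: the more
# copies are overlapped, the closer every element gets to one value.
assert statistics.pstdev(avg) < statistics.pstdev(row) / 2
```

A flatter signal carries less information per element, which is why the overlapped representation compresses well.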
  • for details, the specification attached to the application of Japanese Patent Application No. 2014-214533 may be referred to.
  • the image conversion process and the image reverse conversion process employed in the present embodiment are merely examples of a reduction method, and any reduction method can be adopted.
  • the distribution device 3 holds a plurality of reduced contents, and provides the requested reduced content to the viewer device 4 when a viewing request is received from the viewer device 4.
  • the viewer device 4 is an embodiment of a content reproduction device to which the present invention is applied, and is a device operated when a user views content (for example, comic content in the present embodiment).
  • the viewer device 4 restores the multilingual switching content by performing the above-described image reverse conversion process on the reduced content to be browsed.
  • the viewer device 4 receives the instruction operation from the user and reproduces the multilingual switching content.
  • the language portion such as speech is played back in the language specified by the user, and the playback target language is switched according to the switching operation of the user.
  • each of the editor device 1, the authoring device 2, the distribution device 3, and the viewer device 4 is applied to a computer and its peripheral devices.
  • Each unit in the present embodiment is configured by hardware included in a computer and its peripheral devices, and software that controls the hardware.
  • the above hardware includes a storage unit, a communication unit, a display unit, and an input unit in addition to a CPU (Central Processing Unit) as a control unit.
  • examples of the storage unit include memories (RAM: Random Access Memory, ROM: Read Only Memory, etc.), hard disk drives (HDD: Hard Disk Drive), and optical disks (CD: Compact Disc, DVD: Digital Versatile Disc, etc.).
  • Examples of the communication unit include various wired and wireless interface devices.
  • Examples of the display unit include various displays such as a liquid crystal display.
  • Examples of the input unit include a keyboard and a pointing device (mouse, tracking ball, etc.).
  • the viewer device 4 of this embodiment is configured as a tablet, and has a touch panel that serves as both the input unit and the display unit.
  • the input unit of the touch panel includes, for example, a capacitance type or resistance type position input sensor stacked in the display area of the display unit, and detects the coordinates of the position where the touch operation is performed.
  • the touch operation refers to an operation of touching or approaching an object (such as a user's finger or a touch pen) with respect to a touch panel (more precisely, an input unit) serving as a display medium.
  • hereinafter, the position where the touch operation is performed is referred to as the “touch position”.
  • the coordinates of the touch position are referred to as “touch coordinates”.
  • the software includes a computer program and data for controlling the hardware.
  • the computer program and data are stored in the storage unit, and are appropriately executed and referenced by the control unit. Further, the computer program and data can be distributed via a communication line, and can also be recorded and distributed on a computer-readable medium such as a CD-ROM.
  • FIG. 2 is a functional block diagram illustrating an example of a functional configuration of the editor device 1.
  • the editor device 1 includes an image reception unit 11, an embedding location selection unit 12, a text reception unit 13, a voice reception unit 14, an embedding unit 15, an embedding correspondence table generation unit 16, a switching content data generation unit 17, and an output unit 18.
  • the image receiving unit 11 receives image data of the content.
  • the embedding location selection unit 12 selects a location in which language data is embedded (hereinafter, “embedding location”) from the received image.
  • the image data received in the present embodiment is image data indicating each of a plurality of pages constituting a comic, and can be divided in units of pages.
  • An image in one page is divided into a plurality of frames.
  • One frame includes a picture of a predetermined scene, and includes “speech balloons” as necessary. In this “speech balloon”, words such as persons included in the picture of the frame are displayed. Therefore, in the present embodiment, the “speech balloon” location included in the frame is selected as the “embedding location”.
  • FIG. 3 shows an example of the editor image 31 displayed on the editor device 1.
  • the content creator can use the editor image 31 to create multilingual switching content for an arbitrary comic.
  • the editor image 31 includes an area 41J for embedding Japanese data, an area 41E for embedding English data, an area 41C for embedding Chinese data, and the like as areas for embedding data in a predetermined language.
  • the producer designates the “Japanese” tab to display the area 41J in which Japanese data is embedded.
  • the editor image 31 will be described by taking as an example the case of embedding Japanese data.
  • the image of the work target page among the images of the plurality of pages constituting the comic is displayed.
  • the creator can switch the target page by pressing a software button shown in the page switching area 42, or by pressing the thumbnail of each page's image displayed in the page thumbnail image display area 43.
  • the image of the work target page is divided into a plurality of frames, and one or more “speech balloons” are set for each frame.
  • the location of the “balloon” is a candidate for an embedded location.
  • “speech balloons” 51 to 55 are candidates for embedding locations.
  • the producer performs an operation of selecting an embedding location from such embedding location candidates.
  • the embedding location selection unit 12 in FIG. 2 selects an embedding location based on such an operation. For example, it is assumed that a “balloon” location 52 is selected as an embedding location.
  • the producer can embed at least one of text data and voice data as Japanese data in the location 52 selected as the embedding location.
  • when embedding text data, the producer can embed the input text data by directly inputting the text at the location 52 selected as the embedding location, or can embed text data prepared in advance.
  • the text receiving unit 13 in FIG. 2 receives the text data to be embedded and supplies it to the embedding unit 15.
  • when embedding audio data, the producer can embed the input voice data by emitting a predetermined sound and directly inputting it into a microphone (not shown) of the editor device 1 with the location 52 selected as the embedding location, or can embed voice data prepared in advance.
  • the voice reception unit 14 in FIG. 2 receives the voice data to be embedded and supplies it to the embedding unit 15.
  • the embedding unit 15 executes, for the embedding location selected by the embedding location selection unit 12 (the “speech balloon” location 52 in the above example), a process of embedding the text data of a predetermined language (Japanese in the above example) received by the text reception unit 13, and a process of embedding the voice data of a predetermined language (Japanese in the above example) received by the voice reception unit 14.
  • the producer may further perform the same operation as described above with the “ENGLISH” tab designated and the area 41E for embedding English data displayed.
  • likewise, the producer may perform the same operation as described above while designating the “Chinese” tab and displaying the area 41C for embedding Chinese data.
  • multilingual text data and voice data can be embedded in one embedding location (in the example of FIG. 4, a “balloon” location 52).
  • the “embedding process” by the embedding unit 15 in FIG. 2 in this embodiment is not an image process for creating a page image in which text or the like is arranged at an embedding location (that is, a process for processing an image), but a process of associating each embedding location with the language data to be embedded in it.
  • the method of this association is not particularly limited, but in this embodiment, a method of generating a table as shown in FIG. 5 (hereinafter referred to as “embedding correspondence table”) is adopted. That is, the embedding correspondence table generation unit 16 in FIG. 2 creates an embedding correspondence table in which embedding locations in image data are associated with data of a language to be embedded (text data or voice data).
  • a predetermined row corresponds to a predetermined one embedding location.
  • one comic content has a plurality of “speech balloons” (only locations 51 to 56 are shown in FIG. 3), and separate text and voice are embedded in each of them. Therefore, a unique ID is assigned to each embedding location.
  • the text data ID and the voice data ID are used separately so that the text data and the voice data are clearly distinguished even at the same embedding location.
  • ID “Pn-Am-T” and ID “Pn-Am-S” are used in this embodiment.
  • “n” of “Pn” indicates a page number.
  • “m” of “Am” indicates a number given by a predetermined rule to each of the plurality of embedding locations included in the image of page “n”. That is, the ID “Pn-Am” uniquely indicates the “m”-th embedding location of page “n”. Furthermore, “T” at the end of an ID indicates text data, and “S” at the end of an ID indicates audio data. Each embedding location selected by the embedding location selection unit 12 in FIG. 2 and its ID are also associated in the image data. That is, by designating an ID, the “speech balloon” location (image region) indicated by that ID is specified in the image.
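A minimal sketch of this ID scheme; the helper names and the regular expression are assumptions, since the patent only specifies the “Pn-Am-T” / “Pn-Am-S” format itself:

```python
import re

# "Pn-Am-T" / "Pn-Am-S": page n, m-th embedding location on that page,
# ending "T" for text data or "S" for audio data.
ID_PATTERN = re.compile(r"^P(?P<page>\d+)-A(?P<slot>\d+)-(?P<kind>[TS])$")

def make_id(page: int, slot: int, kind: str) -> str:
    """Build an embedding-location ID for text ("T") or audio ("S")."""
    if kind not in ("T", "S"):
        raise ValueError("kind must be 'T' (text) or 'S' (audio)")
    return f"P{page}-A{slot}-{kind}"

def parse_id(embed_id: str):
    """Split an ID back into (page, slot, kind)."""
    m = ID_PATTERN.match(embed_id)
    if m is None:
        raise ValueError(f"not an embedding ID: {embed_id}")
    return int(m["page"]), int(m["slot"]), m["kind"]

assert make_id(1, 1, "T") == "P1-A1-T"
assert parse_id("P12-A3-S") == (12, 3, "S")
```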
  • for example, in the embedding location of the ID “P1-A1-T”, that is, the first embedding location of page 1 (for example, location 52 in FIG. 4), the text “My name is A.” is associated as Japanese, “My Name is A.” as English, and “My Name Is A.” as Chinese.
  • parameters of each text data, for example the font type and font size, may also be stored in the embedding correspondence table for each language (each item). The parameters of each text data can be specified for each language and for each embedding location (for each character, as required) by operating the various operation tools (software) in the text parameter specification area 44.
  • in this embodiment, the texts in each language are directly stored in the embedding correspondence table. However, a text data file may be prepared separately, like the voice data, and the link destination of the file may be stored instead.
  • as for the audio data, in the embedding location of the ID “P1-A1-S”, that is, the first embedding location of page 1 (for example, location 52 in FIG. 4), “A day.mp3” is associated as Japanese, “A English.mp3” as English, and “A middle.mp3” as Chinese.
  • “A day.mp3” indicates the file name of the voice data of “My name is A.” pronounced in Japanese, “A English.mp3” that of the same phrase pronounced in English, and “A middle.mp3” that of the same phrase pronounced in Chinese. That is, for audio, the link destination of each audio data file is stored in the embedding correspondence table of FIG. 5.
  • the switching content data generation unit 17 generates, as the multilingual switching content, a data group including the image data of each page of the comic (with which the embedding locations are associated), the text data and audio data embedded in (associated with) each embedding location, and the embedding correspondence table.
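The data group above can be sketched as nested dictionaries. The layout below is an assumption (the patent does not fix a serialization); the example rows mirror the FIG. 5 entries quoted earlier, with audio stored as file-name links:

```python
# One row per embedding-location ID, one column per language.
embedding_table = {
    "P1-A1-T": {"ja": "My name is A.", "en": "My Name is A.",
                "zh": "My Name Is A."},
    "P1-A1-S": {"ja": "A day.mp3", "en": "A English.mp3",
                "zh": "A middle.mp3"},
}

def build_switching_content(page_images, table, audio_files):
    """Bundle the image data, the per-language data, and the embedding
    correspondence table into one multilingual switching content unit."""
    return {
        "pages": page_images,   # page number -> image data
        "table": table,         # ID -> {language: text or audio file link}
        "audio": audio_files,   # audio file name -> audio bytes
    }

content = build_switching_content({1: b"page-1-image-bytes"},
                                  embedding_table, {})
assert content["table"]["P1-A1-S"]["en"] == "A English.mp3"
```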
  • the output unit 18 outputs the multilingual switching content from the editor device 1.
  • FIG. 6 is a functional block diagram illustrating an example of a functional configuration of the viewer device 4.
  • the viewer device 4 includes a switching content data acquisition unit 61, a separation unit 62, an image holding unit 63, a text holding unit 64, an audio holding unit 65, an embedding correspondence table holding unit 66, an operation unit 67, A reproduction target specifying unit 68, a reproduction target extraction unit 69, a reproduction control unit 70, and an output unit 71 are provided.
  • the switching content data acquisition unit 61 acquires the multilingual switching content distributed from the distribution device 3. Separation unit 62, from the multilingual switching content, image data of each page of the comic (embedded location is associated), text data and audio data embedded (associated) in each embedded location, and The embedding correspondence table is separated. Of the data separated from the multilingual switching content, the image data of each page of the comic (corresponding to the embedded portion) is held in the image holding unit 63. Text data embedded (associated) in each embedding location is held in the text holding unit 64. Audio data embedded (associated) in each embedding location is held in the audio holding unit 65. The embedding correspondence table is held in the embedding correspondence table holding unit 66.
  • the multilingual switching content output from the editor device 1 is subjected to the image conversion process for reduction in the authoring device 2 of FIG. 1, as described above, and is held in the distribution device 3 as reduced content.
  • the switching content data acquisition unit 61 of the viewer device 4 acquires reduced content. Therefore, the separation unit 62 performs the above-described image reverse conversion process on the reduced content to restore the multilingual switching content.
  • the various data described above are separated from the restored multilingual switching content.
  • the operation unit 67 includes various software operation tools displayed on the touch panel in this embodiment. That is, the operation unit 67 receives various operations through the user's touch operations on the touch panel.
  • a viewer image 101 shown in FIG. 7 is displayed on the touch panel.
  • the viewer image 101 includes a display area 111 for displaying the title of the multilingual switching content (comic title), a display area 112 for displaying the number of the display target page, and an area for displaying the position of the display target page with respect to the entire work.
  • the viewer image 101 includes a display area 114 for displaying an image of a page to be browsed.
  • the image of the page to be browsed shows one page of a comic, which is divided into a plurality of frames, and each frame includes a “speech balloon” as appropriate.
  • the text of the reproduction target language is displayed at the location of this “balloon” (for example, location 151 in the example of FIG. 7).
  • a Japanese text “My name is A” is displayed.
  • the language to be played back can be switched.
  • the viewer presses the switching button 115 to switch the text reproduction target language; this pressing operation is received by the operation unit 67 of FIG. 6.
  • the reproduction target specifying unit 68 determines an image of the reproduction target page and also determines a reproduction target language. That is, when the switch button 115 is pressed, the reproduction target specifying unit 68 switches the text reproduction target language. For example, it is assumed that “Japanese” is switched to “English”.
  • the reproduction target extraction unit 69 extracts the image of the reproduction target page from the image holding unit 63, and extracts, from among the texts embedded in the “speech balloon” locations (embedding locations) included in the image of the reproduction target page, the texts of the reproduction target language (for example, English). Which text is embedded in each “speech balloon” location (embedding location) included in the image of the reproduction target page is determined by the embedding correspondence table (FIG. 5).
  • the reproduction control unit 70 generates an image in which the text of the reproduction target language (for example, English) is superimposed on each “speech balloon” location (embedding location) of the reproduction target page image, and causes the output unit 71 to reproduce it. The output unit 71 includes the display unit of the touch panel and a speaker (not shown).
  • the viewer image 101 is displayed on the display unit, and the image generated by the reproduction control unit 70 is displayed in the display area 114 of the viewer image 101. That is, when the switching button 115 is pressed and the reproduction target language is switched from “Japanese” to “English”, the display area 114 of the viewer image 101 switches from the image shown in FIG. 7 to an image in which English text is displayed in each “speech balloon” location (embedding location), while pictures such as persons and background remain as they are.
  • when the reproduction target language is English, “My Name is A.” is displayed at the “speech balloon” location 151, as shown in the center of the figure; when it is Chinese, the Chinese text “My name A” is displayed at the location 151, as shown on the right side of the figure.
  • the text in the portion “balloon” 151 corresponds to the text of each language stored in the first row of the embedding correspondence table in FIG. That is, the ID of the location 151 is managed as “P1-A1”.
  • the reproduction target extraction unit 69 may extract the text of the reproduction target language from the first row, of ID “P1-A1-T”, as the text of the location 151 having the ID “P1-A1”. Therefore, simply by pressing the switching button 115, the viewer can sequentially switch among “My name is A.” in Japanese, “My Name is A.” in English, and the texts of other languages such as Chinese.
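The sequential switching described above amounts to cycling an index plus one table lookup. A minimal sketch (the cycling order and the function names are assumptions):

```python
LANG_ORDER = ["ja", "en", "zh"]  # assumed cycling order of button 115

def next_language(current: str) -> str:
    """Advance to the next reproduction target language on each press."""
    return LANG_ORDER[(LANG_ORDER.index(current) + 1) % len(LANG_ORDER)]

def text_for(table, location_id: str, language: str) -> str:
    """Look up the text of one 'speech balloon' location via its text ID."""
    return table[location_id + "-T"][language]

table = {"P1-A1-T": {"ja": "My name is A.", "en": "My Name is A.",
                     "zh": "My Name Is A."}}
lang = "ja"
lang = next_language(lang)  # one press: Japanese -> English
assert text_for(table, "P1-A1", lang) == "My Name is A."
```

Because only the lookup changes, switching a whole page is a loop over its location IDs with the same two functions.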
  • all “speech balloons” displayed in the display area 114 of the viewer image 101 may be switched at once, or only a “speech balloon” location specified by the viewer may be switched.
  • the switching operation of the text reproduction target language may employ, for example, a touch operation on the target “speech balloon” in addition to the pressing operation of the switching button 115.
  • the switching order of the reproduction target language may be a predetermined order, or the selection area 117 of FIG. 7 may be displayed so that the viewer can select a desired language, as with the switching of audio data described later.
  • the viewer can switch not only the language of the text displayed in the “speech balloon” locations but also the language of the audio output corresponding to that text.
  • as “manga that speaks,” such content can be established as a new genre alongside anime and manga.
  • differentiation from electronic books can be achieved.
  • since text and voice are combined and the language can be freely selected, anyone in the world can read the content. This makes it possible to respond to requests for localization. Furthermore, since one can enjoy reading comics while learning, the content can also be used for language learning.
  • the switching button 116 in FIG. 7 is used to switch the language of the voice output. When the switching button 116 is pressed, the selection area 117 is displayed.
  • the viewer can specify a desired language as a reproduction target language by performing a touch operation on the desired language among the languages displayed in the selection area 117.
  • the selection operation for the selection region 117 is accepted as an operation by the operation unit 67 of FIG.
  • the reproduction target specifying unit 68 determines a reproduction target language for the sound based on the selection operation. That is, the audio playback target language is switched by the selection operation. For example, it is assumed that “Japanese” is switched to “English”.
  • the reproduction target extraction unit 69 extracts the sound of the reproduction target language (for example, English) from the voices embedded in the portions (embedded portions) of each “balloon” included in the image of the reproduction target page.
  • the reproduction control unit 70 causes the audio of the reproduction target language (for example, English) to be reproduced from the speaker of the output unit 71, at the audio reproduction timing of the location targeted for audio reproduction among the “speech balloon” locations (embedding locations) of the reproduction target page.
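Audio switching follows the same lookup as text, using the “-S” IDs and file-name links; handing the file to an actual decoder and speaker is outside this sketch, and the function name is an assumption:

```python
def audio_for(table, location_id: str, language: str) -> str:
    """Return the audio file link of the reproduction target language
    for one embedding location; a real player would then decode this
    file and send the audio to the speaker of the output unit."""
    return table[location_id + "-S"][language]

audio_table = {"P1-A1-S": {"ja": "A day.mp3", "en": "A English.mp3",
                           "zh": "A middle.mp3"}}
assert audio_for(audio_table, "P1-A1", "en") == "A English.mp3"
```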
  • a cartoon as a so-called electronic book is used as the content, but the present invention is not particularly limited thereto.
  • the content may be animated cartoon content (moving cartoon).
  • since comic manuscripts are used as they are, speedier production is possible, making it possible to keep up with a fast business pace.
  • since the author's manga manuscript is directly converted into a movie, it can be established as a new genre characterized by high quality.
  • since production is based mainly on manga manuscripts, it has the merit of lower costs compared to anime.
  • the embedding location for embedding language data is the location of the “speech balloon” of each frame of the comic, but is not particularly limited thereto, and may be any location in the image.
  • onomatopoeia is often expressed as a text part. Such a text part can also be adopted as an embedding part.
  • the multilingual switching content is not particularly required to be a comic, and any content that can embed language data is sufficient.
  • a menu provided at a restaurant or the like can be adopted as multilingual switching content.
  • a location indicating the name and price of the food and drink, an explanatory text explaining the food and drink, and the like can be adopted as the embedding location.
  • the contents of the language data embedded in an embedding location do not necessarily need to match across languages. For example, at a restaurant in Japan, for Japanese, used by Japanese customers who commonly eat sashimi, long explanations such as the ingredients and how a dish is made are often unnecessary; but for a language used by foreign customers who do not usually eat sashimi (such as English), longer descriptions of the ingredients and preparation may be better. In such a case, the contents of the language data embedded in the embedding location differ between languages.
  • a map can be adopted as multilingual switching content.
  • a location indicating each place on the map can be adopted as an embedding location.
  • likewise, for a map, the contents of the language data embedded in an embedding location do not necessarily need to match across languages. For example, the Japanese version used by Japanese people familiar with the local geography can often omit long explanations, whereas a version in a language used by foreigners unfamiliar with the geography (e.g., English) may be better with longer explanations such as sightseeing guidance. In such a case, the contents of the language data embedded in the embedding location differ from language to language.
  • a service providing such a map is hereinafter referred to as a "multilingual map".
  • an example of the multilingual map will be described below with reference to FIGS. 9 to 14.
  • FIG. 9 is an image diagram showing an overview of an example of a multilingual map.
  • the multilingual map can be realized by the content creation and playback system of the present invention.
  • MAP category: a category related to maps
  • store category: a category related to stores
  • menu category: a category related to menus
  • in the example of FIG. 9, these three categories (MAP category, store category, menu category) are linked to one another.
  • the linkage of the three categories described above (MAP category, store category, menu category) is an example, and linkage is not limited to these three. All kinds of categories can be linked. For example, linkage is possible with a travel agency or a duty-free shop, or with an entity (a publisher or the like) that provides information on famous places, history, or people.
  • the multilingual switching content is produced by generating the embedding correspondence table as shown in FIG. 5. That is, in this example, the multilingual switching content of the store category and the menu category is produced on the side of the store (for example, a restaurant) that provides the food and drink.
  • multilingual switching content in the MAP category is produced using the editor device 1 (FIG. 2) operated by a multilingual map service provider, a tourist company, or the like. Meanwhile, an employee of the store (for example, a restaurant) that provides food and drink uses the editor device 1 (FIG. 2) to make the various descriptions on the store's website and on its menu support multiple languages.
  • the editor apparatus 1 may be a dedicated terminal or a general-purpose terminal such as a personal computer in which dedicated software is installed.
  • on the user side, the multilingual map is used via the viewer device 4 of FIG. 6.
  • when the dedicated application is activated on a mobile terminal, the multilingual map is displayed on the screen of the mobile terminal.
  • the multilingual map is a global map. Since the map and the location of the store are linked to each other, an icon indicating the store is displayed on the map.
  • in the example of FIG. 9, the MAP category can include, for example, information on recommended nearby spots, traffic information such as road closures and congestion, and information on famous places such as tourist spots.
  • the store category can include, for example, basic information such as the store's address, store commercials and other PR material, and the store's homepage.
  • the menu category can include, for example, signage such as video advertisements for products and services, information related to halal (meaning all "sound products and activities permitted by Islamic teaching"), information on allergens (substances that cause allergic symptoms), and information on local specialties and special products.
  • the above three categories are linked to each other, so that, for example, the location and the surrounding spots can be linked in the relationship between the MAP category and the store category.
  • the home page and the store menu can be linked.
  • by linking food and drink with the map, the production area of ingredients and seasonal photos and videos introducing that area can also be linked. That is, users of different languages around the world can use the dedicated app to "enjoy", "learn", and "eat" all at once.
  • in a multilingual map, the map, the categories, and the menus are all linked. For this reason, by using the multilingual map as a travel guide when traveling abroad, the user can obtain functions that a conventional paper guidebook cannot provide.
  • FIG. 10 is an image diagram showing a difference from a conventional guidebook when a multilingual map is used as a guidebook.
  • Overseas travelers can switch between text and voice output in real time by using a multilingual map as a travel guide.
  • information useful for overseas travel can be acquired in real time.
  • the store side is also provided with a content editor with which individuals can easily produce content.
  • status S1 shows that when the manager of a store (e.g., a restaurant) operates the editor device 1 to create or update the store's menu, the multilingual map m, the store homepage, and the menu are all linked, and the updated contents are reflected in real time on the world-wide multilingual map. Likewise, when the store homepage or the like is updated, the multilingual map m and the menu are linked and the updates are reflected in real time. In other words, since updated multilingual switching content can be browsed worldwide in real time and in multiple languages, it becomes easy for the store to advertise its menu not only within Japan but also to foreign travelers visiting Japan from all over the world.
  • the editor device 1 can be operated at any time by being provided at the restaurant or the area to which the restaurant belongs.
  • status S2 shows that an overseas traveler visiting Japan can use the dedicated application by operating a mobile terminal. That is, the overseas traveler can easily obtain, in real time and in his or her own language, the latest information of the store (for example, the provided food and the map m indicating its location). Even if there are several overseas travelers using different languages, the dedicated application does not need to be activated on every one of their mobile terminals: by activating it on one particular terminal and switching the displayed language as appropriate, all of them can easily view the latest information of the store in real time. For example, if the dedicated application is activated only on the mobile terminal of the tour conductor, companions using different languages can still easily view the information they need.
  • the multilingual map can be used not only as a travel guide for overseas travelers during the trip, but also for collecting information before the departure of the trip.
  • Status S3 indicates that an overseas traveler can use a dedicated application by operating a mobile terminal before visiting Japan.
  • in this way, even before visiting Japan, the overseas traveler can easily acquire the latest information of the store in real time, in his or her own language.
  • for example, the overseas traveler can easily confirm in advance, in real time and in his or her own language, whether the store has a menu that he or she can actually eat. As a result, it is possible to avoid situations in which a traveler arrives in Japan only to find nothing he or she can eat, or becomes unwell from eating something unsuitable.
  • the multilingual map can also introduce recommended places and spots in accordance with the contents of a given tour guide, introduce recommended sights in multiple languages, or guide the user along a tour of those sights. It is also possible to link the multilingual map with a tour reservation system. In addition, seasonal recommendations and special features can be posted in real time as advance information. As described above, in statuses S1 to S3, the multilingual map, the store homepage, and the menu are all linked.
  • FIG. 11 is an image diagram showing a difference from a conventional guidebook different from that shown in FIG. 10 when a multilingual map is used as a travel guide.
  • each category can be advertised all over the world by linking or sharing on the Internet.
  • the content of the multilingual map can be used as it is as signage or as a menu containing moving-image advertisements of products and services. This makes it possible to promote tourism resources to people all over the world.
  • in step S11, after the overseas traveler starts the dedicated app and taps the characters or design indicating the area about which he or she wants information (in the example of FIG. 11, "city center area"), the target area is selected. At this time, the characters displayed on the mobile terminal can be switched in real time to the traveler's own language.
  • the multilingual map can be searched and displayed for tours, shopping, restaurants, etc. based on the current location.
  • a spot of interest can be registered as a favorite at any time, before or after the visit.
  • in step S12, the map of the selected area is enlarged, icons indicating the locations of facilities are displayed on it, and an overview of each facility is shown as a thumbnail. An image of a predetermined character displayed on the screen also serves as an embedding location, and multilingual information can be announced to the overseas user by means of the audio embedded there or by photos displayed as thumbnails. The shape or pattern of the icons displayed on the map may also be varied by facility type; in the example of FIG. 11, facilities are divided into Food (restaurants), Shop (non-restaurant stores), and SPOT (other facilities).
  • when the overseas traveler taps the icon of a facility about which he or she wants information from among the icons displayed on the enlarged map, information on that facility is displayed.
  • the menu of the store and the map m1 are linked, and the menu of the predetermined restaurant is displayed.
  • the menu of the restaurant and the map m1 can be linked together, and the production area of the food displayed as the description of the menu can be linked with the map m1.
  • the overseas traveler can easily grasp, in his or her own language, where the ingredients of the food he or she eats come from, and can thereby enjoy the meal more deeply.
  • it becomes easy to promote the menu to overseas travelers.
  • the multilingual map can also issue store menus, catalogs, coupons, and the like from detailed spot information (for example, restaurants).
  • FIG. 12 is an image diagram showing a difference from a conventional guidebook different from those shown in FIGS. 10 and 11 when a multilingual map is used as a guidebook.
  • the multilingual map is provided with a dedicated editor (for example, the editor device 1 in FIG. 1) for the manager on the store (or region) side.
  • the dedicated editor can be provided either as hardware or online. This makes it easy for managers of stores (or regions) to upload content to the multilingual map platform and to update it via SD card or online, reducing the time and cost these operations require.
  • the latest information that the manager of a region or store wants to publicize can easily be promoted to people all over the world, without any special know-how.
  • the manager of a store can introduce production areas and the like in multiple languages and in real time, not only with documents and photos but also with videos and audio. This makes it possible to easily promote products and services to people all over the world by whatever means is suitable. A dedicated mobile terminal (tablet) can also be deployed in a given store (or region). For example, by replacing the conventional paper menu placed at each restaurant table with a dedicated tablet, a menu in the language used by the visitor can be displayed on the tablet. The visitor can then not only order food smoothly but also obtain detailed information on each item of the menu by voice, video, map, and so on.
  • what can be interlocked with the multilingual map is not limited to the above-described example, and any object can be interlocked.
  • guidance with explanations (text and audio) at each sightseeing spot, introductions to tours at recommended spots, various reservation systems, tax-exemption information, simple conversation guides using comics, emergency contact methods, information on spot sales, services that deliver purchased goods to hotels, gourmet information, and the like can all be linked to the world-wide map.
  • FIG. 13 is a diagram showing an example of a menu table created by an administrator of a predetermined store.
  • the left side of FIG. 13 shows an example of a menu table of food provided at a predetermined store (restaurant).
  • the menu table is an example of multilingual content created by the manager of the store using the editor device 1 and viewed by the user using the viewer device 4.
  • An example of the menu table includes a menu category 201 as an embedding location, a language switching button 202, and a map button 203 in addition to information (name, photo, and price) of the provided food.
  • a person who wants to obtain information from the menu table can display menus by category by selecting the menu category 201.
  • in the illustrated state, the menu category 201 is displayed in Chinese because Chinese has been selected with the language switching button 202.
  • the map button 203 can indicate the location of the predetermined store, the production area of ingredients of each dish constituting the menu table, and the like on a map.
  • the right side of FIG. 13 shows an example in which information on a specific dish is displayed as text from the menu table of dishes provided at a predetermined store (restaurant). The displayed text can also be read out as multilingual audio.
  • the information is managed by the embedding correspondence table illustrated in FIG. 5 and can be switched in multiple languages.
  • the information includes a halal certification 204, a food pictogram 205, a product description 206, and basic information 207 as embedding locations, in addition to a photo of the specific dish.
  • the halal certification 204 is a display indicating whether or not the product falls under “Halal” which means a sound product or an overall activity permitted by Islamic teaching.
  • "Haram", the opposite of "Halal", refers to what is harmful or forbidden for Muslims. That is, Muslims must avoid food and drink other than those officially recognized as "Halal". For this reason, by checking the halal certification 204 displayed in the menu table, Muslims can confirm in real time whether the particular dish is officially recognized as a halal product.
  • the food pictogram 205 displays the foods used in a dish as a service for customers whose diet is restricted for reasons of religion, vegetarianism, food allergies, and the like. This allows such customers to order with confidence.
  • the product description 206 is a sentence explaining the specific dish. For example, it can present background that may interest the customer, such as notes on Japanese food culture or the store's particular preferences. In the example on the right side of FIG. 13, the product description is displayed in English. In this way, Japan can be promoted not only by serving food to foreigners but also by conveying the Japanese culture embodied in the food.
  • the basic information 207 displays basic information about the specific dish such as the production area, allergen, and calories.
  • the production area of the ingredients, the producers, and so on can be shown in multilingual video and images. The customer can thus order with peace of mind, easily learn in his or her own language where the ingredients come from, and enjoy the meal more deeply.
  • for the side providing the information, such as a restaurant, it becomes easy to promote the menu to customers.
  • FIG. 14 is a diagram illustrating an example of a product catalog of a home appliance mass retailer and a menu of a restaurant displayed on a smartphone in which a dedicated application is installed.
  • FIG. 14A shows an example of a product catalog of a home appliance mass retailer.
  • suppose an overseas traveler wants to purchase a Japanese-made rice cooker at a predetermined consumer electronics retailer in Japan.
  • the overseas traveler first activates the dedicated application and searches for the retailer.
  • an icon indicating the predetermined consumer electronics retailer is then displayed on the map of the dedicated application.
  • information from the retailer's product catalog that has been embedded in the embedding locations (for example, size and price) is displayed as text or output as audio.
  • the text of the product catalog displayed on the smartphone screen can be switched to the overseas traveler's own language by his or her operation.
  • documents and the like displayed in the product catalog are managed by the embedding correspondence table illustrated in FIG. 5 and can be switched in multiple languages.
  • FIG. 14B shows an example of a restaurant menu. When ordering at a restaurant in Japan, an overseas traveler often has to choose from a traditional menu written only in Japanese and place the order with a clerk who speaks only Japanese. However, the names of dishes alone do not convey their taste. A traveler may also want to check the menu before entering the restaurant.
  • by using the dedicated app, overseas travelers can not only order food smoothly at restaurants but also obtain detailed information about each item of food and drink, embedded in the embedding locations of the menu table, as multilingual audio, video, maps, and so on. Orders and payment can also be made online through the dedicated app, and recommended stores can be searched for from the current location.
  • the content creation and playback system to which the present invention is applied can take various embodiments, including the information processing system comprising the editor device 1 and the viewer device 4 according to the above embodiment. That is, a content creation and playback system to which the present invention is applied is a system including the following content creation device and content playback device.
  • the content creation device includes the following selection means, embedding means, embedding correspondence information generating means, and content generating means.
  • the selection means (for example, the embedding location selection unit 12 in FIG. 2) selects one or more embedding locations for embedding language data from the image data serving as content elements.
  • the embedding means (for example, the embedding unit 15 in FIG. 2) executes, as an embedding process, a process of associating language data of two or more languages to be embedded with each of the one or more embedding locations.
  • the embedding correspondence information generating means (for example, the embedding correspondence table generating unit 16 in FIG. 2) generates embedding correspondence information (for example, the embedding correspondence table in FIG. 5) indicating the result of the embedding process.
  • the content generation means (for example, the switching content data generation unit 17 in FIG. 2) generates content (for example, the multilingual switching content described above) including the image data, the language data of each of the two or more languages to be embedded, and the embedding correspondence information.
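The four means just listed can be summarized as a minimal sketch in Python. All function names and the dictionary layout are assumptions for illustration only; the patent specifies the roles of the means, not a concrete data model:

```python
# Minimal sketch of the content creation device's four means.
# All names are illustrative; the patent does not fix a data model.

def select_embedding_locations(image_data, regions):
    """Selection means: choose one or more embedding locations
    (e.g. speech-balloon regions) within the image data."""
    return [{"location_id": i, "region": r} for i, r in enumerate(regions)]

def embed(locations, translations):
    """Embedding means: associate language data of two or more
    languages with each embedding location."""
    return {loc["location_id"]: dict(translations[loc["location_id"]])
            for loc in locations}

def generate_embedding_correspondence(embeddings):
    """Embedding correspondence information generating means:
    record which language data belongs to which location."""
    return [{"location_id": loc_id, "languages": sorted(lang_data)}
            for loc_id, lang_data in embeddings.items()]

def generate_content(image_data, embeddings, correspondence):
    """Content generating means: bundle the image data, the
    per-language data, and the correspondence information."""
    return {"image": image_data,
            "language_data": embeddings,
            "embedding_correspondence": correspondence}

# Example: one balloon with Japanese and English text.
locations = select_embedding_locations("page1.png", [(10, 10, 80, 40)])
embeddings = embed(locations, {0: {"ja": "こんにちは", "en": "Hello"}})
correspondence = generate_embedding_correspondence(embeddings)
content = generate_content("page1.png", embeddings, correspondence)
```

Here the embedding correspondence information is kept deliberately simple: one record per embedding location listing the languages available there, mirroring the role of the embedding correspondence table of FIG. 5.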
  • the content reproduction apparatus includes the following content acquisition means, identification means, extraction means, and reproduction control means.
  • a content acquisition unit acquires the content generated by the content generation unit.
  • the specifying unit specifies the playback target language.
  • the extraction means extracts language data of the reproduction target language from the language data of two or more languages to be embedded based on the embedding correspondence information.
  • the reproduction control means reproduces the image data in a state where language data of the reproduction target language is embedded in the embedding portion.
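On the playback side, the four means can likewise be sketched against the same kind of assumed content structure. Again, the names and layout are illustrative assumptions, not the devices' actual implementation:

```python
# Minimal sketch of the content playback device's four means,
# consuming an assumed content structure with image data,
# per-language data, and embedding correspondence information.

def acquire_content(store, content_id):
    """Content acquisition means: fetch generated content."""
    return store[content_id]

def specify_language(user_setting, available, default="en"):
    """Specifying means: pick the playback target language,
    falling back to a default if the request is unavailable."""
    return user_setting if user_setting in available else default

def extract_language_data(content, target_language):
    """Extraction means: using the embedding correspondence
    information, pull the target language's data per location."""
    extracted = {}
    for entry in content["embedding_correspondence"]:
        loc_id = entry["location_id"]
        if target_language in entry["languages"]:
            extracted[loc_id] = content["language_data"][loc_id][target_language]
    return extracted

def play(content, extracted):
    """Playback control means: render the image with the target
    language embedded at each location (stubbed as a string)."""
    return f"{content['image']} + {sorted(extracted.items())}"

store = {"c1": {
    "image": "page1.png",
    "language_data": {0: {"ja": "こんにちは", "en": "Hello"}},
    "embedding_correspondence": [{"location_id": 0,
                                  "languages": ["en", "ja"]}],
}}
content = acquire_content(store, "c1")
lang = specify_language("en", {"en", "ja"})
extracted = extract_language_data(content, lang)
```

Falling back to a default language when the requested one is absent is a design choice of this sketch, not something the patent mandates.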
  • the content creation device and the content reproduction device to which the present invention is applied have been described by taking the editor device 1 and the viewer device 4 as examples.
  • the present invention is not particularly limited thereto.
  • the present invention can be applied to general electronic devices that can process sound and images.
  • the present invention can be applied to portable terminals such as smartphones, portable navigation devices, mobile phones, portable game consoles, digital cameras, notebook personal computers, printers, television receivers, video cameras, and the like.
  • the functional block diagrams of FIGS. 2 and 6 are merely examples and are not particularly limiting. That is, it is sufficient that the editor device 1 and the viewer device 4 as a whole have functions capable of executing the above-described series of processing; which functional blocks are used to realize these functions is not particularly limited to the examples of FIGS. 2 and 6.
  • one functional block may be constituted by hardware alone, software alone, or a combination thereof.
  • a program constituting the software is installed on a computer or the like from a network or a recording medium.
  • the computer may be a computer incorporated in dedicated hardware.
  • the computer may be a computer capable of executing various functions by installing various programs, for example, a general-purpose personal computer.
  • a recording medium containing such a program consists not only of removable media distributed separately from the apparatus main body in order to provide the program to the user, but also of recording media provided to the user in a state of being incorporated in the apparatus main body in advance.
  • the removable medium 41 is composed of, for example, a magnetic disk (including a floppy disk), an optical disk, a magneto-optical disk, or the like.
  • the optical disk is composed of, for example, a CD-ROM (Compact Disk-Read Only Memory), a DVD (Digital Versatile Disk), or the like.
  • the magneto-optical disk is constituted by an MD (Mini-Disk) or the like.
  • the recording medium provided to the user in a state of being preinstalled in the apparatus main body is configured by, for example, a ROM or a hard disk in which a program is recorded.
  • the steps describing the program recorded on the recording medium include not only processing performed in time series in the stated order, but also processing that is not necessarily performed in time series and is executed in parallel or individually.
  • the term “system” means an overall apparatus configured by a plurality of devices, a plurality of means, and the like.

Abstract

 The present invention makes it possible to easily create content which includes images and words and which can be switched between languages. An embedding location selection unit 12 selects, from among sets of image data that serve as elements of the content, at least one embedding location for embedding linguistic data. An embedding unit 15 performs an embedding process in which linguistic data to be embedded in at least two languages is associated with each of the at least one embedding locations. An embedding correspondence table generation unit 16 generates an embedding correspondence table as embedding correspondence information indicating the results of the embedding process. The switchable content data generation unit 17 generates, as content that can be switched between languages, content including image data, the sets of linguistic data for the at least two languages to be embedded, and the embedding correspondence information.

Description

Content creation device, content playback device, program, and content creation and playback system
 The present invention relates to a content creation device, a content playback device, a program, and a content distribution system.
Conventionally, there exist systems that digitize and distribute comic content (see, for example, Patent Document 1).
Such comic content has used image data obtained by reading a comic drawn on a paper medium with an image scanner or the like. In other words, the dialogue shown in the speech balloons was contained in a single set of image data together with the pictures of characters, backgrounds, and so on.
JP 2005-204338 A
For this reason, in order to create comic content in another language, it was necessary to replace the speech balloons with the other language on the paper medium and then prepare separate image data by reading that paper medium with an image scanner or the like.
This situation applies not only to comic content but to content in general that includes images and language.
 The present invention has been made in view of such a situation, and its object is to enable easy creation of content, including images and language, that can be switched between multiple languages.
In order to achieve the above object, a content creation device according to an aspect of the present invention includes:
A selection means for selecting one or more embedding locations for embedding language data from image data serving as content elements;
Embedding means for executing, as an embedding process, a process of associating language data of two or more languages to be embedded with each of the one or more embedding locations;
Embedding correspondence information generating means for generating embedding correspondence information indicating a result of the embedding process;
Content generating means for generating content including the image data, the language data of each of two or more languages to be embedded, and the embedding correspondence information;
Is provided.
 The first program of one aspect of the present invention is a program corresponding to the above-described content creation device of one aspect of the present invention.
A content reproduction device according to one embodiment of the present invention includes
Image data including one or more embedding locations for embedding language data, language data of two or more languages embedded in each of the one or more embedding locations, and each of the one or more embedding locations. Content acquisition means for acquiring content including embedded correspondence information indicating a correspondence relationship with language data of two or more languages to be embedded;
A specifying means for specifying the playback target language,
Extracting means for extracting language data of a reproduction target language from language data of two or more languages to be embedded based on the embedding correspondence information;
Reproduction control means for reproducing the image data in a state in which language data of the reproduction target language is embedded in the embedded portion;
Is provided.
 The second program according to one aspect of the present invention is a program corresponding to the content playback device according to one aspect of the present invention described above.
A content creation and playback system according to an aspect of the present invention includes:
In a content creation and playback system including a content creation device and a content playback device,
The content creation device includes:
A selection means for selecting one or more embedding locations for embedding language data from image data serving as content elements;
Embedding means for executing, as an embedding process, a process of associating language data of two or more languages to be embedded with each of the one or more embedding locations;
Embedding correspondence information generating means for generating embedding correspondence information indicating a result of the embedding process;
Content generating means for generating content including the image data, the language data of each of two or more languages to be embedded, and the embedding correspondence information;
With
The content playback device
Content acquisition means for acquiring the content generated by the content generation means;
A specifying means for specifying the playback target language,
Extracting means for extracting language data of a reproduction target language from language data of two or more languages to be embedded based on the embedding correspondence information;
Reproduction control means for reproducing the image data in a state in which language data of the reproduction target language is embedded in the embedded portion;
Is provided.
According to the present invention, content that includes images and language and that can be switched among multiple languages can be created easily.
FIG. 1 is a block diagram showing the configuration of an information processing system according to one embodiment of the content creation and playback system of the present invention.
FIG. 2 is a functional block diagram showing an example of the functional configuration of the editor device in the information processing system of FIG. 1.
FIG. 3 is a diagram showing an example of an editor image displayed on the editor device of FIG. 2.
FIG. 4 is a schematic diagram for explaining the function of the editor device of FIG. 2 of embedding multilingual text data and audio data in a single embedding location.
FIG. 5 is a diagram showing an example of the structure of an embedding correspondence table generated by the editor device of FIG. 2.
FIG. 6 is a functional block diagram showing an example of the functional configuration of the viewer device in the information processing system of FIG. 1.
FIG. 7 is a diagram showing an example of a viewer image displayed on the viewer device of FIG. 6.
FIG. 8 is a schematic diagram for explaining the switching of languages of content played back on the viewer device of FIG. 6.
FIG. 9 is a conceptual diagram showing an overall overview of an example of a multilingual map.
FIG. 10 is a conceptual diagram showing the differences from a conventional guidebook when a multilingual map is used as a guidebook.
FIG. 11 is a conceptual diagram showing the differences from a conventional guidebook, different from the one shown in FIG. 10, when a multilingual map is used as a travel guide.
FIG. 12 is a conceptual diagram showing the differences from conventional guidebooks, different from those shown in FIGS. 10 and 11, when a multilingual map is used as a guidebook.
FIG. 13 is a diagram showing an example of a menu created by the manager of a given store.
FIG. 14 is a diagram showing an example of a home-appliance retailer's product catalog and a restaurant's menu displayed on a smartphone on which a dedicated multilingual map application is installed.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
FIG. 1 is a block diagram showing a configuration of an information processing system according to an embodiment of the content creation and playback system of the present invention.
The information processing system shown in FIG. 1 includes an editor device 1, an authoring device 2, a distribution device 3, and a viewer device 4.
The editor device 1 is an embodiment of a content creation device to which the present invention is applied, and creates content (for example, comic content in the present embodiment) as electronic data. Specifically, the editor device 1 creates electronic data of comic content in which the language portions, such as dialogue, can be switched among multiple languages (hereinafter referred to as "multilingual switching content"). Hereinafter, unless otherwise noted, the term "content" refers to the electronic data of the content.
The authoring device 2 reduces the multilingual switching content and provides the reduced multilingual switching content (hereinafter referred to as “reduced content”) to the distribution device 3.
Note that the reduced content creation method (content reduction method) is not particularly limited, and any method can be employed.
For example, in the present embodiment, the authoring device 2 executes image conversion processing that encodes the images of the multilingual switching content by a method simulating multiple exposure, thereby reducing the data amount.
Here, an image that has undergone this image conversion processing is hereinafter referred to as a "converted image". In the viewer device 4 described later, the multilingual switching content is restored by executing image restoration processing that uses mathematical means for obtaining a single-exposure image from a multiple exposure (hereinafter referred to as "image inverse conversion processing").
Here, the data amount of the converted image is smaller than that of the original image. This is clear when one considers that the more copies of the same image are superimposed while being shifted, the more the image is averaged, until it finally becomes a single color.
As described above, the image conversion processing and image inverse conversion processing employed in the present embodiment are mathematically guaranteed means that reduce the data amount of an image without impairing the authenticity of that data as an image. For this reason, it is practically impossible for a third party to intercept or tamper with a confidential image.
For details of the image conversion processing and image inverse conversion processing employed in the present embodiment, refer to the specification attached to the application for Japanese Patent Application No. 2014-214533.
Note, however, that the image conversion processing and image inverse conversion processing employed in the present embodiment are merely one example of a reduction method, and any reduction method can be adopted.
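The flattening effect noted above, that superimposing more and more shifted copies averages an image toward a single color, can be illustrated with a toy sketch. The actual conversion is not disclosed here (it is deferred to Japanese Patent Application No. 2014-214533); the following Python code is entirely our own illustration, using a one-dimensional signal in place of an image:

```python
def average_shifted_copies(signal, num_copies):
    """Average `num_copies` cyclically shifted copies of a 1-D signal.

    As more shifted copies are superimposed, every output sample approaches
    the global mean of the signal -- the flattening the text describes.
    """
    n = len(signal)
    return [
        sum(signal[(i + s) % n] for s in range(num_copies)) / num_copies
        for i in range(n)
    ]

signal = [0, 0, 0, 8, 0, 0, 0, 0]          # a single sharp feature
few = average_shifted_copies(signal, 2)    # structure still visible
many = average_shifted_copies(signal, 8)   # fully averaged to the mean
```

With two copies the feature is still visible; with as many copies as samples, every value collapses to the mean, which is why the converted data carries less information than the original.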
The distribution device 3 holds a plurality of reduced contents and, when it receives a viewing request from the viewer device 4, provides the reduced content to be viewed to the viewer device 4.
The viewer device 4 is an embodiment of a content playback device to which the present invention is applied, and is a device operated when a user views content (for example, comic content in the present embodiment).
In the present embodiment, the viewer device 4 restores the multilingual switching content by applying the above-described image inverse conversion processing to the reduced content to be viewed.
The viewer device 4 plays back the multilingual switching content in response to an instruction operation from the user. As will be described in detail later, language portions such as dialogue are played back in the language designated by the user, and the playback target language is switched according to the user's switching operation.
In the present embodiment, each of the editor device 1, the authoring device 2, the distribution device 3, and the viewer device 4 is applied to a computer and its peripheral devices. Each unit in the present embodiment is configured by hardware included in the computer and its peripheral devices, and by software that controls that hardware.
The above hardware includes, in addition to a CPU (Central Processing Unit) as a control unit, a storage unit, a communication unit, a display unit, and an input unit. Examples of the storage unit include memory (RAM: Random Access Memory, ROM: Read Only Memory, etc.), a hard disk drive (HDD), and an optical disc (CD, DVD, etc.) drive. Examples of the communication unit include various wired and wireless interface devices. Examples of the display unit include various displays such as a liquid crystal display. Examples of the input unit include a keyboard and pointing devices (a mouse, a trackball, etc.).
The viewer device 4 of the present embodiment is configured as a tablet and also has a touch panel that serves as both an input unit and a display unit.
The input unit of the touch panel is composed of, for example, a capacitive or resistive position input sensor laminated on the display area of the display unit, and detects the coordinates of the position at which a touch operation is performed. Here, a touch operation refers to an operation of bringing an object (a user's finger, a stylus, etc.) into contact with, or into proximity to, the touch panel (more precisely, its input unit) serving as a display medium. Hereinafter, the position at which a touch operation is performed is referred to as the "touch position", and the coordinates of the touch position are referred to as the "touch coordinates".
The software includes computer programs and data for controlling the hardware. The computer programs and data are stored in the storage unit and are executed and referenced as appropriate by the control unit. The computer programs and data can also be distributed via a communication line, or recorded on a computer-readable medium such as a CD-ROM and distributed.
The editor device 1 has the functional configuration shown in FIG. 2 in order to perform its various operations through this cooperation of hardware and software.
That is, FIG. 2 is a functional block diagram illustrating an example of the functional configuration of the editor device 1.
The editor device 1 includes an image reception unit 11, an embedding location selection unit 12, a text reception unit 13, an audio reception unit 14, an embedding unit 15, an embedding correspondence table generation unit 16, a switching content data generation unit 17, and an output unit 18.
The image reception unit 11 receives the image data of the content.
The embedding location selection unit 12 selects, from the received image, locations in which language data is to be embedded (hereinafter, "embedding locations").
For example, the image data received in the present embodiment is image data representing each of the plurality of pages constituting a comic, and can be divided in units of pages.
The image of one page is divided into a plurality of frames. Each frame contains a picture of a given scene and, as necessary, "speech balloons". A speech balloon displays the dialogue of a person or the like appearing in the picture of that frame.
Therefore, in the present embodiment, the speech balloon locations included in the frames are selected as "embedding locations".
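The page, frame, and speech-balloon structure just described can be modeled with a few simple data types. The following Python sketch is purely illustrative: the class names, fields, and coordinates are our own assumptions, not part of the embodiment, and the "P1-A1" style IDs anticipate the ID scheme described later for the embedding correspondence table.

```python
from dataclasses import dataclass, field

@dataclass
class Balloon:
    """A speech-balloon region inside a frame; a candidate embedding location."""
    balloon_id: str   # e.g. "P1-A1" = page 1, embedding location 1
    region: tuple     # (x, y, width, height) of the balloon within the page image

@dataclass
class Page:
    """One page image of the comic, holding its speech-balloon candidates."""
    number: int
    balloons: list = field(default_factory=list)

def select_embedding_locations(page, chosen_ids):
    """Return the balloons the creator selected as embedding locations."""
    return [b for b in page.balloons if b.balloon_id in chosen_ids]

page1 = Page(number=1, balloons=[
    Balloon("P1-A1", (120, 40, 80, 50)),
    Balloon("P1-A2", (300, 60, 90, 55)),
])
selected = select_embedding_locations(page1, {"P1-A1"})
```

Under this model, the creator's selection operation reduces to filtering the balloon candidates of the current page by their IDs.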
FIG. 3 shows an example of an editor image 31 displayed on the editor device 1.
Using the editor image 31, a content creator can create multilingual switching content for an arbitrary comic.
The editor image 31 includes, as areas for embedding data in a given language, an area 41J for embedding Japanese data, an area 41E for embedding English data, an area 41C for embedding Chinese data, and so on.
For example, when embedding Japanese data, the creator designates the "Japanese" tab to display the area 41J for embedding Japanese data, as shown in FIG. 3.
Hereinafter, the editor image 31 will be described taking the case of embedding Japanese data as an example.
As shown in FIG. 3, the area 41J for embedding Japanese data displays the image of the page being worked on, from among the images of the plurality of pages constituting the comic.
The creator can switch the page being worked on by pressing a software button shown in the page switching area 42, or by pressing the thumbnail of each page's image displayed in the page thumbnail image display area 43.
The image of the page being worked on is divided into a plurality of frames, and one or more speech balloons are set in each frame. These speech balloon locations are the candidates for embedding locations. In the example of FIG. 3, speech balloon locations 51 to 55 are the candidates.
The creator performs an operation of selecting embedding locations from among these candidates, and the embedding location selection unit 12 in FIG. 2 selects the embedding locations based on that operation.
For example, assume that the speech balloon location 52 has been selected as an embedding location.
The creator can embed at least one of text data and audio data, as Japanese data, in the location 52 selected as the embedding location.
For example, when embedding text data, the creator can either input text directly into the location 52 selected as the embedding location, or embed text data prepared in advance.
In either case, the text reception unit 13 in FIG. 2 receives the text data to be embedded and supplies it to the embedding unit 15.
Likewise, when embedding audio data, the creator can, with the location 52 selected as the embedding location, either utter the desired speech directly into a microphone (not shown) of the editor device 1, or embed audio data prepared in advance.
In either case, the audio reception unit 14 in FIG. 2 receives the audio data to be embedded and supplies it to the embedding unit 15.
The embedding unit 15 executes processing for embedding, in the embedding location selected by the embedding location selection unit 12 (the speech balloon location 52 in the above example), the text data in the given language (Japanese in the above example) received by the text reception unit 13 and the audio data in that language received by the audio reception unit 14.
In this way, Japanese text data and audio data can be embedded in the embedding location selected by the embedding location selection unit 12 (the speech balloon location 52 in the above example).
Although not illustrated, to embed English data the creator designates the "ENGLISH" tab to display the area 41E for embedding English data, and then performs the same operations as described above. Similarly, to embed Chinese data the creator designates the "中文" tab to display the area 41C for embedding Chinese data, and performs the same operations.
As a result, as shown in FIG. 4, multilingual text data and audio data can be embedded in a single embedding location (the speech balloon location 52 in the example of FIG. 4).
Here, the "embedding processing" performed by the embedding unit 15 in FIG. 2 in the present embodiment is not image processing that creates a page image with text or the like placed at the embedding locations (that is, processing that alters the image), but processing that associates the embedding locations in the image data with the language data (text data and audio data) to be embedded.
The method of this association is not particularly limited, but in the present embodiment, a method of generating a table as shown in FIG. 5 (hereinafter referred to as the "embedding correspondence table") is adopted.
That is, the embedding correspondence table generation unit 16 in FIG. 2 creates an embedding correspondence table that associates the embedding locations in the image data with the language data (text data and audio data) to be embedded.
In the embedding correspondence table of FIG. 5, each row corresponds to one embedding location.
As is clear from the illustration in FIG. 3, the content of one comic contains a plurality of speech balloon locations (locations 51 to 56 in FIG. 3 alone), and separate text and audio are embedded in each speech balloon location.
Therefore, each embedding location is assigned a unique ID. Furthermore, in the present embodiment, separate IDs are used for text data and audio data so that the two are clearly distinguished even at the same embedding location. Specifically, in the present embodiment, IDs of the form "Pn-Am-T" and "Pn-Am-S" are used. Here, the "n" in "Pn" indicates the page number. The "m" in "Am" indicates a number assigned by a predetermined rule to each of the plurality of embedding locations included in the image of page "n". That is, the ID "Pn-Am" uniquely identifies the "m"-th embedding location on page "n". Furthermore, "T" at the end of an ID indicates text data, and "S" at the end indicates audio data.
Note that each embedding location selected by the embedding location selection unit 12 in FIG. 2 and its ID are also assumed to be associated within the image data. That is, by designating an ID, the speech balloon location (image region) indicated by that ID can be identified in the image.
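The embedding correspondence table and its ID scheme can be sketched as a plain mapping. In the following Python sketch, the helper `make_id` and the dictionary layout are hypothetical; only the "Pn-Am-T" / "Pn-Am-S" ID format and the example row contents follow the description and FIG. 5.

```python
def make_id(page, location, kind):
    """Build an embedding ID: page 'n', location 'm', kind 'T' (text) or 'S' (audio)."""
    assert kind in ("T", "S")
    return f"P{page}-A{location}-{kind}"

# One row per embedding location and kind; the columns are the embedded languages.
# Text is stored directly; audio is stored as a file-name link, as in FIG. 5.
embedding_table = {
    make_id(1, 1, "T"): {
        "ja": "僕の名前はAです。",
        "en": "My Name is A.",
        "zh": "我的名字是A。",
    },
    make_id(1, 1, "S"): {
        "ja": "A日.mp3",
        "en": "A英.mp3",
        "zh": "A中.mp3",
    },
}
```

With this shape, adding another language to a row never touches the image data; it only adds one more column to the table.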
Specifically, for example, according to the first row, the embedding location with ID "P1-A1-T", that is, the first embedding location on page 1 (for example, location 52 in FIG. 4), is associated with the text data 「僕の名前はAです。」 in Japanese, "My Name is A." in English, and 「我的名字是A。」 in Chinese.
Although not shown in FIG. 5, parameters of each text data item, such as the font type and font size, can also be stored in the embedding correspondence table for each language (each entry). The parameters of each text data item can be specified for each language and each embedding location (and, if necessary, for each character) by operating the various operation tools (software) in the text parameter designation area 44 of FIG. 3.
In the example of FIG. 5, the text of each language is stored directly in the embedding correspondence table; however, as with the audio data, a separate text data file may be prepared and a link to that file stored instead.
Also, for example, according to the second row, the same first embedding location on page 1 (for example, location 52 in FIG. 4), under the audio ID "P1-A1-S", is associated with the audio data "A日.mp3" in Japanese, "A英.mp3" in English, and "A中.mp3" in Chinese.
Here, "A日.mp3" is the file name of the audio data of 「僕の名前はAです。」 pronounced in Japanese; that is, a link to this audio data file is stored in the embedding correspondence table of FIG. 5.
Similarly, "A英.mp3" is the file name of the audio data of "My Name is A." pronounced in English, and "A中.mp3" is the file name of the audio data of 「我的名字是A。」 pronounced in Chinese. Links to these audio data files are likewise stored in the embedding correspondence table of FIG. 5.
Returning to FIG. 2, the switching content data generation unit 17 generates, as the multilingual switching content, a data group including the image data of each page of the comic (with the embedding locations associated), the text data and audio data embedded in (associated with) each embedding location, and the embedding correspondence table.
The output unit 18 outputs the multilingual switching content from the editor device 1.
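The data group assembled by the switching content data generation unit 17 can be sketched as a simple bundle of its four parts. The function name, keys, and flat-dictionary layout below are our own illustration; the embodiment only requires that the four parts travel together as one piece of content.

```python
def generate_switching_content(page_images, text_data, audio_data, embedding_table):
    """Bundle the four parts of the multilingual switching content into one data group."""
    return {
        "pages": page_images,      # page images, with embedding locations already associated
        "texts": text_data,        # embedded text data, per embedding ID and language
        "audio": audio_data,       # embedded audio file links, per embedding ID and language
        "table": embedding_table,  # the embedding correspondence table
    }

content = generate_switching_content(
    page_images={1: "page1.png"},
    text_data={"P1-A1-T": {"ja": "僕の名前はAです。", "en": "My Name is A."}},
    audio_data={"P1-A1-S": {"ja": "A日.mp3", "en": "A英.mp3"}},
    embedding_table={"P1-A1-T": ["ja", "en"], "P1-A1-S": ["ja", "en"]},
)
```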
The functional configuration example of the editor device 1 as an embodiment of the content creation device of the present invention has been described above.
Next, a functional configuration example of the viewer device 4 as an embodiment of the content playback device of the present invention will be described.
FIG. 6 is a functional block diagram illustrating an example of the functional configuration of the viewer device 4.
The viewer device 4 includes a switching content data acquisition unit 61, a separation unit 62, an image holding unit 63, a text holding unit 64, an audio holding unit 65, an embedding correspondence table holding unit 66, an operation unit 67, a playback target specifying unit 68, a playback target extraction unit 69, a playback control unit 70, and an output unit 71.
The switching content data acquisition unit 61 acquires the multilingual switching content distributed from the distribution device 3.
The separation unit 62 separates, from the multilingual switching content, the image data of each page of the comic (with the embedding locations associated), the text data and audio data embedded in (associated with) each embedding location, and the embedding correspondence table.
Of the data separated from the multilingual switching content, the image data of each page of the comic (with the embedding locations associated) is held in the image holding unit 63. The text data embedded in (associated with) each embedding location is held in the text holding unit 64. The audio data embedded in (associated with) each embedding location is held in the audio holding unit 65. The embedding correspondence table is held in the embedding correspondence table holding unit 66.
In the present embodiment, as described above, the multilingual switching content output from the editor device 1 is subjected to image conversion processing for reduction in the authoring device 2 of FIG. 1, and is provided to the distribution device 3 as reduced content.
Therefore, the switching content data acquisition unit 61 of the viewer device 4 of the present embodiment acquires the reduced content.
Accordingly, the separation unit 62 applies the above-described image inverse conversion processing to the reduced content to restore the multilingual switching content, and then separates the various data described above from the restored multilingual switching content.
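The separation performed by the separation unit 62 can be sketched as splitting a bundled content dictionary back into its four holders. The keys and layout below are our own assumptions, and the image inverse conversion that first restores the content from the reduced content is omitted here.

```python
def separate_switching_content(content):
    """Split restored multilingual switching content into its four holders."""
    return (
        content["pages"],  # -> image holding unit
        content["texts"],  # -> text holding unit
        content["audio"],  # -> audio holding unit
        content["table"],  # -> embedding correspondence table holding unit
    )

restored = {
    "pages": {1: "page1.png"},
    "texts": {"P1-A1-T": {"ja": "僕の名前はAです。"}},
    "audio": {"P1-A1-S": {"ja": "A日.mp3"}},
    "table": {"P1-A1-T": ["ja"], "P1-A1-S": ["ja"]},
}
images, texts, audio, table = separate_switching_content(restored)
```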
In the present embodiment, the operation unit 67 is composed of various software operation tools displayed on the touch panel. That is, the user performs various touch operations on the touch panel, and the operation unit 67 receives those operations.
For example, the viewer image 101 shown in FIG. 7 is displayed on this touch panel.
The viewer image 101 includes a display area 111 for displaying the title of the multilingual switching content (the comic's title) and the like, a display area 112 for displaying the number of the page being displayed, and a bar display area 113 indicating the position of the displayed page within the whole.
The viewer image 101 also includes a display area 114 for displaying the image of the page being viewed.
In this example, the image of the page being viewed shows one page of a comic; it is divided into a plurality of frames, and each frame contains speech balloon locations as appropriate.
In each speech balloon location (for example, location 151 in the example of FIG. 7), text in the playback target language is displayed. In the example of FIG. 7, the Japanese text 「僕の名前はAです。」 is displayed.
Here, the language to be played back can be switched. For example, the viewer presses the switching button 115 when switching the text setting language. In this case, the switch button 115 in the operation unit 67 of FIG. 6 is pressed.
The reproduction target specifying unit 68 determines an image of the reproduction target page and also determines a reproduction target language. That is, when the switch button 115 is pressed, the reproduction target specifying unit 68 switches the text reproduction target language. For example, it is assumed that “Japanese” is switched to “English”.
The reproduction target extraction unit 69 extracts the image of the reproduction target page from the image holding unit 63 and reproduces the text out of the text embedded in each “balloon” portion (embedding portion) included in the image of the reproduction target page. Extract texts in the target language (for example, English).
Here, of the text embedded in each “balloon” part (embedded part) included in the image of the reproduction target page, the text in the reproduction target language (for example, English) is represented by the embedding correspondence table (FIG. 5). Determined.
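As a minimal sketch, the switching behavior described above can be modeled as a small state machine that cycles through the available languages each time the switching button 115 is pressed. The class and member names below are illustrative assumptions, not taken from the actual implementation:

```python
class PlaybackTargetSpecifier:
    """Tracks the reproduction target language (cf. specifying unit 68)."""

    def __init__(self, languages):
        self.languages = list(languages)  # e.g. ["ja", "en", "zh"]
        self.index = 0                    # start with the first language

    @property
    def target_language(self):
        return self.languages[self.index]

    def on_switch_button(self):
        """Advance to the next language (switching button 115 pressed)."""
        self.index = (self.index + 1) % len(self.languages)
        return self.target_language


spec = PlaybackTargetSpecifier(["ja", "en", "zh"])
print(spec.target_language)     # ja
print(spec.on_switch_button())  # en  ("Japanese" switched to "English")
print(spec.on_switch_button())  # zh
```

Pressing the button once more would wrap around to "ja" again, matching the sequential switching described for location 151.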
The reproduction control unit 70 generates an image in which the text in the reproduction target language (for example, English) is superimposed at each "speech balloon" location (embedding location) of the reproduction target page image, and causes the output unit 71 to reproduce it.
Specifically, the output unit 71 includes a touch-panel display unit and a speaker (not shown). The viewer image 101 is displayed on this display unit, and the image generated by the reproduction control unit 70 is displayed in the display area 114 of the viewer image 101.
That is, when the switching button 115 is pressed and the reproduction target language is switched from "Japanese" to "English", the display area 114 of the viewer image 101 switches from the image shown in FIG. 7 to an image in which the drawings of people, backgrounds, and so on remain unchanged while English text is displayed at each "speech balloon" location (embedding location).
Specifically, for example, the English text "My Name is A." is displayed at the "speech balloon" location 151, as shown in the center of FIG. 8.
When the switching button 115 is pressed again in this state, the Chinese text "我的名字是A。" is displayed at the "speech balloon" location 151, as shown on the right side of FIG. 8.
In the example of FIG. 8, only three languages (Japanese, English, and Chinese) are shown for ease of explanation; naturally, however, any language stored in the embedding correspondence table of FIG. 5 can be supported.
That is, the text at the "speech balloon" location 151 corresponds to the text of each language stored in the first row of the embedding correspondence table of FIG. 5. In other words, the ID of the location 151 is managed as "P1-A1". Therefore, as the text for the location 151 having the ID "P1-A1", the reproduction target extraction unit 69 only needs to extract the text of the reproduction target language from among the texts having the ID "P1-A1-T" in the first row.
Accordingly, simply by pressing the switching button 115, the viewer can sequentially switch the text displayed at the "speech balloon" location 151 among languages, such as the Japanese "僕の名前はAです。", the English "My Name is A.", and the Chinese "我的名字是A。".
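The ID-based lookup described above can be sketched as follows. Here the embedding correspondence table of FIG. 5 is modeled as a mapping from a text entry ID (the embedding-location ID with a "-T" suffix, as in "P1-A1-T") to one text per language; the exact table layout is an assumption made for illustration:

```python
# Illustrative model of the embedding correspondence table (FIG. 5):
# one row per embedding location, holding the text of every language.
EMBEDDING_TABLE = {
    "P1-A1-T": {
        "ja": "僕の名前はAです。",
        "en": "My Name is A.",
        "zh": "我的名字是A。",
    },
}


def extract_text(location_id, target_language):
    """Return the text embedded at one balloon in the target language
    (cf. reproduction target extraction unit 69)."""
    entry = EMBEDDING_TABLE[location_id + "-T"]
    return entry[target_language]


print(extract_text("P1-A1", "en"))  # My Name is A.
print(extract_text("P1-A1", "zh"))  # 我的名字是A。
```

The same lookup applies to every embedding location on the reproduction target page, so switching the target language re-renders all balloons from the table.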
Regarding switching of the reproduction target language of text, all "speech balloon" locations displayed in the display area 114 of the viewer image 101 may be switched at once, or only a specific "speech balloon" location designated by the viewer may be switched.
In the latter case, the operation for switching the reproduction target language of text may be, for example, a touch operation on the target "speech balloon" location, in addition to a press of the switching button 115.
Further, the order in which the reproduction target language of text is switched may be a predetermined order, or the selection area 117 of FIG. 7 may be displayed so that the viewer can select a desired language, as in the switching of audio data described later.
Here, the viewer can switch among multiple languages not only for the text displayed at the "speech balloon" locations but also for the audio output corresponding to that text. That is, a "talking comic" can be established as a new genre alongside anime and manga, and differentiation from ordinary electronic books can be achieved. Moreover, since the combination of text and audio languages can be selected freely, anyone in the world can read the content, which makes it possible to meet localization demands. Furthermore, because readers can learn enjoyably while reading a comic, the content can also be used for language learning.
The switching button 116 of FIG. 7 is used to switch the language of this audio output.
When the switching button 116 is pressed, the selection area 117 is displayed. By touching a desired language among the languages displayed in the selection area 117, the viewer can designate that language as the reproduction target language.
In this case, the selection operation on the selection area 117 is accepted as an operation by the operation unit 67 of FIG. 6.
Based on this selection operation, the reproduction target specifying unit 68 determines the reproduction target language for audio. That is, the selection operation switches the reproduction target language of the audio; for example, assume that it is switched from "Japanese" to "English".
The reproduction target extraction unit 69 extracts, from the audio data embedded at each "speech balloon" location (embedding location) included in the image of the reproduction target page, the audio in the reproduction target language (for example, English).
Here, among the audio data embedded at each "speech balloon" location (embedding location), the audio in the reproduction target language (for example, English) is identified by means of the embedding correspondence table (FIG. 5).
At the audio reproduction timing of whichever "speech balloon" location (embedding location) of the reproduction target page is the target of audio reproduction, the reproduction control unit 70 causes the speaker of the output unit 71 to reproduce the audio in the reproduction target language (for example, English).
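Under the same assumptions as the text lookup, the audio side can be sketched as below. The "-S" entry-ID suffix and the audio asset names are purely illustrative; the description only states that per-language audio is embedded at each embedding location and identified via the embedding correspondence table:

```python
# Hypothetical per-language audio entries in the embedding table.
AUDIO_TABLE = {
    "P1-A1-S": {
        "ja": "p1_a1_ja.wav",
        "en": "p1_a1_en.wav",
    },
}


def extract_audio(location_id, target_language):
    """Return the audio asset embedded at one balloon for the target language."""
    return AUDIO_TABLE[location_id + "-S"][target_language]


def play_page_audio(location_ids, target_language, play):
    """Play each balloon's audio in order at its reproduction timing
    (cf. reproduction control unit 70 and the output unit 71 speaker)."""
    for loc in location_ids:
        play(extract_audio(loc, target_language))


played = []
play_page_audio(["P1-A1"], "en", played.append)
print(played)  # ['p1_a1_en.wav']
```

In a real player, `play` would hand the asset to the speaker of the output unit rather than collect names into a list.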
Note that the present invention is not limited to the above-described embodiment, and modifications, improvements, and the like within a scope in which the object of the present invention can be achieved are included in the present invention.
For example, in the above-described embodiment the content is a comic in the form of a so-called electronic book, but the content is not particularly limited to this. For example, the content may be animated comic content (a "moving comic"). Because a moving comic uses the original comic manuscript, it can be produced more quickly than before, which makes it possible to keep up with fast business cycles. In addition, because the author's comic manuscript is turned into video as-is, it can be established as a new genre characterized by high quality. Furthermore, because production is based mainly on the comic manuscript, it has the advantage of lower cost than anime.
Also, for example, in the above-described embodiment the embedding locations for embedding language data are the "speech balloon" locations of each comic frame, but the embedding locations are not particularly limited to these and may be any locations within the image.
For example, in comics, onomatopoeia is often rendered as a text portion of the artwork. Such text portions can also be adopted as embedding locations.
Furthermore, the multilingual switching content need not be a comic; anything into which language data can be embedded is sufficient.
For example, a menu provided at a restaurant or the like can be adopted as multilingual switching content. In this case, the locations showing the names and prices of the food and drink, the explanatory text describing them, and so on can be adopted as embedding locations.
In this case, the content of the language data embedded at an embedding location need not be identical across languages. For example, at a restaurant in Japan, a description of sashimi written in Japanese for Japanese customers, who eat sashimi routinely, often needs no long explanation of the ingredients or preparation, whereas for foreign customers who do not routinely eat sashimi, a longer explanation in their language (for example, English) may be preferable. In such a case, the content of the language data embedded at the embedding location differs between languages.
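As a hypothetical illustration of this point, a single embedding location in a menu might carry entries of very different lengths per language, while the lookup logic stays exactly the same as for the comic balloons; only the stored strings differ:

```python
# One illustrative embedding-table row for a menu item; the strings are
# invented for this example, not taken from the patent.
MENU_ITEM = {
    "ja": "刺身盛り合わせ",  # short label: no explanation needed for locals
    "en": ("Assorted sashimi: slices of fresh raw fish "
           "served with soy sauce and wasabi."),  # longer explanation
}

for lang, text in sorted(MENU_ITEM.items()):
    print(lang, text)
```

The same asymmetry applies to any embedding location: the table simply stores whatever text each audience needs.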
Also, for example, a map can be adopted as multilingual switching content. In this case, the locations indicating the various places on the map can be adopted as embedding locations.
Here too, the content of the language data embedded at an embedding location need not be identical across languages. For example, on a map of Japan, Japanese text for Japanese users familiar with the geography often needs no long explanation, whereas for foreign users who are not familiar with the geography, longer explanatory text in their language (for example, English), such as tourist information, may be preferable. In such a case, the content of the language data embedded at the embedding location differs between languages.
Accordingly, for example, it is possible to provide a service in which the maps, shops, restaurant menus, and the like that can be adopted as such multilingual switching content are all linked together, which can assist in menu creation, and which can easily be deployed to mobile terminals.
Such a service is hereinafter referred to as a "multilingual map".
An overview of an example of the multilingual map is described below with reference to FIGS. 9 to 14.
FIG. 9 is a conceptual diagram showing an overall overview of an example of the multilingual map.
The multilingual map can be realized by the content creation and playback system of the present invention. As one example, three categories can be linked to one another: a category related to maps (hereinafter referred to as the "MAP category"), a category related to shops (hereinafter referred to as the "shop category"), and a category related to menus (hereinafter referred to as the "menu category"). Note that the linkage of these three categories (MAP category, shop category, menu category) is illustrative, and the linkage is not limited to these three; any kind of category can be linked. For example, the map can be linked with travel agencies or duty-free shops, or with providers of information on famous places, history, people, and the like (publishers, etc.). For convenience of explanation, however, the linkage of the three categories shown in FIG. 9 is described below as a concrete example.
Specifically, the manager of a restaurant or other eating establishment uses the editor device 1 (FIG. 2) to produce multilingual switching content by generating an embedding correspondence table such as the one shown in FIG. 5, so that the various explanatory portions of the shop's website and of its menus become multilingual. That is, in this example, the multilingual switching content of the shop category and the menu category is produced on the side of the establishment (for example, a restaurant) that provides the food and drink.
On the other hand, the multilingual switching content of the MAP category is produced using an editor device 1 (FIG. 2) operated by the service provider of the multilingual map, a tourism company, or the like.
Here, the editor device 1 may be a dedicated terminal, or may be a general-purpose terminal such as a personal computer on which dedicated software is installed.
On the user side, the multilingual map is used with the viewer device 4 of FIG. 6. Alternatively, dedicated application software for the multilingual map (hereinafter referred to as the "dedicated app") may be installed on a mobile terminal such as a smartphone to give that mobile terminal the same functions as the viewer device 4. In this case, when the dedicated app is started on the mobile terminal, the multilingual map is displayed on the screen of the mobile terminal.
The multilingual map is a map covering the entire world. Because the map and the locations of shops are linked to each other, icons indicating the shops are displayed on the map.
Note that, for the MAP category in the example of FIG. 9, items concerning recommended nearby spots, items concerning traffic information such as road closures and congestion, and items introducing famous places such as tourist sites can be adopted. For the shop category, for example, items concerning the shop's location such as its address, items concerning the shop's publicity such as video commercials, and items concerning the shop's website can be adopted. For the menu category, for example, items concerning signage such as video advertisements for goods and services, items concerning halal (meaning, broadly, the "sound goods and activities" permitted under Islamic teaching), items concerning allergens (substances that cause allergic reactions), and items concerning regional specialties can be adopted.
In the example of FIG. 9, by linking the above three categories to one another, in the relationship between the MAP category and the shop category, for example, a shop's location and the spots around it can be linked. In the relationship between the shop category and the menu category, the shop's website and its in-store menu can be linked. Furthermore, in the relationship between the menu category and the MAP category, by linking food and drink with the map, the production region of an ingredient can be linked with photographs and videos of that region through the seasons.
That is, by using the dedicated app, users around the world who speak different languages can each realize "enjoying", "knowing", and "eating" at once.
In this way, in the multilingual map, the multilingual-capable maps, categories, and menus are all linked. Therefore, by using the multilingual map as a travel guide when traveling abroad, a user can realize functions that could not be realized with conventional general travel guidebooks.
FIG. 10 is a conceptual diagram showing the differences from a conventional guidebook when the multilingual map is used as a guidebook.
By using the multilingual map as a travel guide, overseas travelers can switch the output text and audio in real time. In addition, because the categories are closely linked to one another, information useful for overseas travel can be obtained in real time.
Furthermore, the shop side can be provided with a content editor with which even an individual can easily produce content.
Status S1 indicates that, when the manager of a restaurant or other eating establishment operates the editor device 1 to create or update the establishment's menu, the multilingual map m, the shop's website, and so on are all linked, and the content of the update is reflected in real time in the worldwide multilingual map.
Likewise, when the establishment's website or the like is updated, the multilingual map m and the menus are all linked, and the content of the update is reflected in real time in the worldwide multilingual map.
That is, because updates to the multilingual switching content can be viewed worldwide in real time and in multiple languages as-is, the establishment can easily advertise its menu not only within Japan but also to overseas travelers visiting Japan from all over the world.
Note that the editor device 1 can be made operable at any time by being provided at the establishment's storefront or in the region to which the establishment belongs.
Status S2 indicates that overseas travelers, when visiting Japan, can operate their mobile terminals and make use of the dedicated app. That is, an overseas traveler can easily obtain, in real time and in the traveler's own language, the latest information about the establishment (for example, the dishes offered and a map m showing its location).
At this time, even when there are multiple overseas travelers who use different languages, it is not necessary to start the dedicated app on every mobile terminal they carry. By starting the dedicated app on a single mobile terminal and switching the displayed language as appropriate, all of the travelers can easily view the latest information about the shop in real time. For example, if the dedicated app is started on only one mobile terminal carried by a tour conductor, companions who use different languages can easily be shown the necessary information.
The multilingual map can also be used not only as a travel guide during the trip but also for gathering information before departure.
Status S3 indicates that overseas travelers can operate their mobile terminals and make use of the dedicated app before visiting Japan. That is, when an overseas traveler who plans to visit Japan wants to dine at a particular establishment there, the traveler can, before the visit, operate a mobile terminal and use the dedicated app to easily obtain the latest information about that establishment in real time and in the traveler's own language.
For example, if an overseas traveler planning to visit Japan has restrictions on what they can eat for religious or physical reasons, the traveler can easily confirm in advance, in real time and in their own language, whether the establishment offers dishes they can eat. This prevents situations in which, despite having come all the way to Japan, the traveler finds nothing they can eat, or becomes unwell because of something they ate.
Overseas travelers can also use the multilingual map as an advance guide when they want to book a sightseeing plan before coming to Japan, or when they want to learn about recommended spots in advance. In this case, by introducing recommended places and spots in line with the content of a given tour guide, the multilingual map can present recommended sights in multiple languages and lead users to tours of those sights. The multilingual map can also be linked with a tour reservation system. In addition, seasonal recommendations and special features can be posted in real time as advance information.
In this way, in statuses S1 to S3, the multilingual maps, the shops' websites and the like, and the menus are all linked.
FIG. 11 is a conceptual diagram showing differences from a conventional guidebook, different from those shown in FIG. 10, when the multilingual map is used as a travel guide.
By using the multilingual map as a travel guide, the language and audio output as traffic guidance or as route guidance to a destination can be switched in real time. This makes it possible to use trains, buses, taxis, and the like efficiently, and prevents failures such as getting lost abroad by misreading a map, or missing a train or bus, or riding past one's stop, because of time spent deciphering maps and timetables.
Moreover, by linking or sharing each category on the Internet, the content can be advertised all over the world. Furthermore, each shop (or region) can use the content of the multilingual map as-is as signage, such as video advertisements for goods and services, or as menus.
This makes it possible to showcase tourism resources to people all over the world to the fullest.
In step S11, after the dedicated app is started by an overseas traveler, when the traveler taps the text or symbol indicating the area about which the traveler wants information ("city center area" in the example of FIG. 11), the target area is selected.
At this time, the traveler can switch the text displayed on the mobile terminal to the traveler's own language in real time.
The multilingual map can also search for and display tours, shopping, restaurants, and the like based on the current location, and can output route guidance and traffic guidance to a destination based on the current location. Furthermore, a favorite spot can be registered as such at any time, whether before or after visiting it.
In step S12, the map of the selected area is displayed enlarged, icons indicating the locations of the facilities are displayed on the map, and overviews of the facilities are displayed as thumbnails. In addition, an image of a given character displayed on the screen serves as an embedding location, and information can be announced to the overseas user in multiple languages through the audio embedded at that location and through the photographs displayed as thumbnails.
Note that the shape, pattern, and so on of the icons displayed on the map may be differentiated by facility type. For example, in the example of FIG. 11, the facilities are divided by type into Food (restaurants), Shop (stores other than restaurants), and SPOT (other facilities).
When the overseas traveler taps, among the icons displayed on the enlarged map, the icon of the facility about which the traveler wants information, information about that facility is displayed. In the example of FIG. 11, when the icon P1 indicating a given restaurant is tapped, the restaurant's menu and the map m1 are linked, and the restaurant's menu is displayed. Moreover, not only can the restaurant's menu and the map m1 be linked, but the production region of an ingredient shown in the menu's description can also be linked with the map m1.
This allows the traveler to easily understand, in the traveler's own language, where the ingredients of a dish come from, which deepens the impression made when eating the dish. For the shop side (for example, a restaurant), it also becomes easier to publicize its menu to overseas travelers.
The multilingual map can also issue shop menus, catalogs, coupons, and the like from detailed spot (for example, restaurant) information.
FIG. 12 is a conceptual diagram showing differences from a conventional guidebook, different from those shown in FIGS. 10 and 11, when the multilingual map is used as a guidebook.
As shown in FIG. 12, with the multilingual map, a dedicated editor (for example, the editor device 1 of FIG. 1) is provided to the manager on the shop (or region) side. Support is also provided for the translation work required for production and for the production of audio, photographs, designs, videos, and so on. The dedicated editor may be provided together with hardware or provided online.
This makes it easy for the manager of a shop (or region) to upload content to the multilingual map platform and to update it via SD card or online, reducing the time these tasks require. Even without know-how in these tasks, the manager can easily publicize to people all over the world the latest information the region or shop wants to promote.
In addition, the manager of a shop (or region) can introduce production regions and the like in multiple languages and in real time, not only with documents and photographs but also with videos and audio. This makes it possible to easily publicize goods and services to people all over the world using every available means.
A dedicated mobile terminal (tablet) for a given shop (or region) can also be made. For example, by deploying dedicated tablets in place of the conventional paper menus placed at each table of a restaurant, a menu in the language used by the customer can be displayed on the tablet. This not only allows customers to order smoothly, but also lets them obtain detailed information about each dish and drink on the menu through audio, video, maps, and the like.
What can be linked with the multilingual map is not limited to the examples above; anything can be linked. For example, guidance with explanations (text and audio) at each sightseeing spot, introductions to tours at recommended spots, various reservation systems, tax-exemption information, simple conversation aids using comics, emergency contact methods, information on limited-time sales, information on services that deliver purchases to one's hotel, gourmet guides, and the like can all be linked with a world-ready map.
FIG. 13 is a diagram showing an example of a menu created by the manager of a given store.
The left side of FIG. 13 shows an example of a menu of dishes served at a given store (restaurant). The menu is an example of multilingual content created on the editor device 1 by the store's manager and viewed by the user on the viewer device 4.
In addition to information about each dish (name, photograph, and price), this example menu includes a menu category 201 as an embedding location, a language switching button 202, and a map button 203.
A person seeking information from the menu can display dishes by category by selecting the menu category 201. In the example on the left of FIG. 13, Chinese (中文) has been selected with the language switching button 202, so the menu category 201 is displayed in Chinese.
In this case, a Japanese speaker selects Japanese (日本) from the language switching button 202, and an English speaker selects English from the same button, switching the documents, audio, video, maps, and the like output to the menu into that language in real time.
The map button 203 can show on a map the location of the store, the production areas of the ingredients of each dish on the menu, and the like.
The right side of FIG. 13 shows an example in which information about a specific dish, taken from the menu of a given store (restaurant), is displayed as a document. The displayed content can also be read aloud in multiple languages.
This information is managed by the embedding correspondence table illustrated in FIG. 5 and can be switched between languages.
In addition to a photograph of the dish, the information includes, as embedding locations, a halal certification 204, a food pictogram 205, a product description 206, and basic information 207.
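The role of the embedding correspondence table of FIG. 5 — mapping each embedding location in the image to language data for every supported language — can be sketched as follows. This is a minimal illustration only; the field names, location identifiers, and sample strings are assumptions, not the patent's actual schema.

```python
# Hypothetical embedding correspondence table: each embedding location in the
# menu image (e.g. halal certification 204, product description 206) maps to
# language data for each supported language.
embedding_table = {
    204: {"ja": "ハラール認証済み", "en": "Halal certified", "zh": "清真认证"},
    206: {"ja": "自家製の出汁を使用しています", "en": "Made with house-made dashi",
          "zh": "使用自制高汤"},
}

def text_for(location_id: int, language: str) -> str:
    """Look up the language data embedded at one location for one language."""
    return embedding_table[location_id][language]

# Switching the display language is a lookup, not a new image:
print(text_for(204, "en"))  # Halal certified
print(text_for(204, "zh"))  # 清真认证
```

Because every location carries all languages, pressing the language switching button only changes which column of the table is read, which is what allows the switch to happen in real time.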
As described above, the halal certification 204 indicates whether a product qualifies as "halal", meaning the full range of wholesome products and activities permitted by Islamic teaching. Its opposite, "haram", denotes things that are harmful or addictive to Muslims. That is, Muslims must avoid food and drink other than that officially recognized as halal. By checking the halal certification 204 shown on the menu, a Muslim can therefore confirm in real time whether a particular dish has been officially recognized as a halal product.
The food pictogram 205 displays the ingredients used in a dish, as a service for customers whose diet is restricted for reasons such as religion, vegetarianism, or food allergies. This allows such customers to order with confidence.
The product description 206 is text explaining the specific dish. For example, it can display an explanation meant to spark interest in Japanese food, or the store's particular attention to quality. In the example on the right of FIG. 13, a product description in English is displayed. In this way, rather than merely serving food to foreign visitors, the store can make a strong impression of Japan by conveying the Japanese culture embodied in its cuisine.
The basic information 207 displays basic information about the specific dish, such as production area, allergens, and calories. For example, the production areas of the ingredients used in a dish and the producers themselves can be shown through multilingual video and images. The customer can thus order with confidence and easily grasp, in his or her own language, the source of the ingredients in the food being eaten.
This deepens the customer's enjoyment of the meal.
For the side providing the information, such as a restaurant, promoting the menu to customers also becomes easier.
The example above described how a restaurant, a menu, and a map can be linked with one another, but the invention is not limited to this example; a consumer electronics retailer, for instance, can also be linked.
FIG. 14 is a diagram showing an example of a consumer electronics retailer's product catalog and a restaurant's menu displayed on a smartphone on which the dedicated application is installed.
FIG. 14(A) shows an example of a consumer electronics retailer's product catalog. For example, when an overseas traveler wants to purchase a Japanese-made rice cooker at a particular consumer electronics retailer in Japan, the traveler first launches the dedicated application and searches for that retailer. An icon indicating the retailer is then displayed on the application's map. When the traveler taps the icon, information contained in the retailer's product catalog (for example, sizes and prices), embedded at the embedding locations, is displayed as a document or output as audio.
At this time, by the traveler's operation, the documents and the like shown in the product catalog on the smartphone screen can be switched into the language the traveler uses.
The documents and the like displayed in the product catalog are managed by the embedding correspondence table illustrated in FIG. 5 and can be switched between languages.
As a result, even a traveler who cannot understand Japanese can quickly and accurately obtain information about Japanese-made rice cookers sold at Japanese consumer electronics retailers, and can shop enjoyably and meaningfully without haste even on an overseas trip with limited time.
When the traveler wants to buy large items in bulk, for example, a purchase list can be created with the dedicated application and the whole list purchased online at once. Arrangements can also be made to deliver the purchased goods to the traveler's hotel.
This realizes smart shopping that can be researched in advance with a multilingual catalog. Because items are checked and listed beforehand in the multilingual catalog, there is no need to wander the store. Because purchases can easily be made online, the whole purchase list can be settled in a single payment rather than through many small transactions. And because purchases can be delivered directly to the hotel, overseas travelers need not carry heavy luggage.
FIG. 14(B) shows an example of a restaurant menu. For example, when overseas travelers order food at a restaurant in Japan, they must often choose from a menu written only in Japanese and place the order with staff who speak only Japanese. The name of a dish on a menu, however, is not enough to imagine its taste. Travelers may also want to check the menu before entering a restaurant.
By using the dedicated application, by contrast, overseas travelers can not only order smoothly at restaurants but also obtain detailed information about each dish and drink, embedded at the menu's embedding locations, through multilingual audio, video, maps, and the like. They can also order and pay online through the application, and can search for recommended restaurants near their current location.
In other words, the content creation and playback system to which the present invention is applied can take various embodiments having the following configuration, including the information processing system composed of the editor device 1 and the viewer device 4 of the embodiment described above.
That is, the content creation and playback system to which the present invention is applied is a system including the following content creation device and content playback device.
The content creation device includes the following selection means, embedding means, embedding correspondence information generating means, and content generating means.
The selection means (for example, the embedding location selection unit 12 in FIG. 2) selects, from image data serving as an element of content, one or more embedding locations for embedding language data.
The embedding means (for example, the embedding unit 15 in FIG. 2) executes, as an embedding process, a process of associating language data of each of two or more embedding target languages with each of the one or more embedding locations.
The embedding correspondence information generating means (for example, the embedding correspondence table generating unit 16 in FIG. 2) generates embedding correspondence information (for example, the embedding correspondence table in FIG. 5) indicating the result of the embedding process.
The content generating means (for example, the switching content data generating unit 17 in FIG. 2) generates content (for example, the language-switching content described above) including the image data, the language data of each of the two or more embedding target languages, and the embedding correspondence information.
The content playback device includes the following content acquisition means, specifying means, extraction means, and playback control means.
The content acquisition means (for example, the switching content data acquisition unit 61 in FIG. 6) acquires the content generated by the content generating means.
The specifying means (for example, the playback target specifying unit 68 in FIG. 6) specifies a playback target language.
The extraction means (for example, the playback target extraction unit 69 in FIG. 6) extracts, based on the embedding correspondence information, the language data of the playback target language from the language data of the two or more embedding target languages.
The playback control means (for example, the playback control unit 70 in FIG. 6) plays back the image data with the language data of the playback target language embedded at the embedding locations.
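The creation-side and playback-side means just enumerated can be sketched end to end as follows. This is an illustrative sketch only; the class and function names are assumptions, not the patent's actual implementation, and the image payload is a placeholder.

```python
# Hypothetical sketch of the creation/playback flow: one shared image plus an
# embedding correspondence table of per-language data for each location.
from dataclasses import dataclass, field

@dataclass
class SwitchableContent:
    image: bytes                                     # single shared picture layer
    table: dict = field(default_factory=dict)        # location -> {language: data}

# --- Creation side (corresponds to the editor device) ---
def embed(content: SwitchableContent, location: int, texts: dict) -> None:
    """Embedding process: associate two or more languages' data with a location."""
    content.table[location] = texts

# --- Playback side (corresponds to the viewer device) ---
def render(content: SwitchableContent, target_language: str) -> dict:
    """Extract the target language's data for every embedding location."""
    return {loc: langs[target_language] for loc, langs in content.table.items()}

content = SwitchableContent(image=b"<comic page>")
embed(content, 1, {"ja": "こんにちは", "en": "Hello"})
embed(content, 2, {"ja": "ありがとう", "en": "Thank you"})
print(render(content, "en"))  # {1: 'Hello', 2: 'Thank you'}
```

The point of the design is visible in the sketch: `content.image` exists once, and switching languages touches only the table, which is why a single set of image data suffices for any number of languages.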
This makes it easy to create content that can be switched among multiple languages as content including images and language.
That is, when content such as a comic is played back, language such as the dialogue in speech balloons is reproduced at the same time as the pictures of the characters, background, and so on.
To create such content, it was conventionally necessary to prepare, for each language separately, image data obtained by reading a comic drawn on paper with an image scanner or the like.
In the present invention, by contrast, a single set of image data of the pictures of the characters, background, and so on suffices, and the association between that image data and the language data of each language is easily managed by the embedding correspondence information. As a result, content that can be switched among multiple languages can easily be created as content including images and language.
In the embodiment described above, the editor device 1 and the viewer device 4 were taken as examples of the content creation device and the content playback device to which the present invention is applied, but the invention is not particularly limited to these.
For example, the present invention can be applied to electronic devices in general that can process audio and images. Specifically, the present invention is applicable to, for example, mobile terminals such as smartphones, portable navigation devices, mobile phones, portable game machines, digital cameras, notebook personal computers, printers, television receivers, and video cameras.
The series of processes described above can be executed by hardware or by software.
In other words, the functional configurations of FIGS. 2 and 6 are merely examples and are not particularly limiting. That is, it suffices that the editor device 1 and the viewer device 4 have functions capable of executing the series of processes described above as a whole; which functional blocks are used to realize those functions is not limited to the examples of FIGS. 2 and 6.
A single functional block may be configured by hardware alone, by software alone, or by a combination of the two.
When the series of processes is executed by software, a program constituting the software is installed on a computer or the like from a network or a recording medium.
The computer may be a computer incorporated in dedicated hardware, or a computer capable of executing various functions by installing various programs, for example a general-purpose personal computer.
A recording medium containing such a program is constituted not only by removable media distributed separately from the apparatus body in order to provide the program to the user, but also by recording media provided to the user in a state of being incorporated in the apparatus body in advance. The removable medium 41 is constituted by, for example, a magnetic disk (including a floppy disk), an optical disk, or a magneto-optical disk. The optical disk is constituted by, for example, a CD-ROM (Compact Disc Read-Only Memory) or a DVD (Digital Versatile Disc). The magneto-optical disk is constituted by an MD (Mini-Disc) or the like. The recording medium provided to the user preinstalled in the apparatus body is constituted by, for example, a ROM or a hard disk on which the program is recorded.
In this specification, the steps describing the program recorded on the recording medium include not only processes performed chronologically in the stated order but also processes executed in parallel or individually, not necessarily chronologically.
In this specification, the term "system" means an overall apparatus constituted by a plurality of devices, a plurality of means, and the like.
DESCRIPTION OF SYMBOLS: 1: editor device; 2: authoring device; 3: distribution device; 4: viewer device; 11: image reception unit; 12: embedding location selection unit; 13: text reception unit; 14: audio reception unit; 15: embedding unit; 16: embedding correspondence table generation unit; 17: switching content data generation unit; 18: output unit; 61: switching content data acquisition unit; 62: separation unit; 63: image holding unit; 64: text holding unit; 65: audio holding unit; 66: embedding correspondence table holding unit; 67: operation unit; 68: playback target specifying unit; 69: playback target extraction unit; 70: playback control unit; 71: output unit; P1: icon; m, m1: map; 201: menu category; 202: language switching button; 203: map button; 204: halal certification; 205: food pictogram; 206: product description; 207: basic information

Claims (5)

  1.  A content creation device comprising:
     selection means for selecting, from image data serving as an element of content, one or more embedding locations for embedding language data;
     embedding means for executing, as an embedding process, a process of associating language data of each of two or more embedding target languages with each of the one or more embedding locations;
     embedding correspondence information generating means for generating embedding correspondence information indicating a result of the embedding process; and
     content generating means for generating content including the image data, the language data of each of the two or more embedding target languages, and the embedding correspondence information.
  2.  A program causing a computer that executes control for creating content to execute control processing including:
     a selection step of selecting, from image data serving as an element of the content, one or more embedding locations for embedding language data;
     an embedding step of executing, as an embedding process, a process of associating embedding target language data with each of the one or more embedding locations;
     an embedding correspondence information generating step of generating embedding correspondence information indicating a result of the embedding process; and
     a content generating step of generating content including the image data, the embedding target language data, and the embedding correspondence information.
  3.  A content playback device comprising:
     content acquisition means for acquiring content including image data containing one or more embedding locations for embedding language data, language data of two or more languages embedded at each of the one or more embedding locations, and embedding correspondence information indicating a correspondence between each of the one or more embedding locations and the language data of the two or more embedding target languages;
     specifying means for specifying a playback target language;
     extraction means for extracting, based on the embedding correspondence information, the language data of the playback target language from the language data of the two or more embedding target languages; and
     playback control means for playing back the image data with the language data of the playback target language embedded at the embedding locations.
  4.  A program causing a computer that executes control for playing back content to execute control processing including:
     a content acquisition step of acquiring content including image data containing one or more embedding locations for embedding language data, language data of two or more languages embedded at each of the one or more embedding locations, and embedding correspondence information indicating a correspondence between each of the one or more embedding locations and the language data of the two or more embedding target languages;
     a specifying step of specifying a playback target language;
     an extraction step of extracting, based on the embedding correspondence information, the language data of the playback target language from the language data of the two or more embedding target languages; and
     a playback control step of playing back the image data with the language data of the playback target language embedded at the embedding locations.
  5.  A content creation and playback system including a content creation device and a content playback device, wherein:
     the content creation device comprises:
      selection means for selecting, from image data serving as an element of content, one or more embedding locations for embedding language data;
      embedding means for executing, as an embedding process, a process of associating language data of each of two or more embedding target languages with each of the one or more embedding locations;
      embedding correspondence information generating means for generating embedding correspondence information indicating a result of the embedding process; and
      content generating means for generating content including the image data, the language data of each of the two or more embedding target languages, and the embedding correspondence information; and
     the content playback device comprises:
      content acquisition means for acquiring the content generated by the content generating means;
      specifying means for specifying a playback target language;
      extraction means for extracting, based on the embedding correspondence information, the language data of the playback target language from the language data of the two or more embedding target languages; and
      playback control means for playing back the image data with the language data of the playback target language embedded at the embedding locations.
PCT/JP2016/054448 2015-02-17 2016-02-16 Content creation device, content playback device, program, and content creation and playback system WO2016133091A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015028857A JP2018061071A (en) 2015-02-17 2015-02-17 Content creation device, content reproduction device, program, and content creation and reproduction system
JP2015-028857 2015-02-17

Publications (2)

Publication Number Publication Date
WO2016133091A2 true WO2016133091A2 (en) 2016-08-25
WO2016133091A3 WO2016133091A3 (en) 2016-10-27

Family

ID=56689364

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/054448 WO2016133091A2 (en) 2015-02-17 2016-02-16 Content creation device, content playback device, program, and content creation and playback system

Country Status (2)

Country Link
JP (1) JP2018061071A (en)
WO (1) WO2016133091A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3214443U (en) * 2017-10-13 2018-01-18 株式会社日本レストランエンタプライズ Display of pictograms on food containers
JP2018101378A (en) * 2016-12-22 2018-06-28 東芝テック株式会社 Information processor, sales data processor and program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001338307A (en) * 2000-05-29 2001-12-07 Sharp Corp Electronic cartoon producing device and electronic cartoon display device
JP2008040694A (en) * 2006-08-03 2008-02-21 Mti Ltd Multilingual display device
JP5674451B2 (en) * 2010-12-22 2015-02-25 富士フイルム株式会社 Viewer device, browsing system, viewer program, and recording medium
JP5439456B2 (en) * 2011-10-21 2014-03-12 富士フイルム株式会社 Electronic comic editing apparatus, method and program


Also Published As

Publication number Publication date
JP2018061071A (en) 2018-04-12
WO2016133091A3 (en) 2016-10-27

Similar Documents

Publication Publication Date Title
Benyon Designing user experience
US20170168782A1 (en) System and method for creating a universally compatible application development system
US20100088631A1 (en) Interactive metro guide map and portal system, methods of operation, and storage medium
JP5518112B2 (en) Digital book provision system
US20200250369A1 (en) System and method for transposing web content
US8468148B2 (en) Searching by use of machine-readable code content
Torma et al. IReligion
US11831738B2 (en) System and method for selecting and providing available actions from one or more computer applications to a user
Basaraba et al. Digital narrative conventions in heritage trail mobile apps
Ozdemir-Guzel et al. Gen Z tourists and smart devices
WO2016133091A2 (en) Content creation device, content playback device, program, and content creation and playback system
JP4672543B2 (en) Information display device
US11304029B1 (en) Location based mobile device system and application for providing artifact tours
Verhoeff Theoretical consoles: Concepts for gadget analysis
Katlav QR code applications in tourism
US20140372469A1 (en) Searching by use of machine-readable code content
CN107111657A (en) The WEB application retrieval and display of information and WEB content based on WEB content
US9628573B1 (en) Location-based interaction with digital works
Pearson et al. Exploring low-cost, Internet-free information access for resource-constrained communities
JP2022021316A (en) Information processing device, information processing method and information processing system
Esteves et al. Mementos: a tangible interface supporting travel
Cremonesi et al. Personalized interactive public screens
Yelmi Istanbuls cultural soundscape: Collecting, preserving and exhibiting the sonic cultural heritage of daily urban life
US10885267B1 (en) Interactive electronic book system and method therefor
JP6931445B2 (en) Information processing system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16752477

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16752477

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: JP