WO2016133091A2 - Content creation device, content playback device, program, and content creation and playback system - Google Patents


Info

Publication number
WO2016133091A2
WO2016133091A2 · PCT/JP2016/054448 · JP2016054448W
Authority
WO
WIPO (PCT)
Prior art keywords
embedding
content
embedded
language
data
Prior art date
Application number
PCT/JP2016/054448
Other languages
English (en)
Japanese (ja)
Other versions
WO2016133091A3 (fr)
Inventor
重昭 白鳥
Original Assignee
ギズモモバイル株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ギズモモバイル株式会社
Publication of WO2016133091A2 publication Critical patent/WO2016133091A2/fr
Publication of WO2016133091A3 publication Critical patent/WO2016133091A3/fr

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/60 - Editing figures and text; Combining figures or text
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; details thereof
    • H04N1/387 - Composing, repositioning or otherwise geometrically modifying originals
    • H04N5/00 - Details of television systems
    • H04N5/76 - Television signal recording
    • H04N5/91 - Television signal processing therefor

Definitions

  • the present invention relates to a content creation device, a content reproduction device, a program, and a content distribution system.
  • the present invention has been made in view of such a situation, and an object of the present invention is to enable easy creation of content, including images and language, that can be switched among multiple languages.
  • a content creation device includes: selection means for selecting one or more embedding locations for embedding language data from image data serving as a content element; embedding means for executing, as an embedding process, a process of associating language data of two or more embedding-target languages with each of the one or more embedding locations; embedding correspondence information generating means for generating embedding correspondence information indicating a result of the embedding process; and content generating means for generating content including the image data, the language data of each of the two or more embedding-target languages, and the embedding correspondence information.
  • the first program of one aspect of the present invention is a program corresponding to the above-described content creation device of one aspect of the present invention.
  • a content reproduction device includes: content acquisition means for acquiring content that includes image data having one or more embedding locations for embedding language data, language data of two or more languages embedded in each of the one or more embedding locations, and embedding correspondence information indicating the correspondence relationship between each of the one or more embedding locations and the language data of the two or more embedded languages; specifying means for specifying a reproduction target language; extracting means for extracting language data of the reproduction target language from the language data of the two or more embedded languages based on the embedding correspondence information; and reproduction control means for reproducing the image data in a state in which the language data of the reproduction target language is embedded in the embedding locations.
  • the second program according to one aspect of the present invention is a program corresponding to the content reproduction device according to one aspect of the present invention described above.
  • in a content creation and playback system including a content creation device and a content playback device: the content creation device includes selection means for selecting one or more embedding locations for embedding language data from image data serving as a content element, embedding means for executing, as an embedding process, a process of associating language data of two or more embedding-target languages with each of the one or more embedding locations, embedding correspondence information generating means for generating embedding correspondence information indicating a result of the embedding process, and content generating means for generating content including the image data, the language data of each of the two or more embedding-target languages, and the embedding correspondence information; and the content playback device includes content acquisition means for acquiring the content generated by the content generating means, specifying means for specifying a reproduction target language, extracting means for extracting language data of the reproduction target language from the language data of the two or more embedded languages based on the embedding correspondence information, and reproduction control means for reproducing the image data with the language data of the reproduction target language embedded.
  • FIG. 3 is a schematic diagram explaining the function of embedding multilingual text data and voice data in one embedding location, which is a function of the editor device of FIG. 2. Another figure shows an example of the structure of the embedding correspondence table produced by the editor device.
  • FIG. 12 is an image diagram showing how a multilingual map used as a guide book differs from a conventional guide book, in a respect different from those shown in FIGS. 10 and 11.
  • FIG. 1 is a block diagram showing a configuration of an information processing system according to an embodiment of the content creation and playback system of the present invention.
  • the information processing system shown in FIG. 1 includes an editor device 1, an authoring device 2, a distribution device 3, and a viewer device 4.
  • the editor device 1 is an embodiment of a content creation device to which the present invention is applied, and creates content (for example, comic content in the present embodiment) as electronic data. Specifically, the editor device 1 creates electronic data of comic content whose language portions, such as speech, can be switched among multiple languages (hereinafter referred to as “multilingual switching content”).
  • the term “content” refers to electronic data of the content.
  • the authoring device 2 reduces the multilingual switching content and provides the reduced multilingual switching content (hereinafter referred to as “reduced content”) to the distribution device 3.
  • the reduced content creation method (content reduction method) is not particularly limited, and any method can be employed.
  • the authoring device 2 executes an image conversion process such as encoding a multilingual switching content image to reduce (reduce) the data amount by a method that simulates multiple exposure.
  • the image subjected to the image conversion process is hereinafter referred to as a “converted image”.
  • the multilingual switching content is restored by executing the restoration process (hereinafter referred to as “image reverse conversion process”).
  • the data amount of the converted image is smaller than that of the original image. This is clear when one considers that the more copies of the same image are shifted and overlapped, the more the image is averaged, until finally it converges to a single color.
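The averaging argument can be illustrated with a toy calculation. This is only an illustration of the intuition above, not the patented conversion, whose details the patent defers to Japanese Patent Application No. 2014-214533.

```python
def overlap_once(pixels):
    """Average each pixel with its circularly shifted neighbor,
    i.e. overlap the row with a copy of itself shifted by one."""
    n = len(pixels)
    return [(pixels[i] + pixels[(i + 1) % n]) / 2 for i in range(n)]

row = [0, 255, 40, 200, 90, 10, 255, 120]   # arbitrary row of 8-bit pixels
spreads = [max(row) - min(row)]
for _ in range(100):
    row = overlap_once(row)
    spreads.append(max(row) - min(row))

# Each overlap narrows the value range, so the row drifts toward one color:
# the more shifted copies are overlapped, the less variation remains.
assert all(a >= b for a, b in zip(spreads, spreads[1:]))
assert spreads[-1] < 1   # essentially a single color after 100 overlaps
```

Note that the mean value is preserved at every step; only the variation around it is averaged away, which is why the result approaches one uniform color.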
  • the image conversion process and the image reverse conversion process employed in the present embodiment use mathematically guaranteed means that reduce the amount of image data without impairing the authenticity of the data as an image. For this reason, it is virtually impossible for a third party to intercept or tamper with a confidential image.
  • for details of these processes, the specification attached to the application for Japanese Patent Application No. 2014-214533 may be referred to.
  • the image conversion process and the image reverse conversion process employed in the present embodiment are merely examples of a reduction method, and any reduction method can be adopted.
  • the distribution device 3 holds a plurality of reduced contents, and provides the requested reduced content to the viewer device 4 when a viewing request is received from the viewer device 4.
  • the viewer device 4 is an embodiment of a content reproduction device to which the present invention is applied, and is a device operated when a user views content (for example, comic content in the present embodiment).
  • the viewer device 4 restores the multilingual switching content by performing the above-described image reverse conversion process on the reduced content to be browsed.
  • the viewer device 4 receives the instruction operation from the user and reproduces the multilingual switching content.
  • the language portion such as speech is played back in the language specified by the user, and the playback target language is switched according to the switching operation of the user.
  • each of the editor device 1, the authoring device 2, the distribution device 3, and the viewer device 4 is implemented by a computer and its peripheral devices.
  • Each unit in the present embodiment is configured by hardware included in a computer and its peripheral devices, and software that controls the hardware.
  • the above hardware includes a storage unit, a communication unit, a display unit, and an input unit in addition to a CPU (Central Processing Unit) as a control unit.
  • examples of the storage unit include memory (RAM: Random Access Memory; ROM: Read Only Memory), a hard disk drive (HDD: Hard Disk Drive), and optical discs (CD: Compact Disc; DVD: Digital Versatile Disc).
  • Examples of the communication unit include various wired and wireless interface devices.
  • Examples of the display unit include various displays such as a liquid crystal display.
  • Examples of the input unit include a keyboard and a pointing device (mouse, tracking ball, etc.).
  • the viewer device 4 of this embodiment is configured as a tablet, and has a touch panel that serves as both the input unit and the display unit.
  • the input unit of the touch panel includes, for example, a capacitance type or resistance type position input sensor stacked in the display area of the display unit, and detects the coordinates of the position where the touch operation is performed.
  • the touch operation refers to an operation of bringing an object (such as a user's finger or a stylus) into contact with, or close to, the touch panel (more precisely, the input unit) serving as a display medium.
  • hereinafter, the position where the touch operation is performed is referred to as the “touch position”, and the coordinates of the touch position are referred to as “touch coordinates”.
  • the software includes a computer program and data for controlling the hardware.
  • the computer program and data are stored in the storage unit, and are appropriately executed and referenced by the control unit. Further, the computer program and data can be distributed via a communication line, and can also be recorded and distributed on a computer-readable medium such as a CD-ROM.
  • FIG. 2 is a functional block diagram illustrating an example of a functional configuration of the editor device 1.
  • the editor device 1 includes an image reception unit 11, an embedding location selection unit 12, a text reception unit 13, a voice reception unit 14, an embedding unit 15, an embedding correspondence table generation unit 16, a switching content data generation unit 17, and an output unit 18.
  • the image receiving unit 11 receives image data of the content.
  • the embedding location selection unit 12 selects a location in which language data is embedded (hereinafter, “embedding location”) from the received image.
  • the image data received in the present embodiment is image data indicating each of a plurality of pages constituting a comic, and can be divided in units of pages.
  • An image in one page is divided into a plurality of frames.
  • One frame includes a picture of a predetermined scene, and includes “speech balloons” as necessary. In this “speech balloon”, words such as persons included in the picture of the frame are displayed. Therefore, in the present embodiment, the “speech balloon” location included in the frame is selected as the “embedding location”.
  • FIG. 3 shows an example of the editor image 31 displayed on the editor device 1.
  • the content creator can use the editor image 31 to create multilingual switching content for an arbitrary comic.
  • the editor image 31 includes an area 41J for embedding Japanese data, an area 41E for embedding English data, an area 41C for embedding Chinese data, and the like as areas for embedding data in a predetermined language.
  • the producer designates the “Japanese” tab to display the area 41J in which Japanese data is embedded, as shown in FIG. 3.
  • the editor image 31 will be described by taking as an example the case of embedding Japanese data.
  • the image of the work target page among the images of the plurality of pages constituting the comic is displayed.
  • the creator can switch the work target page by pressing the software buttons shown in the page switching area 42, or by pressing the thumbnail of each page displayed in the page thumbnail image display area 43.
  • the image of the work target page is divided into a plurality of frames, and one or more “speech balloons” are set for each frame.
  • the location of the “balloon” is a candidate for an embedded location.
  • “speech balloons” 51 to 55 are candidates for embedding locations.
  • the producer performs an operation of selecting an embedding location from such embedding location candidates.
  • the embedding location selection unit 12 in FIG. 2 selects an embedding location based on such an operation. For example, it is assumed that a “balloon” location 52 is selected as an embedding location.
  • the producer can embed at least one of text data and voice data as Japanese data in the location 52 selected as the embedding location.
  • when embedding text data, the producer can embed it by directly typing the text into the location 52 selected as the embedding location, or can embed text data prepared in advance.
  • the text receiving unit 13 in FIG. 2 receives the text data to be embedded and supplies it to the embedding unit 15.
  • when embedding voice data, the producer can embed it by uttering the sound directly into a microphone (not shown) of the editor device 1 with the location 52 selected as the embedding location, or can embed voice data prepared in advance.
  • the voice reception unit 14 in FIG. 2 receives the voice data to be embedded and supplies it to the embedding unit 15.
  • the embedding unit 15 executes, on the embedding location selected by the embedding location selection unit 12 (the “speech balloon” location 52 in the above example), processing for embedding the text data of a predetermined language (Japanese in the above example) received by the text receiving unit 13, and processing for embedding the voice data of that language received by the voice receiving unit 14.
  • the producer may further perform the same operation as described above with the “English” tab designated and the area 41E for embedding English data displayed.
  • similarly, the producer may perform the same operation as described above while designating the “Chinese” tab and displaying the area 41C for embedding Chinese data.
  • multilingual text data and voice data can be embedded in one embedding location (in the example of FIG. 4, a “balloon” location 52).
  • note that the “embedding process” by the embedding unit 15 in FIG. 2 in this embodiment is not image processing that creates a page image with text or the like placed at the embedding locations (that is, it is not a process that edits the image itself); rather, it is a process of associating each embedding location with the data to be embedded there.
  • the method of this association is not particularly limited, but in this embodiment, a method of generating a table as shown in FIG. 5 (hereinafter referred to as “embedding correspondence table”) is adopted. That is, the embedding correspondence table generation unit 16 in FIG. 2 creates an embedding correspondence table in which embedding locations in image data are associated with data of a language to be embedded (text data or voice data).
  • a predetermined row corresponds to a predetermined one embedding location.
  • one comic content has a plurality of “speech balloons” (only locations 51 to 56 are shown in FIG. 3), and separate text and voice are embedded in each of them. Therefore, a unique ID is assigned to each embedding location.
  • the text data ID and the voice data ID are used separately so that the text data and the voice data are clearly distinguished even at the same embedding location.
  • specifically, IDs of the form “Pn-Am-T” and “Pn-Am-S” are used in this embodiment.
  • “n” of “Pn” indicates a page number.
  • “m” of “Am” indicates a number given by a predetermined rule to each of the plurality of embedding locations included in the image of page “n”. That is, the ID “Pn-Am” uniquely indicates the “m”-th embedding location of page “n”. Furthermore, “T” at the end of the ID indicates text data, and “S” at the end indicates audio data. Each embedding location selected by the embedding location selection unit 12 in FIG. 2 is also associated with its ID in the image data. That is, by designating an ID, the “speech balloon” location (image region) indicated by that ID is specified in the image.
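The ID convention above lends itself to a pair of small helper functions. This is a sketch; the patent specifies only the format, not an implementation, and the function names are hypothetical.

```python
import re

# IDs have the form "Pn-Am-T" (text) or "Pn-Am-S" (sound), where n is the
# page number and m numbers the embedding location within that page.
_ID_PATTERN = re.compile(r"^P(\d+)-A(\d+)-([TS])$")

def make_id(page, slot, kind):
    """Build an ID of the form 'Pn-Am-T' (text) or 'Pn-Am-S' (sound)."""
    if kind not in ("T", "S"):
        raise ValueError("kind must be 'T' (text) or 'S' (sound)")
    return f"P{page}-A{slot}-{kind}"

def parse_id(data_id):
    """Split an ID back into (page, embedding-location number, kind)."""
    m = _ID_PATTERN.match(data_id)
    if m is None:
        raise ValueError(f"malformed ID: {data_id!r}")
    return int(m.group(1)), int(m.group(2)), m.group(3)
```

For example, `parse_id("P1-A1-T")` yields `(1, 1, "T")`, the text data for the first embedding location of page 1.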
  • for the embedding location with ID “P1-A1-T”, that is, the first embedding location of page 1 (for example, location 52 in FIG. 4), “My name is A.” is associated for Japanese, “My Name is A.” for English, and “My Name Is A.” for Chinese.
  • parameters of each text data (for example, font type, font size, etc.) may also be stored in the embedding correspondence table for each language (each item). The parameters can be specified for each language and for each embedding location (for each character, if required) by operating the various operation tools (software) in the text parameter specifying area 44 of FIG. 3.
  • in the example of FIG. 5, the texts in each language are stored directly in the embedding correspondence table, but a text data file may instead be prepared separately, as with the voice data, and the link destination of the file stored.
  • as for the audio data, for the embedding location with ID “P1-A1-S”, that is, the first embedding location of page 1 (for example, location 52 in FIG. 4), “A day.mp3” is associated for Japanese, “A English.mp3” for English, and “A middle.mp3” for Chinese.
  • “A day.mp3” indicates the file name of the voice data “My name is A.” pronounced in Japanese; “A English.mp3” the file name of the voice data pronounced in English; and “A middle.mp3” the file name of the voice data pronounced in Chinese. That is, the link destinations of the audio data files are stored in the embedding correspondence table of FIG. 5.
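Taken together, the two rows described above can be pictured as a plain mapping. This is a sketch of the structure of FIG. 5 as inferred from the description; the strings and file names are those given above (the Chinese entries as rendered in the source), and the `lookup` helper is illustrative.

```python
# Rows of the embedding correspondence table, keyed by data ID. Text rows
# store the text itself; sound rows store the link destination (file name)
# of the audio data file.
embedding_table = {
    "P1-A1-T": {
        "Japanese": "My name is A.",
        "English":  "My Name is A.",
        "Chinese":  "My Name Is A.",
    },
    "P1-A1-S": {
        "Japanese": "A day.mp3",
        "English":  "A English.mp3",
        "Chinese":  "A middle.mp3",
    },
}

def lookup(table, location_id, language, kind="T"):
    """Return the embedded data for one embedding location, language, and
    kind ('T' for text, 'S' for sound)."""
    return table[location_id + "-" + kind][language]
```

For example, `lookup(embedding_table, "P1-A1", "English", kind="S")` returns the English audio file name for location "P1-A1".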
  • the switching content data generation unit 17 generates, as the multilingual switching content, a data group including the image data of each page of the comic (with the embedding locations associated), the text data and audio data embedded in (associated with) each embedding location, and the embedding correspondence table.
  • the output unit 18 outputs the multilingual switching content from the editor device 1.
  • FIG. 6 is a functional block diagram illustrating an example of a functional configuration of the viewer device 4.
  • the viewer device 4 includes a switching content data acquisition unit 61, a separation unit 62, an image holding unit 63, a text holding unit 64, an audio holding unit 65, an embedding correspondence table holding unit 66, an operation unit 67, A reproduction target specifying unit 68, a reproduction target extraction unit 69, a reproduction control unit 70, and an output unit 71 are provided.
  • the switching content data acquisition unit 61 acquires the multilingual switching content distributed from the distribution device 3. The separation unit 62 separates, from the multilingual switching content, the image data of each page of the comic (with the embedding locations associated), the text data and audio data embedded in (associated with) each embedding location, and the embedding correspondence table. Of the separated data, the image data of each page of the comic is held in the image holding unit 63, the text data embedded in each embedding location in the text holding unit 64, the audio data embedded in each embedding location in the audio holding unit 65, and the embedding correspondence table in the embedding correspondence table holding unit 66.
  • the multilingual switching content output from the editor device 1 is subjected, in the authoring device 2 of FIG. 1, to the image conversion process for reduction as described above, and is provided to the distribution device 3 as reduced content.
  • the switching content data acquisition unit 61 of the viewer device 4 acquires reduced content. Therefore, the separation unit 62 performs the above-described image reverse conversion process on the reduced content to restore the multilingual switching content.
  • the various data described above are separated from the restored multilingual switching content.
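The acquire-restore-separate flow on the viewer side can be sketched as follows. The inverse image conversion is left as an abstract placeholder, since the patent defers its details to Japanese Patent Application No. 2014-214533; all names and the dictionary layout are illustrative assumptions.

```python
def image_reverse_conversion(reduced_content):
    """Placeholder for the restoration process; the real transform is the
    mathematically guaranteed inverse of the reduction, whose details are
    not given here. For this sketch it is the identity."""
    return reduced_content

def separate(multilingual_content):
    """Separation unit 62 (sketch): split the restored content into the four
    groups held by the image, text, audio, and table holding units."""
    return (multilingual_content["images"],
            multilingual_content["texts"],
            multilingual_content["audio"],
            multilingual_content["embedding_table"])

restored = image_reverse_conversion({
    "images": {"P1": None},
    "texts": {"P1-A1-T": {"Japanese": "My name is A."}},
    "audio": {"P1-A1-S": {"Japanese": "A day.mp3"}},
    "embedding_table": {"P1-A1": ("P1-A1-T", "P1-A1-S")},
})
images, texts, audio, table = separate(restored)
```

Each of the four returned groups would then be handed to its holding unit (63 to 66) before reproduction.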
  • the operation unit 67 includes various software operation instruments displayed on the touch panel in this embodiment. That is, the user performs various touch operations on the touch panel, so that the operation unit 67 receives various operations.
  • a viewer image 101 shown in FIG. 7 is displayed on the touch panel.
  • the viewer image 101 includes a display area 111 for displaying the title of the multilingual switching content (the comic title), a display area 112 for displaying the number of the page being displayed and the position of the display target page within the whole, and a display area 114 for displaying the image of the page to be browsed.
  • the image of the page to be browsed shows one page of a comic, which is divided into a plurality of frames, and each frame includes a “speech balloon” as appropriate.
  • the text of the reproduction target language is displayed at each “speech balloon” location (for example, location 151 in the example of FIG. 7); when the reproduction target language is Japanese, the Japanese text “My name is A.” is displayed there.
  • the language to be played back can be switched.
  • to switch the text display language, the viewer presses the switching button 115; this pressing operation is received by the operation unit 67 of FIG. 6.
  • the reproduction target specifying unit 68 determines an image of the reproduction target page and also determines a reproduction target language. That is, when the switch button 115 is pressed, the reproduction target specifying unit 68 switches the text reproduction target language. For example, it is assumed that “Japanese” is switched to “English”.
  • the reproduction target extraction unit 69 extracts the image of the reproduction target page from the image holding unit 63, and extracts, from the text embedded in each “speech balloon” location (embedding location) included in that image, the text of the reproduction target language (for example, English). Which text is embedded in each “speech balloon” location of the reproduction target page is determined by the embedding correspondence table (FIG. 5).
  • the reproduction control unit 70 generates an image in which the text of the reproduction target language (for example, English) is superimposed on each “speech balloon” location (embedding location) of the reproduction target page image, and causes the output unit 71 to reproduce it. The output unit 71 includes the display unit of the touch panel and a speaker (not shown).
  • the viewer image 101 is displayed on the display unit, and the image generated by the playback control unit 70 is displayed in the display area 114 of the viewer image 101. That is, when the switching button 115 is pressed and the reproduction target language is switched from “Japanese” to “English”, the display area 114 switches to an image in which, with the pictures of persons, backgrounds, and so on left as they are, English text is displayed in each “speech balloon” location (embedding location).
  • when the reproduction target language is switched to English, the English text “My Name is A.” is displayed at the “speech balloon” location 151, as shown in the center of the figure; when it is switched to Chinese, the corresponding Chinese text is displayed at the location 151, as shown on the right side of the figure.
  • the text at the “speech balloon” location 151 corresponds to the text of each language stored in the first row of the embedding correspondence table of FIG. 5; that is, the ID of the location 151 is managed as “P1-A1”.
  • the reproduction target extraction unit 69 therefore only needs to extract, as the text for the location 151 with ID “P1-A1”, the text of the reproduction target language from the row with ID “P1-A1-T” in the first line. Thus, simply by pressing the switching button 115, the viewer can sequentially switch the displayed text among “My name is A.” in Japanese, “My Name is A.” in English, and the corresponding text in other languages such as Chinese.
  • all “speech balloon” locations displayed in the display area 114 of the viewer image 101 may be switched at once, or only a “speech balloon” location specified by the viewer may be switched.
  • the switching operation of the text reproduction target language may employ, for example, a touch operation on the target “speech balloon” in addition to the pressing operation of the switching button 115.
  • the switching order of the reproduction target language may be a predetermined order, or the selection area 117 in FIG. 7 may be displayed so that the viewer can select a desired language, as in the switching of audio data described later.
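The button-driven sequential switching described above can be sketched as a small state machine. The class name, the fixed language cycle, and the table layout are illustrative assumptions, not details from the patent.

```python
LANGUAGES = ["Japanese", "English", "Chinese"]   # assumed switching order

class TextSwitcher:
    """Tracks the reproduction target language; each press of the switching
    button advances it, and the text is re-extracted per embedding location."""

    def __init__(self, table):
        self.table = table
        self.index = 0                     # start with Japanese

    @property
    def language(self):
        return LANGUAGES[self.index]

    def press_switch_button(self):
        """Advance to the next reproduction target language, wrapping around."""
        self.index = (self.index + 1) % len(LANGUAGES)

    def text_for(self, location_id):
        """Extract the text of the current language for one embedding
        location, e.g. location 'P1-A1' from table row 'P1-A1-T'."""
        return self.table[location_id + "-T"][self.language]
```

Starting from Japanese, each press cycles English, Chinese, then back to Japanese, and `text_for` always returns the text matching the current language.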
  • the viewer can switch languages not only for the text displayed in the “speech balloon” locations, but also for the audio output corresponding to that text.
  • as a “talking manga”, such content can be established as a new genre alongside anime and conventional manga.
  • differentiation from electronic books can be achieved.
  • since text and voice are combined and the language can be freely selected, anyone in the world can read the content, which answers demands for localization. Furthermore, since readers can learn while enjoying comics, the content can also be used for language learning.
  • the switching button 116 in FIG. 7 is used to switch the language of the voice output. When it is pressed, a selection area 117 is displayed.
  • the viewer can specify a desired language as a reproduction target language by performing a touch operation on the desired language among the languages displayed in the selection area 117.
  • the selection operation for the selection region 117 is accepted as an operation by the operation unit 67 of FIG.
  • the reproduction target specifying unit 68 determines a reproduction target language for the sound based on the selection operation. That is, the audio playback target language is switched by the selection operation. For example, it is assumed that “Japanese” is switched to “English”.
  • the reproduction target extraction unit 69 extracts the sound of the reproduction target language (for example, English) from the voices embedded in the portions (embedded portions) of each “balloon” included in the image of the reproduction target page.
  • the playback control unit 70 reproduces, from the speaker of the output unit 71, the audio of the reproduction target language (for example, English) at the audio playback timing of the location targeted for audio playback among the “speech balloon” locations (embedding locations) on the reproduction target page.
  • in the embodiment described above, a comic as a so-called electronic book is used as the content, but the present invention is not particularly limited thereto.
  • the content may be animated cartoon content (moving cartoon).
  • since comic manuscripts are used, speedier production is possible, which makes it possible to keep up with a fast business pace.
  • since the author's manga manuscript is directly converted into a movie, it can be established as a new genre characterized by high quality.
  • since production is based mainly on manga manuscripts, it has the merit of lower costs compared to anime.
  • the embedding location for embedding language data is the location of the “speech balloon” of each frame of the comic, but is not particularly limited thereto, and may be any location in the image.
  • in comics, onomatopoeia is often expressed as text; such text portions can also be adopted as embedding locations.
  • the multilingual switching content is not particularly required to be a comic, and any content that can embed language data is sufficient.
  • a menu provided at a restaurant or the like can be adopted as multilingual switching content.
  • a location indicating the name and price of the food and drink, an explanatory text explaining the food and drink, and the like can be adopted as the embedding location.
  • the contents of the language data embedded in an embedding location do not necessarily need to match across languages. For example, at a restaurant in Japan, for Japanese used by Japanese customers who are accustomed to eating sashimi, long explanations of ingredients and preparation are often unnecessary; but for a language used by foreign customers who do not usually eat sashimi (such as English), longer explanations of ingredients and preparation may be preferable. In such a case, the contents embedded in the embedding location differ between languages.
  • a map can be adopted as multilingual switching content.
  • a location indicating each location in the map can be adopted as an embedded location.
  • likewise, the language content embedded at an embedding location does not necessarily need to match across languages. For example, a Japanese map used by Japanese people familiar with the local geography can usually omit long explanations, whereas a map in a language used by foreigners unfamiliar with the geography (for example, English) may be better served by longer explanations such as sightseeing guidance. In such a case, the content embedded at the embedding location differs from language to language.
  • such a map service is hereinafter referred to as a "multilingual map".
  • an example of the multilingual map will be described with reference to FIGS. 9 to 14.
  • FIG. 9 is an image diagram showing an overview of an example of a multilingual map.
  • the multilingual map can be realized by the content creation and playback system of the present invention.
  • MAP category: a category related to maps
  • store category: a category related to stores
  • menu category: a category related to menus
  • the linkage of the three categories (MAP category, store category, menu category) described above is an example, and is not limited to these three. All kinds of categories can be linked. For example, it can be linked with a travel agency or a duty-free shop, or can be linked with a person (publisher or the like) that provides information such as a famous place, history, or person.
  • the multilingual switching content of the store category and the menu category is produced, by generating the embedding correspondence table shown in FIG. 5, on the side of the store (for example, a restaurant) that provides the food and drink. That is, an employee of the store uses the editor device 1 (FIG. 2) to make the various descriptions on the store's website and on its menu available in multiple languages.
  • the multilingual switching content of the MAP category is produced using the editor device 1 (FIG. 2) operated by a multilingual map service provider, a tourist company, or the like.
  • the editor apparatus 1 may be a dedicated terminal or a general-purpose terminal such as a personal computer in which dedicated software is installed.
  • the user side uses the multilingual map by means of the viewer device 4.
  • when the dedicated application is activated on a mobile terminal, the multilingual map is displayed on the screen of the mobile terminal.
  • the multilingual map is a global map. Since the map and the location of the store are linked to each other, an icon indicating the store is displayed on the map.
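Since the map and the store locations are linked, placing store icons on the map reduces to a bounds query. A minimal sketch, assuming plain latitude/longitude coordinates (the store names and data structure are invented for illustration, not taken from the patent):

```python
# Hypothetical sketch of linking stores to map locations: each store
# carries coordinates, so an icon can be placed for every store that
# falls inside the currently displayed map bounds.
stores = [
    {"name": "Sushi Ichiban", "lat": 35.6895, "lon": 139.6917},
    {"name": "Ramen Yokocho", "lat": 34.6937, "lon": 135.5023},
]

def icons_in_view(stores, lat_min, lat_max, lon_min, lon_max):
    """Return the names of stores whose coordinates lie inside the view."""
    return [s["name"] for s in stores
            if lat_min <= s["lat"] <= lat_max
            and lon_min <= s["lon"] <= lon_max]
```

Panning or zooming the map simply re-runs the query with new bounds, so the displayed icons always track the linked store locations.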
  • in the example of FIG. 9, the MAP category can include, for example, information on recommended nearby spots, traffic information such as road closures and congestion, and information on famous places such as tourist spots.
  • the store category can include, for example, basic information such as the store's address, store commercials and other PR material, and the store's homepage.
  • the menu category can include, for example, signage such as video advertisements for products and services, information related to halal (meaning all "sound products and activities permitted by Islamic teaching"), information on allergens (substances that cause allergic symptoms), and information on local specialties.
  • the above three categories are linked to each other, so that, for example, the location and the surrounding spots can be linked in the relationship between the MAP category and the store category.
  • the home page and the store menu can be linked.
  • by linking food and drink with the map, the production area of ingredients can be linked with seasonal photos and videos introducing that area. That is, users of different languages around the world can use the dedicated app to "enjoy", "know", and "eat" all at once.
  • in the multilingual map, the map, the categories, and the menus are all linked. For this reason, by using the multilingual map as a travel guide when traveling abroad, the user can realize functions that a conventional travel guidebook cannot.
  • FIG. 10 is an image diagram showing a difference from a conventional guidebook when a multilingual map is used as a guidebook.
  • Overseas travelers can switch between text and voice output in real time by using a multilingual map as a travel guide.
  • information useful for overseas travel can be acquired in real time.
  • the store side is also provided with a content editor with which even individuals can easily produce content.
  • status S1 indicates that, when the manager of a restaurant or the like operates the editor device 1 to generate or update the restaurant's menu, the multilingual map m, the store homepage, and the menu are all linked, and the updated content is reflected in real time on the globally available multilingual map. Likewise, when the restaurant's homepage or the like is updated, the multilingual map m and the menu are linked and the update is reflected in real time. In other words, since updates to the multilingual switching content can be browsed worldwide in real time and in multiple languages, the restaurant can easily advertise its menu not only within Japan but also to foreign travelers visiting Japan from all over the world.
  • the editor device 1 can be operated at any time by being provided at the restaurant or the area to which the restaurant belongs.
  • Status S2 indicates that an overseas traveler can use a dedicated application by operating a mobile terminal when visiting Japan. That is, the overseas traveler can easily obtain the latest information of the restaurant in the language used by the overseas traveler (for example, the map m indicating the provided food and location) in real time. At this time, even if there are a plurality of overseas travelers and the languages used are different, it is not necessary to activate the dedicated application on all the mobile terminals possessed by each of the overseas travelers. By starting a dedicated application on a specific mobile terminal and switching the displayed language as appropriate, all the overseas travelers can easily view the latest information of the predetermined store in real time. For example, if the dedicated application is activated only on one portable terminal possessed by the tour conductor, it becomes possible for a companion with a different language to easily view necessary information.
  • the multilingual map can be used not only as a travel guide for overseas travelers during the trip, but also for collecting information before the departure of the trip.
  • Status S3 indicates that an overseas traveler can use a dedicated application by operating a mobile terminal before visiting Japan.
  • that is, by operating a mobile terminal and using the dedicated application before visiting Japan, the overseas traveler can easily acquire the latest information of a given store in real time in the language he or she uses.
  • for example, the overseas traveler can easily confirm in real time, in his or her own language, whether a store has a menu that he or she can consume. As a result, it is possible to prevent situations in which, despite having come all the way to Japan, the traveler finds nothing he or she can eat, or falls ill because of something eaten.
  • the multilingual map can introduce recommended places and spots according to the contents of a given tour guide, introduce recommended sights in multiple languages, or guide the user on a tour around those sights. It is also possible to link the multilingual map with a tour reservation system. In addition, seasonal recommendations and special features can be posted in real time as advance information. As described above, in statuses S1 to S3, the multilingual map, the store homepage, and the menu are all linked.
  • FIG. 11 is an image diagram showing a difference from a conventional guidebook different from that shown in FIG. 10 when a multilingual map is used as a travel guide.
  • each category can be advertised all over the world by linking or sharing on the Internet.
  • the content of the multilingual map can be used as it is as a signage or menu for moving image advertisements of products and services. This makes it possible to appeal tourism resources to people all over the world.
  • in step S11, after the dedicated app is started, when the overseas traveler taps characters or a design indicating an area about which he or she wants information (in the example of FIG. 11, the "city center area"), the target area is selected. At this time, the traveler can switch the characters displayed on the mobile terminal to his or her own language in real time.
  • the multilingual map can be searched and displayed for tours, shopping, restaurants, etc. based on the current location.
  • a spot can be registered as a favorite at any time before or after the visit.
  • in step S12, the map of the area is enlarged, icons indicating the locations of facilities are displayed on it, and an overview of each facility is displayed as a thumbnail. An image of a predetermined character displayed on the screen also serves as an embedding location, so information can be announced to the overseas user in multiple languages through audio embedded at that location or through photos displayed as thumbnails. The shape, pattern, and the like of the icons displayed on the map may also be varied by facility type. For example, in FIG. 11, facilities are divided into Food (restaurants), Shop (non-restaurant stores), and SPOT (other facilities).
  • when the overseas traveler taps the icon of a facility about which he or she wants information from among the icons displayed on the enlarged map, information on that facility is displayed.
  • the menu of the store and the map m1 are linked, and the menu of the predetermined restaurant is displayed.
  • the menu of the restaurant and the map m1 can be linked together, and the production area of the food displayed as the description of the menu can be linked with the map m1.
  • the overseas traveler can easily grasp, in his or her own language, the source of the ingredients used in the food he or she eats, which deepens the experience of eating it.
  • PR of the menu for overseas travelers becomes easy.
  • the multilingual map can also issue store menus, catalogs, coupons, and the like from detailed spot information (for example, restaurants).
  • FIG. 12 is an image diagram showing a difference from a conventional guidebook different from those shown in FIGS. 10 and 11 when a multilingual map is used as a guidebook.
  • in the multilingual map, a dedicated editor (for example, the editor device 1 in FIG. 1) is provided to the manager on the store (or regional) side.
  • the dedicated editor can be provided either together with hardware or online. This makes it easy for managers of stores (or regions) to upload to the multilingual map platform and to update content via SD card or online, reducing the time and cost these operations require.
  • the latest information that the manager of a region or store wants to publicize can easily be promoted to people all over the world without any special know-how.
  • the manager of the store can introduce production areas and the like in multiple languages and in real time, not only with documents and photos but also with videos and sounds, making it possible to promote products and services to people all over the world by any of these means. It is also possible to deploy a dedicated mobile terminal (tablet) at a given store (or in a given region). For example, by deploying a dedicated tablet in place of the conventional paper menu at each restaurant table, a menu in the language used by the visitor can be displayed on the tablet. The visitor can then not only order smoothly but also obtain detailed information about each item on the menu through voice, video, maps, and the like.
  • what can be interlocked with the multilingual map is not limited to the above-described example, and any object can be interlocked.
  • guidance with explanations (text and voice) at each sightseeing spot, introductions to tours of recommended spots, various reservation systems, tax-exemption information, simple conversation using comics, emergency contact methods, information on spot sales, services that deliver purchased goods to hotels, gourmet information, and the like can all be linked to the worldwide map.
  • FIG. 13 is a diagram showing an example of a menu table created by an administrator of a predetermined store.
  • the left side of FIG. 13 shows an example of a menu of the dishes provided at a predetermined store (restaurant).
  • the menu table is an example of multilingual content created by the manager of the store using the editor device 1 and viewed by the user using the viewer device 4.
  • An example of the menu table includes a menu category 201 as an embedding location, a language switching button 202, and a map button 203 in addition to information (name, photo, and price) of the provided food.
  • a person who wants to obtain information from the menu table can display menus by category by selecting the menu category 201.
  • the menu category 201 is displayed in Chinese because Chinese has been selected with the language switching button 202.
  • the map button 203 can indicate the location of the predetermined store, the production area of ingredients of each dish constituting the menu table, and the like on a map.
  • the right side of FIG. 13 shows an example in which information on a specific dish from the menu of a predetermined store (restaurant) is displayed as text. The displayed text can also be read out as multilingual audio.
  • the information is managed by the embedding correspondence table illustrated in FIG. 5 and can be switched in multiple languages.
  • the information includes, in addition to a photo of the specific dish, a halal certification 204 serving as an embedding location, a food pictogram 205, a product description 206, and basic information 207.
  • the halal certification 204 is a display indicating whether or not the product falls under “Halal” which means a sound product or an overall activity permitted by Islamic teaching.
  • "Haram", which means the opposite of "Halal", denotes what is harmful or forbidden to Muslims. That is, Muslims must avoid food and drink other than those officially recognized as falling under "Halal". For this reason, by checking the halal certification 204 displayed in the menu, Muslims can confirm in real time whether a particular dish is officially recognized as a halal product.
  • the food pictogram 205 displays the foods used in a dish as a service for customers who have restrictions on what they can eat and drink for reasons of religion, vegetarianism, food allergies, and the like. This allows such customers to order with peace of mind.
  • the product description 206 is text explaining the specific dish. For example, it can present background likely to interest the customer, such as facts about Japanese food or the store's recommendations. In the example on the right side of FIG. 13, a product description in English is displayed. In this way, Japan can be promoted strongly not only by serving food to foreigners but also by conveying the Japanese culture embedded in that food.
  • the basic information 207 displays basic information about the specific dish such as the production area, allergen, and calories.
  • the production area of the ingredients used in a dish, the activities of the producers, and the like can be displayed through multilingual video and images. The customer can thus order with peace of mind and easily grasp, in his or her own language, the source of the ingredients in the food, which deepens the experience of eating it.
  • this makes it easy for the side providing the information, such as a restaurant, to promote its menu to customers.
  • FIG. 14 is a diagram illustrating an example of a product catalog of a home appliance mass retailer and a menu of a restaurant displayed on a smartphone in which a dedicated application is installed.
  • FIG. 14A shows an example of a product catalog of a home appliance mass retailer.
  • suppose an overseas traveler wants to purchase a Japanese-made rice cooker at a predetermined consumer electronics retailer in Japan.
  • the overseas traveler first activates the dedicated application and searches for that retailer.
  • an icon indicating the predetermined consumer electronics retailer is then displayed on the map of the dedicated application.
  • information included in the retailer's product catalog (for example, size and price) embedded at the embedding location is displayed as text or output by voice.
  • the document displayed on the product catalog displayed on the screen of the smartphone can be switched and displayed in the language used by the overseas traveler by the operation of the overseas traveler.
  • documents and the like displayed in the product catalog are managed by the embedding correspondence table illustrated in FIG. 5 and can be switched in multiple languages.
  • FIG. 14B shows an example of a restaurant menu. For example, when placing a food order at a restaurant in Japan, an overseas traveler often has to choose from a traditional menu written only in Japanese and order from staff who speak only Japanese. However, the names of the dishes alone give no idea of their taste. A traveler may also want to check the menu before entering the restaurant.
  • by using the dedicated app, overseas travelers can not only order food smoothly at restaurants but also acquire detailed information about each item of food and drink, embedded at the embedding locations of the menu, through multilingual sound, video, maps, and the like. It is also possible to place orders and pay online using the dedicated app, and to search for recommended stores from the current location.
  • the content creation and playback system to which the present invention is applied includes various information processing systems including the information processing system including the editor device 1 and the viewer device 4 according to the above-described embodiment. Embodiments can be taken. That is, a content creation and playback system to which the present invention is applied is a system including the following content creation device and content playback device.
  • the content creation device includes the following selection means, embedding means, embedding correspondence information generating means, and content generating means.
  • the selection means (for example, the embedding location selection unit 12 in FIG. 2) selects one or more embedding locations for embedding language data from the image data serving as content elements.
  • the embedding means (for example, the embedding unit 15 in FIG. 2) executes an embedding process that associates, with each of the one or more embedding locations, the language data to be embedded in each of two or more languages.
  • the embedding correspondence information generating means (for example, the embedding correspondence table generating unit 16 in FIG. 2) generates embedding correspondence information (for example, the embedding correspondence table in FIG. 5) indicating the result of the embedding process.
  • the content generation means (for example, the switching content data generation unit 17 in FIG. 2) generates, as content switchable among languages (for example, the multilingual switching content described above), content including the image data, the language data of each of the two or more languages to be embedded, and the embedding correspondence information.
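The four means above can be sketched, purely as an illustration with invented names and simple in-memory structures (not the patent's actual implementation), as follows:

```python
def create_content(image_data, embeddings):
    """Sketch of the content creation device.

    `embeddings` maps an embedding-location id (e.g. a speech balloon)
    to its per-language language data: {loc_id: {lang: text}}.
    """
    # Selection means: the embedding locations to be used.
    locations = list(embeddings)
    # Embedding correspondence information: which languages are
    # embedded at each location (the "embedding correspondence table").
    correspondence = {loc: sorted(embeddings[loc]) for loc in locations}
    # Content generation means: bundle the image data, the language
    # data, and the correspondence information into one content unit.
    return {
        "image": image_data,
        "language_data": embeddings,
        "correspondence": correspondence,
    }
```

The returned bundle corresponds to the multilingual switching content: one set of image data plus language data for two or more languages, tied together by the correspondence information.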
  • the content reproduction apparatus includes the following content acquisition means, identification means, extraction means, and reproduction control means.
  • a content acquisition unit acquires the content generated by the content generation unit.
  • the specifying unit specifies the playback target language.
  • the extraction means extracts language data of the reproduction target language from the language data of two or more languages to be embedded based on the embedding correspondence information.
  • the reproduction control means reproduces the image data in a state where language data of the reproduction target language is embedded in the embedding portion.
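The playback side can be sketched in the same hedged way (invented names; `content` is assumed to have the shape produced on the creation side, with `language_data` and `correspondence` fields):

```python
def play_content(content, target_lang):
    """Sketch of the content playback device: extract the target
    language's data per embedding location, ready for rendering."""
    rendered = {}
    for loc, langs in content["correspondence"].items():
        if target_lang in langs:  # extraction means
            # Reproduction control: the image would be drawn with this
            # text placed at the embedding location `loc`.
            rendered[loc] = content["language_data"][loc][target_lang]
    return rendered
```

Switching the playback target language is then a matter of calling the same function with a different `target_lang`, leaving the image data untouched.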
  • the content creation device and the content reproduction device to which the present invention is applied have been described by taking the editor device 1 and the viewer device 4 as examples.
  • the present invention is not particularly limited thereto.
  • the present invention can be applied to general electronic devices that can process sound and images.
  • the present invention can be applied to portable terminals such as smartphones, portable navigation devices, mobile phones, portable game machines, digital cameras, notebook personal computers, printers, television receivers, video cameras, and the like.
  • the functional configurations of FIGS. 2 and 6 are merely examples and are not particularly limiting. That is, it is sufficient that the editor device 1 and the viewer device 4 as a whole have functions capable of executing the series of processing described above; which functional blocks are used to realize those functions is not limited to the examples of FIGS. 2 and 6.
  • one functional block may be constituted by hardware alone, software alone, or a combination thereof.
  • a program constituting the software is installed on a computer or the like from a network or a recording medium.
  • the computer may be a computer incorporated in dedicated hardware.
  • the computer may be a computer capable of executing various functions by installing various programs, for example, a general-purpose personal computer.
  • a recording medium containing such a program consists not only of removable media distributed separately from the apparatus body in order to provide the program to the user, but also of recording media provided to the user in a state of being pre-incorporated in the apparatus body.
  • the removable medium 41 is composed of, for example, a magnetic disk (including a floppy disk), an optical disk, a magneto-optical disk, or the like.
  • the optical disk is composed of, for example, a CD-ROM (Compact Disk-Read Only Memory), a DVD (Digital Versatile Disk), or the like.
  • the magneto-optical disk is constituted by an MD (Mini-Disk) or the like.
  • the recording medium provided to the user in a state of being preinstalled in the apparatus main body is configured by, for example, a ROM or a hard disk in which a program is recorded.
  • the steps describing the program recorded on the recording medium include not only processing performed in time series along the described order but also processing executed in parallel or individually rather than strictly in time series.
  • the term “system” means an overall apparatus configured by a plurality of devices, a plurality of means, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

 The invention makes it possible to easily create content that includes images and words and that can be switched between different languages. An embedding location selection unit (12) selects, from image data sets serving as content elements, at least one embedding location for embedding language data. An embedding unit (15) executes an embedding process in which the language data to be embedded in at least two languages is associated with each of the one or more embedding locations. An embedding correspondence table generation unit (16) generates an embedding correspondence table as embedding correspondence information indicating the results of the embedding process. A switchable content data generation unit (17) generates, as content that can be switched between different languages, content including the image data, the language data sets for the at least two languages to be embedded, and the embedding correspondence information.
PCT/JP2016/054448 2015-02-17 2016-02-16 Content creation device, content playback device, program, and content creation and playback system WO2016133091A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-028857 2015-02-17
JP2015028857A JP2018061071A (ja) 2015-02-17 2015-02-17 Content creation device, content playback device, program, and content creation and playback system

Publications (2)

Publication Number Publication Date
WO2016133091A2 true WO2016133091A2 (fr) 2016-08-25
WO2016133091A3 WO2016133091A3 (fr) 2016-10-27

Family

ID=56689364

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/054448 WO2016133091A2 (fr) 2015-02-17 2016-02-16 Content creation device, content playback device, program, and content creation and playback system

Country Status (2)

Country Link
JP (1) JP2018061071A (fr)
WO (1) WO2016133091A2 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3214443U (ja) * 2017-10-13 2018-01-18 Nippon Restaurant Enterprise Co., Ltd. Display of pictograms on containers and the like of food for sale
JP2018101378A (ja) * 2016-12-22 2018-06-28 Toshiba Tec Corp Information processing device, sales data processing device, and program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001338307A * 2000-05-29 2001-12-07 Sharp Corp Electronic comic creation device and electronic comic display device
JP2008040694A * 2006-08-03 2008-02-21 MTI Ltd Multilingual display device
JP5674451B2 * 2010-12-22 2015-02-25 Fujifilm Corp Viewer device, browsing system, viewer program, and recording medium
JP5439456B2 * 2011-10-21 2014-03-12 Fujifilm Corp Electronic comic editing device, method, and program


Also Published As

Publication number Publication date
WO2016133091A3 (fr) 2016-10-27
JP2018061071A (ja) 2018-04-12

Similar Documents

Publication Publication Date Title
Benyon Designing user experience
US20180095734A1 (en) System and method for creating a universally compatible application development system
US20100088631A1 (en) Interactive metro guide map and portal system, methods of operation, and storage medium
JP5518112B2 (ja) デジタルブック提供システム
US20200250369A1 (en) System and method for transposing web content
Torma et al. IReligion
US11831738B2 (en) System and method for selecting and providing available actions from one or more computer applications to a user
Anand et al. Quality dimensions of augmented reality-based mobile apps for smart-tourism and its impact on customer satisfaction & reuse intention
Basaraba et al. Digital narrative conventions in heritage trail mobile apps
Ozdemir-Guzel et al. Gen Z tourists and smart devices
WO2016133091A2 Content creation device, content playback device, program, and content creation and playback system
JP4672543B2 (ja) 情報表示装置
US11304029B1 (en) Location based mobile device system and application for providing artifact tours
Katlav QR code applications in tourism
Verhoeff Theoretical consoles: Concepts for gadget analysis
Pearson et al. Exploring low-cost, Internet-free information access for resource-constrained communities
CN107111657A (zh) 基于web内容的信息与web内容的web应用检索和显示
JP7090779B2 (ja) 情報処理装置、情報処理方法及び情報処理システム
US9628573B1 (en) Location-based interaction with digital works
Esteves et al. Mementos: a tangible interface supporting travel
JP2022144206A (ja) コンテンツ提供装置、コンテンツ提供方法、およびプログラム
Yelmi Istanbul's cultural soundscape: Collecting, preserving and exhibiting the sonic cultural heritage of daily urban life
US10885267B1 (en) Interactive electronic book system and method therefor
WO2020184704A1 (fr) Système de traitement d'informations
Osman Adopting Technology in Preserving and Promoting Cultural Tourism

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16752477

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16752477

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: JP