WO2018135272A1 - Information processing device, display method, program, and computer-readable recording medium - Google Patents

Information processing device, display method, program, and computer-readable recording medium

Info

Publication number
WO2018135272A1
Authority
WO
WIPO (PCT)
Prior art keywords
code
image
cpu
information
display
Prior art date
Application number
PCT/JP2017/046923
Other languages
English (en)
Japanese (ja)
Inventor
真人 柴田
昌敏 竹谷
Original Assignee
合同会社IP Bridge1号
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 合同会社IP Bridge1号 filed Critical 合同会社IP Bridge1号
Publication of WO2018135272A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 19/00: Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K 19/06: Record carriers characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K 7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10: Sensing by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14: Sensing using light without selection of wavelength, e.g. sensing reflected white light
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; details thereof
    • H04N 1/387: Composing, repositioning or otherwise geometrically modifying originals

Definitions

  • the present invention relates to an information processing apparatus, a display method, a program, and a computer-readable storage medium.
  • Two-dimensional codes that can hold more information than barcodes are known.
  • Two-dimensional codes are often formed in black and white, but in order to hold more information, two-dimensional codes in which data areas are formed in colors such as red, green, and blue are also used (for example, Patent Document 1).
  • In Patent Document 1, however, when a two-dimensional code is embedded in an image, the two-dimensional code becomes conspicuous and the design of the image is impaired.
  • Accordingly, an object of the present invention is to make it possible to operate, on a terminal device owned by each user, content provided to a large number of people.
  • One aspect of the present invention is an information processing apparatus comprising: an image acquisition unit that acquires an image; a recognition unit that recognizes a code included in the image; a position acquisition unit that acquires a position in the image selected by a user; and a control unit that performs control so that the content held by the code is executed when the code is located at the position in the image acquired by the position acquisition unit.
  • According to the present invention, content provided to a large number of people can be operated on a terminal device owned by each user.
  • FIG. 1 is a diagram illustrating an overall configuration of an information processing system according to a first embodiment.
  • FIG. 2 is a block diagram showing the content server 10, the mobile terminal device 20, and the server 30 according to the first embodiment.
  • FIG. 4 is a flowchart showing the display processing of the code image 60 according to the first embodiment.
  • FIG. 5 is a flowchart showing a process for generating the code image 60 according to the first embodiment.
  • FIG. 1 is a diagram illustrating an overall configuration of an information processing system according to the first embodiment.
  • the information processing system includes a content server 10, a mobile terminal device 20, and a server 30.
  • the content server 10, the mobile terminal device 20, and the server 30 are connected via a network N so as to communicate with each other.
  • the network N includes, for example, a part or all of a WAN (Wide Area Network), a LAN (Local Area Network), the Internet, a provider device, a wireless base station, a dedicated line, and the like.
  • the mobile terminal device 20 is, for example, a tablet computer or a smartphone. The mobile terminal device 20 communicates with the content server 10 and the server 30 via the network N by performing wireless communication W.
  • the content server 10 is connected to a display D that displays content and the like.
  • the display D is a so-called digital signage.
  • Digital signage is an electronic signboard that displays advertisements and the like to the public on a relatively large display.
  • the digital signage itself may be provided with a storage device or the like that temporarily holds content. Therefore, the content server 10 and the display D may be integrated as digital signage.
  • a computer that requests information is called a client
  • a computer that sends information in response to a request for information is called a server.
  • the content server 10 and the server 30 function as a server
  • the mobile terminal device 20 mainly functions as a client.
  • FIG. 2 is a block diagram showing the content server 10, the mobile terminal device 20, and the server 30 according to the first embodiment.
  • the content server 10 includes a CPU (Central Processing Unit) 11, a memory 12, a bus 13, and a storage 14.
  • the CPU 11 as an information processing unit and the memory 12 as an information holding unit are connected to each other via a bus 13.
  • the CPU 11 accesses information stored in the memory 12 via the bus 13 according to a program.
  • the memory 12 is a RAM (Random Access Memory), but is not limited thereto.
  • the memory 12 may be another type of memory such as a flash memory.
  • the memory 12 temporarily stores information processed by the CPU 11.
  • The CPU 11 reads and writes various kinds of information to and from the storage 14.
  • the storage 14 is, for example, an HDD (Hard Disk Drive), but is not limited to this.
  • the storage 14 may be an external device accessible by the content server 10 such as a flash memory, NAS (Network Attached Storage), or an external storage server.
  • The content server 10 may store information in the storage 14 instead of the memory 12. Further, the memory and the storage may be replaced with other alternatives as the architecture of the content server 10 changes.
  • the mobile terminal device 20 includes a CPU 21 as a control unit, a memory 22, a bus 23, a camera 24, and a display 25 (see FIG. 6 and the like).
  • The display 25 has a function, such as a touch panel, of detecting and acquiring a selection made by the user.
  • the CPU 21 serving as a position acquisition unit acquires information on the position (coordinates) selected by the user from a touch panel or the like.
  • the CPU 21 accesses information stored in the memory 22 via the bus 23 according to a program.
  • the memory 22 is a RAM, but is not limited to this.
  • the memory 22 may be another type of memory such as a flash memory.
  • The memory 22 temporarily stores information processed by the CPU 21.
  • The image read by the camera 24 can be processed by the CPU 21 via the bus 23. Further, the image can be displayed on the display 25 to present the captured information to the user even when there is no recording instruction (shutter), the so-called live view function.
  • The CPU 21 performs image processing on the image acquired by the camera 24 and displays the result of the image processing on the display 25. Here, the term "image" may be interpreted as including a moving image as well as a still image.
  • An apparatus in which a camera and a display are configured as a single device may be interpreted as an information processing apparatus; alternatively, a system configured by linking a device having a camera function with a plurality of devices having a display function may also be interpreted as an information processing apparatus.
  • the server 30 includes a CPU 31, a memory 32, a bus 33, and a storage 34.
  • the CPU 31 accesses information stored in the memory 32 via the bus 33 according to the program.
  • the memory 32 is a RAM, but is not limited to this.
  • the memory 32 may be another type of memory such as a flash memory.
  • the memory 32 temporarily stores information processed by the CPU 31.
  • The CPU 31 reads and writes various kinds of information to and from the storage 34.
  • the storage 34 is, for example, an HDD, but is not limited to this.
  • the storage 34 may be an external device accessible by the server 30 such as a NAS or an external storage server.
  • a camera that captures an image may be connected to the content server 10 and the server 30.
  • the server 30 may include a display, like the content server 10 and the mobile terminal device 20.
  • <Invisible code> In general, blinking of an image with a period of less than 0.03 [seconds] is difficult to recognize with the naked eye. In other words, blinking of an image at a frequency exceeding 33 [Hz] is difficult to recognize with the naked eye. When an image blinks at a frequency exceeding 33 [Hz], the blinking is difficult to recognize with the naked eye even between images with a large difference in brightness. For this reason, if the difference between brightness and darkness is small, a state in which the blinking of an image is difficult to recognize with the naked eye can be created even at a frequency slightly lower than 33 [Hz]. Such a state that is difficult to recognize with the naked eye is referred to as "invisible" in the present embodiment.
  • FIG. 3 is a diagram illustrating an example of the code image 60 in which the invisible code according to the first embodiment is embedded.
  • the code image 60 is an image in which the first frame image 60a and the second frame image 60b are alternately switched.
  • the code image 60 is generated by embedding the code information 50 in the background image 40 in an invisible state.
  • the code information 50 embedded in the background image 40 in the invisible state is referred to as “invisible code”.
  • The background image 40 is an arbitrary image designated by a creator (content provider) who generates content, but is not limited thereto.
  • the background image 40 may be an image displayed as an operation screen of application software.
  • the code information 50 is described by taking a QR code (registered trademark) as an example, but is not limited thereto.
  • the code information 50 may be another two-dimensional code such as DataMatrix (registered trademark) or VeriCode (registered trademark), or may be a one-dimensional code such as a barcode.
  • the code information 50 is information in which data designated by the user (for example, URL of a WEB page, account information of messenger software, account information of a social networking service) is coded.
  • the code information 50 includes a position detection marker 51, an attitude correction marker 52, and a data dot 53.
  • the position detection marker 51 is a marker used to detect the position of the code information 50 and is arranged at three corners of the code information 50.
  • the posture correction marker 52 is a marker used to correct the posture of the code information 50 read by the camera 24 as a reading unit.
  • the data dot 53 is a dot for expressing information related to data held by the code.
  • the data dot 53 includes an error correction code such as a Reed-Solomon code.
  • In the present embodiment, an error correction code having an error correction rate of 15% or more is employed, and it is preferable to employ an error correction code having an error correction rate of 30% or more.
  • The CPU 21 of the mobile terminal device 20 acquires the background image 40 and the code information 50. Based on the acquired background image 40 and code information 50, the CPU 21 generates a first frame image 60a and a second frame image 60b. For example, the CPU 21 generates the first frame image 60a by reducing the brightness of the pixels of the background image 40 at the positions overlapping the code information 50, and generates the second frame image 60b by increasing the brightness of those pixels. The CPU 21 then generates, as the code image 60, an image in which the first frame image 60a and the second frame image 60b are alternately switched at intervals of less than 0.03 [seconds].
  • If the code information 50 were simply superimposed, it would become conspicuous and the design of the image would be impaired. Therefore, the CPU 21 alternately switches the first frame image 60a and the second frame image 60b at intervals of less than 0.03 [seconds]. As a result, the code information 50 becomes invisible to the naked eye: even though the first frame image 60a and the second frame image 60b are alternately displayed, it is difficult to distinguish the code image 60 from the background image 40 with the naked eye.
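  • As a minimal sketch of this frame-pair generation, assuming 8-bit grayscale numpy arrays; the function name, the boolean code_mask, and the default fluctuation width are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def make_frame_pair(background: np.ndarray, code_mask: np.ndarray, delta: int = 5):
    """Return the two frames: brightness lowered/raised where the code overlaps."""
    bg = background.astype(np.int16)             # widen to avoid uint8 wrap-around
    low = np.where(code_mask, bg - delta, bg)    # first frame image 60a
    high = np.where(code_mask, bg + delta, bg)   # second frame image 60b
    to_u8 = lambda f: np.clip(f, 0, 255).astype(np.uint8)
    return to_u8(low), to_u8(high)
```

Displaying the two returned frames alternately at intervals shorter than 0.03 [seconds] yields the code image 60 described above.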
  • A light ID method such as LinkRay (registered trademark), which causes an LED light source to blink at high speed, is easily drowned out by sunlight. For this reason, it is preferable to employ an invisible code for digital signage.
  • In FIG. 3, a black-and-white image is used for the sake of explanation. In practice, however, a color image is assumed as the background image, and a high recognition rate can be ensured even outdoors by varying the value in the hue direction instead of the brightness.
  • FIG. 4 is a flowchart showing the display processing of the code image 60 according to the first embodiment.
  • the display processing of the code image 60 to be displayed on the display D will be described in detail with reference to FIG.
  • Note that an image (moving image) in which an invisible code is embedded may also be generated by a system other than the content server 10, stored in the content server 10, and distributed to the display D.
  • First, the CPU 11 of the content server 10 acquires the data to be stored as an invisible code to be displayed on the digital signage (S101).
  • Data to be stored as an invisible code displayed on the digital signage may be data designated by the user.
  • the data designated by the user includes a URL of a WEB server that can purchase a product displayed on the digital signage, messenger software account information, SNS account information, and the like.
  • the CPU 11 generates code information 50 based on the acquired data (S102).
  • the CPU 11 as the acquisition unit acquires the background image 40.
  • the CPU 11 acquires the background image 40 from the memory 12 or another server 30 (S103).
  • the CPU 11 calculates the frequency characteristics of the background image 40 by performing Fourier transform on the background image 40 (S104).
  • The CPU 11 removes spatial frequency components of the calculated background image 40 that are equal to or greater than a predetermined value (S105). Note that the CPU 11 may hold both the background image 40 before the high-frequency components are removed and the background image 40 after the high-frequency components are removed in the memory 12.
  • the CPU 11 performs a filtering process on the background image 40 to remove components having a spatial frequency included in the background image 40 of a predetermined value or more. Specifically, the CPU 11 performs a filtering process on the coordinates (i, j) of the background image 40 based on the following equation (1).
  • In equation (1), Xi,j indicates a pixel value before filtering, and Yi,j indicates a pixel value after filtering.
  • When the background image 40 includes components having a high spatial frequency, the image captured by a camera is susceptible to camera shake. If the captured background image 40 is blurred, it becomes difficult to extract the invisible code. For this reason, the CPU 11 performs a filtering process on the background image 40 to remove components having a spatial frequency equal to or higher than a predetermined value, thereby reducing the influence of camera shake.
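  • As a minimal sketch of this low-pass filtering step (S104 to S105), assuming a grayscale numpy array; the FFT-based formulation and the cutoff value are illustrative assumptions, since the patent's equation (1) is not reproduced in this text:

```python
import numpy as np

def lowpass(background: np.ndarray, cutoff: float = 0.25) -> np.ndarray:
    """Zero out spatial frequencies above `cutoff` (as a fraction of Nyquist)."""
    spectrum = np.fft.fftshift(np.fft.fft2(background))  # S104: frequency characteristics
    h, w = background.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    spectrum[radius > cutoff] = 0                        # S105: remove high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
```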
  • the CPU 11 performs a code image generation process to be described later using the code information 50 and the background image 40 (S106).
  • the CPU 11 as the generation unit generates the code image 60 by embedding the code information 50 in the background image 40 in an invisible state.
  • the CPU 11 embeds the code information 50 in the background image 40 in an invisible state by changing the pixel value of the background image 40 at a position overlapping the code information 50.
  • the moving image in which the invisible code generated as described above is embedded is transmitted from the content server 10 to the display D so as to be reproducible. Thereby, the moving image with the invisible code embedded can be presented to the public by digital signage installed outdoors.
  • FIG. 5 is a flowchart showing a code image generation process according to the first embodiment.
  • the flowchart shown in FIG. 5 shows the process of S106 of FIG. 4 in detail.
  • The CPU 11 analyzes the background image 40 and the code information 50 (S201). For example, the CPU 11 acquires the color data of the background image 40 at the position where the code information 50 is arranged, and acquires the RGB values of the acquired color data. Based on the acquired RGB values, the CPU 11 determines whether the code information 50 is suitable for embedding in the background image 40. For example, when the majority of the color data of the background image 40 at the position where the code information 50 is arranged is not suitable for embedding, the CPU 11 determines that the code information 50 is not suitable for embedding in the background image 40.
  • the position where the code is embedded may be shifted in consideration of the background image 40.
  • Color data that is not suitable for embedding the code information 50 means a pixel value that has no headroom for changing the color.
  • When pixel values are expressed in 8-bit RGB, values such as (255, 255, 255) and (0, 0, 0) correspond to color data that is not suitable for embedding the code information 50.
  • When the saturation direction is set as the variation direction of the pixel value, a pixel value whose saturation is at its limit value likewise corresponds to color data that is not suitable for embedding the code information 50.
  • Conversely, the code information 50 can be suitably embedded in the background image 40 if a sufficient sum of differences between the RGB values and the limit values (0 or 255) can be secured.
  • When the CPU 11 determines that the code information 50 is not suitable for embedding in the background image 40, the CPU 11 regenerates the code information 50 in consideration of the background image 40 (S202). For example, the CPU 11 uses the margin bits of the error correction code of the code information 50 to change the arrangement of the data dots 53 while maintaining the identity of the information held by the data dots 53, thereby regenerating the code information 50. The CPU 11 repeats this generation until the code information 50 can be suitably embedded in the background image 40.
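  • As a minimal sketch of this suitability check, assuming an 8-bit RGB numpy region under the code; the function name, the fluctuation width, and the majority threshold are illustrative assumptions:

```python
import numpy as np

def headroom_ok(region: np.ndarray, delta: int = 5, ratio: float = 0.5) -> bool:
    """True if a majority of pixels can shift by +/-delta without clipping at 0 or 255."""
    has_room = np.all((region >= delta) & (region <= 255 - delta), axis=-1)
    return float(has_room.mean()) >= ratio  # False would trigger regeneration (S202)
```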
  • the CPU 11 determines the fluctuation direction and fluctuation width of the pixel value of the code information 50 (S203).
  • For example, the CPU 11 varies the pixel value toward two mutually complementary colors on the hue circle, centered on the pixel value at the position of the code information 50.
  • Next, the CPU 11 acquires the pixel value of the background image 40 at the position overlapping the code information 50 (S204).
  • the CPU 11 performs image processing on the acquired image value under the conditions determined in S203 (S205).
  • the CPU 11 converts the pixel values (R, G, B) acquired in S204 into pixel values (H, S, V) in the HSV model.
  • H: hue
  • S: saturation (chroma)
  • V: value (brightness)
  • the CPU 11 converts the pixel values (R, G, B) into pixel values (H, S, V) based on the following formulas (2) to (4).
  • Here, MAX indicates the maximum of the R, G, and B values, and MIN indicates the minimum of the R, G, and B values.
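  • Equations (2) to (4) appear only as images in the source and are not reproduced here; the standard RGB-to-HSV conversion consistent with the definitions of MAX and MIN above is the following reconstruction, not the patent's exact notation:

```latex
V = \mathrm{MAX}, \qquad
S = \begin{cases} \dfrac{\mathrm{MAX}-\mathrm{MIN}}{\mathrm{MAX}} & (\mathrm{MAX} \neq 0) \\ 0 & (\mathrm{MAX} = 0) \end{cases}
```

```latex
H = \begin{cases}
60 \times \dfrac{G-B}{\mathrm{MAX}-\mathrm{MIN}} & (\mathrm{MAX} = R) \\
60 \times \left(2 + \dfrac{B-R}{\mathrm{MAX}-\mathrm{MIN}}\right) & (\mathrm{MAX} = G) \\
60 \times \left(4 + \dfrac{R-G}{\mathrm{MAX}-\mathrm{MIN}}\right) & (\mathrm{MAX} = B)
\end{cases}
```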
  • Next, for the converted pixel value (H, S, V), the CPU 11 generates color data A by subtracting 5 from the pixel value in the saturation direction, and color data B by adding 5 to the pixel value in the saturation direction. That is, the color data A is (H, S-5, V), and the color data B is (H, S+5, V).
  • the fluctuation range of the pixel value is set to 5, but the present invention is not limited to this.
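  • As a minimal sketch of generating the color data A and B, using Python's standard colorsys module; treating the patent's fluctuation width of 5 as a value on the 0-255 scale is an assumption for illustration:

```python
import colorsys

def shift_saturation(r: int, g: int, b: int, delta: int = 5):
    """Return two RGB triples whose saturation differs by -delta and +delta (0-255 scale)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    def back(s_new: float):
        s_new = min(max(s_new, 0.0), 1.0)   # stay within the valid saturation range
        return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s_new, v))
    return back(s - delta / 255), back(s + delta / 255)  # color data A, color data B
```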
  • the CPU 11 generates a first frame image and a second frame image by changing the pixel value of the background image 40 at a position overlapping the code information 50.
  • The CPU 11 generates the code image 60 (moving image) in which the invisible code is embedded by alternately switching the generated first frame image and second frame image at intervals of less than 0.03 [seconds] (S206).
  • In other words, the CPU 11 generates the code image 60 (moving image) in which the invisible code is embedded by alternately switching the generated first frame image and second frame image at a frequency higher than 30 [Hz].
  • As described above, the CPU 11 generates the code image 60 in which the code information 50 is embedded in an invisible state in the filtered background image 40. Specifically, the CPU 11 generates the code image 60 by varying the pixel value of the background image 40 at the positions overlapping the code information 50 in a predetermined direction around the original pixel value. As a result, a link or the like can be provided outdoors to the public while maintaining the design, and the invisible code remains easy for the mobile terminal on the photographing side to read even under camera shake.
  • In the present embodiment, RGB pixel values and HSV-model pixel values are used; however, the present invention is not limited to this. For example, pixel values in the HSL color space or L*a*b* pixel values may be used.
  • FIG. 6 is a diagram for explaining a link displayed when the mobile terminal device according to the first embodiment shows, in live view (display acquired and updated as needed), a moving image in which an invisible code displayed on digital signage is embedded.
  • In general, a code reader is designed to change its screen according to the data held by a code as soon as the code is recognized.
  • For example, when a QR code reader recognizes a scanned QR code, it performs a screen transition in accordance with the held content (for example, URL information) and stored data.
  • To read the invisible code, an image that is alternately switched at a predetermined time interval is captured by the camera 24, and the code embedded invisibly across a plurality of frames is extracted by difference calculation.
  • This extraction process is performed by the CPU 21 that processes the image data.
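  • As a minimal sketch of this difference calculation, assuming two consecutive grayscale frames as numpy arrays; the threshold value is an illustrative assumption:

```python
import numpy as np

def extract_code_mask(frame_a: np.ndarray, frame_b: np.ndarray, thresh: int = 4) -> np.ndarray:
    """Pixels that flip between consecutive frames mark the embedded code region."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return diff >= thresh  # boolean mask covering, e.g., the data dots 53
```

The resulting mask can then be decoded like an ordinary two-dimensional code after the posture correction described for the marker 52.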
  • In contrast, the mobile terminal device 20 including the camera 24 and the display 25 according to the present embodiment does not immediately transition the screen to the URL or the like stored in the code when the code is recognized.
  • FIGS. 6 to 9 are image diagrams for explaining the operation when the mobile terminal device 20 captures an image including the display D with the live view function.
  • a region C hatched in gray in the display D indicates a position where an invisible code is embedded.
  • a region C hatched in gray in the display 25 indicates a position corresponding to a region hatched in gray in the display D.
  • FIG. 6 is an image diagram for explaining a state in which an image in which an invisible code is embedded is presented on the entire display D of the digital signage.
  • the mobile terminal device 20 shoots a part of a building including digital signage in the live view mode.
  • the lower part of FIG. 6 shows an image displayed on the display 25 of the mobile terminal device 20 in the live view state in the situation as shown in the upper part.
  • The mobile terminal device 20 is in a state of capturing an image including the display D in which the invisible code is embedded, but does not interpret the invisible code to perform processing such as a screen transition. The mobile terminal device 20 performs the operation instructed by the invisible code only when the user performs a selection operation, such as a touch, on the portion of the live-view display 25 corresponding to the area where the invisible code is embedded.
  • For example, URL information for guiding the user to a product purchase page may be embedded as an invisible code in an image that visibly displays a product advertisement on digital signage.
  • In the present embodiment, a smartphone has been described as an example; however, the screen transition processing may be performed when selection is made by a selection unit such as a ring-shaped device, line-of-sight recognition, a glove-shaped device, or a brain interface, as an alternative to a selection operation such as a touch. Further, when the camera 24 that acquires the image and the display 25 are separate, the devices may cooperate to perform the same operation.
  • FIGS. 7A and 7B are image diagrams for explaining states in which the posture and zoom state of the camera 24 differ.
  • In FIG. 7A, an image including the entire display D of the digital signage is captured.
  • In FIG. 7B, only a part of the display D is captured, with the display D and the camera 24 not facing each other.
  • the invisible code may be embedded in a part of the display D, or the invisible code may be embedded at a position that overlaps with an article (product) to be guided to the purchase page.
  • Note that a mark such as "•" may be added to indicate that an area can be selected, or a leader line may be displayed to indicate that it is possible to link to an advertisement page or a purchase page.
  • FIG. 8 is an image diagram for explaining a form in which the invisible code embedded in the digital signage moves.
  • the area C (the area where the invisible code is embedded) hatched in gray in the display D moves as indicated by a solid arrow.
  • In this way, the hatched area C is moved, as indicated by the solid arrows, in accordance with the movement of the image related to the product that a person can recognize. Thereby, it is possible to prevent a transition to the purchase page unless the user intentionally selects the product itself.
  • FIG. 9 is an image diagram for explaining a state in which there are a plurality of digital signage displays, each having an area in which an invisible code is embedded.
  • The displays include a first area C1 and a second area C2 hatched in gray. These areas are moved, as indicated by the solid arrows, in accordance with the content displayed on the digital signage.
  • When the user selects the area C1, processing corresponding to the data held by the invisible code embedded in the area C1 is performed; when the user selects the area C2, processing corresponding to the data held by the invisible code embedded in the area C2 is performed.
  • Although FIG. 9 illustrates the case where the selectable areas are arranged on separate displays, a plurality of selectable areas may be arranged on a single display.
  • FIG. 10 is a flowchart for explaining the operation of the mobile terminal device 20.
  • the portable terminal device 20 has a live view function, and acquires external image information from the camera 24 (S301).
  • The CPU 21 acquires the image data from the camera 24 and realizes the live view function by displaying the image data on the display 25.
  • the CPU 21 as a recognition unit for recognizing the code analyzes the image data acquired from the camera 24, and analyzes (recognizes) whether the invisible code is embedded in the image data (S302).
  • When the invisible code is recognized, the CPU 21 identifies the position where the invisible code is embedded and embeds the data held by the invisible code as a link at the identified position (S303). Thereafter, the CPU 21 acquires, from the display 25 comprising a touch panel or the like, touch information indicating that the user has touched the display 25 showing the live view (S304). Specifically, when the user touches the position where the invisible code identified in S303 is embedded, the CPU 21 performs control based on the link embedded at the touched position. As described above, even if an invisible code is embedded in the image displayed on the display D, the CPU 21 does not change the screen unless the user performs a selection operation.
  • The CPU 21 acquires touch information (user selection information) indicating that the area C in which the invisible code is embedded has been touched; when the user selects the area C (touches the embedded link), the CPU 21 executes the process of S306 (S305: Yes). On the other hand, when the user does not select (touch) the area C in which the invisible code is embedded, the CPU 21 repeats the processing from S301 to S304 (S305: No).
  • the CPU 21 transmits a request to the Web server or the like via the network in order to display the embedded link (S306).
  • Note that the CPU 21 performs the decoding process on the code in S303; alternatively, this operation may be performed immediately before S306.
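  • As a minimal sketch of the loop of FIG. 10, in which recognition runs continuously but a transition happens only on selection; every name here (get_frame, find_invisible_code, wait_touch, open_link) is a hypothetical placeholder, not an API from the patent:

```python
def live_view_loop(camera, display):
    while True:
        frame = camera.get_frame()                    # S301: acquire image
        code = find_invisible_code(frame)             # S302/S303: recognize and locate code
        display.show(frame)                           # live view presentation
        touch = display.wait_touch(timeout=0.03)      # S304: poll for a selection
        if code and touch and code.region.contains(touch):  # S305: hit test on area C
            open_link(code.decode())                  # S306: request the embedded link
            break
```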
  • As described above, additional information such as advertisement information and a URL can be given to a moving image displayed on a facility such as digital signage.
  • an interactive user experience can be provided by presenting content displayed on a public display without a touch panel function so as to be selectable on a smartphone or the like.
  • FIG. 11 is a diagram for explaining a subway route map displayed by digital signage arranged in a train car.
  • As shown in FIG. 11A, information on routes is presented in many languages to cope with the recent increase in foreign visitors to Japan.
  • However, the display area of the digital signage display is limited, and there is a limit to how many languages can be supported. For this reason, a scheme is adopted in which a route map written in Japanese as shown in FIG. 11B and a route map written in English as shown in FIG. 11C are switched at regular intervals.
  • With the method of switching the display language every predetermined period, however, there is a problem of convenience because the user must wait until the map is displayed in a language the user can read.
  • In the present embodiment, an invisible code serving as a pointer to the operation status and the route map is embedded in a part of the digital signage.
  • the area C where the invisible code is embedded may be a part of the display or the entire surface.
  • Each user's mobile terminal individually holds a language setting corresponding to the language the user ordinarily uses. Using this language setting, the CPU 21 performs control to display an overlay on the image acquired by the camera 24. Thereby, the user can view, on his or her own mobile terminal device 20, the route map described in a language the user can read.
  • Next, the operation of the mobile terminal device 20 of this embodiment will be described with reference to FIG. Since S401 to S404 are processes similar to S301 to S304, respectively, their description is omitted.
  • When an invisible code is recognized, the CPU 21 analyzes the data held by the invisible code. Then, without waiting for a user instruction, the CPU 21 downloads route map information described in a user-readable language via the network (S405). Thereafter, the CPU 21 switches the display language in accordance with an instruction such as a touch by the user. If language information identifiable for the user cannot be acquired, the CPU 21 performs control so that the display language is switched each time the route map is touched on the live view screen (S407).
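  • As a minimal sketch of selecting a readable-language variant (S405), using Python's standard locale module; the code_data structure and its key names are hypothetical placeholders:

```python
import locale

def choose_route_map(code_data: dict) -> str:
    """Pick the route-map URL matching the device language, falling back to English."""
    lang = (locale.getdefaultlocale()[0] or "en").split("_")[0]  # user's language setting
    variants = code_data["route_map_urls"]  # hypothetical, e.g. {"ja": url, "en": url}
    return variants.get(lang, variants["en"])
```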
  • According to the system of the present embodiment, it is possible to provide a means by which individual users can acquire and convert information provided to a large number of people through digital signage, based on their own setting information.
  • In other words, content displayed on a public display can be customized for each user by a simple method.
  • Moreover, since an invisible code is not visually recognized by users who are not interested in the information it holds, the conventional appearance of the landscape can be maintained.
  • The content server 10, the mobile terminal device 20, and the server 30 in the above embodiments each contain an internal computer system.
  • Each process of the content server 10, the mobile terminal device 20, and the server 30 described above is stored in a computer-readable recording medium in the form of a program. Then, the above-described various processes are performed by the computer reading and executing the stored program.
  • the computer-readable recording medium means a magnetic disk, a magneto-optical disk, a CD-ROM, a DVD-ROM, a semiconductor memory, or the like.
  • the computer program may be distributed to the computer via a communication line, and the computer that has received the distribution may execute the program.
  • Further, a part of the processing executed on a specific computer may be shared by other computers; for this reason, the entire information processing system may be regarded as an information processing apparatus. Even if an element that performs the characteristic information processing is intentionally installed outside the country, if the region that refers to the processing result is in Japan, the information processing is considered to have been executed in the country.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention concerns an information processing device comprising: a recognition unit for recognizing a code included in an image; a position acquisition unit for acquiring a position selected by the user in the image; and a control unit for performing control so as to execute the content of the code when the code is located at the position in the image acquired by the position acquisition unit.
PCT/JP2017/046923 2017-01-18 2017-12-27 Information processing device, display method, program, and computer-readable recording medium WO2018135272A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017007051 2017-01-18
JP2017-007051 2017-01-18

Publications (1)

Publication Number Publication Date
WO2018135272A1 (fr) 2018-07-26

Family

ID=62908620

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/046923 WO2018135272A1 (fr) 2017-01-18 2017-12-27 Information processing device, display method, program, and computer-readable recording medium

Country Status (1)

Country Link
WO (1) WO2018135272A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004234318A * 2003-01-30 2004-08-19 Denso Wave Inc Two-dimensional information code, and methods of displaying, generating, and reading a two-dimensional information code
JP2014139732A * 2013-01-21 2014-07-31 Sony Corp Image processing device, image processing method, program, and display device
JP2015106848A * 2013-11-29 2015-06-08 Fujitsu Ltd Information embedding device, information detection device, information embedding method, and information detection method


Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17892879; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 17892879; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: JP)