WO2018135272A1 - Information processing device, display method, program, and computer-readable recording medium - Google Patents

Information processing device, display method, program, and computer-readable recording medium

Info

Publication number
WO2018135272A1
Authority
WO
WIPO (PCT)
Prior art keywords
code
image
cpu
information
display
Prior art date
Application number
PCT/JP2017/046923
Other languages
French (fr)
Japanese (ja)
Inventor
真人 柴田
昌敏 竹谷
Original Assignee
合同会社IP Bridge1号
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 合同会社IP Bridge1号 filed Critical 合同会社IP Bridge1号
Publication of WO2018135272A1 publication Critical patent/WO2018135272A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387Composing, repositioning or otherwise geometrically modifying originals

Definitions

  • the present invention relates to an information processing apparatus, a display method, a program, and a computer-readable storage medium.
  • Two-dimensional codes that can hold more information than barcodes are known.
  • Two-dimensional codes are often formed in black and white, but in order to hold more information, two-dimensional codes whose data areas are formed in colors such as red, green, and blue are also used (see, for example, Patent Document 1).
  • However, when a two-dimensional code is embedded in an image, the two-dimensional code becomes conspicuous and the design of the image is impaired.
  • an object of the present invention is to make it possible to operate content provided to a large number of people in a terminal device owned by each user.
  • An information processing apparatus comprising: an image acquisition unit that acquires an image; a recognition unit that recognizes a code included in the image; a position acquisition unit that acquires a position in the image selected by a user; and a control unit that performs control so that the content held by the code is executed when the code is located at the position in the image acquired by the position acquisition unit.
  • content provided to a large number of people can be operated on a terminal device owned by each user.
  • FIG. 1 is a diagram illustrating an overall configuration of an information processing system according to a first embodiment.
  • FIG. 2 is a block diagram showing the content server 10, the mobile terminal device 20, and the server 30 according to the first embodiment.
  • FIG. 4 is a flowchart showing the display processing of the code image 60 according to the first embodiment.
  • FIG. 5 is a flowchart showing the generation processing of the code image 60 according to the first embodiment.
  • FIG. 1 is a diagram illustrating an overall configuration of an information processing system according to the first embodiment.
  • the information processing system includes a content server 10, a mobile terminal device 20, and a server 30.
  • the content server 10, the mobile terminal device 20, and the server 30 are connected via a network N so as to communicate with each other.
  • the network N includes, for example, a part or all of a WAN (Wide Area Network), a LAN (Local Area Network), the Internet, a provider device, a wireless base station, a dedicated line, and the like.
  • the mobile terminal device 20 is, for example, a tablet computer or a smartphone. The mobile terminal device 20 communicates with the content server 10 and the server 30 via the network N by performing wireless communication W.
  • the content server 10 is connected to a display D that displays content and the like.
  • the display D is a so-called digital signage.
  • Digital signage is an electronic signboard for displaying advertisements and the like to the public on a relatively large display.
  • The digital signage itself may include a storage device or the like that temporarily holds content. Therefore, the content server 10 and the display D together may be regarded as digital signage.
  • For convenience, a computer that requests information is called a client, and a computer that transmits information in response to such a request is called a server.
  • In the present embodiment, the content server 10 and the server 30 function as servers, and the mobile terminal device 20 functions mainly as a client.
  • FIG. 2 is a block diagram showing the content server 10, the mobile terminal device 20, and the server 30 according to the first embodiment.
  • the content server 10 includes a CPU (Central Processing Unit) 11, a memory 12, a bus 13, and a storage 14.
  • the CPU 11 as an information processing unit and the memory 12 as an information holding unit are connected to each other via a bus 13.
  • the CPU 11 accesses information stored in the memory 12 via the bus 13 according to a program.
  • the memory 12 is a RAM (Random Access Memory), but is not limited thereto.
  • the memory 12 may be another type of memory such as a flash memory.
  • the memory 12 temporarily stores information processed by the CPU 11.
  • The CPU 11 reads and writes various information to and from the storage 14.
  • the storage 14 is, for example, an HDD (Hard Disk Drive), but is not limited to this.
  • the storage 14 may be an external device accessible by the content server 10 such as a flash memory, NAS (Network Attached Storage), or an external storage server.
  • the content server 10 may store information in the storage 14 instead of the memory 12. Further, alternatives such as a memory and a storage may be used according to a change in the architecture of the content server 10.
  • the mobile terminal device 20 includes a CPU 21 as a control unit, a memory 22, a bus 23, a camera 24, and a display 25 (see FIG. 6 and the like).
  • The display 25 has a function, such as a touch panel, of detecting and acquiring a selection made by the user.
  • The CPU 21, serving as a position acquisition unit, acquires information on the position (coordinates) selected by the user from the touch panel or the like.
  • the CPU 21 accesses information stored in the memory 22 via the bus 23 according to a program.
  • the memory 22 is a RAM, but is not limited to this.
  • the memory 22 may be another type of memory such as a flash memory.
  • The memory 22 temporarily stores information processed by the CPU 21.
  • The image read by the camera 24 can be processed by the CPU 21 via the bus 23. The camera 24 can also display its image on the display 25 to present the captured information to the user even when no recording instruction (shutter) has been given (the so-called live view function).
  • The CPU 21 performs image processing on the image acquired by the camera 24 and displays the result on the display 25. Where the term image is used, it may be interpreted as including moving images as well as still images.
  • An apparatus in which a camera and a display are configured as one device may be interpreted as the information processing apparatus, and a system configured by linking a device having a camera function with one or more devices having a display function may likewise be interpreted as the information processing apparatus.
  • the server 30 includes a CPU 31, a memory 32, a bus 33, and a storage 34.
  • the CPU 31 accesses information stored in the memory 32 via the bus 33 according to the program.
  • the memory 32 is a RAM, but is not limited to this.
  • the memory 32 may be another type of memory such as a flash memory.
  • the memory 32 temporarily stores information processed by the CPU 31.
  • The CPU 31 reads and writes various information to and from the storage 34.
  • the storage 34 is, for example, an HDD, but is not limited to this.
  • the storage 34 may be an external device accessible by the server 30 such as a NAS or an external storage server.
  • a camera that captures an image may be connected to the content server 10 and the server 30.
  • the server 30 may include a display, like the content server 10 and the mobile terminal device 20.
  • <Invisible code> In general, blinking of an image at intervals of less than 0.03 [seconds] is difficult to recognize with the naked eye; in other words, blinking at a frequency exceeding 33 [Hz] is hard to see, even for a conspicuous image with a large difference in brightness. Moreover, if the difference in brightness is small, the blinking remains difficult to recognize even at a frequency slightly below 33 [Hz]. Such a state that is difficult to recognize with the naked eye is referred to as "invisible" in the present embodiment.
  • FIG. 3 is a diagram illustrating an example of the code image 60 in which the invisible code according to the first embodiment is embedded.
  • the code image 60 is an image in which the first frame image 60a and the second frame image 60b are alternately switched.
  • the code image 60 is generated by embedding the code information 50 in the background image 40 in an invisible state.
  • the code information 50 embedded in the background image 40 in the invisible state is referred to as “invisible code”.
  • The background image 40 is an arbitrary image designated by the creator (content provider) who generates the content, but is not limited thereto.
  • the background image 40 may be an image displayed as an operation screen of application software.
  • the code information 50 is described by taking a QR code (registered trademark) as an example, but is not limited thereto.
  • the code information 50 may be another two-dimensional code such as DataMatrix (registered trademark) or VeriCode (registered trademark), or may be a one-dimensional code such as a barcode.
  • the code information 50 is information in which data designated by the user (for example, URL of a WEB page, account information of messenger software, account information of a social networking service) is coded.
  • the code information 50 includes a position detection marker 51, an attitude correction marker 52, and a data dot 53.
  • the position detection marker 51 is a marker used to detect the position of the code information 50 and is arranged at three corners of the code information 50.
  • the posture correction marker 52 is a marker used to correct the posture of the code information 50 read by the camera 24 as a reading unit.
  • the data dot 53 is a dot for expressing information related to data held by the code.
  • Because the camera 24 serving as the image acquisition unit is mounted in the hand-held mobile terminal device 20, captured images may suffer from camera shake. For this reason, the data dots 53 include an error correction code such as a Reed-Solomon code.
  • To ensure robustness, an error correction code with an error correction rate of 15% or more is employed; a rate of 30% or more is preferable. A sketch relating parity overhead to these rates follows.
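As a rough illustration (not from the patent) of how Reed-Solomon parity overhead relates to the correction rates above, assuming the third-party Python package `reedsolo`:

```python
# Sketch: RSCodec(nsym) appends nsym parity symbols and corrects up to
# nsym // 2 corrupted symbols (reedsolo >= 1.5 returns a tuple from decode()).
from reedsolo import RSCodec

payload = b"https://example.com/item/42"   # hypothetical data held by the code
nsym = 14                                  # parity symbols

rsc = RSCodec(nsym)
codeword = bytearray(rsc.encode(payload))

# Corrupt a few symbols, as blur or camera shake might.
for i in (0, 5, 11):
    codeword[i] ^= 0xFF

decoded, _, _ = rsc.decode(bytes(codeword))
assert bytes(decoded) == payload

# 14 parity symbols on the 27-byte payload give a 41-symbol codeword and
# tolerate up to 7 bad symbols, i.e. roughly a 17% error rate.
```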
  • The CPU 21 of the mobile terminal device 20 acquires the background image 40 and the code information 50 and, based on them, generates the first frame image 60a and the second frame image 60b. For example, the CPU 21 generates the first frame image 60a by reducing the brightness of the pixels of the background image 40 at positions overlapping the code information 50, and generates the second frame image 60b by increasing the brightness of those pixels. The CPU 21 then generates, as the code image 60, an image in which the first frame image 60a and the second frame image 60b are alternately switched at intervals of less than 0.03 [seconds]; a sketch of this step follows.
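A minimal sketch of this two-frame generation, assuming a NumPy uint8 background image and a boolean mask marking the modules of the code information (the variable names and the variation width are illustrative, not from the patent):

```python
import numpy as np

def make_frames(background: np.ndarray, code_mask: np.ndarray, delta: int = 5):
    # background: H x W x 3 uint8 image; code_mask: H x W bool array that is
    # True where a module of the code information overlaps the background.
    frame_a = background.astype(np.int16)
    frame_b = background.astype(np.int16)
    frame_a[code_mask] -= delta   # first frame image: slightly darker
    frame_b[code_mask] += delta   # second frame image: slightly brighter
    return (np.clip(frame_a, 0, 255).astype(np.uint8),
            np.clip(frame_b, 0, 255).astype(np.uint8))

# Played back alternately at intervals below 0.03 s (i.e. above 33 Hz), the
# eye perceives only the average of the two frames: the original background.
```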
  • If the code information 50 were simply superimposed on the background image 40, it would be conspicuous and the design of the image would be impaired. Therefore, the CPU 21 alternately switches the first frame image 60a and the second frame image 60b at intervals of less than 0.03 [seconds]. As a result, the code information 50 becomes invisible to the naked eye: even though the two frame images are displayed alternately, it is difficult to distinguish the code image 60 from the background image 40.
  • In digital signage, which is often installed outdoors, light-ID methods such as LinkRay (registered trademark), which blink an LED light source at high speed, are easily washed out by sunlight. For this reason, it is preferable to employ an invisible code for digital signage.
  • In FIG. 3, a black-and-white image is used for ease of explanation, so the example varies the brightness. In practice, however, a color image is assumed as the background image, and a high recognition rate can be ensured even outdoors by varying values in the hue direction instead of the brightness.
  • FIG. 4 is a flowchart showing the display processing of the code image 60 according to the first embodiment.
  • The display processing of the code image 60 to be shown on the display D is described in detail below with reference to FIG. 4.
  • As for the image (moving image) in which the invisible code is embedded, an image (moving image) generated by a system other than the content server 10 may be stored in the content server 10 and distributed to the display D.
  • The CPU 11 of the content server 10 acquires the data to be stored in the invisible code to be displayed on the digital signage (S101).
  • Data to be stored as an invisible code displayed on the digital signage may be data designated by the user.
  • For example, the data designated by the user includes the URL of a WEB server where a product displayed on the digital signage can be purchased, messenger software account information, SNS account information, and the like.
  • the CPU 11 generates code information 50 based on the acquired data (S102).
  • the CPU 11 as the acquisition unit acquires the background image 40.
  • the CPU 11 acquires the background image 40 from the memory 12 or another server 30 (S103).
  • If the background image 40 contains many high-frequency components, extraction of the invisible code becomes difficult; the CPU 11 therefore calculates the frequency characteristics of the background image 40 by performing a Fourier transform on it (S104).
  • The CPU 11 then removes the spatial-frequency components of the background image 40 that are at or above a predetermined value (S105). Note that the CPU 11 may hold both the background image 40 from which the high-frequency components have been removed and the background image 40 before removal in the memory 12.
  • The CPU 11 performs a filtering process on the background image 40 to remove components whose spatial frequency is at or above the predetermined value. Specifically, the CPU 11 filters the coordinates (i, j) of the background image 40 based on equation (1).
  • Here, X_{i,j} denotes the pixel value before filtering, and Y_{i,j} denotes the pixel value after filtering.
  • If the background image 40 contains components with high spatial frequencies, the captured image is susceptible to camera shake, and a blurred background image 40 makes it difficult to extract the invisible code. For this reason, the CPU 11 filters the background image 40 to remove components whose spatial frequency is at or above a predetermined value, reducing the influence of camera shake; a stand-in sketch follows.
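Equation (1) itself is not reproduced in this text, so the following sketch stands in with a simple FFT-based low-pass mask applied per channel; the cutoff value and the masking approach are assumptions, not the patent's filter:

```python
import numpy as np

def lowpass(channel: np.ndarray, cutoff: float = 0.25) -> np.ndarray:
    # Remove spatial-frequency components at or above `cutoff`
    # (cycles/pixel) from one image channel, mirroring S104-S105.
    spectrum = np.fft.fft2(channel.astype(np.float32))
    fy = np.fft.fftfreq(channel.shape[0])[:, None]
    fx = np.fft.fftfreq(channel.shape[1])[None, :]
    keep = np.hypot(fy, fx) < cutoff        # True for low frequencies only
    filtered = np.fft.ifft2(spectrum * keep).real
    return np.clip(filtered, 0, 255).astype(np.uint8)
```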
  • the CPU 11 performs a code image generation process to be described later using the code information 50 and the background image 40 (S106).
  • the CPU 11 as the generation unit generates the code image 60 by embedding the code information 50 in the background image 40 in an invisible state.
  • the CPU 11 embeds the code information 50 in the background image 40 in an invisible state by changing the pixel value of the background image 40 at a position overlapping the code information 50.
  • The moving image in which the invisible code generated as described above is embedded is transmitted from the content server 10 to the display D so that it can be reproduced. In this way, the moving image with the embedded invisible code can be presented to the public on digital signage installed outdoors.
  • FIG. 5 is a flowchart showing a code image generation process according to the first embodiment.
  • the flowchart shown in FIG. 5 shows the process of S106 of FIG. 4 in detail.
  • The CPU 11 analyzes the background image 40 and the code information 50 (S201). For example, the CPU 11 acquires the color data of the background image 40 at the positions where the code information 50 is to be arranged and obtains its RGB values. Based on the acquired RGB values, the CPU 11 determines whether the code information 50 is suitable for embedding in the background image 40. For example, when the majority of the color data of the background image 40 at those positions is unsuitable for embedding, the CPU 11 determines that the code information 50 is not suitable for embedding in the background image 40.
  • the position where the code is embedded may be shifted in consideration of the background image 40.
  • Color data that is not suitable for embedding the code information 50 means a pixel value that has no headroom for color variation.
  • When the pixel value is expressed in 8-bit RGB, values such as (255, 255, 255) and (0, 0, 0) correspond to color data that is not suitable for embedding the code information 50.
  • When the saturation direction is used as the variation direction of the pixel value, a pixel value whose saturation is already at a limit likewise corresponds to color data that is not suitable for embedding the code information 50.
  • Conversely, the code information 50 can be suitably embedded in the background image 40 if a sufficient sum of differences between the RGB values and the limit values (0 or 255) can be secured.
  • When the CPU 11 determines that the code information 50 is not suitable for embedding in the background image 40, the CPU 11 regenerates the code information 50 in consideration of the background image 40 (S202). For example, the CPU 11 uses the margin bits of the error correction code to change the arrangement of the data dots 53 while preserving the information they hold, thereby regenerating the code information 50. The CPU 11 repeats this regeneration until the code information 50 can be suitably embedded in the background image 40; a hypothetical sketch of the suitability test follows.
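A hypothetical sketch of this suitability test, treating color data with no headroom as RGB values that would clip at 0 or 255 when varied by ±delta; the majority threshold follows the text, while `delta` and the function names are assumptions:

```python
import numpy as np

def embedding_headroom(background: np.ndarray, code_mask: np.ndarray,
                       delta: int = 5) -> float:
    # Fraction of code-covered pixels whose RGB values can move by +/-delta
    # without clipping; (255, 255, 255) and (0, 0, 0) have no headroom.
    px = background[code_mask].astype(np.int16)            # N x 3
    ok = ((px - delta >= 0) & (px + delta <= 255)).all(axis=1)
    return float(ok.mean())

def is_suitable(background, code_mask, delta=5) -> bool:
    # "Majority" criterion from the text; the code information would be
    # regenerated (e.g. data dots rearranged within the error-correction
    # margin) until this returns True.
    return embedding_headroom(background, code_mask, delta) > 0.5
```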
  • the CPU 11 determines the fluctuation direction and fluctuation width of the pixel value of the code information 50 (S203).
  • For example, the CPU 11 varies the pixel value toward the two complementary colors on the hue circle, centered on the pixel value of the code information 50.
  • The CPU 11 acquires the pixel values of the background image 40 at the positions overlapping the code information 50 (S204).
  • the CPU 11 performs image processing on the acquired image value under the conditions determined in S203 (S205).
  • the CPU 11 converts the pixel values (R, G, B) acquired in S204 into pixel values (H, S, V) in the HSV model.
  • Here, H denotes hue, S denotes saturation (chroma), and V denotes value (lightness/brightness).
  • The CPU 11 converts the pixel values (R, G, B) into pixel values (H, S, V) based on formulas (2) to (4).
  • Here, MAX denotes the maximum of the R, G, and B values, and MIN denotes the minimum.
  • For the converted pixel value (H, S, V), the CPU 11 generates color data A by subtracting 5 from the saturation component and color data B by adding 5 to it. That is, color data A is (H, S-5, V) and color data B is (H, S+5, V).
  • In this embodiment, the fluctuation width of the pixel value is set to 5, but the present invention is not limited to this; a sketch of this step follows.
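A small sketch of this step using Python's standard `colorsys` module; it presumes that formulas (2) to (4) are the standard RGB-to-HSV conversion (consistent with the MAX/MIN definitions above) and that S is handled on a 0-255 scale:

```python
import colorsys

def saturation_pair(r: int, g: int, b: int, delta: int = 5):
    # Return color data A = (H, S - delta, V) and B = (H, S + delta, V),
    # converted back to RGB. colorsys works on a 0..1 scale, so the
    # 0-255 fluctuation width is rescaled.
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    s_a = max(0.0, s - delta / 255)
    s_b = min(1.0, s + delta / 255)
    def to_rgb(hh, ss, vv):
        return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(hh, ss, vv))
    return to_rgb(h, s_a, v), to_rgb(h, s_b, v)

print(saturation_pair(120, 140, 160))   # the two colors differ only slightly
```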
  • the CPU 11 generates a first frame image and a second frame image by changing the pixel value of the background image 40 at a position overlapping the code information 50.
  • The CPU 11 generates the code image 60 (moving image) in which the invisible code is embedded by alternately switching the generated first frame image and second frame image at intervals of less than 0.03 [seconds] (S206).
  • In other words, the CPU 11 generates the code image 60 (moving image) in which the invisible code is embedded by alternately switching the generated first frame image and second frame image at a frequency higher than 30 [Hz].
  • As described above, the CPU 11 generates the code image 60 in which the code information 50 is embedded in an invisible state in the filtered background image 40. Specifically, the CPU 11 generates the code image 60 by varying the pixel values of the background image 40 at positions overlapping the code information 50 in a predetermined direction around their original values. This makes it possible to provide a link or the like to the public outdoors while preserving the design, and to reduce the risk that camera shake on the photographing terminal makes the invisible code difficult to read.
  • In the present embodiment, RGB pixel values and HSV-model pixel values are used, but the present invention is not limited to this. For example, pixel values in the HSL color space or the L*a*b* color space may be used.
  • FIG. 6 is a diagram for explaining a link when the mobile terminal device according to the first embodiment displays, in live view (a display updated as images are acquired), a moving image in which an invisible code shown on digital signage is embedded.
  • In general, a code reader is designed to change the screen according to the data held by a code as soon as the code is recognized. For example, when a QR code reader recognizes a scanned QR code, it performs a screen transition according to the held content (for example, URL information) and stored data.
  • For the invisible code, an image that alternates at a predetermined time interval is read by the camera 24, and the code invisibly embedded across a plurality of frames is extracted by difference calculation. This extraction is performed by the CPU 21, which processes the image data; a sketch follows below.
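A minimal sketch of such a difference calculation, assuming two registered live-view frames as NumPy arrays; the threshold is an illustrative noise floor, not a value from the patent:

```python
import numpy as np

def extract_code_mask(frame_a: np.ndarray, frame_b: np.ndarray,
                      threshold: int = 3) -> np.ndarray:
    # Code pixels flip by a small amount between the two frames while the
    # rest of the scene is (ideally) static, so the absolute difference
    # highlights the code modules.
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return diff.max(axis=2) > threshold     # boolean mask of code modules

# The mask would then go to an ordinary 2-D code decoder (position
# detection markers, posture correction, data dots).
```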
  • In contrast, the mobile terminal device 20 of the present embodiment, which includes the camera 24 and the display 25, does not immediately transition the screen to the content stored in the code when the code is recognized.
  • FIGS. 6 to 9 are image diagrams for explaining the operation when the mobile terminal device 20 captures an image including the display D with the live view function.
  • a region C hatched in gray in the display D indicates a position where an invisible code is embedded.
  • a region C hatched in gray in the display 25 indicates a position corresponding to a region hatched in gray in the display D.
  • FIG. 6 is an image diagram for explaining a state in which an image in which an invisible code is embedded is presented on the entire display D of the digital signage.
  • the mobile terminal device 20 shoots a part of a building including digital signage in the live view mode.
  • the lower part of FIG. 6 shows an image displayed on the display 25 of the mobile terminal device 20 in the live view state in the situation as shown in the upper part.
  • In this state, the mobile terminal device 20 is capturing an image that includes the display D in which the invisible code is embedded, but it does not interpret the invisible code and perform processing such as a screen transition. Only when the user performs a selection operation, such as touching the part of the live-view display 25 corresponding to the area where the invisible code is embedded, does the mobile terminal device 20 perform the action instructed by the invisible code.
  • For example, URL information for guiding the user to a product purchase page may be embedded as an invisible code in an image in which a product advertisement is displayed visibly on the digital signage.
  • A smartphone has been described here as an example, but the screen transition processing may also be performed when a selection is made through an alternative selection unit such as a ring-type device, gaze recognition, a glove-type device, or a brain interface. Further, when the camera 24 that acquires the image and the display 25 are separate, the devices may cooperate to perform the same operation.
  • FIGS. 7A and 7B are image diagrams for explaining states in which the posture and zoom state of the camera 24 differ.
  • In FIG. 7A, an image including the entire display D of the digital signage is captured.
  • In FIG. 7B, only a part of the display D is captured, with the display D and the camera 24 not directly facing each other.
  • The invisible code may be embedded in a part of the display D, or it may be embedded at a position that overlaps an article (product) for which the user is to be guided to the purchase page.
  • In such a case, a mark such as "." may be added to indicate that the position can be selected, or a leader line may be displayed to show that it is possible to link to an advertisement page or a purchase page.
  • FIG. 8 is an image diagram for explaining a form in which the invisible code embedded in the digital signage moves.
  • the area C (the area where the invisible code is embedded) hatched in gray in the display D moves as indicated by a solid arrow.
  • The hatched area C is moved, as indicated by the solid arrow, in accordance with the movement of the product image that a person can recognize. This prevents a transition to the purchase page unless the user intentionally selects the product itself.
  • FIG. 9 is an image diagram for explaining a state in which there are a plurality of digital signage displays, each having an area in which an invisible code is embedded.
  • The digital signage displays include a first area C1 and a second area C2, hatched in gray. These areas are moved, as indicated by the solid arrows, in accordance with the content displayed on each digital signage.
  • When the user selects the area C1, processing corresponding to the data held by the invisible code embedded in the area C1 is performed. Likewise, when the user selects the area C2, processing corresponding to the data held by the invisible code embedded in the area C2 is performed.
  • Although FIG. 9 illustrates the case where a selectable area is arranged on each display, a plurality of selectable areas may be arranged on a single display.
  • FIG. 10 is a flowchart for explaining the operation of the mobile terminal device 20.
  • The mobile terminal device 20 has a live view function and acquires external image information from the camera 24 (S301).
  • The CPU 21 takes the image data acquired from the camera 24 and realizes the live view function by displaying it on the display 25.
  • The CPU 21, serving as a recognition unit that recognizes codes, analyzes the image data acquired from the camera 24 and determines (recognizes) whether an invisible code is embedded in it (S302).
  • When an invisible code is recognized, the CPU 21 identifies the position where the invisible code is embedded and embeds the data held by the invisible code as a link at the identified position (S303). Thereafter, the CPU 21 acquires, from the display 25 equipped with a touch panel or the like, touch information indicating that the user has touched the live-view display (S304). Specifically, when the user touches the position where the invisible code identified in S303 is embedded, the CPU 21 performs control based on the link embedded at the touched position. As described above, even if an invisible code is embedded in the image shown on the display D, the CPU 21 does not change the screen without a user selection operation.
  • The CPU 21 acquires touch information (user selection information) indicating that the area C in which the invisible code is embedded has been touched, and executes the process of S306 when the user selects the area C, that is, touches the embedded link (S305: Yes). When the area C is not selected (touched) by the user, the CPU 21 repeats the processing from S301 to S304 (S305: No).
  • the CPU 21 transmits a request to the Web server or the like via the network in order to display the embedded link (S306).
  • In the above description, the CPU 21 decodes the code in S303, but this operation may instead be performed immediately before S306. The overall S301-S306 flow is sketched below.
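The S301-S306 loop might look like the following sketch; the camera, touch, decoding, and link-opening callables are hypothetical stand-ins, not APIs named by the patent:

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class CodeResult:
    region: Tuple[int, int, int, int]   # x, y, w, h of the code on screen
    data: str                           # decoded contents, e.g. a URL

def hit(region: Tuple[int, int, int, int], touch: Tuple[int, int]) -> bool:
    x, y, w, h = region
    tx, ty = touch
    return x <= tx < x + w and y <= ty < y + h

def live_view_loop(grab_frames: Callable, poll_touch: Callable,
                   find_code: Callable[..., Optional[CodeResult]],
                   open_link: Callable[[str], None]) -> None:
    while True:
        frames = grab_frames()            # S301: live-view frames
        result = find_code(frames)        # S302/S303: detect, decode, keep region
        touch = poll_touch()              # S304: None when nothing was touched
        if touch is None or result is None or not hit(result.region, touch):
            continue                      # S305: No -> keep updating live view
        open_link(result.data)            # S305: Yes -> S306: request the link
        break
```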
  • As described above, additional information such as advertisement-related information and URLs can be attached to a moving image displayed on a facility such as digital signage.
  • an interactive user experience can be provided by presenting content displayed on a public display without a touch panel function so as to be selectable on a smartphone or the like.
  • FIG. 11 is a diagram for explaining a subway route map displayed on digital signage installed in a train car.
  • In FIG. 11A, route information is presented in many languages to accommodate the recent increase in foreign visitors to Japan.
  • However, the display area of the digital signage is limited, so there is a limit to how many languages can be supported. For this reason, a scheme is adopted in which a route map written in Japanese, as in FIG. 11B, and a route map written in English, as in FIG. 11C, are switched at regular intervals.
  • With this method of switching the display language every predetermined period, the user must wait until the route map appears in a language that he or she can read, which is inconvenient.
  • Therefore, in the present embodiment, an invisible code serving as a pointer to information on the operation status and the route map is embedded in a part of the digital signage.
  • the area C where the invisible code is embedded may be a part of the display or the entire surface.
  • The user's own mobile terminal individually holds a language setting for the language that the user normally uses. Using this language setting, the CPU 21 performs control to display an overlay on the image acquired by the camera 24. The user can thereby view, on his or her own mobile terminal device 20, the route map written in a language that the user can read.
  • The operation of the mobile terminal device 20 in this embodiment is described with reference to the corresponding flowchart. Since S401 to S404 are similar to S301 to S304, respectively, their description is omitted.
  • When an invisible code is recognized, the CPU 21 analyzes the data held by the invisible code. The CPU 21 then downloads, via the network and without waiting for a user instruction, the route map information written in a language the user can read (S405). Thereafter, the CPU 21 switches the display language in accordance with an instruction such as a touch by the user. If language information identifying the user's language cannot be acquired, the CPU 21 performs control so that the display language is switched each time the route map is touched on the live-view screen (S407). A sketch of the language selection follows.
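A sketch of the S405 language selection, under the assumption that the invisible code holds a base URL and that the server accepts a `lang` query parameter (both are illustrative, not specified by the patent):

```python
import locale
import urllib.request

def fetch_route_map(pointer_url: str) -> bytes:
    # Pick the language from the terminal's own locale setting, then
    # download the matching route map to overlay on the live view.
    lang_tag = locale.getdefaultlocale()[0] or "en_US"
    lang = lang_tag.split("_")[0]                   # e.g. "ja", "en"
    with urllib.request.urlopen(f"{pointer_url}?lang={lang}") as resp:
        return resp.read()
```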
  • According to the system of the present embodiment, it is possible to provide a means for individual users to acquire and convert information provided to a large number of people through digital signage, based on their own setting information.
  • In other words, the content displayed on a public display can be customized and provided to the user by a simple method.
  • Moreover, because an invisible code is not visually recognized by users who are not interested in the information it holds, the existing appearance of the surroundings can be maintained.
  • The content server 10, the mobile terminal device 20, and the server 30 in the above embodiments each contain an internal computer system.
  • Each process of the content server 10, the mobile terminal device 20, and the server 30 described above is stored in the form of a program on a computer-readable recording medium, and the various processes described above are performed by a computer reading and executing the stored program.
  • the computer-readable recording medium means a magnetic disk, a magneto-optical disk, a CD-ROM, a DVD-ROM, a semiconductor memory, or the like.
  • the computer program may be distributed to the computer via a communication line, and the computer that has received the distribution may execute the program.
  • Part of the processing executed on a specific computer may be shared with other computers; for this reason, the entire information processing system may be regarded as an information processing apparatus. Even if an element that performs the characteristic information processing is intentionally installed outside the country, if the region that refers to the processing result is in Japan, the information processing is considered to have been executed domestically.

Abstract

An information processing device provided with: a recognition unit for recognizing a code included in an image; a position acquisition unit for acquiring a user-selected position in the image; and a control unit for performing control so as to execute the content of the code when the code is located at the in-image position acquired by the position acquisition unit.

Description

Information processing apparatus, display method, program, and computer-readable storage medium
The present invention relates to an information processing apparatus, a display method, a program, and a computer-readable storage medium.
In recent years, two-dimensional codes that can hold more information than barcodes have become known. Two-dimensional codes are often formed in black and white, but in order to hold more information, two-dimensional codes whose data areas are formed in colors such as red, green, and blue are also used (see, for example, Patent Document 1). However, when a two-dimensional code is embedded in an image, the two-dimensional code becomes conspicuous and the design of the image is impaired.
Patent Document 1: JP 2004-234318 A
On the other hand, outdoor advertising such as digital signage is increasing. Much of this digital signage is large and installed in distant locations that are difficult for users to touch directly (for example, building rooftops and walls). In addition, when digital signage is installed in vehicles or stores, it is often installed for the purpose of providing information to a large number of users.
Displaying a two-dimensional code or the like in the content shown on such digital signage detracts from the design. Moreover, the content displayed is often intended for a large audience. In such cases, even if an individual user becomes interested in the content, it has been difficult to induce a search action on the user's own terminal device.
Therefore, an object of the present invention is to make it possible to operate content provided to a large number of people on a terminal device owned by each user.
An information processing apparatus comprising: an image acquisition unit that acquires an image; a recognition unit that recognizes a code included in the image; a position acquisition unit that acquires a position in the image selected by a user; and a control unit that performs control so that the content held by the code is executed when the code is located at the position in the image acquired by the position acquisition unit.
Further features and aspects of the present invention will become apparent from the detailed description of embodiments set forth below, taken in conjunction with the accompanying drawings.
According to the present invention, content provided to a large number of people can be operated on a terminal device owned by each user.
Brief description of the drawings:
FIG. 1 is a diagram showing the overall configuration of the information processing system according to the first embodiment.
FIG. 2 is a block diagram showing the content server 10, the mobile terminal device 20, and the server 30 according to the first embodiment.
FIG. 3 is a diagram showing an example of the code image 60 in which the invisible code according to the first embodiment is embedded.
FIG. 4 is a flowchart showing the display processing of the code image 60 according to the first embodiment.
FIG. 5 is a flowchart showing the generation processing of the code image 60 according to the first embodiment.
FIGS. 6 to 9 are image diagrams for explaining the operation of the system according to the first embodiment.
FIG. 10 is a flowchart for explaining the processing related to the live view of the mobile terminal device according to the first embodiment.
FIGS. 11A to 11C are image diagrams for explaining the multilingual guide map according to the second embodiment.
A further flowchart explains the processing related to switching the language display method on the mobile information terminal side according to the second embodiment.
Hereinafter, an information processing apparatus, a display method, a reading method, and a computer-readable non-transitory storage medium according to embodiments will be described with reference to the drawings.
(First embodiment)
<Overall configuration of information processing system>
FIG. 1 is a diagram showing the overall configuration of the information processing system according to the first embodiment. The information processing system includes the content server 10, the mobile terminal device 20, and the server 30, which are connected via the network N so as to be able to communicate with each other.
The network N includes, for example, part or all of a WAN (Wide Area Network), a LAN (Local Area Network), the Internet, provider equipment, wireless base stations, dedicated lines, and the like. The mobile terminal device 20 is, for example, a tablet computer or a smartphone. The mobile terminal device 20 communicates with the content server 10 and the server 30 via the network N by performing wireless communication W.
The content server 10 is connected to a display D that displays content and the like. Here, the display D is so-called digital signage. Digital signage is an electronic signboard for displaying advertisements and the like to the public on a relatively large display. The digital signage itself may include a storage device or the like that temporarily holds content, so the content server 10 and the display D together may be regarded as digital signage.
For convenience, a computer that requests information is called a client, and a computer that transmits information in response to such a request is called a server. In the present embodiment, the content server 10 and the server 30 function as servers, and the mobile terminal device 20 functions mainly as a client.
FIG. 2 is a block diagram showing the content server 10, the mobile terminal device 20, and the server 30 according to the first embodiment. The content server 10 includes a CPU (Central Processing Unit) 11, a memory 12, a bus 13, and a storage 14.
The CPU 11, serving as an information processing unit, and the memory 12, serving as an information holding unit, are connected to each other via the bus 13. The CPU 11 accesses information stored in the memory 12 via the bus 13 according to a program. The memory 12 is a RAM (Random Access Memory), but is not limited thereto; for example, it may be another type of memory such as a flash memory. The memory 12 temporarily stores information processed by the CPU 11.
The CPU 11 reads and writes various information to and from the storage 14. The storage 14 is, for example, an HDD (Hard Disk Drive), but is not limited to this. For example, the storage 14 may be an external device accessible by the content server 10, such as a flash memory, NAS (Network Attached Storage), or an external storage server.
The content server 10 may store information in the storage 14 instead of the memory 12. Alternatives to the memory and storage may also be used as the architecture of the content server 10 changes.
The mobile terminal device 20 includes a CPU 21 as a control unit, a memory 22, a bus 23, a camera 24, and a display 25 (see FIG. 6 and elsewhere). The display 25 has a function, such as a touch panel, of detecting and acquiring a selection made by the user. The CPU 21, serving as a position acquisition unit, acquires information on the position (coordinates) selected by the user from the touch panel or the like. The CPU 21 accesses information stored in the memory 22 via the bus 23 according to a program. The memory 22 is a RAM, but is not limited to this; for example, it may be another type of memory such as a flash memory. The memory 22 temporarily stores information processed by the CPU 21.
The image read by the camera 24 can be processed by the CPU 21 via the bus 23. The camera 24 can also display its image on the display 25 to present the captured information to the user even when no recording instruction (shutter) has been given (the so-called live view function). As described in detail later, the CPU 21 performs image processing on the image acquired by the camera 24 and displays the result on the display 25. Where the term image is used, it may be interpreted as including moving images as well as still images. An apparatus in which a camera and a display are configured as one device may be interpreted as the information processing apparatus, and a system configured by linking a device having a camera function with one or more devices having a display function may likewise be interpreted as the information processing apparatus.
The server 30 includes a CPU 31, a memory 32, a bus 33, and a storage 34. The CPU 31 accesses information stored in the memory 32 via the bus 33 according to a program. The memory 32 is a RAM, but is not limited to this; for example, it may be another type of memory such as a flash memory. The memory 32 temporarily stores information processed by the CPU 31.
The CPU 31 reads and writes various information to and from the storage 34. The storage 34 is, for example, an HDD, but is not limited to this. For example, the storage 34 may be an external device accessible by the server 30, such as a NAS or an external storage server.
As with the mobile terminal device 20, a camera that captures images may be connected to the content server 10 and the server 30. The server 30 may also include a display, like the content server 10 and the mobile terminal device 20.
<Invisible code>
In general, blinking of an image at intervals of less than 0.03 [seconds] is difficult to recognize with the naked eye; in other words, blinking at a frequency exceeding 33 [Hz] is hard to see, even for a conspicuous image with a large difference in brightness. Moreover, if the difference in brightness is small, the blinking remains difficult to recognize even at a frequency slightly below 33 [Hz]. Such a state that is difficult to recognize with the naked eye is referred to as "invisible" in the present embodiment.
FIG. 3 is a diagram showing an example of the code image 60 in which the invisible code according to the first embodiment is embedded. A method for generating the code image 60 is described below with reference to FIG. 3. The code image 60 is an image in which the first frame image 60a and the second frame image 60b are alternately switched. The code image 60 is generated by embedding the code information 50 in the background image 40 in an invisible state. Hereinafter, the code information 50 embedded in the background image 40 in the invisible state is referred to as the "invisible code".
The background image 40 is an arbitrary image designated by the creator (content provider) who generates the content, but is not limited thereto. For example, the background image 40 may be an image displayed as the operation screen of application software. The code information 50 is described taking a QR code (registered trademark) as an example, but is not limited thereto. For example, the code information 50 may be another two-dimensional code such as DataMatrix (registered trademark) or VeriCode (registered trademark), or a one-dimensional code such as a barcode. The code information 50 is information in which data designated by the user (for example, the URL of a WEB page, messenger software account information, or social networking service account information) is encoded.
The code information 50 includes position detection markers 51, a posture correction marker 52, and data dots 53. The position detection markers 51 are used to detect the position of the code information 50 and are arranged at three corners of the code information 50. The posture correction marker 52 is used to correct the posture of the code information 50 read by the camera 24 serving as a reading unit. The data dots 53 express the information on the data held by the code.
Because the camera 24 serving as the image acquisition unit is mounted in the mobile terminal device 20, images captured by the camera 24 may suffer from camera shake. For this reason, the data dots 53 include an error correction code such as a Reed-Solomon code. To ensure robustness, an error correction code with an error correction rate of 15% or more is employed; a rate of 30% or more is preferable.
The CPU 21 of the mobile terminal device 20 acquires the background image 40 and the code information 50 and, based on them, generates the first frame image 60a and the second frame image 60b. For example, the CPU 21 generates the first frame image 60a by reducing the brightness of the pixels of the background image 40 at positions overlapping the code information 50, and generates the second frame image 60b by increasing the brightness of those pixels. The CPU 21 generates, as the code image 60, an image in which the first frame image 60a and the second frame image 60b are alternately switched at intervals of less than 0.03 [seconds].
As described above, blinking at intervals of less than 0.03 [seconds] is difficult to recognize with the naked eye. Therefore, when the first frame image 60a and the second frame image 60b are alternately switched at intervals of less than 0.03 [seconds], the naked eye perceives the average of their brightness as the brightness of the code image 60. As a result, even though the code information 50 is embedded in the background image 40, the impairment of the design of the background image 40 is reduced.
If the code information 50 were simply superimposed on the background image 40, it would be conspicuous and the design of the image would be impaired. Therefore, the CPU 21 alternately switches the first frame image 60a and the second frame image 60b at intervals of less than 0.03 [seconds]. As a result, the code information 50 becomes invisible to the naked eye: even though the two frame images are displayed alternately, it is difficult to distinguish the code image 60 from the background image 40. In digital signage, which is often installed outdoors, light-ID methods such as LinkRay (registered trademark), which blink an LED light source at high speed, are easily washed out by sunlight. For this reason, it is preferable to employ an invisible code for digital signage. In FIG. 3, a black-and-white image is used for ease of explanation, so the example varies the brightness; in practice, however, a color image is assumed as the background image, and a high recognition rate can be ensured even outdoors by varying values in the hue direction instead of the brightness.
<Code image display processing>
 FIG. 4 is a flowchart showing the display processing of the code image 60 according to the first embodiment. The processing for displaying the code image 60 on the display D is described in detail with reference to FIG. 4. Note that an image (moving image) with an embedded invisible code may also be generated by a system other than the content server 10, stored on the server 10, and distributed to the display D.
 The CPU 11 of the content server 10 acquires the data to be stored as an invisible code displayed on the digital signage (S101). This data may be data specified by the user: for example, the URL of a web server where a product shown on the digital signage can be purchased, a messenger-software account, or an SNS account. The CPU 11 generates the code information 50 from the acquired data (S102).
 Next, the CPU 11, acting as the acquisition unit, acquires the background image 40, for example from the memory 12 or from another server 30 (S103).
 If the background image 40 contains many high-frequency components, extracting the invisible code becomes difficult. The CPU 11 therefore calculates the frequency characteristics of the background image 40 by applying a Fourier transform to it (S104).
 The CPU 11 then removes the spatial-frequency components of the background image 40 that are at or above a predetermined value (S105). The CPU 11 may retain in the memory 12 both the background image 40 after the high-frequency components are removed and the background image 40 before their removal.
 For example, the CPU 11 removes the components of the background image 40 whose spatial frequency is at or above the predetermined value by filtering the image. Specifically, the CPU 11 filters the pixel at coordinates (i, j) of the background image 40 according to equation (1) below, where X_{i,j} denotes the pixel value before filtering and Y_{i,j} the pixel value after filtering.
(1) [Formula image not reproduced: equation (1) is the filter that computes the post-filter pixel value Y_{i,j} at coordinates (i, j) from the pre-filter pixel values X in the surrounding neighborhood.]
 If the background image 40 contains components with high spatial frequency, the captured image is more susceptible to camera shake when the background image 40 is photographed, and once the captured image is blurred, extracting the invisible code becomes difficult. The CPU 11 therefore filters the background image 40 to remove components whose spatial frequency is at or above the predetermined value, which reduces the influence of camera shake.
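 A minimal sketch of S104-S105, assuming a grayscale background and a radial cutoff expressed as a fraction of the Nyquist frequency (the specification says only "a predetermined value or more"):

```python
# Estimate the spatial-frequency content of the background image with an
# FFT and zero out components above the cutoff radius.
import numpy as np

def remove_high_frequencies(gray: np.ndarray, cutoff: float = 0.25) -> np.ndarray:
    """gray: 2-D float image. cutoff: radius as a fraction of Nyquist."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray))          # S104: frequency characteristics
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    spectrum[radius > cutoff] = 0                           # S105: remove high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
```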
 The CPU 11 then performs the code image generation processing described later, using the code information 50 and the background image 40 (S106). In this processing, the CPU 11, acting as the generation unit, generates the code image 60 by embedding the code information 50 in the background image 40 in an invisible state. As detailed below, the CPU 11 embeds the code information 50 invisibly by varying the pixel values of the background image 40 at positions overlapping the code information 50. The moving image with the embedded invisible code generated in this way is transmitted from the content server 10 to the display D for reproduction, so that digital signage installed outdoors or elsewhere can present the moving image with the embedded invisible code to the public.
 FIG. 5 is a flowchart showing the code image generation processing according to the first embodiment, detailing the processing of S106 in FIG. 4.
 First, the CPU 11 analyzes the background image 40 and the code information 50 (S201). For example, the CPU 11 acquires the color data of the background image 40 at the positions where the code information 50 will be placed, together with the RGB values of that color data, and determines from those RGB values whether the code information 50 is suitable for embedding in the background image 40. For example, if a majority of the color data at those positions is unsuitable for embedding the code information 50, the CPU 11 determines that the code information 50 is not suitable for embedding in the background image 40. Given the intended use on digital signage, high positional accuracy for embedding the code is not necessarily required, so the embedding position may be shifted in view of the background image 40.
 Color data unsuitable for embedding the code information 50 means pixel values that leave no room to vary the color. Specifically, when pixel values are expressed in 8-bit RGB, values such as (255, 255, 255) and (0, 0, 0) are unsuitable. When the saturation direction is chosen as the direction of pixel-value variation, pixel values whose saturation is at its limit are likewise unsuitable.
 When a pixel value is near the limit of its representable range, it is difficult to secure a sufficient variation width. When pixel values are expressed in 8-bit RGB (256 levels), the code information 50 can be suitably embedded in the background image 40 if the sum of the differences between each RGB value and its nearer limit (0 or 255) is about 10 or more.
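 The suitability test of S201 might be sketched as follows, assuming the "sum of differences of about 10" guideline above and the majority rule described earlier:

```python
# A pixel is treated as unsuitable when its RGB values leave no headroom
# to modulate; the code is treated as unsuitable for the background when
# a majority of the covered pixels fail the test.
import numpy as np

def pixel_has_headroom(rgb: np.ndarray, min_total_margin: int = 10) -> bool:
    # Margin of each channel to its nearer limit (0 or 255), summed.
    margin = np.minimum(rgb, 255 - rgb).sum()
    return margin >= min_total_margin

def code_fits_background(background: np.ndarray, code_mask: np.ndarray) -> bool:
    covered = background[code_mask]                       # (N, 3) covered pixels
    ok = np.array([pixel_has_headroom(px) for px in covered])
    return ok.mean() >= 0.5                               # majority must pass
```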
 When the CPU 11 determines that the code information 50 is not suitable for embedding in the background image 40, it regenerates the code information 50 in view of the background image 40 (S202). For example, the CPU 11 uses the spare bits of the error correction code of the code information 50 to rearrange the data dots 53 while preserving the information they hold, thereby regenerating the code information 50. The CPU 11 repeats the generation until the code information 50 can be suitably embedded in the background image 40.
 Next, the CPU 11 determines the direction and width of the pixel-value variation for the code information 50 (S203). In this embodiment, the CPU 11 varies each pixel value toward two colors that are mutually complementary on the hue circle, centered on the pixel value of the code information 50.
 The CPU 11 acquires the pixel values of the background image 40 at the positions overlapping the code information 50 (S204), and performs image processing on the acquired values under the conditions determined in S203 (S205).
 For example, the CPU 11 converts the pixel values (R, G, B) acquired in S204 into pixel values (H, S, V) in the HSV model, where H denotes hue, S denotes saturation, and V denotes value (lightness). The CPU 11 performs the conversion according to equations (2) to (4) below, where MAX denotes the largest and MIN the smallest of the R, G, and B values.
(2)  H = 60 × (G − B) / (MAX − MIN)          (when MAX = R)
     H = 60 × (B − R) / (MAX − MIN) + 120    (when MAX = G)
     H = 60 × (R − G) / (MAX − MIN) + 240    (when MAX = B)
(3)  S = (MAX − MIN) / MAX
(4)  V = MAX
[Formula images not reproduced; the equations above are the standard RGB-to-HSV conversion implied by the surrounding description.]
 From the converted pixel value (H, S, V), the CPU 11 generates color data A, in which 5 is subtracted from the saturation component, and color data B, in which 5 is added to it. That is, color data A is (H, S−5, V) and color data B is (H, S+5, V). The variation width of the pixel value is set to 5 in this embodiment, but is not limited to this.
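 A sketch of this saturation-direction variation using Python's standard colorsys module follows. colorsys scales H, S, and V to [0, 1], so the ±5 step is assumed to be on the 0-255 scale of the 8-bit example and is rescaled accordingly; the input pixel is an illustrative value.

```python
# Generate color data A = (H, S-5, V) and color data B = (H, S+5, V)
# for one pixel, then convert both back to 8-bit RGB.
import colorsys

def saturation_pair(r: int, g: int, b: int, step: int = 5):
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    d = step / 255                                         # assumed scaling
    color_a = colorsys.hsv_to_rgb(h, max(0.0, s - d), v)   # color data A
    color_b = colorsys.hsv_to_rgb(h, min(1.0, s + d), v)   # color data B
    to8 = lambda c: tuple(round(x * 255) for x in c)
    return to8(color_a), to8(color_b)

print(saturation_pair(120, 180, 90))
```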
 The CPU 11 generates the first frame image and the second frame image by varying the pixel values of the background image 40 at the positions overlapping the code information 50, and generates the code image 60 (a moving image) with the embedded invisible code by alternating the two generated frame images at intervals shorter than 0.03 seconds (S206). In other words, the CPU 11 generates the code image 60 (a moving image) with the embedded invisible code by alternating the generated first and second frame images at a frequency higher than 30 Hz.
 As described above, the CPU 11 generates the code image 60 in which the code information 50 is invisibly embedded in the filtered background image 40. Specifically, the CPU 11 generates the code image 60 by varying the pixel values of the background image 40 at positions overlapping the code information 50 in a predetermined direction around those pixel values. This makes it possible to provide links and the like to the public outdoors while preserving the design of the image, and at the same time reduces the difficulty the capturing mobile terminal would otherwise have in reading the invisible code due to camera shake.
 Although RGB pixel values and HSV-model pixel values are used in this embodiment, the invention is not limited to these; for example, pixel values in the HSL color space or in L*a*b* (JIS Z 8730) may be used instead.
<Code image reading processing>
 FIG. 6 illustrates the link behavior when the mobile terminal device according to the first embodiment captures and simultaneously displays, in live view (acquisition and display as needed), a moving image with an embedded invisible code shown on digital signage. Conventionally, a code reader is designed to transition the screen according to the data held by a code as soon as the code is recognized; for example, a QR code reader, upon recognizing a scanned QR code, presents the held content (e.g., URL information) or performs a screen transition according to the stored data. For an invisible code, the camera 24 captures the images that alternate at the predetermined interval, and the code invisibly embedded across multiple frames is extracted by a difference calculation. Whether the variation is in the lightness direction, the saturation direction, or the hue direction, it can be extracted by the same difference calculation. This extraction is performed by the CPU 21, which processes the image data.
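 The difference calculation might look like the following sketch, where two consecutive live-view frames are subtracted and the embedded dots appear wherever the per-pixel change exceeds a threshold; the threshold value is an assumption.

```python
# Extract the positions of the invisible code from two consecutive frames.
# Because the two frame images modulate the masked pixels in opposite
# directions, the absolute difference is large there regardless of whether
# the modulation was in lightness, saturation, or hue.
import numpy as np

def extract_code_mask(frame_t: np.ndarray, frame_t1: np.ndarray,
                      threshold: float = 3.0) -> np.ndarray:
    """frame_t, frame_t1: H x W x 3 arrays of consecutive frames."""
    diff = np.abs(frame_t.astype(float) - frame_t1.astype(float)).sum(axis=2)
    return diff >= threshold   # True where the invisible code is embedded
```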
 By contrast, the mobile terminal device 20 of this embodiment, which includes the camera 24 and the display 25, does not immediately transition the screen to a URL or the like stored in an invisible code at the moment the code is recognized, even if the code is embedded in digital signage. FIGS. 6 to 9 illustrate the operation when the mobile terminal device 20 captures an image including the display D using the live view function. In each figure, the gray-hatched region C in the display D indicates the position where the invisible code is embedded; likewise, the gray-hatched region C in the display 25 indicates the position corresponding to the gray-hatched region in the display D.
 FIG. 6 illustrates a state in which an image with an embedded invisible code is presented across the entire display D of the digital signage. In FIG. 6, the mobile terminal device 20 in live view mode captures a portion of a building that includes the digital signage. The lower part of FIG. 6 shows the image displayed on the display 25 of the mobile terminal device 20 in the live view state in that situation.
 Although the mobile terminal device 20 is capturing an image that includes the display D with the embedded invisible code, it does not interpret the code and perform processing such as a screen transition. Only when the user performs a selection operation, such as a touch, on the portion of the live-view display 25 corresponding to the region where the invisible code is embedded does the mobile terminal device 20 carry out the operation the invisible code indicates. As an example, URL information leading to a product purchase page may be embedded as an invisible code in an image on the digital signage in which a product advertisement is visibly displayed. In this case, if the user were led to the purchase page merely by pointing a smartphone in the live view state at the streetscape, reactions the user did not intend would result. By performing the screen transition only when the user intentionally selects the spot where the invisible code is embedded, transitions to purchase pages for products the user has no interest in can be avoided. Although a smartphone is used as the example here, the screen transition may instead be triggered by selection means such as a ring-shaped device, gaze recognition, a glove-shaped device, or a brain interface in place of a touch operation. Furthermore, when the camera 24 that acquires the image and the display 25 are separate devices, the devices may cooperate to perform the same operation.
 FIGS. 7A and 7B illustrate states in which the shooting attitude and zoom state of the camera 24 differ. In the state of FIG. 7A, an image including the entire display D of the digital signage is captured; in the state of FIG. 7B, only part of the display D is captured, with the display D and the camera 24 not directly facing each other. In the latter case, adopting random dots or the like as the embedded invisible code limits the amount of data that can be held, but allows the invisible code to be presented as selectable even when only part of it appears on the display 25. The invisible code may be embedded in only part of the display D, or at a position overlapping the article (product) to which the purchase page should lead. Specifically, to make a product with an embedded invisible code easier to recognize, a mark may be displayed to indicate that it is selectable, or a leader line may be displayed to indicate that a link to an advertisement or purchase page is available.
 FIGS. 6, 7A, and 7B describe the operation when the invisible code is embedded across the entire display D. FIG. 8, by contrast, illustrates a form in which the invisible code embedded in the digital signage moves. In FIG. 8, the gray-hatched region C in the display D (the region with the embedded invisible code) moves as indicated by the solid arrow. For example, the hatched region C is moved to follow a humanly recognizable image of a product as it moves along the solid arrow. This ensures that no transition to the purchase page occurs unless the user intentionally selects the product itself.
 FIG. 9 illustrates a state in which multiple digital signage displays exist, each containing a region with an embedded invisible code. Each signage display has a gray-hatched first region C1 and second region C2, and these regions move as indicated by the solid arrows according to the content displayed on the signage. If the user selects C1, processing corresponding to the data held by the invisible code embedded in region C1 is performed; likewise, if the user selects C2, processing corresponding to the data held by the code embedded in region C2 is performed. Although FIG. 9 shows selectable regions placed on separate displays, multiple selectable regions may be placed on a single display.
 Next, the operation on the mobile terminal device 20 side is described with reference to FIG. 10, a flowchart of the operation of the mobile terminal device 20. The mobile terminal device 20 has a live view function and acquires external image information from the camera 24 (S301). The CPU 21 realizes the live view function by acquiring the image data from the camera 24 and displaying it on the display 25. At this time, the CPU 21, acting as the recognition unit that recognizes codes, analyzes the image data acquired from the camera 24 to determine (recognize) whether an invisible code is embedded in it (S302).
 If the analysis finds an invisible code in the image data, the CPU 21 identifies the position where the invisible code is embedded and embeds the data held by the code at that position as a link (S303). The CPU 21 then acquires, from the display 25 equipped with a touch panel or the like, touch information indicating that the user has touched the display 25 showing the live view (S304). Specifically, when the user touches the position identified in S303 where the invisible code is embedded, the CPU 21 performs control based on the link embedded at the touched position. As described above, even if an invisible code is embedded in the image shown on the display D, the CPU 21 does not transition the screen unless the user performs a selection operation.
 Accordingly, the CPU 21 acquires touch information (user selection information) indicating that the region C with the embedded invisible code has been touched, and executes the processing of S306 when the user selects region C, that is, touches the embedded link (S305: Yes). If the user does not select (touch) region C, the CPU 21 repeats the processing of S301 to S304 (S305: No).
 When the user touches the embedded link, the CPU 21 transmits a request over the network to a web server or the like in order to display the linked content (S306). In this embodiment the CPU 21 decodes the code in S303, but this operation may instead be performed immediately before S306. In this way, additional information such as advertising information or URLs can be attached to a moving image shown on equipment such as digital signage. In other words, an interactive user experience can be provided by presenting content shown on a public display without a touch panel function as selectable on a smartphone or the like.
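 The S301-S306 loop can be summarized in pseudocode-style Python as follows; every helper here (camera capture, code detection, touch polling, browser launch) is a hypothetical stand-in for the corresponding platform API, not an API named in the specification.

```python
# Touch-gated handling of an invisible code: the code is recognized and its
# payload attached to a screen region, but nothing executes until the user
# touches that region.
def live_view_loop(camera, display):
    links = []                                           # (region, url) pairs
    while True:
        frame = camera.capture()                         # S301: acquire image
        display.show(frame)                              # live view display
        region, payload = detect_invisible_code(frame)   # S302: (None, None) if absent
        if region is not None:
            links = [(region, decode(payload))]          # S303: embed link at position
        touch = display.poll_touch()                     # S304: user touch, or None
        if touch is None:
            continue                                     # S305: No -> keep looping
        for region, url in links:
            if region.contains(touch):                   # S305: Yes
                open_in_browser(url)                     # S306: request over network
                return
```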
(Second Embodiment)
 The first embodiment focused on application to terminals that display advertisements and the like. This embodiment describes application to transfer guidance in trains, traffic signs, guide maps, and the like, which require multilingual support. Parts that duplicate those described in the first embodiment are given the same reference signs and their description is omitted.
 This embodiment improves on the loss of convenience caused by multilingual text on a screen presenting highly public information, or by frequent switching of the display language.
 FIG. 11 illustrates a subway route map displayed by digital signage placed inside a train car. As shown in FIG. 11A, route information is presented in many languages to cope with the recent increase in foreign visitors to Japan. However, the display area of a digital signage screen is limited, and so is the number of languages it can accommodate. A common approach is therefore to alternate, at fixed intervals, between a route map written in Japanese as shown in FIG. 11B and one written in English as shown in FIG. 11C. With this approach, however, the user must wait until the map appears in a language they can read, which is inconvenient.
 Therefore, as shown in FIG. 11B, an invisible code serving as a pointer to operation-status and route-map information is embedded in part of the digital signage. The region C in which the invisible code is embedded may cover either part of the display or its entire surface.
 Each user's own mobile terminal holds a language setting for the language the user normally uses. Using this setting, the CPU 21 controls the terminal to overlay information on the image acquired by the camera 24, so that the user can view, on their own mobile terminal device 20, a route map written in a language they can read.
 The operation of the mobile terminal device 20 in this embodiment is described with reference to FIG. 12. S401 to S404 are similar to S301 to S304, respectively, and their description is omitted. The CPU 21 analyzes the data held by the invisible code and downloads the route map information written in the user's readable language over the network, without waiting for a user instruction (S405). Thereafter, the CPU 21 switches the display language in response to instructions such as the user's touches. If language information readable by the user cannot be obtained, the CPU 21 switches the display language each time the route map is touched on the live view screen (S407).
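 A sketch of the S405-S407 behavior follows; the endpoint, query parameter, and language list are illustrative assumptions, not values from the specification.

```python
# Prefetch the route map in the user's configured language, and cycle
# languages on each touch when no readable language can be determined.
import locale
import urllib.request

LANGUAGES = ["ja", "en", "zh", "ko"]             # assumed available languages

def prefetch_route_map(base_url: str) -> bytes:
    lang = (locale.getlocale()[0] or "en")[:2]   # user's language setting
    if lang not in LANGUAGES:
        lang = LANGUAGES[0]
    with urllib.request.urlopen(f"{base_url}?lang={lang}") as resp:  # S405
        return resp.read()

def next_language(current: str) -> str:
    """S407: advance to the next display language on each touch."""
    return LANGUAGES[(LANGUAGES.index(current) + 1) % len(LANGUAGES)]
```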
 As described above, using the system of this embodiment provides a means for individual users to acquire and convert, based on their own setting information, information that is provided to many people through digital signage. In other words, content shown on a public display can be customized for each user by a simple method. Moreover, because an invisible code is used, it is not seen by users with no interest in the information it holds, so the existing scenery is preserved.
 The content server 10, the mobile terminal device 20, and the server 30 in the above embodiments each contain a computer system. The processing of the content server 10, the mobile terminal device 20, and the server 30 described above is stored in the form of a program on a computer-readable recording medium, and the various kinds of processing described above are performed by a computer reading and executing the stored program. Here, computer-readable recording media include magnetic disks, magneto-optical disks, CD-ROMs, DVD-ROMs, semiconductor memories, and the like. The computer program may also be distributed to a computer over a communication line, and the computer receiving the distribution may execute it.
 Part of the processing executed on a particular computer may also be shared by other computers, so the information processing system as a whole may be regarded as an information processing apparatus. Even if an element performing characteristic information processing is deliberately located abroad, the information processing is regarded as executed domestically if the region referring to the processing results is domestic.
 Preferred embodiments of the present invention have been described above, but the invention is not limited to these embodiments. Additions, omissions, substitutions, and other modifications to the configuration are possible without departing from the spirit of the invention. The invention is not limited by the foregoing description but only by the scope of the appended claims.
 10 Content server
 11 CPU
 12 Memory
 13 Bus
 14 Storage
 20 Mobile terminal device
 21 CPU
 22 Memory
 23 Bus
 24 Camera
 25 Display
 30 Server
 31 CPU
 32 Memory
 33 Bus
 34 Storage
 C Region
 D Display

Claims (9)

  1.  An information processing apparatus comprising:
      an image acquisition unit that acquires an image;
      a recognition unit that recognizes a code contained in the image;
      a position acquisition unit that acquires a position in the image selected by a user; and
      a control unit that, when the code is located at the position in the image acquired by the position acquisition unit, performs control so as to execute the content held by the code.
  2.  The information processing apparatus according to claim 1, further comprising a display unit that displays the image acquired by the image acquisition unit, wherein the display unit displays images the image acquisition unit acquires as needed.
  3.  The information processing apparatus according to claim 2, wherein the display unit is a touch panel and the position acquisition unit acquires the position selected by the user from the touch panel.
  4.  The information processing apparatus according to any one of claims 1 to 3, wherein, when the recognition unit recognizes a code, the control unit decodes the content held by the code before the user makes a selection.
  5.  The information processing apparatus according to any one of claims 1 to 4, wherein the code is an invisible code.
  6.  An information processing apparatus comprising:
      an image acquisition unit that acquires an image;
      a recognition unit that recognizes an invisible code contained in the image; and
      a display unit that displays the image acquired by the image acquisition unit,
      wherein the apparatus changes part of the image containing the invisible code based on the content held by the invisible code.
  7.  A display method executed by the information processing apparatus according to claim 2.
  8.  A program for causing an information processing apparatus to execute the display method according to claim 7.
  9.  A computer-readable storage medium storing the program according to claim 8.