WO2019189768A1 - Communication method, communication device, transmitter, and program - Google Patents

Communication method, communication device, transmitter, and program

Info

Publication number
WO2019189768A1
WO2019189768A1 (PCT/JP2019/014013)
Authority
WO
WIPO (PCT)
Prior art keywords
image
receiver
visible light
time
information
Prior art date
Application number
PCT/JP2019/014013
Other languages
French (fr)
Japanese (ja)
Inventor
Hideki Aoyama
Mitsuaki Oshima
Original Assignee
Panasonic Intellectual Property Corporation of America
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Intellectual Property Corporation of America
Priority to JP2020511094A priority Critical patent/JP7287950B2/en
Publication of WO2019189768A1 publication Critical patent/WO2019189768A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B10/00Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/11Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
    • H04B10/114Indoor or close-range type systems
    • H04B10/116Visible light communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Definitions

  • The present invention relates to a communication method, a communication device, a transmitter, a program, and the like.
  • Patent Literature 1 describes a technology for efficiently realizing communication between devices, within limited transmission capability, in an optical space transmission device that transmits information into free space using light, by performing communication using a plurality of monochromatic light sources as illumination light.
  • However, the conventional method is limited to cases where the device to which it is applied has a three-color light source, such as an illumination. Moreover, a receiver that receives the transmitted information cannot display an image useful for the user.
  • The present invention solves such problems and provides a communication method that enables communication between various devices.
  • A communication method according to one aspect of the present invention is a communication method using a terminal including an image sensor, and includes determining whether or not the terminal is capable of performing visible light communication.
  • When it is determined that the terminal is capable of performing visible light communication, the image sensor captures a subject whose luminance changes to obtain a decoding image, and first identification information transmitted from the subject is acquired from the striped pattern that appears in the decoding image.
  • When it is determined that the terminal is not capable of performing visible light communication, the image sensor captures the subject to obtain a captured image, edge detection is performed on the captured image to extract at least one contour, a predetermined specific region is identified from the at least one contour, and second identification information transmitted from the subject is acquired from the line pattern of the specific region. A minimal sketch of this dual-mode flow is given below.
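The following is a minimal sketch, in Python, of the dual-mode flow described above. The helpers terminal_supports_short_exposure, decode_bright_line_pattern, select_specific_region, and decode_line_pattern are hypothetical stand-ins, not functions defined by this disclosure; only the OpenCV calls (cv2.Canny, cv2.findContours) are real APIs.

```python
import cv2

def receive_identification(camera) -> bytes:
    if terminal_supports_short_exposure(camera):
        # Visible light communication: a short exposure yields a striped
        # "decoding image"; demodulate the stripes into the first ID.
        frame = camera.capture(exposure="short")
        return decode_bright_line_pattern(frame)
    # Fallback path: normal exposure, edge detection, contour extraction,
    # then read the second ID from the line pattern of the specific region.
    frame = camera.capture(exposure="normal")
    edges = cv2.Canny(frame, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    region = select_specific_region(contours)  # e.g. a large quadrangle
    return decode_line_pattern(frame, region)
```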
  • FIG. 1 is a diagram illustrating an example of an observation method of luminance of a light emitting unit in the first embodiment.
  • FIG. 2 is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in the first embodiment.
  • FIG. 3 is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in the first embodiment.
  • FIG. 4 is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in the first embodiment.
  • FIG. 5A is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 5B is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 5C is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 5D is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 5E is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 5F is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 5G is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 5H is a diagram illustrating an example of an observation method of luminance of a light emitting unit in Embodiment 1.
  • FIG. 6A is a flowchart of the information communication method in Embodiment 1.
  • FIG. 6B is a block diagram of the information communication apparatus according to Embodiment 1.
  • FIG. 7 is a diagram illustrating an example of a photographing operation of the receiver in the second embodiment.
  • FIG. 8 is a diagram illustrating another example of the photographing operation of the receiver in the second embodiment.
  • FIG. 9 is a diagram illustrating another example of the photographing operation of the receiver in the second embodiment.
  • FIG. 10 is a diagram illustrating an example of display operation of the receiver in Embodiment 2.
  • FIG. 11 is a diagram illustrating an example of display operation of the receiver in Embodiment 2.
  • FIG. 12 is a diagram illustrating an example of operation of a receiver in Embodiment 2.
  • FIG. 13 is a diagram illustrating another example of operation of a receiver in Embodiment 2.
  • FIG. 14 is a diagram illustrating another example of operation of a receiver in Embodiment 2.
  • FIG. 15 is a diagram illustrating another example of operation of a receiver in Embodiment 2.
  • FIG. 16 is a diagram illustrating another example of operation of a receiver in Embodiment 2.
  • FIG. 17 is a diagram illustrating another example of operation of a receiver in Embodiment 2.
  • FIG. 18A is a diagram illustrating an example of operation of a transmitter and a receiver in Embodiment 2.
  • FIG. 18B is a diagram illustrating an example of operation of a transmitter and a receiver in Embodiment 2.
  • FIG. 18C is a diagram illustrating an example of operation of a transmitter and a receiver in Embodiment 2.
  • FIG. 19 is a diagram for explaining an application example to the route guidance in the second embodiment.
  • FIG. 20 is a diagram for explaining an application example to usage log accumulation and analysis in the second embodiment.
  • FIG. 21 is a diagram illustrating an example of application of the transmitter and the receiver in the second embodiment.
  • FIG. 22 is a diagram illustrating an example of application of a transmitter and a receiver in Embodiment 2.
  • FIG. 23 is a diagram illustrating an example of an application according to the third embodiment.
  • FIG. 24 is a diagram illustrating an example of an application according to the third embodiment.
  • FIG. 25 is a diagram illustrating an example of a transmission signal and an example of a voice synchronization method in the third embodiment.
  • FIG. 26 is a diagram illustrating an example of a transmission signal in the third embodiment.
  • FIG. 27 is a diagram illustrating an example of a process flow of the receiver in Embodiment 3.
  • FIG. 28 is a diagram illustrating an example of a user interface of the receiver in the third embodiment.
  • FIG. 29 is a diagram illustrating an example of a process flow of the receiver in Embodiment 3.
  • FIG. 30 is a diagram illustrating another example of the processing flow of the receiver in Embodiment 3.
  • FIG. 31A is a diagram for explaining a specific method of synchronized playback in the third embodiment.
  • FIG. 31B is a block diagram illustrating a configuration of a playback device (receiver) that performs synchronized playback in the third embodiment.
  • FIG. 31C is a flowchart illustrating a processing operation of a playback device (receiver) that performs synchronized playback in the third embodiment.
  • FIG. 32 is a diagram for explaining preparations for synchronized playback in the third embodiment.
  • FIG. 33 is a diagram illustrating an example of application of a receiver in Embodiment 3.
  • FIG. 34A is a front view of a receiver held by a holder in the third embodiment.
  • FIG. 34B is a rear view of the receiver held by the holder in the third embodiment.
  • FIG. 35 is a diagram for describing a use case of a receiver held by a holder in the third embodiment.
  • FIG. 36 is a flowchart showing the processing operation of the receiver held by the holder in the third embodiment.
  • FIG. 37 is a diagram illustrating an example of an image displayed by the receiver in Embodiment 3.
  • FIG. 38 is a diagram showing another example of the holder in the third embodiment.
  • FIG. 39A is a diagram illustrating an example of a visible light signal in Embodiment 3.
  • FIG. 39B is a diagram illustrating an example of a visible light signal in Embodiment 3.
  • FIG. 39C is a diagram illustrating an example of a visible light signal in Embodiment 3.
  • FIG. 39D is a diagram illustrating an example of a visible light signal in Embodiment 3.
  • FIG. 40 is a diagram illustrating a configuration of a visible light signal in the third embodiment.
  • FIG. 41 is a diagram illustrating an example in which the receiver in Embodiment 4 displays an AR image.
  • FIG. 42 is a diagram illustrating an example of a display system in Embodiment 4.
  • FIG. 43 is a diagram illustrating another example of the display system according to Embodiment 4.
  • FIG. 44 is a diagram illustrating another example of the display system according to Embodiment 4.
  • FIG. 45 is a flowchart illustrating an example of processing operations of a receiver in Embodiment 4.
  • FIG. 46 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
  • FIG. 47 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
  • FIG. 48 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
  • FIG. 49 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
  • FIG. 50 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
  • FIG. 51 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
  • FIG. 52 is a flowchart illustrating another example of the processing operation of the receiver in the fourth embodiment.
  • FIG. 53 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
  • FIG. 54 is a diagram illustrating a captured display image Ppre and a decoding image Pdec acquired by capturing by the receiver in the fourth embodiment.
  • FIG. 55 is a diagram illustrating an example of a captured display image Ppre displayed on the receiver in the fourth embodiment.
  • FIG. 56 is a flowchart illustrating another example of processing operations of a receiver in Embodiment 4.
  • FIG. 57 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
  • FIG. 58 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
  • FIG. 59 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
  • FIG. 60 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
  • FIG. 61 is a diagram showing an example of recognition information in the fourth embodiment.
  • FIG. 62 is a flowchart illustrating another example of processing operations of a receiver in Embodiment 4.
  • FIG. 63 is a diagram illustrating an example in which the receiver in Embodiment 4 identifies bright line pattern regions.
  • FIG. 64 is a diagram illustrating another example of the receiver in Embodiment 4.
  • FIG. 65 is a flowchart illustrating another example of the processing operation of the receiver in the fourth embodiment.
  • FIG. 66 is a diagram illustrating an example of a transmission system including a plurality of transmitters in Embodiment 4.
  • FIG. 67 is a diagram illustrating an example of a transmission system including a plurality of transmitters and receivers in Embodiment 4.
  • FIG. 68A is a flowchart illustrating an example of processing operations of a receiver in Embodiment 4.
  • FIG. 68B is a flowchart illustrating an example of processing operations of a receiver in Embodiment 4.
  • FIG. 69A is a flowchart illustrating a display method according to Embodiment 4.
  • FIG. 69B is a block diagram illustrating a structure of a display device in Embodiment 4.
  • FIG. 70 is a diagram illustrating an example in which the receiver in the first modification of the fourth embodiment displays an AR image.
  • FIG. 71 is a diagram illustrating another example in which the receiver in the first modification of the fourth embodiment displays an AR image.
  • FIG. 72 is a diagram illustrating another example in which the receiver in the first modification of the fourth embodiment displays an AR image.
  • FIG. 73 is a diagram illustrating another example in which the receiver in the first modification of the fourth embodiment displays an AR image.
  • FIG. 74 is a diagram illustrating another example of a receiver in Modification 1 of Embodiment 4.
  • FIG. 75 is a diagram illustrating another example in which the receiver in the first modification of the fourth embodiment displays an AR image.
  • FIG. 76 is a diagram illustrating another example in which the receiver in the first modification of the fourth embodiment displays an AR image.
  • FIG. 77 is a flowchart illustrating an example of processing operations of the receiver in the first modification of the fourth embodiment.
  • FIG. 78 is a diagram illustrating an example of a problem when an AR image assumed in the receiver in Embodiment 4 or the modification 1 thereof is displayed.
  • FIG. 79 is a diagram illustrating an example in which the receiver in the second modification of the fourth embodiment displays an AR image.
  • FIG. 80 is a flowchart illustrating an example of processing operations of the receiver in the second modification of the fourth embodiment.
  • FIG. 81 is a diagram illustrating another example in which the receiver in the second modification of the fourth embodiment displays an AR image.
  • FIG. 82 is a flowchart illustrating another example of the processing operation of the receiver in the second modification of the fourth embodiment.
  • FIG. 83 is a diagram illustrating another example in which the receiver in the second modification of the fourth embodiment displays an AR image.
  • FIG. 84 is a diagram illustrating another example in which the receiver in the second modification of the fourth embodiment displays an AR image.
  • FIG. 85 is a diagram illustrating another example in which the receiver in the second modification of the fourth embodiment displays an AR image.
  • FIG. 86 is a diagram illustrating another example in which the receiver in the second modification of the fourth embodiment displays an AR image.
  • FIG. 87A is a flowchart illustrating a display method according to one embodiment of the present invention.
  • FIG. 87B is a block diagram illustrating a structure of a display device according to one embodiment of the present invention.
  • FIG. 88 is a diagram illustrating an example of enlargement and movement of an AR image in the third modification of the fourth embodiment.
  • FIG. 89 is a diagram illustrating an example of expansion of an AR image in the third modification of the fourth embodiment.
  • FIG. 90 is a flowchart illustrating an example of a processing operation related to enlargement and movement of an AR image by a receiver according to the third modification of the fourth embodiment.
  • FIG. 91 is a diagram illustrating an example of superimposition of AR images in the third modification of the fourth embodiment.
  • FIG. 92 is a diagram illustrating an example of superimposition of AR images in the third modification of the fourth embodiment.
  • FIG. 93 is a diagram illustrating an example of superimposition of AR images in the third modification of the fourth embodiment.
  • FIG. 95A is a diagram illustrating an example of a captured display image obtained by imaging by the receiver in the third modification of the fourth embodiment.
  • FIG. 95B is a diagram illustrating an example of a menu screen displayed on the display of the receiver in the third modification of the fourth embodiment.
  • FIG. 96 is a flowchart illustrating an example of processing operations of the receiver and the server in the third modification of the fourth embodiment.
  • FIG. 97 is a diagram for explaining sound volume reproduced by the receiver in the third modification of the fourth embodiment.
  • FIG. 98 is a diagram illustrating a relationship between the distance from the receiver to the transmitter and the sound volume in the third modification of the fourth embodiment.
  • FIG. 99 is a diagram illustrating an example of superimposition of AR images by the receiver in the third modification of the fourth embodiment.
  • FIG. 100 is a diagram illustrating an example of superimposition of AR images by a receiver in the third modification of the fourth embodiment.
  • FIG. 101 is a diagram for describing an example of how to obtain a line scan time by a receiver in the third modification of the fourth embodiment.
  • FIG. 102 is a diagram for describing an example of how to obtain the line scan time by the receiver in the third modification of the fourth embodiment.
  • FIG. 103 is a flowchart illustrating an example of how to obtain the line scan time by the receiver in the third modification of the fourth embodiment.
  • FIG. 104 is a diagram illustrating an example of superimposition of AR images by the receiver in the third modification of the fourth embodiment.
  • FIG. 105 is a diagram illustrating an example of superimposition of AR images by the receiver in the third modification of the fourth embodiment.
  • FIG. 106 is a diagram illustrating an example of superimposition of AR images by the receiver in the third modification of the fourth embodiment.
  • FIG. 107 is a diagram illustrating an example of a decoding image acquired in accordance with the attitude of the receiver in the third modification of the fourth embodiment.
  • FIG. 108 is a diagram illustrating another example of the decoding image acquired in accordance with the attitude of the receiver in the third modification of the fourth embodiment.
  • FIG. 109 is a flowchart illustrating an example of processing operations of the receiver in Modification 3 of Embodiment 4.
  • FIG. 110 is a diagram illustrating an example of a camera lens switching process performed by a receiver according to the third modification of the fourth embodiment.
  • FIG. 111 is a diagram illustrating an example of camera switching processing by the receiver in the third modification of the fourth embodiment.
  • FIG. 112 is a flowchart illustrating an example of processing operations of the receiver and the server in the third modification of the fourth embodiment.
  • FIG. 113 is a diagram illustrating an example of superimposition of AR images by the receiver in the third modification of the fourth embodiment.
  • FIG. 114 is a sequence diagram illustrating processing operations of a system including a receiver, a microwave oven, a relay server, and an electronic settlement server in Modification 3 of Embodiment 4.
  • FIG. 115 is a sequence diagram illustrating processing operations of a system including a POS terminal, a server, a receiver, and a microwave oven according to the third modification of the fourth embodiment.
  • FIG. 116 is a diagram illustrating an example of indoor use in Modification 3 of Embodiment 4.
  • FIG. 117 is a diagram illustrating an example of an augmented reality object display according to the third modification of the fourth embodiment.
  • FIG. 118 is a diagram illustrating a configuration of a display system according to the fourth modification of the fourth embodiment.
  • FIG. 119 is a flowchart illustrating the processing operation of the display system in the fourth modification of the fourth embodiment.
  • FIG. 120 is a flowchart illustrating a recognition method according to an aspect of the present invention.
  • FIG. 121 is a diagram illustrating an example of an operation mode of a visible light signal according to Embodiment 5.
  • FIG. 122A is a flowchart illustrating a visible light signal generation method according to Embodiment 5.
  • FIG. 122B is a block diagram illustrating a configuration of the signal generation device according to Embodiment 5.
  • FIG. 123 is a diagram illustrating a format of an MPM MAC frame according to the sixth embodiment.
  • FIG. 124 is a flowchart illustrating a processing operation of the encoding device that generates an MPM MAC frame according to the sixth embodiment.
  • FIG. 125 is a flowchart showing a processing operation of the decoding apparatus for decoding the MPM MAC frame in the sixth embodiment.
  • FIG. 126 shows MAC PIB attributes in the sixth embodiment.
  • FIG. 127 is a diagram for explaining an MPM light control method according to the sixth embodiment.
  • FIG. 128 is a diagram showing attributes of the PHY PIB in the sixth embodiment.
  • FIG. 129 is a diagram for explaining MPM in the sixth embodiment.
  • FIG. 130 is a diagram illustrating a PLCP header subfield according to the sixth embodiment.
  • FIG. 131 is a diagram illustrating a PLCP center subfield according to the sixth embodiment.
  • FIG. 132 is a diagram illustrating a PLCP footer subfield according to the sixth embodiment.
  • FIG. 133 is a diagram illustrating a waveform of the PHY PWM mode in the MPM according to the sixth embodiment.
  • FIG. 134 is a diagram illustrating a PHY PPM mode waveform in the MPM according to the sixth embodiment.
  • FIG. 135 is a flowchart illustrating an example of the decoding method according to the sixth embodiment.
  • FIG. 136 is a flowchart illustrating an example of the encoding method according to the sixth embodiment.
  • FIG. 137 is a diagram illustrating an example in which the receiver in Embodiment 7 displays an AR image.
  • FIG. 138 is a diagram illustrating an example of a captured display image on which an AR image is superimposed, according to the seventh embodiment.
  • FIG. 139 is a diagram illustrating another example in which the receiver in Embodiment 7 displays an AR image.
  • FIG. 140 is a flowchart illustrating the operation of the receiver in the seventh embodiment.
  • FIG. 141 is a diagram for explaining operation of a transmitter in Embodiment 7.
  • FIG. 142 is a diagram for explaining another operation of the transmitter in Embodiment 7.
  • FIG. 143 is a diagram for describing another operation of the transmitter in the seventh embodiment.
  • FIG. 144 is a diagram illustrating a comparative example for describing easiness of receiving an optical ID in the seventh embodiment.
  • FIG. 145A is a flowchart illustrating an operation of the transmitter in the seventh embodiment.
  • FIG. 145B is a block diagram illustrating a configuration of a transmitter in Embodiment 7.
  • FIG. 146 is a diagram illustrating another example in which the receiver in Embodiment 7 displays an AR image.
  • FIG. 147 is a diagram for explaining an operation of the transmitter in the eighth embodiment.
  • FIG. 148A is a flowchart illustrating a transmission method according to the eighth embodiment.
  • FIG. 148B is a block diagram illustrating a configuration of a transmitter in Embodiment 8.
  • FIG. 149 is a diagram illustrating an example of a detailed configuration of a visible light signal in Embodiment 8.
  • FIG. 150 is a diagram illustrating another example of a detailed configuration of a visible light signal according to Embodiment 8.
  • FIG. 151 is a diagram illustrating another example of a detailed configuration of a visible light signal according to the eighth embodiment.
  • FIG. 152 is a diagram illustrating another example of a detailed configuration of a visible light signal according to the eighth embodiment.
  • FIG. 153 is a diagram illustrating a relationship between the sum of the variables y0 to y3, the total time length, and the effective time length in the eighth embodiment.
  • FIG. 154A is a flowchart illustrating a transmission method according to Embodiment 8.
  • FIG. 154B is a block diagram illustrating a configuration of a transmitter in Embodiment 8.
  • FIG. 155 is a diagram illustrating the structure of the display system according to the ninth embodiment.
  • FIG. 156 is a sequence diagram illustrating processing operations of the receiver and the server according to Embodiment 9.
  • FIG. 157 is a flowchart showing the processing operation of the server in the ninth embodiment.
  • FIG. 158 is a diagram illustrating an example of communication in the case where the transmitter and the receiver in Embodiment 9 are mounted on a vehicle, respectively.
  • FIG. 159 is a flowchart showing the processing operation of the vehicle in the ninth embodiment.
  • FIG. 160 is a diagram illustrating an example in which the receiver in Embodiment 9 displays an AR image.
  • FIG. 161 is a diagram illustrating another example in which the receiver in Embodiment 9 displays an AR image.
  • FIG. 162 is a diagram illustrating processing operation of a receiver in Embodiment 9.
  • FIG. 163 is a diagram illustrating an example of operation on a receiver in Embodiment 9.
  • FIG. 164 is a diagram illustrating an example of AR image displayed on the receiver in Embodiment 9.
  • FIG. 165 is a diagram illustrating an example of the AR image superimposed on the captured display image in the ninth embodiment.
  • FIG. 166 is a diagram illustrating an example of the AR image superimposed on the captured display image in Embodiment 9.
  • FIG. 167 is a diagram illustrating an example of a transmitter in Embodiment 9.
  • FIG. 168 is a diagram illustrating another example of a transmitter in Embodiment 9.
  • FIG. 169 is a diagram illustrating another example of a transmitter in Embodiment 9.
  • FIG. 170 is a diagram illustrating an example of a system using a receiver compatible with optical communication and a receiver not compatible with optical communication in Embodiment 9.
  • FIG. 171 is a flowchart illustrating processing operations of the receiver in Embodiment 9.
  • FIG. 173A is a flowchart illustrating a display method according to one embodiment of the present invention.
  • FIG. 173B is a block diagram illustrating a structure of a display device according to one embodiment of the present invention.
  • FIG. 174 is a diagram illustrating an example of an image drawn on a transmitter in Embodiment 10.
  • FIG. 175 is a diagram illustrating another example of an image drawn on a transmitter in Embodiment 10.
  • FIG. 176 is a diagram illustrating an example of a transmitter and a receiver in Embodiment 10.
  • FIG. 177 is a diagram for describing the fundamental frequency of a line pattern in the tenth embodiment.
  • FIG. 178A is a flowchart showing a processing operation of the encoding apparatus according to the tenth embodiment.
  • FIG. 178B is a diagram for describing a processing operation of the encoding device according to the tenth embodiment.
  • FIG. 179 is a flowchart illustrating processing operations of a receiver which is a decoding device according to Embodiment 10.
  • FIG. 180 is a flowchart illustrating processing operations of a receiver in Embodiment 10.
  • FIG. 181A is a diagram illustrating an example of a system configuration in Embodiment 10.
  • FIG. 181B is a diagram illustrating processing of the camera according to Embodiment 10.
  • FIG. 182 is a diagram illustrating another example of the configuration of the system according to the tenth embodiment.
  • FIG. 183 is a diagram illustrating another example of an image drawn on the transmitter in Embodiment 10.
  • FIG. 184 is a diagram illustrating an example of a format of a MAC frame constituting the frame ID in the tenth embodiment.
  • FIG. 185 is a diagram illustrating an example of a MAC header configuration in the tenth embodiment.
  • FIG. 186 is a diagram illustrating an example of a table for deriving the number of packet divisions according to the tenth embodiment.
  • FIG. 187 is a diagram illustrating PHY coding according to the tenth embodiment.
  • FIG. 188 is a diagram illustrating an example of a transmission image Im3 having a PHY symbol in Embodiment 10.
  • FIG. 189 is a diagram for explaining two PHY versions in the tenth embodiment.
  • FIG. 190 is a diagram for explaining the Gray code in the tenth embodiment.
  • FIG. 191 is a diagram illustrating an example of decoding processing by the receiver in Embodiment 10.
  • FIG. 192 is a diagram for describing a transmission image fraud detection method by the receiver in the tenth embodiment.
  • FIG. 193 is a flowchart illustrating an example of a decoding process including fraud detection of a transmission image by a receiver in the tenth embodiment.
  • FIG. 194A is a flowchart showing a display method according to a modification of the tenth embodiment.
  • FIG. 194B is a block diagram illustrating a structure of the display device according to the modification of the tenth embodiment.
  • FIG. 194C is a flowchart illustrating a communication method according to a modification of the tenth embodiment.
  • FIG. 194D is a block diagram showing a configuration of a communication apparatus according to this variation of the tenth embodiment.
  • FIG. 194E is a block diagram illustrating a configuration of the transmitter according to Embodiment 10 and its modifications.
  • FIG. 195 is a diagram illustrating an example of a configuration of a communication system including a server in the eleventh embodiment.
  • FIG. 196 is a flowchart illustrating a management method by the first server in the eleventh embodiment.
  • FIG. 197 is a diagram illustrating an illumination system in Embodiment 12.
  • FIG. 198 is a diagram illustrating an example of arrangement of illumination devices and a decoding image in Embodiment 12.
  • FIG. 199 is a diagram illustrating another example of arrangement of illumination devices and a decoding image in Embodiment 12.
  • FIG. 200 is a diagram for describing position estimation using the illumination device in Embodiment 12.
  • FIG. 201 is a flowchart illustrating processing operation of a receiver in Embodiment 12.
  • FIG. 202 is a diagram illustrating an example of a communication system in Embodiment 12.
  • FIG. 203 is a diagram for describing self-position estimation processing by a receiver in Embodiment 12.
  • FIG. 204 is a flowchart illustrating self-position estimation processing by a receiver in Embodiment 12.
  • FIG. 205 is a flowchart illustrating an outline of receiver self-position estimation processing according to the twelfth embodiment.
  • FIG. 206 is a diagram illustrating the relationship between radio wave IDs and optical IDs in the twelfth embodiment.
  • FIG. 207 is a diagram for describing an example of imaging by a receiver in Embodiment 12.
  • FIG. 208 is a diagram for describing another example of imaging by a receiver in Embodiment 12.
  • FIG. 209 is a diagram for describing a camera used by a receiver in Embodiment 12.
  • FIG. 210 is a flowchart illustrating an example of processing in which the receiver in Embodiment 12 changes the visible light signal of the transmitter.
  • FIG. 211 is a flowchart illustrating another example of processing in which the receiver in Embodiment 12 changes the visible light signal of the transmitter.
  • FIG. 212 is a diagram for describing navigation by a receiver in Embodiment 13.
  • FIG. 213 is a flowchart illustrating an example of self-position estimation by the receiver in Embodiment 13.
  • FIG. 214 is a diagram for describing a visible light signal received by the receiver in Embodiment 13.
  • FIG. 215 is a flowchart illustrating another example of self-position estimation by the receiver in the thirteenth embodiment.
  • FIG. 216 is a flowchart illustrating an example of determination of reflected light by the receiver in Embodiment 13.
  • FIG. 217 is a flowchart illustrating an example of navigation by a receiver in Embodiment 13.
  • FIG. 218 is a diagram illustrating an example of the transmitter 100 configured as a projector in Embodiment 13.
  • FIG. 219 is a flowchart illustrating another example of self-position estimation by the receiver in Embodiment 13.
  • FIG. 220 is a flowchart illustrating an example of processing performed by a transmitter in the thirteenth embodiment.
  • FIG. 221 is a flowchart illustrating another example of navigation by a receiver in Embodiment 13.
  • FIG. 222 is a flowchart illustrating an example of processing by a receiver in Embodiment 13.
  • FIG. 223 is a diagram illustrating an example of a screen displayed on the display of the receiver in Embodiment 13.
  • FIG. 224 is a diagram illustrating an example of character display by the receiver in Embodiment 13.
  • FIG. 225 is a diagram illustrating another example of a screen displayed on the display of the receiver in Embodiment 13.
  • FIG. 226 is a diagram showing a system configuration for performing navigation to a meeting place in the thirteenth embodiment.
  • FIG. 227 is a diagram illustrating another example of a screen displayed on the display of the receiver in Embodiment 13.
  • FIG. 228 is a diagram showing the inside of the concert hall.
  • FIG. 229 is a flowchart illustrating an example of a communication method according to the first aspect of the present invention.
  • A communication method according to one aspect of the present invention is a communication method using a terminal including an image sensor, and includes determining whether or not the terminal is capable of performing visible light communication.
  • When it is determined that the terminal is capable of performing visible light communication, the image sensor captures a subject whose luminance changes to obtain a decoding image, and first identification information transmitted from the subject is acquired from the striped pattern that appears in the decoding image.
  • When it is determined that the terminal is not capable of performing visible light communication, a captured image is acquired by capturing the subject with the image sensor. Then, edge detection is performed on the captured image to extract at least one contour, a predetermined specific region is identified from the at least one contour, and second identification information transmitted from the subject is acquired from the line pattern of the specific region.
  • According to this, a terminal such as a receiver can acquire the first identification information or the second identification information from a subject such as a transmitter, regardless of whether or not the terminal is capable of visible light communication. That is, when the terminal can perform visible light communication, the terminal acquires, for example, a light ID from the subject as the first identification information. On the other hand, even if the terminal cannot perform visible light communication, the terminal can acquire, for example, an image ID or a frame ID from the subject as the second identification information.
  • For example, the transmission image illustrated in FIGS. 183 and 188 is captured as the subject, the area of the transmission image is identified as the specific region (that is, the selection area), and the second identification information is acquired from the line pattern of the transmission image. Therefore, even when visible light communication is impossible, the second identification information can be acquired appropriately.
  • the striped pattern is also called a bright line pattern or a bright line pattern region.
  • Further, an area having a quadrangular outline of at least a predetermined size, or an area having a rounded quadrangular outline of at least a predetermined size, may be identified as the specific region.
  • According to this, a quadrangular or rounded quadrangular region can be appropriately identified as the specific region; a sketch of such region selection follows.
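A minimal sketch, assuming OpenCV (cv2) and an 8-bit grayscale input image. MIN_AREA and the vertex-count tolerance for rounded corners are illustrative assumptions, not values from this disclosure.

```python
import cv2

MIN_AREA = 5000  # assumed minimum region size in pixels

def find_specific_region(gray_image):
    edges = cv2.Canny(gray_image, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best, best_area = None, 0.0
    for c in contours:
        area = cv2.contourArea(c)
        if area < MIN_AREA or area <= best_area:
            continue
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)
        # 4 vertices: a quadrangle; up to 8: tolerate rounded corners.
        if 4 <= len(approx) <= 8:
            best, best_area = approx, area
    return best  # None if no suitable contour was found
```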
  • Further, in the determination of visible light communication, when the terminal is identified as a terminal that can change its exposure time to a predetermined value or less, it may be determined that the terminal can perform visible light communication.
  • Conversely, when the terminal is identified as a terminal that cannot change its exposure time to the predetermined value or less, it may be determined that the terminal cannot perform visible light communication.
  • Further, when it is determined that the terminal can perform visible light communication, the exposure time of the image sensor is set to a first exposure time, and the decoding image is acquired by capturing the subject with the first exposure time.
  • When it is determined that the terminal cannot perform visible light communication, the exposure time of the image sensor is set to a second exposure time, and the captured image is acquired by capturing the subject with the second exposure time. The first exposure time may be shorter than the second exposure time.
  • the terminal can acquire the first identification information or the second identification information suitable for the terminal by properly using the first exposure time and the second exposure time.
  • Further, the subject may have a rectangular shape as viewed from the image sensor, the first identification information may be transmitted by a luminance change of the central region of the subject, and a barcode-like line pattern may be arranged around the subject.
  • In this case, when it is determined that the terminal can perform visible light communication, the decoding image including a bright line pattern composed of a plurality of bright lines corresponding to a plurality of exposure lines of the image sensor is acquired, and the first identification information is acquired by decoding the bright line pattern.
  • When it is determined that the terminal cannot perform visible light communication, the second identification information may be acquired from the line pattern appearing in the captured image.
  • the first identification information and the second identification information can be appropriately acquired from the subject whose central region changes in luminance.
  • the first identification information obtained from the decoding image and the second identification information obtained from the line pattern may be the same information.
  • the same information can be acquired from the subject both in a terminal capable of visible light communication and a terminal incapable of visible light communication.
  • Further, in displaying an image, a first moving image associated with the first identification information is displayed, and when an operation of sliding the first moving image is received, a second moving image associated with the first identification information may be displayed next to the first moving image.
  • For example, the first moving image and the second moving image are, respectively, the first AR image P46 and the second AR image P46c shown in FIG.
  • the first identification information is, for example, an optical ID as described above.
  • According to this, when an operation of sliding the first moving image is received, the second moving image associated with the first identification information is displayed next to the first moving image. Therefore, an image useful for the user can be displayed easily.
  • Further, as shown in FIG. 194A, since whether or not visible light communication is possible is determined in advance, useless processing that keeps trying to acquire a visible light signal until it fails can be omitted, and the processing burden can be reduced.
  • Further, in displaying an image, when an operation of sliding the first moving image in the horizontal direction is received, the second moving image may be displayed, and when an operation of sliding the first moving image in the vertical direction is received, a still image associated with the first identification information may be displayed.
  • According to this, the second moving image is displayed by sliding the first moving image in the horizontal direction, that is, by a swipe.
  • Also, a still image associated with the first identification information is displayed by sliding the first moving image in the vertical direction.
  • The still image is, for example, the AR image P47 shown in FIG. Therefore, a wide variety of images useful to the user can be displayed easily.
  • Further, in each of the first moving image and the second moving image, the object in the picture displayed first may be at the same position.
  • According to this, since the object displayed first is at the same position, the user can easily grasp that the first moving image and the second moving image are related to each other.
  • Further, each time an operation of sliding the displayed moving image is received, the next moving image associated with the first identification information may be displayed next to the displayed moving image.
  • In each of these moving images as well, the object in the picture displayed first may be at the same position.
  • Further, at least one of the first moving image and the second moving image may be formed such that the transparency at a position in the moving image increases as the position is closer to the end of the moving image.
  • Further, an image may be displayed outside the area where at least one of the first moving image and the second moving image is displayed.
  • Further, the image sensor may acquire a normal captured image by imaging with a first exposure time, and acquire the decoding image, which includes a bright line pattern region that is a region containing a pattern of a plurality of bright lines, by imaging with a second exposure time shorter than the first exposure time.
  • The first identification information is then acquired by decoding the decoding image.
  • In displaying at least one moving image of the first moving image and the second moving image, a reference region at the same position as the bright line pattern region in the decoding image may be identified in the normal captured image, a region of the normal captured image on which the moving image is to be superimposed may be recognized as a target region based on the reference region, and the moving image may be superimposed on the target region. For example, in displaying at least one moving image of the first moving image and the second moving image, the region above, below, left of, or right of the reference region in the normal captured image may be recognized as the target region.
  • According to this, the target region is recognized based on the reference region and the moving image is superimposed on the target region, so variations in where the moving image is displayed can be increased easily.
  • Further, the size of the moving image may be increased as the size of the bright line pattern region increases.
  • According to this, the size of the moving image changes according to the size of the bright line pattern region, so that, compared to the case where the size of the moving image is fixed, the moving image can be displayed such that the object it shows appears to actually exist.
  • Further, a transmitter according to one aspect of the present invention includes an illumination plate, a light source that emits light from the back side of the illumination plate, and a microcontroller that changes the luminance of the light source. The microcontroller transmits first identification information from the light source through the illumination plate by changing the luminance of the light source.
  • Around the front side of the illumination plate, a barcode-like line pattern in which second identification information is encoded is arranged, and the first identification information and the second identification information are the same information.
  • For example, the illumination plate has a rectangular shape.
  • According to this, the same information can be transmitted both to a terminal capable of performing visible light communication and to a terminal incapable of performing visible light communication. A sketch of such a dual encoding follows.
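The sketch below, in Python, illustrates under simplistic assumed encodings how a single identifier could drive both channels: a luminance on/off schedule for the light source behind the plate, and the stripe widths of the printed barcode-like line pattern. None of these functions are defined by this disclosure; they are illustrative stand-ins.

```python
def id_to_bits(identifier: int, width: int = 16) -> list[int]:
    # Most significant bit first; width is an assumed frame size.
    return [(identifier >> i) & 1 for i in range(width - 1, -1, -1)]

def luminance_schedule(identifier: int, slot_us: int = 100):
    """(duration_us, level) pairs driving the light source; 1 = HI, 0 = LO."""
    return [(slot_us, bit) for bit in id_to_bits(identifier)]

def barcode_stripes(identifier: int, narrow: int = 1, wide: int = 3):
    """Stripe widths for the printed line pattern: wide bar = 1, narrow = 0."""
    return [wide if bit else narrow for bit in id_to_bits(identifier)]

# The same ID feeds both channels, matching the requirement that the
# first and second identification information be the same information.
ID = 0xB6E5
print(luminance_schedule(ID)[:4])  # first four on/off slots
print(barcode_stripes(ID)[:4])     # first four stripe widths
```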
  • Note that these general or specific aspects may be implemented as an apparatus, a system, a method, an integrated circuit, a computer program, or a recording medium such as a computer-readable CD-ROM, or as any combination of an apparatus, a system, a method, an integrated circuit, a computer program, and a recording medium.
  • FIG. 1 shows an example in which the imaging elements arranged in one row are exposed simultaneously, and imaging is performed while the exposure start time is shifted row by row, in order of proximity.
  • Here, a line of imaging elements that are exposed simultaneously is called an exposure line,
  • and the line of pixels on the image corresponding to those imaging elements is called a bright line.
  • When this imaging method is used to capture a blinking light source over the entire surface of the image sensor, bright lines (lines of light and dark pixel values) along the exposure lines appear on the captured image, as shown in FIG.
  • By recognizing this bright line pattern, a change in the light source luminance at a speed exceeding the imaging frame rate can be estimated. Thereby, by transmitting a signal as a change in light source luminance, communication at a speed higher than the imaging frame rate can be performed.
  • When the light source expresses a signal with two kinds of luminance values, the lower luminance value is called low (LO) and the higher luminance value is called high (HI). Low may be a state in which the light source emits no light, or a state in which it emits light more weakly than high.
  • For example, when the imaging frame rate is 30 fps, a change in luminance with a period of 1.67 milliseconds can be recognized.
  • In order to receive such a signal, the exposure time is set shorter than, for example, 10 milliseconds. A minimal demodulation sketch follows.
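A minimal sketch, assuming the decoding image is a 2D numpy array of pixel intensities and that each image row corresponds to one exposure line sampling the light source at a slightly later time. The midpoint threshold is an assumption for illustration, not a normative detail.

```python
import numpy as np

def demodulate_bright_lines(decoding_image: np.ndarray) -> list[int]:
    rows = decoding_image.mean(axis=1)          # one sample per exposure line
    threshold = (rows.max() + rows.min()) / 2   # assumed midpoint threshold
    return [int(v > threshold) for v in rows]   # 1 = HI, 0 = LO per line
```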
  • FIG. 2 shows a case where the exposure of the next exposure line is started after the exposure of one exposure line is completed.
  • In this case, by transmitting information depending on whether or not each exposure line receives light at or above a certain level, information can be transmitted at a maximum of f·l bits per second, where f is the imaging frame rate and l is the number of exposure lines constituting one image.
  • If the light emission time of the light emitting unit is controlled in units of time shorter than the exposure time of each exposure line, more information can be transmitted.
  • When each exposure line can be distinguished in Elv exposure states, information can be transmitted at a maximum of f·l·Elv bits per second (a worked example follows).
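A worked example of these rates; the frame rate of 30 fps is from the text, while l and Elv are illustrative assumptions.

```python
f = 30      # frames per second (from the text)
l = 1000    # exposure lines per image (assumed)
Elv = 4     # distinguishable exposure states per line (assumed)
print(f * l)        # 30000 bits/s with binary per-line detection
print(f * l * Elv)  # 120000 bits/s with Elv states per line
```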
  • the basic period of transmission can be recognized by causing the light emitting unit to emit light at a timing slightly different from the exposure timing of each exposure line.
  • FIG. 4 shows a case where the exposure of the next exposure line is started before the exposure of one exposure line is completed. That is, the exposure times of adjacent exposure lines are partially overlapped in time.
  • Thereby, the S/N ratio can be improved.
  • However, it is also possible to adopt a configuration in which the exposure times of some adjacent exposure lines partially overlap in time while the exposure times of other exposure lines do not. By configuring some of the exposure lines not to overlap partially in time, the generation of intermediate colors due to overlapping exposure times on the imaging screen can be suppressed, and bright lines can be detected more appropriately.
  • The light emission state of the light emitting unit is recognized from the brightness of each exposure line, calculated over its exposure time.
  • When the brightness of each exposure line is determined as a binary value indicating whether the luminance is equal to or higher than a threshold, the state in which the light emitting unit emits no light must last at least as long as the exposure time of each line in order for the non-emitting state to be recognized.
  • FIG. 5A shows the influence of the difference in exposure time in the case where the exposure start times of the exposure lines are equal.
  • In 7500a, the exposure end time of one exposure line equals the exposure start time of the next exposure line;
  • in 7500b, the exposure time is longer than that.
  • With the configuration of 7500b, the exposure times of adjacent exposure lines partially overlap in time, so the exposure time can be lengthened. That is, more light is incident on the image sensor, and a brighter image can be obtained.
  • In addition, since the imaging sensitivity required to capture an image of the same brightness can be kept low, an image with less noise is obtained, and communication errors are suppressed.
  • FIG. 5B shows the influence of the difference in the exposure start times of the exposure lines in the case where the exposure times are equal.
  • In 7501a, the exposure end time of one exposure line equals the exposure start time of the next exposure line;
  • in 7501b, the exposure of the next exposure line starts before the exposure of the previous exposure line ends.
  • According to this, the sample interval (that is, the difference between exposure start times) becomes denser, so the change in light source luminance can be estimated more accurately, the error rate can be reduced, and changes in light source luminance over shorter times can be recognized.
  • In addition, by making the exposure times of adjacent exposure lines overlap, blinking of the light source shorter than the exposure time can be recognized using the difference in exposure amount between adjacent exposure lines.
  • The exposure time is set to satisfy: exposure time > (sample interval − pulse width),
  • where the pulse width is the width of the light pulse, that is, the period during which the luminance of the light source is High. Thereby, High luminance can be detected appropriately.
  • In the case where the exposure times of adjacent exposure lines partially overlap, the communication speed can be improved dramatically by using, for signal transmission, the bright line pattern generated by setting the exposure time shorter than in the normal shooting mode.
  • At this time, the exposure time needs to satisfy exposure time ≤ 1/(8×f), where f is the imaging frame rate. Blanking that occurs during shooting is at most half the length of one frame. That is, since the blanking time is at most half of the shooting time, the actual shooting time is 1/(2f) at the shortest.
  • Furthermore, since quaternary information must be received within the time 1/(2f), the exposure time needs to be shorter than 1/(2f×4) = 1/(8f). Since the normal frame rate is 60 frames per second or less, setting the exposure time to 1/480 seconds or less makes it possible to generate an appropriate bright line pattern in the image data and perform high-speed signal transmission (see the computation below).
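The bound can be checked numerically; f = 60 fps is the upper normal frame rate quoted above.

```python
f = 60                        # frames per second (normal upper frame rate)
max_exposure = 1 / (2 * f * 4)  # quaternary information within 1/(2f)
print(max_exposure, 1 / 480)    # both print 0.00208..., i.e. 1/480 s
```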
  • FIG. 5C shows the advantage of a short exposure time in the case where the exposure times of the exposure lines do not overlap.
  • When the exposure time is long, even if the light source changes luminance in binary as in 7502a, the captured image contains intermediate-color portions as in 7502e, and it becomes difficult to recognize the luminance change of the light source.
  • However, by providing an idle time (predetermined waiting time) tD2 during which no exposure is performed after the exposure of one exposure line ends and before the exposure of the next exposure line starts, the luminance change of the light source can be recognized easily. That is, a more appropriate bright line pattern, such as 7502f, can be detected.
  • The configuration with the idle time can be realized by making the exposure time tE smaller than the time difference tD between the exposure start times of the exposure lines.
  • When the exposure times of adjacent exposure lines partially overlap in the normal shooting mode, this can be realized by setting the exposure time shorter than in the normal shooting mode until a predetermined idle time occurs. Even when, in the normal shooting mode, the exposure end time of one exposure line equals the exposure start time of the next exposure line, it can be realized by setting the exposure time short until a predetermined non-exposure time occurs.
  • It is also possible to adopt a configuration in which the exposure times of some adjacent exposure lines partially overlap in time while the exposure times of other exposure lines do not. Moreover, it is not necessary for all exposure lines to have an idle time (predetermined waiting time) during which no exposure is performed between the end of exposure of one exposure line and the start of exposure of the next; a configuration in which some exposure lines partially overlap in time is also possible. With such configurations, the advantages of each configuration can be exploited.
  • Further, the signal may be read by the same readout method or circuit in both the normal shooting mode, in which shooting is performed at a normal frame rate (30 fps, 60 fps), and the visible light communication mode, in which shooting is performed with an exposure time of 1/480 second or less for visible light communication.
  • FIG. 5D shows the relationship between the minimum change time tS of the light source luminance, the exposure time tE, the time difference tD between the exposure start times of the exposure lines, and the captured image.
  • FIG. 5E shows the relationship between the transition time tT of the light source luminance and the time difference tD between the exposure start times of the exposure lines.
  • When tD is large compared with tT, fewer exposure lines take intermediate colors, and the light source luminance is easy to estimate.
  • When tD > tT, the number of consecutive intermediate-color exposure lines is two or less, which is desirable.
  • Since tT is 1 microsecond or less for an LED light source and approximately 5 microseconds for an organic EL light source, setting tD to 5 microseconds or more facilitates estimation of the light source luminance.
  • FIG. 5F shows the relationship between the period tHT of high frequency noise in the light source luminance and the exposure time tE.
  • When tE is large compared with tHT, the captured image is less affected by high frequency noise, and the light source luminance is easy to estimate.
  • When tE is an integer multiple of tHT, the influence of the high frequency noise is eliminated, and the light source luminance is easiest to estimate.
  • For estimating the light source luminance, it is desirable that tE > tHT.
  • The main cause of high frequency noise is the switching power supply circuit, and since tHT is 20 microseconds or less in many switching power supplies for lamps, setting tE to 20 microseconds or more facilitates estimation of the light source luminance.
  • Figure 5G is the case t HT is 20 microseconds, which is a graph showing the relationship between the size of the exposure time t E and the high frequency noise.
  • From the graph, it can be confirmed that efficiency is good when t_E is set to 15 microseconds or more, 35 microseconds or more, 54 microseconds or more, or 74 microseconds or more, values equal to those at which the amount of noise is at a maximum. From the viewpoint of reducing high-frequency noise, it is desirable that t_E be large. However, as described above, there is also the property that the smaller t_E is, the less likely intermediate-color portions are to occur, which makes the light source luminance easier to estimate.
  • Therefore, t_E may be set to 15 microseconds or more when the light source luminance change period is 15 to 35 microseconds, 35 microseconds or more when the period is 35 to 54 microseconds, 54 microseconds or more when the period is 54 to 74 microseconds, and 74 microseconds or more when the period is 74 microseconds or more.
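  • As a rough illustration of the rule above, the following Python sketch selects the minimum exposure time t_E for a given light source luminance change period (the helper name and interface are illustrative assumptions; only the microsecond thresholds come from the text):

      # Minimal sketch of the exposure-time selection rule described above.
      # Threshold values (microseconds) are from the text; the function name
      # and interface are illustrative assumptions.
      def minimum_exposure_time_us(luminance_change_period_us: float) -> float:
          for t in (74, 54, 35, 15):        # candidate minimums, largest first
              if luminance_change_period_us >= t:
                  return float(t)
          return 15.0                        # periods below 15 us: out of scope here

      print(minimum_exposure_time_us(40.0))  # -> 35.0
      print(minimum_exposure_time_us(80.0))  # -> 74.0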
  • FIG. 5H shows the relationship between the exposure time t_E and the recognition success rate. Since the exposure time t_E is meaningful only relative to the time during which the light source luminance is constant, the horizontal axis is the value (relative exposure time) obtained by dividing the period t_S of the light source luminance change by the exposure time t_E. From the graph, it can be seen that to bring the recognition success rate to almost 100%, the relative exposure time should be 1.2 or less. For example, when the transmission signal is 1 kHz, the exposure time may be set to about 0.83 milliseconds or less.
  • Similarly, it can be seen that the relative exposure time may be set to 1.25 or less, and to 1.4 or less when a recognition success rate of 80% or more suffices. The recognition success rate drops sharply when the relative exposure time is around 1.5 and becomes almost 0% at 1.6, so the relative exposure time should not be set to exceed 1.5. It can also be seen that after the recognition rate becomes 0 at 7507c, it rises again at 7507d, 7507e, and 7507f.
  • Therefore, when a long exposure time must be used, a relative exposure time of 1.9 to 2.2, 2.4 to 2.6, or 2.8 to 3.0 may be used, for example.
  • For example, these exposure times may be used as an intermediate mode.
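  • The numeric example above (a 1 kHz transmission signal and an exposure time of about 0.83 milliseconds or less for a near-100% recognition success rate) can be reproduced with the following sketch; the helper name is hypothetical, and the bound 1.2 is the relative-exposure-time limit quoted above:

      # Sketch of the relative-exposure-time bound described above.
      def max_exposure_time_s(signal_freq_hz: float, bound: float = 1.2) -> float:
          t_s = 1.0 / signal_freq_hz   # period t_S of the luminance change
          return t_s / bound           # exposure time satisfying the bound

      print(max_exposure_time_s(1000.0))  # -> ~0.00083 s, i.e. about 0.83 ms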
  • FIG. 6A is a flowchart of the information communication method in the present embodiment.
  • the information communication method in the present embodiment is an information communication method for acquiring information from a subject, and includes steps SK91 to SK93.
  • In this method, the exposure time of the image sensor is set so that, in an image obtained by photographing the subject with the image sensor, a plurality of bright lines corresponding to the exposure lines included in the image sensor are generated according to a change in luminance of the subject. The method includes: a first exposure time setting step SK91 of setting a first exposure time of the image sensor; a first image acquisition step SK92 of acquiring a bright line image including the plurality of bright lines by the image sensor photographing the subject, whose luminance changes, with the set first exposure time; and an information acquisition step SK93 of acquiring information by demodulating data specified by the pattern of the plurality of bright lines included in the acquired bright line image. In the first image acquisition step SK92, each of the plurality of exposure lines starts exposure at sequentially different times, and each exposure line starts exposure after a predetermined idle time has elapsed from the end of the exposure of the adjacent exposure line.
  • FIG. 6B is a block diagram of the information communication apparatus according to the present embodiment.
  • the information communication device K90 in the present embodiment is an information communication device that acquires information from a subject, and includes constituent elements K91 to K93.
  • The information communication apparatus K90 operates so that, in an image obtained by photographing the subject with an image sensor, a plurality of bright lines corresponding to the exposure lines included in the image sensor are generated according to a change in luminance of the subject. The apparatus includes: an exposure time setting unit K91 that sets the exposure time of the image sensor; an image acquisition unit K92 in which the image sensor acquires a bright line image including the plurality of bright lines by photographing the subject, whose luminance changes, with the set exposure time; and an information acquisition unit K93 that acquires information by demodulating data specified by the pattern of the plurality of bright lines included in the acquired bright line image. Each of the plurality of exposure lines starts exposure at sequentially different times, and each exposure line starts exposure after a predetermined idle time has elapsed from the end of the exposure of the adjacent exposure line.
  • In this way, since each of the plurality of exposure lines starts exposure after a predetermined idle time has elapsed from the end of the exposure of the adjacent exposure line, a change in luminance of the subject can be easily recognized. As a result, information can be appropriately acquired from the subject.
  • each component may be configured by dedicated hardware or may be realized by executing a software program suitable for each component.
  • Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
  • the program causes the computer to execute the information communication method shown by the flowchart of FIG. 6A.
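  • As a rough illustration of steps SK91 to SK93, the following Python sketch demodulates a bright line image row by row; the simple threshold scheme and function name are illustrative assumptions, not the demodulation method defined by this embodiment:

      import numpy as np

      # Each image row corresponds to one exposure line; because the lines are
      # exposed at sequentially different times, the light source's on/off
      # pattern appears as horizontal bright lines in the image.
      def demodulate_bright_lines(image: np.ndarray) -> list:
          row_means = image.mean(axis=1)     # one luminance value per exposure line
          threshold = row_means.mean()       # split bright rows from dark rows
          return [1 if m > threshold else 0 for m in row_means]

      # Example: a synthetic 8-line image whose upper half is bright.
      img = np.vstack([np.full((4, 16), 200), np.full((4, 16), 30)])
      print(demodulate_bright_lines(img))    # -> [1, 1, 1, 1, 0, 0, 0, 0]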
  • Hereinafter, shooting in the normal shooting mode is referred to as normal shooting, and shooting in the visible light communication mode is referred to as visible light shooting (visible light communication). Shooting in an intermediate mode may be used instead, and an intermediate image may be used instead of the composite image described later.
  • FIG. 7 is a diagram illustrating an example of the photographing operation of the receiver in this embodiment.
  • The receiver 8000 switches the shooting mode between normal shooting and visible light communication, alternating between the two. Then, the receiver 8000 combines the normal captured image and the visible light communication image to generate a composite image in which the bright line pattern, the subject, and its surroundings are clearly shown, and displays the composite image on the display.
  • This composite image is generated by superimposing the bright line pattern of the visible light communication image on the part of the normal captured image where the signal is being transmitted. The bright line pattern, the subject, and the surroundings shown in the composite image are clear, with a sharpness the user can easily recognize. By displaying such a composite image, the user can know more clearly from where the signal is being transmitted.
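  • A minimal sketch of this composition, assuming the signal-transmitting region has already been located (the coordinates and names are illustrative):

      import numpy as np

      # Paste the bright line pattern of the visible light communication image
      # over the region of the normal captured image that transmits the signal.
      def compose(normal_img, vlc_img, region):
          top, bottom, left, right = region  # signal-transmitting part
          out = normal_img.copy()
          out[top:bottom, left:right] = vlc_img[top:bottom, left:right]
          return out

      normal = np.full((8, 8), 128, dtype=np.uint8)   # normal captured image
      stripes = np.zeros((8, 8), dtype=np.uint8)
      stripes[::2, :] = 255                            # horizontal bright lines
      print(compose(normal, stripes, (2, 6, 2, 6)))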
  • FIG. 8 is a diagram illustrating another example of the photographing operation of the receiver in this embodiment.
  • the receiver 8000 includes a camera Ca1 and a camera Ca2.
  • The camera Ca1 performs normal shooting, while the camera Ca2 performs visible light shooting. The camera Ca1 thus acquires the normal captured image described above, and the camera Ca2 acquires the visible light communication image described above.
  • the receiver 8000 generates the above-described combined image by combining the normal captured image and the visible light communication image, and displays the combined image on the display.
  • FIG. 9 is a diagram illustrating another example of the photographing operation of the receiver in this embodiment.
  • the camera Ca1 switches the shooting mode to normal shooting, visible light communication, normal shooting, and so on.
  • the camera Ca2 continuously performs normal shooting.
  • The receiver 8000 estimates the distance from the receiver 8000 to the subject (hereinafter referred to as the subject distance) from the normal captured images acquired by these cameras, using stereo vision (the principle of triangulation).
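  • A minimal sketch of this triangulation, assuming rectified cameras with a known baseline and focal length (all parameter values are examples, not values from the embodiment):

      # Stereo vision: with two cameras a baseline b apart, a focal length f in
      # pixels, and a pixel disparity d between the two normal captured images,
      # the subject distance is z = f * b / d.
      def subject_distance_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
          if disparity_px <= 0:
              raise ValueError("subject too far away or not matched")
          return focal_px * baseline_m / disparity_px

      print(subject_distance_m(focal_px=1400.0, baseline_m=0.012, disparity_px=8.0))  # ~2.1 m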
  • FIG. 10 is a diagram illustrating an example of the display operation of the receiver in this embodiment.
  • the receiver 8000 switches the photographing mode to visible light communication, normal photographing, visible light communication, and so on.
  • the receiver 8000 activates an application program when performing visible light communication for the first time.
  • the receiver 8000 estimates its own position based on the signal received by visible light communication.
  • the receiver 8000 displays AR (Augmented Reality) information on the normal shot image acquired by the normal shooting.
  • This AR information is acquired based on the position estimated as described above.
  • The receiver 8000 estimates its own movement and change of direction based on the detection result of the 9-axis sensor and on motion detected in the normal captured image, and moves the display position of the AR information to match the estimated movement and change of direction. In this way, the AR information can be made to follow the subject image in the normal captured image.
  • When the receiver 8000 switches the shooting mode from normal shooting to visible light communication, the AR information is superimposed on the latest normal captured image acquired during the normal shooting immediately before the visible light communication.
  • the receiver 8000 displays a normal captured image on which the AR information is superimposed.
  • During visible light communication, the receiver 8000 estimates its own movement and change of direction based on the detection result of the 9-axis sensor, and moves the AR information and the normal captured image in accordance with the estimated movement and change of direction. In this way, the AR information can be made to follow the subject image in the normal captured image as the receiver 8000 moves, during visible light communication just as during normal shooting. The normal captured image can also be enlarged and reduced in accordance with the movement of the receiver 8000.
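  • A minimal sketch of making the AR information follow the subject image, assuming rotation estimates from the 9-axis sensor (the pixels-per-radian scale and names are illustrative assumptions):

      # Shift the display position of the AR information according to the
      # receiver's estimated rotation so it stays on the subject image.
      def update_ar_position(ar_xy, yaw_delta_rad, pitch_delta_rad, px_per_rad=800.0):
          x, y = ar_xy
          # Rotating the receiver right moves the subject (and its AR label) left.
          return (x - yaw_delta_rad * px_per_rad,
                  y + pitch_delta_rad * px_per_rad)

      print(update_ar_position((640.0, 360.0), yaw_delta_rad=0.05, pitch_delta_rad=0.0))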
  • FIG. 11 is a diagram showing an example of the display operation of the receiver in this embodiment.
  • As described above, the receiver 8000 may display the composite image on which the bright line pattern is superimposed.
  • Alternatively, instead of the bright line pattern, the receiver 8000 may generate a composite image by superimposing a signal explicit object, which is an image of a predetermined color for notifying that a signal is being transmitted, on the normal captured image, and display that composite image.
  • Alternatively, the receiver 8000 may display, as the composite image, a normal captured image in which each location transmitting a signal is indicated by a dotted frame and an identifier (for example, ID: 101, ID: 102, etc.).
  • Alternatively, instead of the bright line pattern, the receiver 8000 may generate a composite image by superimposing a signal identification object, which is an image of a predetermined color for notifying that a specific type of signal is being transmitted, on the normal captured image, and display that composite image.
  • In this case, the color of the signal identification object differs depending on the type of signal output from the transmitter. For example, a red signal identification object is superimposed when the signal output from the transmitter is position information, and a green signal identification object is superimposed when the signal is a coupon.
  • FIG. 12 is a diagram illustrating an example of the operation of the receiver in this embodiment.
  • the receiver 8000 may display a normal captured image and output a sound for notifying the user that the transmitter has been found.
  • The receiver 8000 may vary the type of sound output, the number of outputs, or the output time depending on the number of transmitters found, the type of received signal, or the type of information specified by the signal.
  • FIG. 13 is a diagram illustrating another example of the operation of the receiver in this embodiment.
  • the receiver 8000 when the user touches the bright line pattern displayed in the composite image, the receiver 8000 generates an information notification image based on a signal transmitted from the subject corresponding to the touched bright line pattern, and the information notification Display an image.
  • This information notification image indicates, for example, a store coupon or a place.
  • Note that the bright line pattern may be replaced by the signal explicit object, the signal identification object, the dotted frame, or the like described above. The same applies to the bright line patterns described below.
  • FIG. 14 is a diagram illustrating another example of the operation of the receiver in this embodiment.
  • the receiver 8000 when the user touches the bright line pattern displayed in the composite image, the receiver 8000 generates an information notification image based on a signal transmitted from the subject corresponding to the touched bright line pattern, and the information notification Display an image.
  • the information notification image indicates the current location of the receiver 8000 by a map or the like.
  • FIG. 15 is a diagram illustrating another example of the operation of the receiver in this embodiment.
  • When the user swipes on the receiver 8000 on which the composite image is displayed, the receiver 8000 displays a normal captured image with dotted frames and identifiers, like the one described above, and displays a list of information so as to follow the swipe operation. This list shows the information specified by the signals transmitted from the locations (transmitters) indicated by the identifiers.
  • the swipe may be, for example, an operation of moving a finger from outside the right side of the display in the receiver 8000.
  • the swipe may be an operation of moving a finger from the upper side, the lower side, or the left side of the display.
  • When any item of information in the list is selected, the receiver 8000 may display an information notification image (for example, an image showing a coupon) showing that information in more detail.
  • FIG. 16 is a diagram illustrating another example of the operation of the receiver in this embodiment.
  • the receiver 8000 displays the information notification image superimposed on the composite image so as to follow the swipe operation.
  • This information notification image shows the subject distance with an arrow in an easy-to-understand manner for the user.
  • the swipe may be, for example, an operation of moving a finger from outside the lower side of the display in the receiver 8000.
  • the swipe may be an operation of moving a finger from the left side of the display, from the upper side, or from the right side.
  • FIG. 17 is a diagram illustrating another example of the operation of the receiver in this embodiment.
  • the receiver 8000 images a transmitter, which is a signage indicating a plurality of stores, as a subject, and displays a normal captured image acquired by the imaging.
  • When the user taps the signage image of one of the stores shown in the displayed normal captured image, the receiver 8000 generates an information notification image based on the signal transmitted from the signage of that store, and displays the information notification image 8001.
  • This information notification image 8001 is an image showing, for example, a vacant seat situation in a store.
  • As described above, the information communication method in this embodiment is an information communication method for acquiring information from a subject, and includes: a first exposure time setting step of setting the exposure time of the image sensor so that, in an image obtained by photographing the subject with an image sensor, bright lines corresponding to the exposure lines included in the image sensor are generated according to a change in luminance of the subject; a bright line image acquisition step of acquiring a bright line image, which is an image including the bright lines, by the image sensor photographing the subject, whose luminance changes, with the set exposure time; an image display step of displaying a display image in which the subject and its surroundings are shown and from which the spatial position of the part where the bright lines appear can be identified, based on the bright line image; and an information acquisition step of acquiring transmission information by demodulating data specified by the pattern of the bright lines included in the acquired bright line image.
  • a composite image or an intermediate image as shown in FIGS. 7, 8, and 11 is displayed as a display image.
  • In the display image, the spatial position of the part where the bright lines appear is identified by the bright line pattern, the signal explicit object, the signal identification object, the dotted frame, or the like. Therefore, by viewing such a display image, the user can easily find a subject that is transmitting a signal by a luminance change.
  • For example, the information communication method further includes a second exposure time setting step of setting an exposure time longer than the first exposure time, a step in which the image sensor acquires a normal captured image by photographing the subject and its surroundings with the longer exposure time, and a composite step of generating a composite image by superimposing a signal object indicating the spatial position on the normal captured image; in the image display step, the composite image may be displayed as the display image.
  • That is, the signal object is a bright line pattern, a signal explicit object, a signal identification object, a dotted frame, or the like, and the composite image is displayed as the display image as shown in FIGS. 7, 8, and 11. Therefore, the user can more easily find the subject that is transmitting a signal by the luminance change.
  • For example, in the first exposure time setting step, the exposure time is set to 1/3000 second; in the bright line image acquisition step, a bright line image in which the surroundings of the subject are also shown is acquired; and in the image display step, the bright line image may be displayed as the display image.
  • That is, the bright line image is acquired and displayed as an intermediate image. This makes processing such as acquiring and combining a normal captured image and a visible light communication image unnecessary, so the processing can be simplified.
  • For example, the image sensor includes a first image sensor and a second image sensor; the normal captured image is acquired by the first image sensor capturing an image, and the bright line image is acquired by the second image sensor capturing an image simultaneously with the first image sensor.
  • That is, the normal captured image and the visible light communication image, which is a bright line image, are each acquired by a separate camera. Therefore, compared with acquiring the normal captured image and the visible light communication image with one camera, the images can be acquired earlier, and the processing can be sped up.
  • For example, when the part of the display image where the bright lines appear is designated by a user operation, the information communication method may further include an information presentation step of presenting presentation information based on the transmission information acquired from the bright line pattern of the designated part.
  • Examples of the user operation include a tap, a swipe, an operation of holding a fingertip over the part continuously for a predetermined time, and an operation of directing the line of sight at the part for a predetermined time or more.
  • the presentation information is displayed as an information notification image. Thereby, desired information can be presented to the user.
  • Further, the exposure time of the image sensor may be set so that bright lines corresponding to the exposure lines included in the image sensor are generated, in an image obtained by photographing the subject with the image sensor, according to a change in luminance of the subject; in the bright line image acquisition step, a bright line image including a plurality of parts where bright lines appear may be acquired by photographing a plurality of subjects during a period in which the image sensor is being moved; in the information acquisition step, the position of each subject may be acquired by demodulating, for each part, the data specified by the bright line pattern of that part; and the information communication method may further include a position estimation step of estimating the position of the image sensor based on the acquired positions of the plurality of subjects and the movement state of the image sensor.
  • Further, in the first exposure time setting step, the exposure time of the image sensor is set so that a bright line corresponding to an exposure line included in the image sensor is generated, in an image obtained by photographing the subject with the image sensor, according to a change in luminance of the subject, and an image including the bright line is acquired by the image sensor photographing the subject, whose luminance changes, with the set exposure time. In this way, the user can be authenticated, and convenience can be improved.
  • The information communication method may also be as follows: the exposure time of the image sensor is set so that bright lines corresponding to the exposure lines included in the image sensor are generated, in an image obtained by photographing the subject with the image sensor, according to a change in luminance of the subject; in the bright line image acquisition step, the bright line image is acquired by photographing a plurality of subjects reflected on a reflection surface; and in the information acquisition step, the bright lines are separated, according to their intensity in the bright line image, into the bright lines corresponding to each of the plurality of subjects, and for each subject, information may be acquired by demodulating the data specified by the bright line pattern corresponding to that subject.
  • Similarly, the exposure time of the image sensor may be set so that bright lines corresponding to the exposure lines included in the image sensor are generated according to a change in luminance of the subject; in the bright line image acquisition step, the bright line image is acquired by photographing the subject reflected on a reflection surface; and the information communication method may include a position estimation step of estimating the position of the subject based on the luminance distribution in the bright line image.
  • When the luminance change is switched between a luminance change according to a first pattern and a luminance change according to a second pattern, the switch may be made with a buffer time in between.
  • An information communication method for transmitting a signal by a luminance change may include a determination step of determining a luminance change pattern by modulating the signal to be transmitted, and a transmission step in which a light emitter changes in luminance according to the determined pattern to transmit the signal. The signal consists of a plurality of large blocks, each of which includes first data, a preamble for the first data, and a check signal for the first data; the first data consists of a plurality of small blocks, and each small block may include second data, a preamble for the second data, and a check signal for the second data.
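  • A minimal sketch of this block structure (the field sizes and the simple checksum are illustrative assumptions; the method above only fixes the preamble/data/check composition):

      from dataclasses import dataclass

      def check(data: bytes) -> int:
          return sum(data) & 0xFF            # stand-in for the check signal

      @dataclass
      class SmallBlock:                      # carries part of the first data
          preamble: bytes
          second_data: bytes

      @dataclass
      class LargeBlock:
          preamble: bytes
          small_blocks: list                 # together they form the first data

      blk = LargeBlock(b"\xAA", [SmallBlock(b"\x55", b"ab"), SmallBlock(b"\x55", b"cd")])
      first_data = b"".join(sb.second_data for sb in blk.small_blocks)
      print(check(first_data), [check(sb.second_data) for sb in blk.small_blocks])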
  • An information communication method for transmitting signals by luminance changes may include a determination step in which each of a plurality of transmitters modulates its signal to determine a luminance change pattern, and a transmission step in which, for each transmitter, a light emitter provided in the transmitter changes in luminance according to the determined pattern to transmit the signal. In the transmission step, signals having different frequencies or protocols may be transmitted.
  • In such a method, in the transmission step, one of the plurality of transmitters may receive a signal transmitted from another transmitter and transmit its own signal in a manner that does not interfere with the received signal.
  • FIG. 18A shows an example of a usage form of the present invention in a train platform.
  • the user holds the portable terminal over an electronic bulletin board or lighting, and obtains information displayed on the electronic bulletin board, train information of a station where the electronic bulletin board is installed, information on the premises of the station, or the like by visible light communication.
  • The information displayed on the electronic bulletin board may itself be transmitted to the portable terminal by visible light communication, or ID information corresponding to the electronic bulletin board may be transmitted to the portable terminal, which then obtains the displayed information by querying a server with the acquired ID information.
  • the server transmits the content displayed on the electronic bulletin board to the mobile terminal based on the ID information.
  • In addition, the train ticket information stored in the memory of the mobile terminal is compared with the information displayed on the electronic bulletin board, and if information corresponding to the user's ticket is displayed on the electronic bulletin board, an arrow indicating the way to the platform where the user's scheduled train arrives is displayed on the display. If a seat is designated, the route to that seat may be displayed.
  • When the arrow is displayed, it can be made easier to understand by drawing the arrow in the same color as the color used for the train route on the map or in the train guide information.
  • The user's reservation information (platform number, car number, departure time, seat number) can also be displayed. Displaying the reservation information together helps prevent misrecognition.
  • When the ticket information is stored in a server, the mobile terminal may query the server to obtain the ticket information for comparison, or the comparison between the ticket information and the information displayed on the electronic bulletin board may be performed on the server side; in either case, information related to the ticket information can be acquired.
  • The target route may also be estimated from the user's transfer search history, and that route may be displayed.
  • The contents displayed on the electronic bulletin board may also be acquired and compared. On the display, information related to the user may be highlighted relative to the electronic bulletin board display, or rewritten before being displayed.
  • an arrow for guiding to the boarding place on each route may be displayed.
  • an arrow for guiding to a store or restroom may be displayed on the display.
  • The user's behavior characteristics may be managed in advance by a server, and when the user often stops at a store or restroom in the station, an arrow guiding the user to the store or restroom may be displayed on the display.
  • Although FIG. 18A shows a train station example, a similar configuration can be used for display on an airplane or a bus.
  • As illustrated in (1) of FIG. 18A, a portable terminal such as a smartphone (that is, a receiver such as the receiver 200 described later) receives a visible light signal from the electronic bulletin board, as an optical ID or optical data, by imaging the electronic bulletin board.
  • Next, the mobile terminal performs self-position estimation. That is, the mobile terminal acquires the position on the map of the electronic bulletin board indicated directly or indirectly by the optical data. Then, the mobile terminal calculates its position relative to the electronic bulletin board based on, for example, its own attitude obtained by the 9-axis sensor and the position, shape, and size of the electronic bulletin board in the image obtained by imaging.
  • the portable terminal estimates its own position, which is the position of the portable terminal on the map, based on the position of the electronic bulletin board on the map and its relative position.
  • the mobile terminal searches for a route from the starting point that is the self-location to a destination indicated by, for example, ticket information, and starts navigation for guiding the user to the destination along the route.
  • the mobile terminal may transmit information indicating the starting point and the destination to the server, and obtain the above-described route searched by the server from the server. At this time, the mobile terminal may acquire a map including the route from the server.
  • During navigation, as shown in (2) to (4) of FIG. 18A, the mobile terminal repeatedly captures images with the camera and sequentially displays the resulting normal captured images in real time, superimposing a direction instruction image, such as an arrow indicating the user's way, on each normal captured image. The user moves according to the displayed direction instruction image while carrying the portable terminal. The mobile terminal then updates its self-position based on the movement of objects or feature points shown in these normal captured images. For example, the portable terminal detects the movement of objects or feature points in the normal captured images and estimates its movement direction and movement distance from that motion.
  • The portable terminal updates its current self-position based on the estimated movement direction and movement distance and on the self-position estimated in (1) of FIG. 18A. This self-position update may be performed every frame period of the normal captured image, or at intervals longer than the frame period. When the mobile terminal is on an underground floor or passage, it cannot acquire GPS data; in such cases, it estimates and updates its self-position based on the movement of the feature points of the normal captured images, without using GPS data.
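  • A minimal sketch of this update, assuming tracked feature points between consecutive frames and a known metres-per-pixel scale (both are illustrative assumptions):

      import numpy as np

      # Accumulate the terminal's motion, estimated from feature-point movement,
      # onto the last known self-position (e.g., from the electronic bulletin board).
      def update_self_position(self_pos, prev_pts, curr_pts, metres_per_px=0.002):
          flow = (curr_pts - prev_pts).mean(axis=0)  # mean feature motion (pixels)
          # The scene appears to move opposite to the terminal's own motion.
          return self_pos - flow * metres_per_px

      pos = np.array([10.0, 5.0])                    # map coordinates (metres)
      prev = np.array([[100.0, 200.0], [150.0, 220.0]])
      curr = prev + np.array([12.0, 0.0])            # scene shifted 12 px right
      print(update_self_position(pos, prev, curr))   # terminal moved slightly left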
  • On the route to the destination, the mobile terminal may also guide the user to an elevator.
  • In the elevator as well, the mobile terminal receives optical data and estimates its self-position, as shown in FIG. 18A. For example, even when the user is riding an elevator, the mobile terminal receives optical data transmitted from a transmitter (that is, a transmitter such as the transmitter 100 described later) installed as a lighting device or the like inside the elevator car.
  • The optical data directly or indirectly indicates the floor on which the elevator car is currently located.
  • By receiving the optical data, the mobile terminal can identify the floor on which it is currently located. If the current floor of the car is not directly indicated by the optical data, the mobile terminal transmits the information indicated by the optical data to the server and obtains from the server the floor information associated with that information. The mobile terminal then treats the floor indicated by the floor information as the floor where it is currently located, and this floor is handled as its self-position.
  • In other words, the terminal resets its self-position by replacing the self-position derived from the movement of the feature points of the normal captured images with the self-position derived using the optical data.
  • After the user gets off the elevator, if the destination has not yet been reached, the mobile terminal continues navigation with the same processing as (2) to (4) of FIG. 18A. During navigation, the portable terminal repeatedly checks whether GPS data can be acquired. When the user comes up from the underground floor or passage, the terminal determines that GPS data can be acquired and switches its self-position estimation method from the method based on the motion of feature points and the like to the method based on GPS data. Then, as shown in (9) of FIG. 18A, the mobile terminal continues navigation while estimating its self-position based on the GPS data until the user arrives at the destination. If the user enters the basement again, GPS data can no longer be acquired, so the terminal switches the self-position estimation method back from the GPS-based method to the method based on the movement of feature points and the like.
  • FIG. 18A will be described in detail.
  • In (1) of FIG. 18A, a receiver implemented as a smartphone or as a wearable device such as smart glasses receives the visible light signal (optical data) transmitted from the transmitter.
  • The transmitter is implemented, for example, as an illuminated signboard, a poster, or lighting that illuminates an image.
  • the receiver starts navigation to the destination according to the received optical data, information preset in the receiver, and a user instruction.
  • the receiver transmits optical data to the server and obtains navigation information associated with the data.
  • the navigation information includes the following first information to sixth information.
  • the first information is information indicating the position and shape of the transmitter.
  • the second information is information indicating a route to the destination.
  • the third information is information on another transmitter on and near the route to the destination.
  • The information on another transmitter indicates the optical data transmitted by that transmitter, the position and shape of that transmitter, and the position and shape of its reflected light.
  • the fourth information is position specifying information regarding the route and its vicinity.
  • The position specifying information is an image feature amount, or radio wave or sound wave information, for specifying a position.
  • the fifth information is information indicating the distance to the destination and the estimated arrival time.
  • the sixth information is part or all of the content information for performing the AR display.
  • the navigation information may be stored in advance in the receiver. Note that the above-described shape may include a size.
  • The receiver estimates its self-position using the relative position between the transmitter and the receiver, calculated from how the transmitter appears in the image obtained by imaging and from the sensor value of the acceleration sensor, together with the position information of the transmitter, and sets the estimated self-position as the starting point of navigation.
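  • A minimal sketch of this self-position estimation under a pinhole-camera assumption (the model and all parameter names are illustrative; the embodiment does not prescribe a specific formula):

      import numpy as np

      # The transmitter's apparent size gives the distance; the receiver then
      # lies at the transmitter's known map position minus the viewing offset.
      def estimate_self_position(tx_map_pos, tx_height_m, tx_height_px,
                                 focal_px, bearing_rad):
          distance = focal_px * tx_height_m / tx_height_px   # pinhole model
          offset = distance * np.array([np.sin(bearing_rad), np.cos(bearing_rad)])
          return tx_map_pos - offset

      tx_pos = np.array([30.0, 12.0])   # known signboard position on the map (m)
      print(estimate_self_position(tx_pos, 0.6, 84.0, 1400.0, bearing_rad=0.0))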
  • the receiver may start navigation by estimating the receiver's own position not by optical data but by image feature values, barcodes or two-dimensional codes, radio waves, or sound waves.
  • the receiver displays the navigation to the destination as shown in (2) of FIG. 18A.
  • The navigation display may be an AR display in which another image is superimposed on the normal captured image obtained by the camera, a map display, or an instruction by voice or vibration; a combination of these may also be used.
  • The display method may be selected by the receiver, by the optical data, or by settings on the server, and any of these settings may be given priority. If the destination is a boarding place for transportation, the receiver may acquire the timetable and display the reserved time, or the departure or boarding time closest to the expected arrival time. If the destination is a theater or the like, the receiver may display the start time or the admission deadline.
  • the receiver advances navigation according to the movement of the receiver as shown in (3) and (4) of FIG. 18A.
  • For example, the receiver may estimate its movement distance and direction during the imaging of a plurality of images from the movement of feature points between those images.
  • the receiver may estimate the moving distance and direction of the receiver from the transition of the acceleration sensor, radio waves, or sound waves. Further, the receiver may estimate the moving distance and direction of the receiver by SLAM (Simultaneous Localization and Mapping) or PTAM (Parallel Tracking and Mapping).
  • When the receiver receives optical data different from the optical data received in (1) of FIG. 18A, for example outside the elevator, it may send that optical data to the server and obtain the shape and position of the transmitter associated with the data.
  • In that case, the receiver may estimate its self-position by the same method as in (1) of FIG. 18A. In this way, the receiver corrects the current navigation position, eliminating the self-position estimation error accumulated in the processes of (3) and (4) of FIG. 18A. If the receiver receives only part of the visible light signal and complete optical data cannot be obtained, the receiver assumes that the nearest transmitter in the navigation information is the one transmitting the visible light signal, and estimates its self-position in the same manner as described above. This allows transmitters with poor reception conditions, such as small, remote, or dark transmitters, to be used for receiver self-position estimation.
  • the receiver receives optical data by reflected light in (6) of FIG. 18A.
  • the receiver identifies that the medium of the received optical data is reflected light based on the imaging direction, the light intensity, or the clarity of the outline.
  • In that case, the receiver identifies the position of the reflected light (that is, its position on the map) from the navigation information, and estimates the center of the imaged reflected-light area as the position of the reflected light.
  • the receiver estimates the receiver's own position and corrects the current navigation position.
  • When the receiver receives a positioning signal from GPS, GLONASS, Galileo, the BeiDou satellite positioning system, IRNSS, or the like, it specifies its position based on that signal and corrects the current navigation position (that is, its self-position). If the strength of the signal is sufficient, that is, higher than a predetermined strength, the receiver estimates its self-position based only on the signal; if the strength is equal to or lower than the predetermined strength, the methods used in (3) and (4) of FIG. 18A may be used in combination.
  • When the receiver receives a visible light signal, it transmits to the server, together with the information indicated by the visible light signal, [1] a radio signal with a predetermined ID received simultaneously with the visible light signal, [2] the radio signal with a predetermined ID received last, or [3] information indicating the last estimated position of the receiver. In this way, the transmitter that transmitted the visible light signal is identified.
  • The receiver may also receive the visible light signal with an algorithm specified by the above-described radio signal or by the information indicating the receiver's position, and may transmit the information indicated by the visible light signal to a server specified in the same manner as described above.
  • The receiver may estimate its self-position and display information on products near that position. The receiver may also navigate to the position of a product designated by the user, and may present an optimal route that visits all the locations of a plurality of products specified by the user: the route with the shortest distance, the shortest required time, or the least movement effort. The receiver may further navigate so as to pass through a predetermined place in addition to the products or places designated by the user, which makes it possible to advertise the predetermined place or the goods or stores there.
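  • As an illustration of presenting a route around several designated products, the following sketch uses a nearest-neighbour heuristic as a simple stand-in for the shortest-distance route (coordinates and names are examples):

      import math

      def tour(start, stops):
          remaining, route, here = list(stops), [], start
          while remaining:
              nxt = min(remaining, key=lambda p: math.dist(here, p))
              remaining.remove(nxt)
              route.append(nxt)
              here = nxt
          return route

      products = [(5.0, 1.0), (1.0, 4.0), (6.0, 5.0)]
      print(tour((0.0, 0.0), products))   # visiting order from the self-position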
  • FIG. 18B is a diagram for describing navigation by the receiver 200 in the elevator according to the present embodiment.
  • AR navigation is a navigation function that guides the user to a destination by superimposing a direction instruction image, such as an arrow, on the normal captured image; below, it is also referred to simply as navigation.
  • The receiver receives an optical signal (that is, a visible light signal, optical data, or an optical ID) from a transmitter disposed in the elevator car, and acquires an elevator ID and floor information based on the optical signal.
  • The elevator ID is identification information for identifying the elevator, or the elevator car, in which the transmitter is arranged, and the floor information is information indicating the floor (or floor number) where the car is currently located.
  • Alternatively, the receiver transmits the optical signal (or the information indicated by the optical signal) to a server and obtains from the server the elevator ID and floor information associated with the optical signal.
  • The transmitter may always transmit the same optical signal regardless of the elevator's floor, or may transmit different optical signals depending on the floor where the car is located.
  • the transmitter is configured as a lighting device, for example.
  • The light from this transmitter brightly illuminates the interior of the elevator car. Therefore, the receiver can receive the optical signal superimposed on this light directly from the transmitter, and can also receive it indirectly via reflection from the inner wall surfaces or floor of the car.
  • Even while the car containing the receiver is ascending, the receiver sequentially identifies the floor where it is currently located according to the elevator ID and floor information acquired from the optical signal transmitted by the transmitter. Then, as shown in (3) of FIG. 18B, when the floor where the receiver is currently located is the target floor, the receiver displays on its display a message or image prompting the user to get off the elevator. The receiver may also output a sound prompting the user to get off.
  • After the user gets off the elevator, if GPS data cannot be acquired, the receiver resumes the above-described AR navigation while estimating its self-position with the estimation method that uses the movement of the feature points of the normal captured images, as shown in (4) of FIG. 18B. If GPS data can be acquired, the receiver resumes the AR navigation while estimating its self-position with the estimation method that uses the GPS data.
  • FIG. 18C is a diagram illustrating an example of a system configuration provided in the elevator according to the present embodiment.
  • The transmitter 100, which is the above-mentioned transmitter, is installed in the elevator car 420; for example, it is disposed on the ceiling of the car 420 as a lighting device for the car 420.
  • the transmitter 100 also includes a built-in camera 404 and a microphone 411.
  • The built-in camera 404 captures images of the interior of the car 420, and the microphone 411 collects sound inside the car 420.
  • The monitoring camera system 401 is a system having at least one camera that captures images of the interior of the car 420.
  • The floor display unit 414 displays the floor on which the car 420 is currently located.
  • the sensor 403 includes, for example, at least one of an atmospheric pressure sensor and an acceleration sensor.
  • the elevator also includes an image recognition unit 402, a current floor detection unit 405, a light modulation unit 406, a light emitting circuit 407, a wireless unit 409, and a voice recognition unit 410.
  • The image recognition unit 402 recognizes the characters (that is, the floor) displayed on the floor display unit 414 from images obtained by the monitoring camera system 401 or the built-in camera 404, and outputs the current floor data obtained by this recognition. The current floor data indicates the floor shown on the floor display unit 414.
  • The voice recognition unit 410 recognizes the floor where the car 420 is currently located based on the audio data output from the microphone 411, and outputs floor data indicating that floor.
  • The current floor detection unit 405 detects the floor on which the car 420 is currently located based on the data output from at least one of the sensor 403, the image recognition unit 402, and the voice recognition unit 410, and outputs information indicating the detected floor to the light modulation unit 406.
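  • A minimal sketch of this detection, assuming each source yields a floor estimate (or none) and that disagreements are settled by majority vote; the fusion rule is an illustrative assumption:

      from collections import Counter

      # Fuse the floor estimates from the sensor 403 (e.g., barometric), the
      # image recognition unit 402, and the voice recognition unit 410.
      def detect_current_floor(sensor_floor, image_floor, voice_floor):
          votes = [f for f in (sensor_floor, image_floor, voice_floor) if f is not None]
          return Counter(votes).most_common(1)[0][0] if votes else None

      print(detect_current_floor(3, 3, None))   # -> 3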
  • the light modulation unit 406 modulates the signal indicating the floor output from the current floor detection unit 405 and the signal indicating the elevator ID, and outputs the modulated signal to the light emitting circuit 407.
  • The light emitting circuit 407 changes the luminance of the transmitter 100 in accordance with the modulated signal. As a result, a visible light signal (an optical signal, optical data, or optical ID) indicating the floor where the car 420 is currently located and the elevator ID is transmitted from the transmitter 100.
  • the radio unit 409 modulates information indicating the floor output from the current floor detection unit 405 and a signal indicating the elevator ID, and transmits the modulated signal by radio.
  • the wireless unit 409 transmits a signal by Wi-Fi or Bluetooth.
  • The receiver 200 can identify the floor where it is currently located and the elevator ID by receiving at least one of the radio signal and the optical signal.
  • the elevator may include a current floor detection unit 412 having the above-described floor number display unit 414.
  • the current floor detection unit 412 includes an elevator control unit 413 and a floor number display unit 414.
  • The elevator control unit 413 controls the raising, lowering, and stopping of the car 420, and therefore always knows the floor on which the car 420 is currently located.
  • the elevator control unit 413 may output data indicating the grasped floor to the light modulation unit 406 and the radio unit 409 as current floor data.
  • With the configuration described above, the receiver 200 can realize the AR navigation shown in FIGS. 18A and 18B.
  • FIG. 19 is a diagram illustrating an example of application of the transmission / reception system in the second embodiment.
  • the receiver 8955a receives, for example, the transmission ID of the transmitter 8955b configured as a guide plate, acquires the map data displayed on the guide plate from the server, and displays the map data.
  • the server may transmit an advertisement suitable for the user of the receiver 8955a, and the receiver 8955a may also display this advertisement information.
  • the receiver 8955a displays a route from the current location to a location designated by the user.
  • FIG. 20 is a diagram illustrating an application example of the transmission and reception system in the second embodiment.
  • the receiver 8957a receives the ID transmitted from the transmitter 8957b configured as a signboard, for example, acquires coupon information from the server, and displays the coupon information.
  • The receiver 8957a also saves subsequent user actions, such as saving the coupon, moving to the store shown on the coupon, shopping at that store, or leaving without saving the coupon, to the server 8957c. In this way, the subsequent behavior of users who obtained information from the signboard 8957b can be analyzed, and the advertising value of the signboard 8957b can be estimated.
  • The information communication method in the present embodiment is an information communication method for acquiring information from a subject, and includes: a first exposure time setting step of setting a first exposure time of the image sensor so that, in an image obtained by photographing a first subject with the image sensor, a plurality of bright lines corresponding to the exposure lines included in the image sensor are generated according to a change in luminance of the first subject; a first bright line image acquisition step of acquiring a first bright line image, which is an image including the plurality of bright lines, by photographing the first subject, whose luminance changes, with the set first exposure time; a first information acquisition step of acquiring first transmission information by demodulating data specified by the pattern of the plurality of bright lines included in the acquired first bright line image; and a door control step of, after the first transmission information is acquired, transmitting a control signal to the opening and closing device of a door to open the door.
  • For example, the information communication method may further include: a second bright line image acquisition step in which the image sensor acquires a second bright line image including a plurality of bright lines by photographing a second subject, whose luminance changes, with the set first exposure time; a second information acquisition step of acquiring second transmission information by demodulating data specified by the pattern of the plurality of bright lines included in the acquired second bright line image; and an approach determination step of determining, based on the acquired first and second transmission information, whether the receiving device including the image sensor is approaching the door. In the door control step, the control signal may be transmitted when it is determined that the receiving device is approaching the door.
  • In this way, the door can be opened only when the receiving device (receiver) approaches the door, that is, only at an appropriate timing.
  • For example, the information communication method may further include a second exposure time setting step of setting a second exposure time longer than the first exposure time, and a step in which the image sensor acquires a normal image by photographing a third subject with the second exposure time. In acquiring the normal image, for each exposure line of the optical black in the image sensor, charge is read after a predetermined time has elapsed from when charge is read for the adjacent exposure line. In the first bright line image acquisition step, charge readout is not performed for the optical black; instead, for each exposure line in the area of the image sensor other than the optical black, charge may be read after a time longer than the predetermined time has elapsed from when charge is read for the adjacent exposure line.
  • According to this, in the first bright line image acquisition step, charge readout (exposure) is not performed for the optical black, so the time for charge readout (exposure) from the effective pixel area, which is the area of the image sensor other than the optical black, can be lengthened. As a result, the time for receiving a signal in the effective pixel area can be increased, and more signals can be acquired.
  • For example, when the length, in the direction perpendicular to the bright lines, of the pattern of the plurality of bright lines included in the first bright line image is less than a predetermined length, the frame rate is reduced and a new bright line image is acquired as a third bright line image. In this way, the length of the bright line pattern included in the third bright line image can be increased, and the transmitted signal can be acquired for one block.
  • For example, the information communication method further includes a ratio setting step of setting the ratio between the vertical width and the horizontal width of the image obtained by the image sensor, and the first bright line image acquisition step includes a clipping determination step of determining whether, at the set ratio, the ends of the image in the direction perpendicular to the exposure lines are clipped. When it is determined that the ends are clipped, the ratio set in the ratio setting step is changed to a non-clipping ratio, that is, a ratio at which the ends are not clipped, and the image sensor acquires the first bright line image at the non-clipping ratio by photographing the first subject whose luminance changes.
  • For example, when the ratio of the horizontal width to the vertical width of the effective pixel area of the image sensor is 4:3, the ratio of the horizontal width to the vertical width of the image is set to 16:9, and the bright lines appear along the horizontal direction (that is, the exposure lines run horizontally), it is determined that the upper and lower ends of the image are clipped, in other words, that the ends of the first bright line image are missing. In this case, the ratio of the image is changed to 4:3, a ratio at which the image is not clipped.
  • For example, the information communication method may further include a compression step of generating a compressed image by compressing the first bright line image in the direction parallel to the bright lines included in the first bright line image, and a compressed image transmission step of transmitting the compressed image.
  • For example, the information communication method may further include a gesture determination step of determining whether the receiving device including the image sensor has been moved in a predetermined manner, and an activation step of activating the image sensor when it is determined that the receiving device has been moved in the predetermined manner.
  • FIG. 21 is a diagram illustrating an application example of the transmitter and the receiver in the second embodiment.
  • the robot 8970 has, for example, a function as a self-propelled cleaner and a function as a receiver in each of the above embodiments.
  • the lighting devices 8971a and 8971b each have a function as a transmitter in each of the above embodiments.
  • the robot 8970 performs cleaning while moving in the room and photographs the lighting device 8971a that illuminates the room.
  • the lighting device 8971a transmits the ID of the lighting device 8971a by changing the luminance.
  • The robot 8970 receives the ID from the lighting device 8971a and, as in the above embodiments, estimates its own position (self-position) based on the ID. That is, the robot 8970 estimates the position to which it has moved based on the detection result of the 9-axis sensor, the relative position of the lighting device 8971a shown in the image obtained by photographing, and the absolute position of the lighting device 8971a specified by the ID.
  • When the robot 8970 moves away from the lighting device 8971a, it transmits a signal instructing the lighting device 8971a to turn off (a turn-off command). For example, the robot 8970 transmits the turn-off command when it has moved a predetermined distance away from the lighting device 8971a, when the lighting device 8971a no longer appears in the captured image, or when another lighting device appears in the image. Upon receiving the turn-off command from the robot 8970, the lighting device 8971a turns off accordingly.
  • While moving and cleaning, the robot 8970 detects, based on the estimated self-position, that it has approached the lighting device 8971b. That is, the robot 8970 holds information indicating the position of the lighting device 8971b, and detects the approach when the distance between its own position and the position of the lighting device 8971b falls to or below a predetermined distance. The robot 8970 then transmits a signal instructing lighting (a lighting command) to the lighting device 8971b, and upon receiving the lighting command, the lighting device 8971b lights up accordingly.
  • In this way, the robot 8970 can brighten only its surroundings as it moves, making cleaning easier; one way this control could be realized is shown in the sketch below.
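As an illustration of the distance-based lighting control described above, the following sketch shows one way a self-propelled receiver could issue lighting and turn-off commands from its estimated self-position. The class, the method names, and the 3-meter threshold are hypothetical assumptions, not part of the disclosure.

```python
import math

# Hypothetical distance threshold for switching lights (not specified in the disclosure).
THRESHOLD_M = 3.0

class Robot:
    def __init__(self, lamp_positions):
        # lamp_positions: dict mapping lamp ID -> (x, y) absolute position
        self.lamp_positions = lamp_positions
        self.position = (0.0, 0.0)  # estimated self-position
        self.lit = set()            # IDs of lamps currently turned on by the robot

    def send_command(self, lamp_id, command):
        # Placeholder for the actual transmission (e.g., over a radio link).
        print(f"lamp {lamp_id}: {command}")

    def update(self, estimated_position):
        """Turn nearby lamps on and distant lamps off, as in the example of FIG. 21."""
        self.position = estimated_position
        for lamp_id, pos in self.lamp_positions.items():
            d = math.dist(self.position, pos)
            if d <= THRESHOLD_M and lamp_id not in self.lit:
                self.send_command(lamp_id, "turn-on")   # lighting command
                self.lit.add(lamp_id)
            elif d > THRESHOLD_M and lamp_id in self.lit:
                self.send_command(lamp_id, "turn-off")  # turn-off command
                self.lit.remove(lamp_id)
```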
  • FIG. 22 is a diagram illustrating an application example of the transmitter and the receiver in the second embodiment.
  • the lighting device 8974 has a function as a transmitter in each of the above embodiments.
  • the lighting device 8974 illuminates a route bulletin board 8975 at a railway station, for example, while changing in luminance.
  • The receiver 8973, pointed at the route bulletin board 8975 by the user, photographs the board. The receiver 8973 thereby acquires the ID of the route bulletin board 8975 and then acquires, as information associated with that ID, detailed information about each route listed on the board.
  • the receiver 8973 displays a guide image 8973a indicating the detailed information.
  • the guidance image 8973a indicates the distance to the route described on the route bulletin board 8975, the direction toward the route, and the time when the next train arrives on the route.
  • the receiver 8973 displays a supplementary guide image 8973b.
  • The supplementary guide image 8973b is an image for displaying, in accordance with the user's selection operation, any one of, for example, a railway timetable, information on a route different from the one indicated by the guide image 8973a, and detailed information about the station.
  • FIG. 23 is a diagram illustrating an example of an application according to the third embodiment.
  • a receiver 1800a configured as a smartphone receives a signal (visible light signal) transmitted from a transmitter 1800b configured as, for example, a street digital signage. That is, the receiver 1800a receives the timing of image reproduction by the transmitter 1800b. The receiver 1800a reproduces sound at the same timing as the image reproduction. In other words, the receiver 1800a performs synchronized reproduction of the sound so that the image and sound reproduced by the transmitter 1800b are synchronized. Note that the receiver 1800a may reproduce the same image as the image (reproduced image) reproduced by the transmitter 1800b or a related image related to the reproduced image together with the sound. Further, the receiver 1800a may cause a device connected to the receiver 1800a to reproduce sound and the like. Further, after receiving the visible light signal, the receiver 1800a may download content such as sound or related images associated with the visible light signal from the server. The receiver 1800a performs synchronous reproduction after the download.
  • Thereby, the user can hear sound that matches the display of the transmitter 1800b. Moreover, even at a distance from which the sound itself would take time to reach the user, sound that matches the display can be heard.
  • FIG. 24 is a diagram illustrating an example of an application according to the third embodiment.
  • Each of the receiver 1800a and the receiver 1800c obtains and reproduces audio corresponding to a video such as a movie displayed on the transmitter 1800d from the server, in the language set in the receiver.
  • the transmitter 1800d transmits a visible light signal indicating an ID for identifying the displayed video to the receiver.
  • the receiver transmits a request signal including the ID indicated in the visible light signal and the language set in the receiver to the server.
  • the receiver acquires the audio corresponding to the request signal from the server and reproduces it. Thereby, the user can enjoy the work displayed on the transmitter 1800d in the language set by the user.
  • FIGS. 25 and 26 are diagrams showing an example of a transmission signal and an example of a voice synchronization method in the third embodiment.
  • Different data are associated with each successive fixed-length period of time (N seconds).
  • These data may be, for example, an ID for identifying time, may be time, or may be audio data (for example, 64 Kbps data).
  • The following description assumes that the data are IDs. Different IDs may differ in the additional information part attached to the ID. In that case, the packets that make up the ID differ, so it is desirable that the IDs not be consecutive.
  • the transmitter 1800d transmits the ID in accordance with the reproduction time of the displayed image, for example.
  • the receiver can recognize the reproduction time (synchronization time) of the image of the transmitter 1800d by detecting the timing when the ID is changed.
  • the synchronization time can be recognized by the following method.
  • (B1) The midpoint of the reception interval in which the ID changed is taken to be the ID change point. In addition, times that are an integer multiple of N after ID change points estimated in the past are also treated as estimated ID change points, and the midpoint of these multiple estimates is taken as a more accurate ID change point. With such an estimation algorithm, an accurate ID change point can be estimated gradually.
  • By setting N to 0.5 seconds or less, synchronization can be performed accurately.
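The estimation algorithm (B1) can be illustrated with a short sketch. This is a minimal illustration under stated assumptions: the observed change points are midpoints of reception intervals in which the ID changed, N is known, and all function names are hypothetical.

```python
N = 0.5  # interval length in seconds (0.5 s or less allows accurate sync)

def refine_change_point(observations):
    """Average the observed change points modulo N, relative to the first one,
    to estimate a more accurate ID change point."""
    ref = observations[0]
    # offset of each observation from the reference, wrapped into (-N/2, N/2]
    offsets = [((t - ref + N / 2) % N) - N / 2 for t in observations]
    return ref + sum(offsets) / len(offsets)

def synchronization_time(now, change_point):
    """Most recent ID change point at or before `now`."""
    k = int((now - change_point) // N)
    return change_point + k * N

# Example: three noisy observations of change points 0.5 s apart.
cp = refine_change_point([10.02, 10.51, 11.03])
print(cp, synchronization_time(12.34, cp))
```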
  • FIG. 26 is a diagram illustrating an example of a transmission signal in the third embodiment.
  • a time packet is a packet that holds the time of transmission.
  • the time packet is divided into a time packet 1 representing a fine time and a time packet 2 representing a rough time.
  • For example, time packet 2 indicates the hour and minute of the time, and time packet 1 indicates only the second of the time.
  • Note that a packet indicating the time may be divided into three or more time packets. Since the coarse time needs to be conveyed less often, the receiver can recognize the synchronization time quickly and accurately when more fine time packets are transmitted than coarse time packets.
  • That is, the visible light signal includes second information (time packet 2) indicating the hour and minute of the time and first information (time packet 1) indicating the second of the time, and thereby indicates the time at which the visible light signal is transmitted from the transmitter 1800d.
  • The receiver 1800a receives the second information, and receives the first information a greater number of times than it receives the second information.
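One plausible way to reassemble the time on the receiver side from the two kinds of time packets is sketched below. The field layout (packet 2 carrying hour and minute, packet 1 carrying the second) follows the example above; the class and method names are illustrative assumptions.

```python
class TimeAssembler:
    """Combine rarely sent coarse packets with frequently sent fine packets."""
    def __init__(self):
        self.hour = None
        self.minute = None
        self.second = None

    def on_time_packet_2(self, hour, minute):   # coarse packet, sent rarely
        self.hour, self.minute = hour, minute

    def on_time_packet_1(self, second):         # fine packet, sent every second
        if self.minute is not None and self.second is not None and second < self.second:
            # the seconds wrapped around, so the minute advanced
            self.minute = (self.minute + 1) % 60
            if self.minute == 0 and self.hour is not None:
                self.hour = (self.hour + 1) % 24
        self.second = second

    def current_time(self):
        if None in (self.hour, self.minute, self.second):
            return None  # not yet synchronized
        return (self.hour, self.minute, self.second)
```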
  • FIG. 27 is a diagram illustrating an example of a processing flow of the receiver 1800a according to the third embodiment.
  • First, a processing delay time is designated for the receiver 1800a (step S1801). It may be stored in the processing program or specified by the user; when the user makes the correction, more accurate synchronization tailored to the individual receiver can be realized. Synchronization can be made still more accurate by varying the processing delay time according to the receiver model, the receiver temperature, and the CPU usage rate.
  • Next, the receiver 1800a determines whether a time packet, or an ID associated with voice synchronization, has been received (step S1802).
  • If the receiver 1800a determines that one has been received (Y in step S1802), it further determines whether there are images waiting to be processed (step S1804). If it determines that there are (Y in step S1804), the receiver 1800a discards those images, or delays their processing, and performs reception from the most recently acquired image (step S1805). This avoids unexpected delays caused by a backlog of images awaiting processing.
  • Next, the receiver 1800a measures the position in the image at which the visible light signal (specifically, the bright line) appears (step S1806). That is, by measuring the distance, in the direction perpendicular to the exposure lines, from the first exposure line of the image sensor, the time difference from the image acquisition start time to the signal reception time (the in-image delay time) can be calculated.
  • the receiver 1800a can accurately perform synchronized reproduction by reproducing the sound or moving image at the time obtained by adding the processing delay time and the in-image delay time to the recognized synchronization time (step S1807).
  • If it is determined in step S1802 that neither a time packet nor a voice-synchronization ID has been received (N in step S1802), the receiver 1800a continues receiving the signal from the captured images (step S1803).
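The playback-time computation of steps S1806 and S1807 can be sketched as follows. The rolling-shutter readout model (a constant readout time per exposure line) and all names are assumptions made for illustration; actual sensors and processing pipelines will differ.

```python
def in_image_delay(bright_line_row, line_readout_time_s):
    """Delay from the start of image acquisition to the signal's exposure line."""
    return bright_line_row * line_readout_time_s

def playback_position(sync_time_s, processing_delay_s, bright_line_row,
                      line_readout_time_s=1 / 30 / 1080):  # e.g. 30 fps, 1080 lines
    """Time at which the audio or video should be rendered for accurate sync:
    recognized synchronization time + processing delay + in-image delay."""
    return sync_time_s + processing_delay_s + in_image_delay(
        bright_line_row, line_readout_time_s)

# Example: sync time 12.02 s, 40 ms processing delay, bright line at row 540.
print(playback_position(12.02, 0.040, 540))
```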
  • FIG. 28 is a diagram illustrating an example of a user interface of the receiver 1800a according to the third embodiment.
  • the user can adjust the processing delay time described above by pressing one of the buttons Bt1 to Bt4 displayed on the receiver 1800a.
  • Alternatively, the processing delay time may be set by a swipe operation, as shown in FIG. 28. Thereby, synchronized playback can be performed more accurately based on the user's own sense of timing.
  • FIG. 29 is a diagram illustrating an example of a processing flow of the receiver 1800a according to the third embodiment.
  • the earphone-only playback shown by this processing flow enables audio playback without disturbing the surroundings.
  • First, the receiver 1800a checks whether an earphone-only setting is in effect (step S1811).
  • The earphone-only setting is made, for example, in the receiver 1800a itself, or is carried in the received signal (visible light signal), or is recorded in the server or in the receiver 1800a in association with the received signal.
  • When the earphone-only setting is in effect (Y in step S1811), it is determined in step S1813 whether an earphone is connected to the receiver 1800a.
  • When the receiver 1800a confirms that playback is not limited to earphones (N in step S1811), or determines that an earphone is connected (Y in step S1813), it plays back the audio (step S1812). When playing back the audio, the receiver 1800a adjusts the volume so that it falls within a set range; this range is set in the same way as the earphone-only setting.
  • When the receiver 1800a determines that no earphone is connected (N in step S1813), it issues a notification prompting the user to connect an earphone (step S1814).
  • This notification is performed by, for example, screen display, audio output, or vibration.
  • Furthermore, the receiver 1800a presents an interface for forced playback and determines whether the user has performed a forced playback operation (step S1815). If it determines that the forced playback operation has been performed (Y in step S1815), the receiver 1800a plays back the audio even though no earphone is connected (step S1812).
  • While waiting for an earphone to be connected, the receiver 1800a retains the audio data it has already received and the analyzed synchronization time, so that synchronized audio playback can start quickly once an earphone is connected.
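A minimal sketch of this earphone-limited flow, with the device-state queries and user notification stubbed out as hypothetical methods, might look like the following.

```python
def play_audio_flow(receiver):
    """Earphone-only playback flow in the style of FIG. 29 (illustrative only)."""
    if not receiver.earphone_only_setting():          # step S1811
        receiver.play_audio_within_volume_range()     # step S1812
        return
    if receiver.earphone_connected():                 # step S1813
        receiver.play_audio_within_volume_range()     # step S1812
        return
    receiver.notify_connect_earphone()                # step S1814 (display/sound/vibration)
    if receiver.user_forced_playback():               # step S1815
        receiver.play_audio_within_volume_range()     # step S1812
    # Otherwise keep the received audio and the analyzed synchronization time
    # buffered, so playback can start immediately once an earphone is connected.
```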
  • FIG. 30 is a diagram illustrating another example of the processing flow of the receiver 1800a according to the third embodiment.
  • the receiver 1800a receives an ID from the transmitter 1800d (step S1821). That is, the receiver 1800a receives a visible light signal indicating the ID of the transmitter 1800d or the ID of the content displayed on the transmitter 1800d.
  • the receiver 1800a downloads information (content) associated with the received ID from the server (step S1822). Alternatively, the receiver 1800a reads out the information from the data holding unit in the receiver 1800a. Hereinafter, this information is referred to as related information.
  • the receiver 1800a determines whether or not the synchronous reproduction flag included in the related information indicates ON (step S1823). If it is determined that the synchronous reproduction flag does not indicate ON (N in step S1823), the receiver 1800a outputs the content indicated by the related information (step S1824). That is, when the content is an image, the receiver 1800a displays an image, and when the content is audio, the receiver 1800a outputs audio.
  • On the other hand, if the receiver 1800a determines that the synchronous reproduction flag indicates ON (Y in step S1823), it determines whether the time adjustment mode included in the related information is set to the transmitter reference mode or to the absolute time mode (step S1825). If it determines that the absolute time mode is set, the receiver 1800a further determines whether the last time adjustment was performed within a certain time before the current time (step S1826). The time adjustment here is the process of obtaining time information by a predetermined method and using it to set the clock of the receiver 1800a to the absolute time of the reference clock.
  • the predetermined method is, for example, a method using a GPS (Global Positioning System) radio wave or an NTP (Network Time Protocol) radio wave. Note that the current time described above may be a time when the receiver 1800a, which is a terminal device, receives a visible light signal.
  • If the receiver 1800a determines that the last time adjustment was performed within the certain time (Y in step S1826), it outputs the related information based on the time of its own clock, so that the content displayed on the transmitter 1800d and the related information are synchronized (step S1827).
  • For example, when the content indicated by the related information is a moving image, the receiver 1800a displays the moving image in synchronization with the content displayed on the transmitter 1800d; when the content is audio, the receiver 1800a outputs the audio in synchronization with that content.
  • Specifically, when the related information indicates sound, it includes each frame constituting the sound, and each frame carries a time stamp. The receiver 1800a outputs sound synchronized with the content of the transmitter 1800d by playing back the frame whose time stamp corresponds to the time of its own clock.
  • If the receiver 1800a determines that the last time adjustment was not performed within the certain time (N in step S1826), it attempts to obtain time information by the predetermined method and determines whether the time information could be obtained (step S1828). If it determines that the time information was obtained (Y in step S1828), the receiver 1800a updates its clock using the time information (step S1829) and then executes the process of step S1827 described above.
  • If it is determined in step S1825 that the time adjustment mode is the transmitter reference mode, or if it is determined in step S1828 that time information could not be obtained (N in step S1828), the receiver 1800a acquires the time information from the transmitter 1800d (step S1830). That is, the receiver 1800a acquires the time information, which is a synchronization signal, from the transmitter 1800d through visible light communication.
  • For example, the synchronization signals are time packet 1 and time packet 2 shown in FIG. 26.
  • Alternatively, the receiver 1800a acquires the time information from the transmitter 1800d by radio waves such as Bluetooth (registered trademark) or Wi-Fi. The receiver 1800a then executes the processes of steps S1829 and S1827 described above.
  • As described above, processing is performed to synchronize the clock of the terminal device, that is, the receiver 1800a, with the reference clock using GPS radio waves or NTP radio waves. When such synchronization is not possible, the clock of the terminal device is synchronized with the clock of the transmitter according to the time indicated by the visible light signal transmitted from the transmitter 1800d. Accordingly, the terminal device can reproduce content (a moving image or sound) at a timing synchronized with the transmitter-side content reproduced by the transmitter 1800d.
  • FIG. 31A is a diagram for explaining a specific method of synchronized playback in the third embodiment. There are five methods, a to e, shown in FIG. 31A.
  • the transmitter 1800d outputs a visible light signal indicating the content ID and the content playback time by changing the luminance of the display, as in the above embodiments.
  • the content playback time is the playback time of data that is part of the content that is being played back by the transmitter 1800d when the content ID is transmitted from the transmitter 1800d.
  • the data is a picture or a sequence constituting the moving image if the content is a moving image, or a frame constituting the sound if the content is sound.
  • The playback time indicates, for example, the elapsed time from the beginning of the content. If the content is a moving image, the playback time is included in the content as a PTS (Presentation Time Stamp); that is, the content includes, for each piece of data constituting it, the playback time (display time) of that data.
  • the receiver 1800a receives the visible light signal by photographing the transmitter 1800d as in the above embodiments. Then, the receiver 1800a transmits a request signal including the content ID indicated by the visible light signal to the server 1800f. The server 1800f receives the request signal, and transmits the content associated with the content ID included in the request signal to the receiver 1800a.
  • When the receiver 1800a receives the content, it plays the content from the position (content playback time + elapsed time since ID reception).
  • the elapsed time from the reception of the ID is an elapsed time from when the content ID is received by the receiver 1800a.
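Method a can be summarized in a short sketch. Here `fetch_content` stands in for the request/response exchange with the server 1800f and is not an API from the disclosure; a monotonic clock is assumed for measuring the elapsed time.

```python
import time

def method_a(content_id, content_playback_time_s, fetch_content, play_from):
    """Synchronized playback method a (illustrative sketch)."""
    t_id_received = time.monotonic()            # moment the content ID was decoded
    content = fetch_content(content_id)         # request/response with the server
    elapsed = time.monotonic() - t_id_received  # elapsed time since ID reception
    # Start playback at (content playback time + elapsed time since ID reception).
    play_from(content, content_playback_time_s + elapsed)
```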
  • the transmitter 1800d outputs a visible light signal indicating the content ID and the content playback time by changing the luminance of the display, as in the above embodiments.
  • the receiver 1800a receives the visible light signal by photographing the transmitter 1800d as in the above embodiments. Then, the receiver 1800a transmits a request signal including the content ID indicated by the visible light signal and the content playback time to the server 1800f.
  • The server 1800f receives the request signal and, of the content associated with the content ID included in the request signal, transmits to the receiver 1800a only the part after the content playback time.
  • When the receiver 1800a receives that part of the content, it reproduces it from the position (elapsed time since ID reception).
  • the transmitter 1800d outputs a visible light signal indicating the transmitter ID and the content reproduction time by changing the luminance of the display, as in the above embodiments.
  • the transmitter ID is information for identifying the transmitter.
  • the receiver 1800a receives the visible light signal by photographing the transmitter 1800d as in the above embodiments. Then, the receiver 1800a transmits a request signal including the transmitter ID indicated by the visible light signal to the server 1800f.
  • The server 1800f holds, for each transmitter ID, a reproduction schedule, that is, a timetable of the content reproduced by the transmitter with that ID, and further includes a clock. When such a server 1800f receives the request signal, it identifies from the reproduction schedule the content associated with the transmitter ID included in the request signal and the time of its own clock (the server time), that is, the content currently being played back. The server 1800f then transmits that content to the receiver 1800a.
  • When the receiver 1800a receives the content, it plays the content from the position (content playback time + elapsed time since ID reception).
  • the transmitter 1800d outputs a visible light signal indicating the transmitter ID and the transmitter time by changing the luminance of the display as in the above embodiments.
  • the transmitter time is a time indicated by a clock provided in the transmitter 1800d.
  • the receiver 1800a receives the visible light signal by photographing the transmitter 1800d as in the above embodiments. Then, the receiver 1800a transmits a request signal including the transmitter ID indicated by the visible light signal and the transmitter time to the server 1800f.
  • the server 1800f holds the above reproduction schedule.
  • When the server 1800f receives the request signal, it identifies from the reproduction schedule the content associated with the transmitter ID and the transmitter time included in the request signal, that is, the content being reproduced. Furthermore, the server 1800f specifies the content playback time from the transmitter time: it finds the playback start time of the identified content in the reproduction schedule and takes the difference between the transmitter time and that playback start time as the content playback time. The server 1800f then transmits the content and the content playback time to the receiver 1800a.
  • Upon receiving the content and the content playback time, the receiver 1800a plays the content from the position (content playback time + elapsed time since ID reception).
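The server-side lookup of method d might be sketched as follows. The schedule layout, a sorted list of (playback start time, content) pairs per transmitter ID, is an assumption made for illustration.

```python
def handle_request(schedules, transmitter_id, transmitter_time_s):
    """Return (content, content_playback_time) for a method-d request."""
    schedule = schedules[transmitter_id]  # [(start_time_s, content), ...] sorted
    current = None
    for start_time_s, content in schedule:
        if start_time_s <= transmitter_time_s:
            current = (start_time_s, content)  # latest item started by that time
        else:
            break
    if current is None:
        return None  # nothing scheduled at the transmitter time
    start_time_s, content = current
    # Content playback time = transmitter time - playback start time.
    return content, transmitter_time_s - start_time_s
```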
  • the visible light signal indicates the time when the visible light signal is transmitted from the transmitter 1800d. Therefore, the receiver 1800a, which is a terminal device, can receive content associated with the time (transmitter time) at which the visible light signal is transmitted from the transmitter 1800d. For example, if the transmitter time is 5:43, content played back at 5:43 can be received.
  • Here, the server 1800f has a plurality of contents, each associated with a time, but content associated with exactly the time indicated by the visible light signal may not exist in the server 1800f. In that case, the receiver 1800a as the terminal device may receive, from among the plurality of contents, the content associated with the time that is after, and closest to, the time indicated by the visible light signal. Thereby, even if content associated with the exact time indicated by the visible light signal does not exist in the server 1800f, appropriate content can be received from among the plurality of contents in the server 1800f.
  • That is, the reproduction method of method d includes a signal receiving step in which a sensor of the receiver 1800a (terminal device) receives the visible light signal from the transmitter 1800d, which transmits the visible light signal through changes in the luminance of its light source; a transmitting step in which the receiver 1800a transmits to the server 1800f a request signal requesting the content associated with the visible light signal; and a content receiving step in which the receiver 1800a receives the content from the server 1800f and reproduces it.
  • the visible light signal indicates a transmitter ID and a transmitter time.
  • the transmitter ID is ID information.
  • the transmitter time is the time indicated by the clock of the transmitter 1800d, and the time when the visible light signal is transmitted from the transmitter 1800d.
  • the receiver 1800a receives the content associated with the transmitter ID and the transmitter time indicated by the visible light signal. As a result, the receiver 1800a can reproduce appropriate content with respect to the transmitter ID and the transmitter time.
  • the transmitter 1800d outputs a visible light signal indicating the transmitter ID by changing the luminance of the display as in the above embodiments.
  • the receiver 1800a receives the visible light signal by photographing the transmitter 1800d as in the above embodiments. Then, the receiver 1800a transmits a request signal including the transmitter ID indicated by the visible light signal to the server 1800f.
  • the server 1800f holds the above-described reproduction schedule and further includes a clock.
  • When the server 1800f receives the request signal, it identifies from the reproduction schedule the content associated with the transmitter ID included in the request signal and the server time, that is, the time indicated by the clock of the server 1800f, as the content being reproduced.
  • Furthermore, the server 1800f finds the playback start time of the identified content in the reproduction schedule. The server 1800f then transmits the content and the content playback start time to the receiver 1800a.
  • When the receiver 1800a receives the content and the content playback start time, it plays the content from the position (receiver time - content playback start time).
  • the receiver time is a time indicated by a clock provided in the receiver 1800a.
  • That is, the reproduction method of method e includes a signal receiving step in which a sensor of the receiver 1800a (terminal device) receives the visible light signal from the transmitter 1800d, which transmits the visible light signal through changes in the luminance of its light source; a transmitting step in which the receiver 1800a transmits to the server 1800f a request signal requesting the content associated with the visible light signal; a content receiving step in which the receiver 1800a receives from the server 1800f content including each time and the data reproduced at that time; and a reproduction step of reproducing, from the content, the data corresponding to the time of the receiver's clock. Thus, if the transmitter 1800d reproduces content related to this content (the transmitter-side content), the receiver 1800a can reproduce its content appropriately synchronized with the transmitter-side content.
  • Note that, as in method b, the server 1800f may transmit to the receiver 1800a only the part of the content after the content playback time. Also, in the above description the receiver 1800a transmits a request signal to the server 1800f and receives the necessary data from it, but such transmission and reception may be omitted by storing the data held in the server 1800f in the receiver 1800a in advance.
  • FIG. 31B is a block diagram showing the configuration of a playback apparatus that performs synchronized playback by the method e described above.
  • The playback device B10 is the receiver 1800a, that is, a terminal device, that performs synchronized playback by method e described above, and includes a sensor B11, a request signal transmission unit B12, a content reception unit B13, a clock B14, and a playback unit B15.
  • Sensor B11 is, for example, an image sensor, and receives the visible light signal from a transmitter 1800d that transmits a visible light signal according to a change in luminance of the light source.
  • the request signal transmission unit B12 transmits a request signal for requesting content associated with the visible light signal to the server 1800f.
  • the content receiving unit B13 receives content including each time and data reproduced at each time from the server 1800f.
  • the reproduction unit B15 reproduces data corresponding to the time of the clock B14 in the content.
  • FIG. 31C is a flowchart showing the processing operation of the terminal device that performs synchronous reproduction by the method e described above.
  • the playback device B10 is a receiver 1800a or a terminal device that performs synchronized playback by the method e described above, and executes each process of steps SB11 to SB15.
  • In step SB11, the visible light signal is received from the transmitter 1800d, which transmits the visible light signal according to the luminance change of its light source. In step SB12, a request signal requesting the content associated with the visible light signal is transmitted to the server 1800f. In step SB13, content including each time and the data reproduced at that time is received from the server 1800f. In step SB15, the data corresponding to the time of the clock B14 is reproduced from the content.
  • the data in the content can be appropriately played back at the correct time indicated by the content without being played back at the wrong time.
  • each component may be configured by dedicated hardware or may be realized by executing a software program suitable for each component.
  • Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
  • the software that realizes the playback apparatus B10 and the like of the present embodiment is a program that causes a computer to execute each step included in the flowchart shown in FIG. 31C.
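As a rough illustration of method e and the playback device B10, the following sketch models the content as a list of (time, data) pairs and stubs out the sensor and server. The timing model (clock times directly comparable to the content's data times) is an assumption made for illustration.

```python
import time

class PlaybackDeviceB10:
    """Illustrative sketch of playback device B10 (method e)."""
    def __init__(self, sensor, server):
        self.sensor = sensor    # B11: receives the visible light signal
        self.server = server    # stands in for server 1800f
        self.clock = time.time  # B14: clock synchronized with the reference clock

    def run(self):
        signal = self.sensor.receive_visible_light_signal()   # step SB11
        content = self.server.request_content(signal)         # steps SB12-SB13
        # content: [(t0, data0), (t1, data1), ...] sorted by time
        for t, data in content:                               # step SB15
            delay = t - self.clock()
            if delay < 0:
                continue              # this data's time has already passed
            time.sleep(delay)         # wait until the data's scheduled time
            self.render(data)         # reproduce data matching the clock time

    def render(self, data):
        print("playing:", data)       # placeholder for actual audio/video output
```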
  • FIG. 32 is a diagram for explaining preparations for synchronized playback in the third embodiment.
  • the receiver 1800a adjusts the time of the clock provided in the receiver 1800a to the time of the reference clock in order to perform synchronized playback. For this time adjustment, the receiver 1800a performs the following processes (1) to (5).
  • the receiver 1800a receives a signal.
  • This signal may be a visible light signal transmitted by a change in luminance of the display of the transmitter 1800d, or a radio wave signal based on Wi-Fi or Bluetooth (registered trademark) from a wireless device.
  • Alternatively, the receiver 1800a acquires position information indicating its own position by, for example, GPS, instead of receiving such a signal. The receiver 1800a then recognizes, based on the position information, that it has entered a predetermined place or building.
  • When the receiver 1800a receives the above signal or recognizes that it has entered the predetermined place, it transmits a request signal requesting the data (related information) associated with the signal or place to the server (visible light ID resolution server) 1800f.
  • the server 1800f transmits the above-described data and a time adjustment request for causing the receiver 1800a to adjust the time to the receiver 1800a.
  • When receiving the data and the time adjustment request, the receiver 1800a transmits a time adjustment request to a GPS time server, an NTP server, or a base station of a telecommunications carrier.
  • Upon receiving the time adjustment request, the server or the base station transmits time data (time information) indicating the current time (the time of the reference clock, that is, the absolute time) to the receiver 1800a.
  • The receiver 1800a performs the time adjustment by setting its own clock to the current time indicated by the time data.
  • In this way, the clock provided in the receiver 1800a (terminal device) is synchronized with the reference clock using a GPS (Global Positioning System) radio wave or an NTP (Network Time Protocol) radio wave. Therefore, the receiver 1800a can reproduce the data associated with each time at the appropriate moment according to the reference clock.
  • FIG. 33 is a diagram illustrating an example of application of the receiver 1800a in the third embodiment.
  • the receiver 1800a is configured as a smartphone as described above, and is used by being held by a holder 1810 formed of, for example, a translucent resin or glass member.
  • the holder 1810 includes a back plate portion 1810a and a locking portion 1810b provided upright on the back plate portion 1810a.
  • the receiver 1800a is inserted between the back plate portion 1810a and the locking portion 1810b so as to be along the back plate portion 1810a.
  • FIG. 34A is a front view of receiver 1800a held by holder 1810 in the third embodiment.
  • the receiver 1800a is held by the holder 1810 in the inserted state as described above.
  • That is, the locking portion 1810b engages the lower portion of the receiver 1800a and sandwiches that lower portion against the back plate portion 1810a.
  • the back surface of the receiver 1800a faces the back plate portion 1810a, and the display 1801 of the receiver 1800a is exposed.
  • FIG. 34B is a rear view of receiver 1800a held by holder 1810 in the third embodiment.
  • a through hole 1811 is formed in the back plate portion 1810a, and a variable filter 1812 is attached in the vicinity of the through hole 1811.
  • The camera 1802 of the receiver 1800a is exposed from the back plate portion 1810a through the through hole 1811.
  • the flashlight 1803 of the receiver 1800a faces the variable filter 1812.
  • the variable filter 1812 is formed in a disk shape, for example, and has three color filters (a red filter, a yellow filter, and a green filter) each having a fan shape and the same size.
  • the variable filter 1812 is attached to the back plate portion 1810a so as to be rotatable about the center of the variable filter 1812.
  • The red filter is a filter that transmits red light, the yellow filter is a filter that transmits yellow light, and the green filter is a filter that transmits green light.
  • When the variable filter 1812 is rotated so that, for example, the red filter faces the flashlight 1803a, the light emitted from the flashlight 1803a passes through the red filter and is diffused inside the holder 1810 as red light, so that substantially the entire holder 1810 glows red. Likewise, when the yellow filter is placed facing the flashlight 1803a, the light passes through the yellow filter and is diffused inside the holder 1810 as yellow light, so that substantially the entire holder 1810 glows yellow; and when the green filter is placed facing the flashlight 1803a, the light is diffused inside the holder 1810 as green light, so that substantially the entire holder 1810 glows green. In this way, the holder 1810 lights up in red, yellow, or green like a penlight.
  • FIG. 35 is a diagram for describing a use case of the receiver 1800a held by the holder 1810 in the third embodiment.
  • A receiver with holder, that is, a receiver 1800a held by a holder 1810, is used, for example, in an amusement park: a plurality of receivers with holders pointed toward a float moving through the park blink in synchronization with the music playing from the float.
  • the float is configured as a transmitter in each of the above embodiments, and transmits a visible light signal by a change in luminance of a light source attached to the float.
  • For example, the float transmits a visible light signal indicating the ID of the float, and the receiver with holder receives the visible light signal, that is, the ID, by imaging the float.
  • the receiver 1800a that has received the ID acquires a program associated with the ID from, for example, a server.
  • This program includes instructions for turning on the flashlight 1803 of the receiver 1800a at each predetermined time. Each predetermined time is set in accordance with the music flowing from the float (so as to be synchronized). Then, the receiver 1800a blinks the flashlight 1803a according to the program.
  • As a result, each receiver 1800a that has received the ID lights up repeatedly at the same timing, in step with the music playing from the float with that ID.
  • each receiver 1800a blinks the flashlight 1803 in accordance with a set color filter (hereinafter referred to as a setting filter).
  • the setting filter is a color filter that faces the flashlight 1803 of the receiver 1800a.
  • Each receiver 1800a recognizes the current setting filter based on an operation by the user. Alternatively, each receiver 1800a recognizes the current setting filter based on the color of an image obtained by photographing with the camera 1802.
  • That is, in the same manner as the synchronized playback described above, each receiver 1800a held in a holder 1810 blinks its flashlight 1803, and hence the holder 1810, in synchronization with the float's music and with the receivers 1800a held in the other holders 1810.
  • FIG. 36 is a flowchart showing the processing operation of the receiver 1800a held by the holder 1810 in the third embodiment.
  • the receiver 1800a receives the float ID indicated by the visible light signal from the float (step S1831). Next, the receiver 1800a acquires a program associated with the ID from the server (step S1832). Next, the receiver 1800a executes the program to turn on the flashlight 1803 at each predetermined time according to the setting filter (step S1833).
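The program executed in step S1833 might be sketched as follows, assuming it amounts to a list of lighting times aligned with the start of the float's music; the `turn_on` callback and the timing source are illustrative placeholders, not part of the disclosure.

```python
import time

def run_blink_program(lighting_times_s, music_start_s, turn_on, on_duration_s=0.2):
    """Flash the flashlight at each predetermined time set to match the music."""
    for t in lighting_times_s:              # times relative to the music start
        delay = (music_start_s + t) - time.time()
        if delay > 0:
            time.sleep(delay)               # wait for the next scheduled flash
        turn_on(on_duration_s)              # flash the flashlight 1803
```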
  • the receiver 1800a may cause the display 1801 to display an image corresponding to the received ID or the acquired program.
  • FIG. 37 is a diagram illustrating an example of an image displayed by the receiver 1800a according to the third embodiment.
  • For example, when the receiver 1800a receives an ID from a Santa Claus float, it displays a Santa Claus image, as shown in FIG. 37A. Further, as shown in FIG. 37B, the receiver 1800a may change the background color of the Santa Claus image to the color of the setting filter at the same moment the flashlight 1803 lights up. For example, when the setting filter is red, turning on the flashlight 1803 lights the holder 1810 red, and at the same time a Santa Claus image with a red background is displayed on the display 1801. That is, the blinking of the holder 1810 and the display on the display 1801 are synchronized.
  • FIG. 38 is a diagram showing another example of the holder in the third embodiment.
  • the holder 1820 is configured in the same manner as the holder 1810 described above, but does not include the through hole 1811 and the variable filter 1812.
  • a holder 1820 holds the receiver 1800a in a state where the display 1801 of the receiver 1800a is directed to the back plate portion 1820a.
  • the receiver 1800a causes the display 1801 to emit light instead of the flashlight 1803.
  • Light from the display 1801 is diffused over substantially the entire holder 1820. Therefore, when the receiver 1800a causes the display 1801 to emit red light according to the above-described program, the holder 1820 lights up red; when it emits yellow light, the holder 1820 lights up yellow; and when it emits green light, the holder 1820 lights up green. With such a holder 1820, setting the variable filter 1812 becomes unnecessary.
  • FIG. 39A to FIG. 39D are diagrams illustrating examples of visible light signals in the third embodiment.
  • the transmitter generates a 4PPM visible light signal and changes the luminance according to the visible light signal, for example, as shown in FIG. 39A.
  • the transmitter allocates 4 slots to one signal unit, and generates a visible light signal composed of a plurality of signal units.
  • the signal unit indicates High (H) or Low (L) for each slot.
  • the transmitter emits light brightly in the H slot and emits light darkly or extinguishes in the L slot.
  • one slot is a period corresponding to a time of 1/9600 seconds.
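A 4PPM modulator in the style of FIG. 39A can be sketched as follows, assuming the common convention that each 4-slot signal unit is High except for a single Low slot whose position encodes two bits; the disclosure does not fix this bit-to-slot mapping, so treat it as illustrative.

```python
SLOT_S = 1 / 9600  # one slot corresponds to 1/9600 seconds

def modulate_4ppm(bits):
    """Turn a bit string into a list of (level, duration_s) pairs.
    'H' means emit light brightly; 'L' means emit dimly or turn off."""
    assert len(bits) % 2 == 0
    waveform = []
    for i in range(0, len(bits), 2):
        symbol = int(bits[i:i + 2], 2)      # 0..3 selects the Low slot position
        for slot in range(4):
            level = 'L' if slot == symbol else 'H'
            waveform.append((level, SLOT_S))
    return waveform

# Example: '01' places the Low slot at position 1 of the first 4-slot unit.
print(modulate_4ppm('0110'))
```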
  • the transmitter may generate a visible light signal in which the number of slots allocated to one signal unit is variable.
  • the signal unit includes a signal indicating H in one or more consecutive slots and a signal indicating L in one slot following the H signal. Since the number of slots of H is variable, the total number of slots in the signal unit is variable.
  • For example, as shown in FIG. 39B, the transmitter generates a visible light signal containing a 3-slot signal unit, a 4-slot signal unit, and a 6-slot signal unit, in that order. In this case as well, the transmitter emits light brightly in an H slot and emits light dimly or turns off in an L slot.
  • the transmitter may allocate an arbitrary period (signal unit period) to one signal unit without allocating a plurality of slots to one signal unit.
  • Each signal unit period consists of an H period and an L period that follows it. The H period is adjusted according to the signal before modulation, while the L period may be fixed, for example to a period corresponding to one slot.
  • The H period and the L period are each, for example, 100 μs or longer. For example, as shown in FIG. 39C, the transmitter transmits a visible light signal containing signal units whose signal unit periods are 210 μs, 220 μs, and 230 μs, in that order.
  • the transmitter emits light brightly during the H period and emits light darkly or extinguishes during the L period.
  • the transmitter may generate a signal indicating L and H alternately as a visible light signal.
  • the L period and the H period in the visible light signal are adjusted according to the signals before modulation.
  • For example, as shown in FIG. 39D, the transmitter transmits a visible light signal that indicates H for a period of 100 μs, then L for a period of 120 μs, then H for a period of 110 μs, and then L for a period of 200 μs.
  • the transmitter emits light brightly during the H period and emits light darkly or extinguishes during the L period.
  • FIG. 40 is a diagram showing a configuration of a visible light signal in the third embodiment.
  • the visible light signal includes, for example, a signal 1, a brightness adjustment signal corresponding to the signal 1, a signal 2, and a brightness adjustment signal corresponding to the signal 2.
  • When the transmitter generates signal 1 and signal 2 by modulating the pre-modulation signals, it also generates a brightness adjustment signal for each of them and thereby generates the above-described visible light signal.
  • the brightness adjustment signal corresponding to signal 1 is a signal that compensates for increase / decrease in brightness due to a luminance change according to signal 1.
  • the brightness adjustment signal corresponding to the signal 2 is a signal that compensates for increase / decrease in brightness due to a luminance change according to the signal 2.
  • That is, brightness B1 is expressed by the luminance change according to signal 1 together with the brightness adjustment signal for signal 1, and brightness B2 is expressed by the luminance change according to signal 2 together with the brightness adjustment signal for signal 2. The transmitter in the present embodiment generates the brightness adjustment signals for signal 1 and signal 2 as part of the visible light signal so that brightness B1 and brightness B2 are equal. Thereby, the overall brightness is kept constant and flicker can be suppressed.
  • Further, when the transmitter generates signal 1, it generates signal 1 consisting of data 1, a preamble (header) following data 1, and data 1 again following the preamble.
  • The preamble is a signal corresponding to the data 1 arranged before and after it; for example, the preamble serves as an identifier for reading data 1.
  • As described above, the reproduction method according to the present embodiment includes a signal receiving step in which a sensor of the terminal device receives the visible light signal from a transmitter that transmits the visible light signal according to a luminance change of its light source; a transmitting step in which the terminal device transmits to the server a request signal requesting the content associated with the visible light signal; a content receiving step in which the terminal device receives from the server content including each time and the data reproduced at that time; and a reproduction step of reproducing the data corresponding to the time of the clock of the terminal device. Thereby, the terminal device can reproduce the data in the content appropriately at the correct time indicated by the content, rather than at a wrong time.
  • For example, the receiver as the terminal device reproduces the content from the position (receiver time - content playback start time); in other words, the data corresponding to the clock time of the terminal device is the data at the position (receiver time - content playback start time) in the content. Thereby, the terminal device can play back the content appropriately synchronized with the transmitter-side content.
  • the content is sound or image.
  • the clock provided in the terminal device and the reference clock may be synchronized with each other by GPS (Global Positioning System) radio waves or NTP (Network Time Protocol) radio waves.
  • the visible light signal may indicate a time when the visible light signal is transmitted from the transmitter.
  • the terminal device can receive the content associated with the time (transmitter time) at which the visible light signal is transmitted from the transmitter. For example, if the transmitter time is 5:43, content played back at 5:43 can be received.
  • Further, if the process of synchronizing the clock of the terminal device with the reference clock by GPS radio waves or NTP radio waves was last performed more than a predetermined time before the time at which the terminal device receives the visible light signal, the clock of the terminal device may instead be synchronized with the clock of the transmitter according to the time indicated by the visible light signal transmitted from the transmitter. That is, when a long time has passed since the last synchronization with the reference clock, that synchronization may no longer be properly maintained, and the terminal device might not be able to reproduce the content at a time synchronized with the transmitter-side content reproduced by the transmitter. Therefore, in the playback method according to one aspect of the present invention, as shown in steps S1829 and S1830 of FIG. 30, when the predetermined time has elapsed, the clock of the terminal device (receiver) is synchronized with the clock of the transmitter, so that the terminal device can reproduce the content at a time synchronized with the transmitter-side content reproduced by the transmitter.
  • Further, the server has a plurality of contents, each associated with a time; in the content receiving step, when content associated with the time indicated by the visible light signal does not exist in the server, the content that is associated with the time after, and closest to, the time indicated by the visible light signal may be received from among the plurality of contents.
  • The reproduction method according to another aspect includes a signal receiving step in which a sensor of the terminal device receives the visible light signal from a transmitter that transmits the visible light signal through a luminance change of its light source, a transmitting step in which the terminal device transmits to the server a request signal requesting the content associated with the visible light signal, a content receiving step in which the terminal device receives the content from the server, and a reproduction step of reproducing the content, wherein the visible light signal indicates ID information and the time at which the visible light signal was transmitted from the transmitter. In the content receiving step, the content associated with the ID information and the time indicated by the visible light signal may be received.
  • Thereby, among the plurality of contents associated with the ID information (transmitter ID), the content associated with the time at which the visible light signal was transmitted from the transmitter (the transmitter time) is received and played back. Therefore, content appropriate for the transmitter ID and the transmitter time can be reproduced.
  • Further, the visible light signal may indicate the time at which it is transmitted from the transmitter by including second information indicating the hour and minute of the time and first information indicating the second of the time; in the signal receiving step, the second information may be received, and the first information may be received a greater number of times than the second information.
  • Thereby, transmitting every second a packet that expresses the current time using all of the hour, minute, and second can be avoided. That is, as shown in FIG. 26, if the hour and minute of the time at which a packet is transmitted have not changed from the hour and minute indicated in the previously transmitted packet, only the packet indicating the second (time packet 1), that is, only the first information, needs to be transmitted. Therefore, by having the transmitter send the second information, the packet indicating the hour and minute (time packet 2), less often than the first information, the packet indicating the second (time packet 1), the transmission of packets with redundant contents can be suppressed.
  • FIG. 41 is a diagram illustrating an example in which the receiver according to the present embodiment displays an AR image.
  • the receiver 200 is a receiver including the image sensor and the display 201 according to any one of the first to third embodiments, and is configured as a smartphone, for example.
  • Such a receiver 200 acquires the above-described captured display image Pa, which is the normal captured image, and the above-described decoding image, which is the visible light communication image or the bright line image, by capturing the subject with the image sensor.
  • the image sensor of the receiver 200 images the transmitter 100 configured as a station name sign.
  • the transmitter 100 is the transmitter according to any one of the first to third embodiments, and includes one or a plurality of light emitting elements (for example, LEDs).
  • the transmitter 100 changes in luminance by blinking one or more light emitting elements, and transmits an optical ID (light identification information) by the change in luminance.
  • This light ID is the above-mentioned visible light signal.
  • That is, the receiver 200 acquires the captured display image Pa, in which the transmitter 100 appears, by imaging the transmitter 100 with the normal exposure time, and acquires the decoding image by imaging the transmitter 100 with a communication exposure time shorter than the normal exposure time. The normal exposure time is the exposure time in the above-described normal photographing mode, and the communication exposure time is the exposure time in the above-described visible light communication mode.
  • the receiver 200 acquires the optical ID by decoding the decoding image. That is, the receiver 200 receives the optical ID from the transmitter 100. The receiver 200 transmits the optical ID to the server. Then, the receiver 200 acquires the AR image P1 corresponding to the optical ID and the recognition information from the server. The receiver 200 recognizes an area corresponding to the recognition information in the captured display image Pa as a target area. For example, the receiver 200 recognizes an area in which a station name sign that is the transmitter 100 is displayed as a target area. Then, the receiver 200 superimposes the AR image P1 on the target area, and displays the captured display image Pa on which the AR image P1 is superimposed on the display 201.
  • For example, when “Kyoto Station” is written in Japanese as the station name on the station name sign that is the transmitter 100, the receiver 200 acquires an AR image P1 in which the station name is written in English, that is, “Kyoto Station”. Since this AR image P1 is superimposed on the target area of the captured display image Pa, the captured display image Pa can be displayed as if a station name sign written in English actually existed.
  • Thereby, even a user who cannot read Japanese can, as long as they understand English, easily understand the station name written on the station name sign that is the transmitter 100 by looking at the captured display image Pa.
  • the recognition information may be an image to be recognized (for example, an image of the above-described station name sign), or may be a feature point and a feature amount of the image.
  • The feature points and feature amounts are obtained by image processing such as SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), or AKAZE (Accelerated KAZE).
  • Alternatively, the recognition information may be a white rectangular image similar to the image to be recognized, and may further indicate the aspect ratio of that rectangle; or the recognition information may be random dots appearing in the recognition target image.
  • the recognition information may indicate a direction based on a predetermined direction, such as the above-described white square or random dot.
  • the predetermined direction is, for example, the direction of gravity.
  • the receiver 200 recognizes an area corresponding to such recognition information as a target area from the captured display image Pa. Specifically, if the recognition information is an image, the receiver 200 recognizes a region similar to the image that is the recognition information as a target region. If the recognition information is a feature point and a feature amount obtained by image processing, the receiver 200 performs feature point detection and feature amount extraction by performing the image processing on the captured display image Pa. . Then, the receiver 200 recognizes, in the captured display image Pa, a region having feature points and feature amounts that are similar to the feature points and feature amounts that are recognition information as target regions. If the recognition information indicates a white square and its direction, the receiver 200 first detects the direction of gravity using an acceleration sensor provided in the receiver 200. Then, the receiver 200 recognizes, as a target area, an area similar to a white square directed in the direction indicated by the recognition information from the captured display image Pa arranged with reference to the direction of gravity.
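As one concrete, hedged example of the feature-based matching described above, the following sketch uses ORB features (one of the methods named earlier) via OpenCV to locate the target area; the disclosure does not specify the receiver's pipeline to this level of detail, so all names and thresholds here are illustrative.

```python
import cv2
import numpy as np

def find_target_area(captured_display_image, reference_image, min_matches=10):
    """Locate the target area by matching ORB features against the reference image."""
    orb = cv2.ORB_create()
    kp_ref, des_ref = orb.detectAndCompute(reference_image, None)
    kp_img, des_img = orb.detectAndCompute(captured_display_image, None)
    if des_ref is None or des_img is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_img), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None  # the recognition information did not match well enough
    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_img[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    h, w = reference_image.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    # Corners of the recognized target area in the captured display image,
    # onto which the AR image would then be superimposed.
    return cv2.perspectiveTransform(corners, H)
```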
  • the recognition information may include reference information for specifying a reference area in the captured display image Pa and target information indicating a relative position of the target area with respect to the reference area.
  • the reference information is an image to be recognized, a feature point and a feature amount, a white square image, or a random dot as described above.
  • When recognizing the target area, the receiver 200 first specifies the reference area in the captured display image Pa based on the reference information, and then recognizes, as the target area, the area located at the relative position indicated by the target information with respect to that reference area.
  • the target information may indicate that the target area is in the same position as the reference area.
  • Since the recognition information includes the reference information and the target information in this way, the target area can be recognized over a wide range. Further, the server can freely set the location at which the AR image is superimposed and indicate that location to the receiver 200.
  • The reference information may indicate that the reference area in the captured display image Pa is the area in which a display appears in the captured display image. Thereby, when the transmitter 100 is configured as a display such as a television, for example, the target area can be recognized with reference to the area in which that display appears.
  • the receiver 200 in the present embodiment specifies the reference image and the image recognition method based on the light ID.
  • the image recognition method is a method for recognizing the captured display image Pa, for example, geometric feature extraction, spectral feature extraction, texture feature extraction, or the like.
  • the reference image is data indicating a reference feature amount.
  • the feature amount is, for example, the feature amount of the white outer frame of the image, and specifically, may be data expressing the feature of the image as a vector.
  • the receiver 200 extracts feature amounts from the captured display image Pa in accordance with the image recognition method and compares them with the feature amounts of the reference image, thereby finding the above-described reference region or target region in the captured display image Pa.
  • the image recognition method may include, for example, a location use method, a marker use method, and a markerless method.
  • the location utilization method is a method utilizing GPS position information (that is, the position of the receiver 200), and the target area is recognized from the captured display image Pa based on the position information.
  • the marker utilization method is a method of using a marker composed of white and black graphics such as a two-dimensional barcode as a target specifying mark. That is, in this marker usage method, the target region is recognized based on the marker displayed in the captured display image Pa.
  • the markerless method is a method in which feature points or feature amounts are extracted from the captured display image Pa by image analysis, and the position and region of the target are specified based on the extracted feature points or feature amounts. That is, when the image recognition method is the markerless method, the image recognition method is the above-described geometric feature amount extraction, spectral feature amount extraction, texture feature amount extraction, or the like.
  • such a receiver 200 may specify the reference image and the image recognition method by receiving an optical ID from the transmitter 100 and acquiring, from the server, the reference image and the image recognition method associated with that optical ID (hereinafter referred to as the received optical ID). That is, the server stores a plurality of sets each including a reference image and an image recognition method, and each of the plurality of sets is associated with a different light ID. Thus, the one set associated with the received light ID can be identified from among the plurality of sets stored in the server, so the speed of the image processing for superimposing the AR image can be improved.
  • the receiver 200 may acquire the reference image associated with the received light ID by inquiring of the server, or may acquire the reference image associated with the received light ID from among a plurality of reference images that the receiver 200 holds in advance.
  • the server may hold the relative position information associated with each light ID together with the reference image, the image recognition method, and the AR image for each light ID.
  • the relative position information is information indicating the relative positional relationship between the reference area and the target area, for example.
  • the receiver 200 may recognize the above-described reference area itself as the target area and superimpose the AR image on that reference area. That is, instead of acquiring the relative position information, the receiver 200 may store in advance a program for displaying the AR image based on the reference image, and display the AR image, for example, in a white frame serving as the reference region. In this case, the relative position information is unnecessary.
  • the server holds a plurality of sets including a reference image, relative position information, an AR image, and an image recognition method.
  • the receiver 200 acquires one set associated with the received light ID from these sets.
  • the server holds a plurality of sets including a reference image and an AR image.
  • the receiver 200 uses predetermined relative position information and a predetermined image recognition method, and acquires the one set associated with the received light ID from these sets.
  • the receiver 200 may hold a plurality of sets including the relative position information and the image recognition method in advance, and select one set associated with the received light ID from the plurality of sets.
  • the receiver 200 may make an inquiry by transmitting the received light ID to the server, and acquire from the server the relative position information corresponding to the received light ID and information specifying the image recognition method. The receiver 200 then selects, from among a plurality of sets each including relative position information and an image recognition method, the one set matching the information acquired from the server.
  • alternatively, the receiver 200 may select, without inquiring of the server, the one set associated with the received light ID from among a plurality of sets each including relative position information and an image recognition method stored in advance.
  • the receiver 200 holds a plurality of sets each including a reference image, relative position information, an AR image, and an image recognition method, and selects one set from these sets. As in (2) above, the receiver 200 may select the set by inquiring of the server, or may select the set associated with the received optical ID.
  • the receiver 200 holds a plurality of sets including the reference image and the AR image, and selects one set associated with the received light ID.
  • the receiver 200 uses a predetermined image recognition method and relative position information.
  • FIG. 42 is a diagram showing an example of the display system in the present embodiment.
  • the display system in the present embodiment includes, for example, the transmitter 100, which is the above-described station name sign, the receiver 200, and the server 300.
  • the receiver 200 first receives an optical ID from the transmitter 100 in order to display the captured display image on which the AR image is superimposed as described above. Next, the receiver 200 transmits the optical ID to the server 300.
  • the server 300 holds, for each light ID, an AR image and recognition information associated with that light ID. Therefore, when the server 300 receives the optical ID from the receiver 200, it selects the AR image and the recognition information associated with the received optical ID and transmits them to the receiver 200. Thereby, the receiver 200 receives the AR image and the recognition information transmitted from the server 300, and displays the captured display image on which the AR image is superimposed.
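  • In essence, the server's work in FIG. 42 is a table lookup keyed by the light ID. A minimal sketch, with illustrative IDs, file names, and recognition-information entries:

```python
# The server keeps, per light ID, the associated AR image and recognition
# information; answering a request is a dictionary lookup, not image matching.
AR_DATABASE = {
    0x1234: {"ar_image": "station_sign_english.png",
             "recognition_info": {"type": "white_square", "aspect_ratio": 3.0}},
}

def handle_light_id_request(light_id):
    entry = AR_DATABASE.get(light_id)
    if entry is None:
        return None  # unknown light ID
    return entry["ar_image"], entry["recognition_info"]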
  • FIG. 43 is a diagram showing another example of the display system in the present embodiment.
  • the display system according to the present embodiment includes, for example, the transmitter 100 that is the above-described station name sign, the receiver 200, the first server 301, and the second server 302.
  • the receiver 200 first receives an optical ID from the transmitter 100 in order to display the captured display image on which the AR image is superimposed as described above. Next, the receiver 200 transmits the optical ID to the first server 301.
  • when receiving the optical ID from the receiver 200, the first server 301 notifies the receiver 200 of a URL (Uniform Resource Locator) and a Key associated with the received optical ID. On receiving this notification, the receiver 200 accesses the second server 302 based on the URL and passes the Key to the second server 302.
  • the second server 302 holds, for each key, an AR image and recognition information associated with that key. Therefore, when receiving a Key from the receiver 200, the second server 302 selects the AR image and the recognition information associated with the Key and transmits them to the receiver 200. Thereby, the receiver 200 receives the AR image and the recognition information transmitted from the second server 302, and displays the captured display image on which the AR image is superimposed.
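  • A minimal sketch of the two-server indirection of FIG. 43 (all tables, URLs, and keys are placeholders; in practice the receiver would issue an HTTP request to the notified URL):

```python
# First server: light ID -> (URL of second server, Key).
ID_TO_LOCATION = {0x1234: ("https://second.example/ar", "key-abc")}
# Second server: Key -> (AR image, recognition information).
KEY_TO_CONTENT = {"key-abc": ("ar_image.png", {"type": "white_square"})}

def first_server_resolve(light_id):
    return ID_TO_LOCATION.get(light_id)

def second_server_fetch(key):
    return KEY_TO_CONTENT.get(key)

# Receiver side: resolve the ID, then fetch the content by Key.
url, key = first_server_resolve(0x1234)
ar_image, recognition_info = second_server_fetch(key)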
  • FIG. 44 is a diagram showing another example of the display system in the present embodiment.
  • the display system according to the present embodiment includes, for example, the transmitter 100 that is the above-described station name sign, the receiver 200, the first server 301, and the second server 302.
  • the receiver 200 first receives an optical ID from the transmitter 100 in order to display the captured display image on which the AR image is superimposed as described above. Next, the receiver 200 transmits the optical ID to the first server 301.
  • when receiving the optical ID from the receiver 200, the first server 301 notifies the second server 302 of the Key associated with the received optical ID.
  • the second server 302 holds, for each key, an AR image and recognition information associated with that key. Therefore, when the second server 302 receives the Key from the first server 301, it selects the AR image and the recognition information associated with the Key and transmits them to the first server 301. When receiving the AR image and the recognition information from the second server 302, the first server 301 transmits them to the receiver 200. Thereby, the receiver 200 receives the AR image and the recognition information transmitted from the first server 301, and displays the captured display image on which the AR image is superimposed.
  • in the above example, the second server 302 transmits the AR image and the recognition information to the first server 301; however, the second server 302 may instead transmit the AR image and the recognition information directly to the receiver 200 without going through the first server 301.
  • FIG. 45 is a flowchart showing an example of the processing operation of the receiver 200 in the present embodiment.
  • the receiver 200 starts imaging with the above-described normal exposure time and communication exposure time (step S101). Then, the receiver 200 acquires an optical ID by decoding the decoding image obtained by imaging with the communication exposure time (step S102). Next, the receiver 200 transmits the optical ID to the server (step S103).
  • the receiver 200 acquires the AR image corresponding to the transmitted optical ID and the recognition information from the server (step S104). Next, the receiver 200 recognizes, as a target area, an area corresponding to the recognition information in the captured display image obtained by imaging with the normal exposure time (step S105). Then, the receiver 200 superimposes the AR image on the target area, and displays the captured display image on which the AR image is superimposed (step S106).
  • the receiver 200 determines whether or not the imaging and the display of the captured display image should be terminated (step S107).
  • if the receiver 200 determines that they should not be terminated (N in step S107), it further determines whether or not the acceleration of the receiver 200 is equal to or greater than a threshold value (step S108). This acceleration is measured by an acceleration sensor provided in the receiver 200.
  • when the receiver 200 determines that the acceleration is less than the threshold (N in step S108), the receiver 200 executes the processing from step S105. Thereby, even when the captured display image displayed on the display 201 of the receiver 200 shifts, the AR image can follow the target area of the captured display image.
  • when the receiver 200 determines that the acceleration is equal to or greater than the threshold (Y in step S108), the receiver 200 executes the processing from step S102. Thereby, when the transmitter 100 is no longer displayed in the captured display image, it is possible to suppress erroneously recognizing an area where a subject different from the transmitter 100 appears as the target area.
  • since the AR image is displayed superimposed on the captured display image in this way, an image useful to the user can be displayed. Furthermore, the AR image can be superimposed on an appropriate target area while the processing load is suppressed.
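  • The control flow of FIG. 45 can be summarized as the sketch below; the camera, decoder, server, and sensor are supplied as callables, and the acceleration threshold is an illustrative value:

```python
ACCEL_THRESHOLD = 2.0  # illustrative; the patent does not specify a value

def receiver_loop(capture_decoding, capture_display, decode, query_server,
                  recognize_target, render, should_stop, read_accel):
    """Control flow of FIG. 45; every callable is supplied by the caller."""
    while True:
        light_id = decode(capture_decoding())             # S101-S102
        ar_image, recog_info = query_server(light_id)     # S103-S104
        while True:
            frame = capture_display()
            target = recognize_target(frame, recog_info)  # S105
            render(frame, ar_image, target)               # S106
            if should_stop():                             # S107
                return
            if read_accel() >= ACCEL_THRESHOLD:           # S108: large movement,
                break                                     # so re-acquire the light ID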
  • in general augmented reality (that is, AR), a huge number of recognition target images stored in advance are compared with the captured display image to determine whether any of the recognition target images is included in the captured display image. If it is determined that a recognition target image is included, the AR image corresponding to that recognition target image is superimposed on the captured display image, aligned with reference to the recognition target image.
  • in such general augmented reality, a huge number of recognition target images must be compared with the captured display image, and the position of the recognition target image in the captured display image must also be detected for alignment, so there is a problem of a large amount of calculation and a high processing load.
  • the light ID is acquired by decoding the decoding image obtained by imaging the subject. That is, the optical ID transmitted from the transmitter that is the subject is received. Furthermore, the AR image corresponding to this optical ID and the recognition information are acquired from the server. Therefore, the server does not need to compare a huge number of recognition target images and captured display images, and can select and transmit an AR image previously associated with the optical ID to the display device. Thereby, the amount of calculation can be reduced and the processing load can be significantly suppressed. Furthermore, the AR image display process can be speeded up.
  • recognition information corresponding to this optical ID is acquired from the server.
  • the recognition information is information for recognizing a target area that is an area in which an AR image is superimposed in a captured display image.
  • This recognition information may be information indicating that a white square is the target area, for example.
  • the target area can be easily recognized, and the processing load can be further suppressed. That is, the processing load can be further suppressed according to the content of the recognition information.
  • since the server can arbitrarily set the content of the recognition information according to the optical ID, the balance between the processing load and the recognition accuracy can be maintained appropriately.
  • the receiver 200 acquires the AR image and the recognition information corresponding to the optical ID from the server.
  • however, at least one of the AR image and the recognition information may be acquired in advance. That is, the receiver 200 acquires from the server, and stores, a plurality of AR images and a plurality of pieces of recognition information corresponding to a plurality of optical IDs that may be received. Thereafter, when receiving an optical ID, the receiver 200 selects the AR image and the recognition information corresponding to that optical ID from among the stored AR images and recognition information. Thereby, the display processing of the AR image can be further accelerated.
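  • A minimal sketch of this prefetching variant (class and method names are illustrative):

```python
class ARCache:
    """Prefetches AR images and recognition information for likely light IDs."""
    def __init__(self, server):
        self.server = server
        self.cache = {}

    def prefetch(self, likely_light_ids):
        for light_id in likely_light_ids:
            self.cache[light_id] = self.server.query(light_id)

    def lookup(self, light_id):
        # Fall back to the server only on a cache miss.
        if light_id not in self.cache:
            self.cache[light_id] = self.server.query(light_id)
        return self.cache[light_id]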
  • FIG. 46 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
  • the transmitter 100 is configured as a lighting device, and transmits a light ID by changing the luminance while illuminating the facility guide plate 101. Since the guide plate 101 is illuminated by the light from the transmitter 100, the luminance changes in the same manner as the transmitter 100, and the light ID is transmitted.
  • the receiver 200 acquires the captured display image Pb and the decoding image in the same manner as described above by imaging the guide plate 101 illuminated by the transmitter 100.
  • the receiver 200 acquires the optical ID by decoding the decoding image. That is, the receiver 200 receives the optical ID from the guide plate 101.
  • the receiver 200 transmits the optical ID to the server.
  • the receiver 200 acquires the AR image P2 and the recognition information corresponding to the optical ID from the server.
  • the receiver 200 recognizes an area corresponding to the recognition information in the captured display image Pb as a target area. For example, the receiver 200 recognizes the area where the frame 102 on the guide plate 101 is projected as the target area. This frame 102 is a frame for indicating the waiting time of the facility.
  • the receiver 200 superimposes the AR image P2 on the target area, and displays the captured display image Pb on which the AR image P2 is superimposed on the display 201.
  • the AR image P2 is an image including the character string “30 minutes”.
  • since the AR image P2 is superimposed on the target area of the captured display image Pb, the receiver 200 can display the captured display image Pb as if a guide plate 101 on which the waiting time “30 minutes” is written actually existed. Thereby, without providing a special display device on the guide plate 101, the waiting time can be conveyed to the user of the receiver 200 simply and plainly.
  • FIG. 47 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
  • the transmitter 100 includes two illumination devices as shown in FIG. 47, for example.
  • the transmitter 100 transmits the light ID by changing the luminance while illuminating the facility guide plate 104. Since the guide plate 104 is illuminated by the light from the transmitter 100, the luminance changes in the same manner as the transmitter 100, and the light ID is transmitted.
  • the guide plate 104 indicates names of a plurality of facilities such as “ABC land” and “adventure land”.
  • the receiver 200 acquires the captured display image Pc and the decoding image in the same manner as described above by imaging the guide plate 104 illuminated by the transmitter 100.
  • the receiver 200 acquires the optical ID by decoding the decoding image. That is, the receiver 200 receives the optical ID from the guide plate 104.
  • the receiver 200 transmits the optical ID to the server.
  • the receiver 200 acquires the AR image P3 and the recognition information corresponding to the optical ID from the server.
  • the receiver 200 recognizes an area corresponding to the recognition information in the captured display image Pc as a target area. For example, the receiver 200 recognizes an area where the guide plate 104 is projected as a target area.
  • the receiver 200 superimposes the AR image P3 on the target area, and displays the captured display image Pc on which the AR image P3 is superimposed on the display 201.
  • the AR image P3 is an image indicating names of a plurality of facilities.
  • the longer the waiting time of a facility, the smaller its name is displayed.
  • the shorter the waiting time of a facility, the larger its name is displayed.
  • since the AR image P3 is superimposed on the target area of the captured display image Pc, the receiver 200 can display the captured display image Pc as if a guide plate 104 on which each facility name is written at a size corresponding to its waiting time actually existed. Thereby, without providing a special display device on the guide plate 104, the waiting time of each facility can be conveyed to the user of the receiver 200 simply and plainly.
  • FIG. 48 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
  • the transmitter 100 includes two illumination devices as shown in FIG. 48, for example.
  • the transmitter 100 transmits the light ID by changing the luminance while illuminating the castle wall 105. Since the castle wall 105 is illuminated by the light from the transmitter 100, the luminance is changed in the same manner as the transmitter 100, and the light ID is transmitted. Further, on the castle wall 105, for example, a small mark imitating the character's face is engraved as a hidden character 106.
  • the receiver 200 acquires the captured display image Pd and the decoding image in the same manner as described above by capturing an image of the castle wall 105 illuminated by the transmitter 100.
  • the receiver 200 acquires the optical ID by decoding the decoding image. That is, the receiver 200 receives the optical ID from the castle wall 105.
  • the receiver 200 transmits the optical ID to the server.
  • the receiver 200 acquires the AR image P4 corresponding to the optical ID and the recognition information from the server.
  • the receiver 200 recognizes an area corresponding to the recognition information in the captured display image Pd as a target area. For example, the receiver 200 recognizes, as the target area, an area in which a range including the hidden character 106 in the castle wall 105 is projected.
  • the receiver 200 superimposes the AR image P4 on the target area, and displays the captured display image Pd on which the AR image P4 is superimposed on the display 201.
  • the AR image P4 is an image imitating a character's face.
  • the AR image P4 is an image that is sufficiently larger than the hidden character 106 displayed in the captured display image Pd.
  • since the AR image P4 is superimposed on the target area, the receiver 200 can display the captured display image Pd as if a castle wall 105 engraved with a large mark imitating the character's face actually existed. Thereby, the position of the hidden character 106 can be conveyed to the user of the receiver 200 in an easy-to-understand manner.
  • FIG. 49 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
  • the transmitter 100 includes two illumination devices as shown in FIG. 49, for example.
  • the transmitter 100 transmits the light ID by changing the luminance while illuminating the facility guide plate 107. Since the guide plate 107 is illuminated by the light from the transmitter 100, the luminance changes in the same manner as the transmitter 100, and the light ID is transmitted.
  • an infrared shielding paint 108 is applied to a plurality of corners of the guide plate 107.
  • the receiver 200 acquires the captured display image Pe and the decoding image in the same manner as described above by imaging the guide plate 107 illuminated by the transmitter 100.
  • the receiver 200 acquires the optical ID by decoding the decoding image. That is, the receiver 200 receives the optical ID from the guide plate 107.
  • the receiver 200 transmits the optical ID to the server.
  • the receiver 200 acquires the AR image P5 and the recognition information corresponding to the optical ID from the server.
  • the receiver 200 recognizes an area corresponding to the recognition information in the captured display image Pe as a target area. For example, the receiver 200 recognizes an area where the guide plate 107 is projected as a target area.
  • the recognition information indicates that a rectangle circumscribing the plurality of infrared shielding paints 108 is the target region.
  • the infrared shielding paint 108 blocks infrared rays included in the light emitted from the transmitter 100. Therefore, the image sensor of the receiver 200 sees the infrared shielding paint 108 as an image darker than its surroundings.
  • the receiver 200 recognizes rectangles circumscribing the plurality of infrared shielding paints 108 that appear as dark images as target regions.
  • the receiver 200 superimposes the AR image P5 on the target area, and displays the captured display image Pe on which the AR image P5 is superimposed on the display 201.
  • the AR image P5 shows a schedule of events to be performed in the facility of the guide board 107.
  • since the AR image P5 is superimposed on the target area, the receiver 200 can display the captured display image Pe as if a guide plate 107 on which the event schedule is written actually existed. Thereby, without providing a special display device on the guide plate 107, the schedule of the facility's events can be conveyed to the user of the receiver 200 in an easy-to-understand manner.
  • an infrared reflecting paint may be applied to the guide plate 107 instead of the infrared shielding paint 108.
  • the infrared reflecting paint reflects infrared rays included in the light irradiated from the transmitter 100. Therefore, the image sensor of the receiver 200 recognizes the infrared reflective paint as an image brighter than the surrounding area. That is, in this case, the receiver 200 recognizes rectangles circumscribing a plurality of infrared reflective paints that appear as bright images as target regions.
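  • A hedged sketch of recognizing the target region from the paint spots (OpenCV; the threshold and minimum spot area are assumptions). For infrared-reflecting paint, which appears brighter than its surroundings, cv2.THRESH_BINARY with a high threshold would be used instead:

```python
import cv2
import numpy as np

def target_from_ir_paint(gray, dark_thresh=40, min_area=20):
    """Target region = rectangle circumscribing all dark paint spots."""
    # Infrared-shielding paint appears darker than its surroundings.
    _, mask = cv2.threshold(gray, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    spots = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not spots:
        return None
    return cv2.boundingRect(np.vstack(spots))  # (x, y, width, height)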
  • FIG. 50 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
  • the transmitter 100 is configured as a station name sign and is disposed near the station exit guide plate 110.
  • the station exit guide plate 110 includes a light source and emits light, but, unlike the transmitter 100, does not transmit an optical ID.
  • in the decoding image Pdec, a bright line pattern region Pdec1 corresponding to the transmitter 100 and a bright region Pdec2 corresponding to the station exit guide plate 110 appear.
  • the bright line pattern region Pdec1 is a region composed of a plurality of bright line patterns that appear by exposure at a communication exposure time of a plurality of exposure lines of the image sensor of the receiver 200.
  • the recognition information includes reference information for specifying the reference region Pbas in the captured display image Ppre, and target information indicating the relative position of the target region Ptar with respect to the reference region Pbas.
  • the reference information indicates that the position of the reference area Pbas in the captured display image Ppre is the same as the position of the bright line pattern area Pdec1 in the decoding image Pdec.
  • the target information indicates that the position of the target area is the position of the reference area.
  • the receiver 200 specifies the reference region Pbas from the captured display image Ppre based on the reference information. That is, the receiver 200 specifies, in the captured display image Ppre, an area that is at the same position as the bright line pattern area Pdec1 in the decoding image Pdec as the reference area Pbas. Further, the receiver 200 recognizes, as the target region Ptar, a region in the relative position indicated by the target information with reference to the position of the reference region Pbas in the captured display image Ppre. In the above example, since the target information indicates that the position of the target area Ptar is the position of the reference area Pbas, the receiver 200 recognizes the reference area Pbas in the captured display image Ppre as the target area Ptar.
  • the receiver 200 superimposes the AR image P1 on the target area Ptar in the captured display image Ppre.
  • the bright line pattern region Pdec1 is used to recognize the target region Ptar.
  • if the region in which the transmitter 100 appears were to be recognized as the target region Ptar from the captured display image Ppre alone, without using the bright line pattern region Pdec1, erroneous recognition could occur.
  • that is, in the captured display image Ppre, not the area in which the transmitter 100 appears but the area in which the station exit guide plate 110 appears might be erroneously recognized as the target region Ptar, because the images of the transmitter 100 and the station exit guide plate 110 in the captured display image Ppre are similar.
  • by using the bright line pattern region Pdec1, the target region Ptar can be recognized accurately while suppressing such erroneous recognition.
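  • A minimal sketch of this reference-based recognition (the coordinate mapping between Pdec and Ppre is taken to be the identity, as the reference information states; the target_info dictionary keys are illustrative):

```python
def target_from_bright_lines(bright_line_bbox, target_info):
    """Reference region Pbas = position of Pdec1; Ptar = Pbas + relative offset."""
    x, y, w, h = bright_line_bbox                # same coordinates taken in Ppre
    dx, dy = target_info.get("offset", (0, 0))   # relative position of Ptar
    tw = target_info.get("width", w)
    th = target_info.get("height", h)
    return (x + dx, y + dy, tw, th)

# Target information of FIG. 50: Ptar coincides with Pbas.
print(target_from_bright_lines((120, 300, 200, 40), {}))
# Target information of FIG. 51: Ptar lies above Pbas.
print(target_from_bright_lines((120, 300, 200, 20),
                               {"offset": (0, -120), "height": 100}))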
  • FIG. 51 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
  • in the example of FIG. 50, the transmitter 100 transmits the light ID by changing the luminance of the entire station name sign, and the target information indicates that the position of the target area is the position of the reference area.
  • in the example of FIG. 51, by contrast, the transmitter 100 transmits the light ID by changing the luminance of light emitting elements arranged in a part of the outer frame of the station name sign, without changing the luminance of the entire station name sign.
  • the target information only needs to indicate the relative position of the target area Ptar with respect to the reference area Pbas; for example, it may indicate that the target area Ptar is above the reference area Pbas (specifically, vertically above it).
  • the transmitter 100 transmits the light ID by changing the luminance of a plurality of light emitting elements arranged along the horizontal direction below the outer frame of the station name sign.
  • the target information indicates that the position of the target area Ptar is above the reference area Pbas.
  • the receiver 200 specifies the reference region Pbas from the captured display image Ppre based on the reference information. That is, the receiver 200 specifies, in the captured display image Ppre, an area that is at the same position as the bright line pattern area Pdec1 in the decoding image Pdec as the reference area Pbas. Specifically, the receiver 200 specifies a rectangular reference region Pbas that is long in the horizontal direction and short in the vertical direction. Further, the receiver 200 recognizes, as the target region Ptar, a region in the relative position indicated by the target information with reference to the position of the reference region Pbas in the captured display image Ppre. That is, the receiver 200 recognizes an area above the reference area Pbas in the captured display image Ppre as the target area Ptar. At this time, the receiver 200 specifies the direction above the reference region Pbas based on the direction of gravity measured by the acceleration sensor provided in the receiver 200.
  • the target information may indicate not only the relative position of the target area Ptar but also the size, shape, and aspect ratio of the target area Ptar.
  • the receiver 200 recognizes the target area Ptar having the size, shape, and aspect ratio indicated by the target information.
  • the receiver 200 may determine the size of the target area Ptar based on the size of the reference area Pbas.
  • FIG. 52 is a flowchart showing another example of the processing operation of the receiver 200 in the present embodiment.
  • the receiver 200 executes the processing of steps S101 to S104, as in the example shown in FIG.
  • the receiver 200 identifies the bright line pattern region Pdec1 from the decoding image Pdec (step S111).
  • the receiver 200 specifies a reference area Pbas corresponding to the bright line pattern area Pdec1 from the captured display image Ppre (step S112).
  • the receiver 200 recognizes the target area Ptar from the captured display image Ppre based on the recognition information (specifically, target information) and the reference area Pbas (step S113).
  • next, the receiver 200 superimposes the AR image on the target area Ptar of the captured display image Ppre, and displays the captured display image Ppre on which the AR image is superimposed (step S106). Then, the receiver 200 determines whether or not the imaging and the display of the captured display image Ppre should be terminated (step S107). If the receiver 200 determines that they should not be terminated (N in step S107), it further determines whether or not the acceleration of the receiver 200 is equal to or greater than a threshold value (step S114). This acceleration is measured by an acceleration sensor provided in the receiver 200. When the receiver 200 determines that the acceleration is less than the threshold value (N in step S114), the receiver 200 executes the processing from step S113.
  • when the receiver 200 determines that the acceleration is equal to or greater than the threshold (Y in step S114), the receiver 200 executes the processing from step S111 or step S102. Thereby, it is possible to suppress erroneously recognizing, as the target region, an area in which a subject different from the transmitter 100 appears.
  • FIG. 53 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
  • when the AR image P1 in the displayed captured display image Ppre is tapped, the receiver 200 enlarges and displays the AR image P1. Alternatively, when tapped, the receiver 200 may display, instead of the AR image P1, a new AR image showing more detailed content than the AR image P1. When the AR image P1 shows one page of an information magazine comprising a plurality of pages, the receiver 200 may display, instead of the AR image P1, a new AR image showing the next page. Alternatively, when tapped, the receiver 200 may display, instead of the AR image P1, a moving image related to the AR image P1 as a new AR image. At this time, the receiver 200 may display, as the AR image, a moving image in which an object (autumn leaves in the example of FIG. 53) comes out of the target area Ptar.
  • FIG. 54 is a diagram showing a captured display image Ppre and a decoding image Pdec acquired by imaging of the receiver 200 in the present embodiment.
  • the receiver 200 acquires captured images such as the captured display image Ppre and the decoding image Pdec at a frame rate of 30 fps, as shown in (a1) of FIG. 54. Specifically, the receiver 200 acquires the captured display image Ppre “A” at time t1, acquires the decoding image Pdec at time t2, and acquires the captured display image Ppre “B” at time t3; that is, the captured display image Ppre and the decoding image Pdec are obtained alternately.
  • the receiver 200 displays only the captured display image Ppre among the captured images, and does not display the decoding image Pdec. That is, as shown in (a2) of FIG. 54, the receiver 200 displays the captured display image Ppre acquired immediately before, instead of the decoding image Pdec, when acquiring the decoding image Pdec. Specifically, the receiver 200 displays the acquired captured display image Ppre “A” at time t1, and again displays the captured display image Ppre “A” acquired at time t1 at time t2. To do. Thereby, the receiver 200 displays the captured display image Ppre at a frame rate of 15 fps.
  • in the example shown in FIG. 54, the receiver 200 alternately acquires the captured display image Ppre and the decoding image Pdec, but the acquisition form of these images in the present embodiment is not limited to this. That is, the receiver 200 may repeat continuously acquiring N (N is an integer equal to or greater than 1) decoding images Pdec and then continuously acquiring M (M is an integer equal to or greater than 1) captured display images Ppre.
  • the receiver 200 needs to switch between acquiring the captured display image Ppre and acquiring the decoding image Pdec, and this switching may take time. Therefore, as illustrated in (b1) of FIG. 54, the receiver 200 may provide a switching period when switching between the two. Specifically, when the receiver 200 acquires the decoding image Pdec at time t3, it executes the switching processing during the period from time t3 to t5, and acquires the captured display image Ppre “A” at time t5. Thereafter, the receiver 200 executes the switching processing in the period from time t5 to time t7, and acquires the decoding image Pdec at time t7.
  • in the switching period, the receiver 200 displays the captured display image Ppre acquired immediately before, as shown in (b2) of FIG. 54. In this case, therefore, the display frame rate of the captured display image Ppre in the receiver 200 is low, for example, 3 fps.
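  • A minimal sketch of the alternating capture-and-display schedule of (a1)/(a2) of FIG. 54 (capture_frame and show are supplied by the caller; the exposure-mode strings are placeholders):

```python
def interleaved_display(capture_frame, show, num_frames):
    """30 fps capture alternating Ppre/Pdec; only Ppre frames are shown."""
    last_ppre = None
    for i in range(num_frames):
        if i % 2 == 0:
            last_ppre = capture_frame("normal_exposure")      # Ppre: t1, t3, ...
        else:
            _pdec = capture_frame("communication_exposure")   # Pdec: t2, t4, ...
        if last_ppre is not None:
            show(last_ppre)  # a Pdec slot re-shows the immediately preceding Ppre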
  • in such a case, the displayed captured display image Ppre may not move in accordance with the movement of the receiver 200; that is, the captured display image Ppre is not displayed as a live view. Therefore, the receiver 200 may move the captured display image Ppre in accordance with its own movement.
  • FIG. 55 is a diagram showing an example of the captured display image Ppre displayed on the receiver 200 in the present embodiment.
  • the receiver 200 displays, on the display 201, a captured display image Ppre obtained by imaging, for example, as illustrated in FIG.
  • the user moves the receiver 200 to the left side.
  • the receiver 200 moves the displayed captured display image Ppre to the right as shown in FIG. 55 (b). That is, the receiver 200 includes an acceleration sensor, and moves the displayed captured display image Ppre to match the movement of the receiver 200 according to the acceleration measured by the acceleration sensor. Thereby, the receiver 200 can display the captured display image Ppre as a live view in a pseudo manner.
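  • A hedged sketch of this pseudo live view: acceleration is integrated into a velocity and then into a pixel shift applied opposite to the device's motion. Double integration drifts quickly, so this is plausible only over the short gaps between frames, and the metres-to-pixels scale is an assumed calibration constant:

```python
def shift_for_motion(shift_px, velocity, accel, dt, px_per_metre=5000.0):
    """One update step: integrate acceleration, move the image opposite to it."""
    velocity += accel * dt
    shift_px -= velocity * dt * px_per_metre
    return shift_px, velocity

shift, vel = 0.0, 0.0
# Device accelerating to the left (negative x): the image should move right.
for accel_sample in [-0.5, -0.5, 0.0, 0.0]:
    shift, vel = shift_for_motion(shift, vel, accel_sample, dt=1 / 30)
print(shift > 0)  # True: the displayed image shifted right, as in (b) of FIG. 55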
  • FIG. 56 is a flowchart showing another example of the processing operation of the receiver 200 in the present embodiment.
  • first, the receiver 200 superimposes an AR image on the target area Ptar of the captured display image Ppre so that it follows the target area Ptar (step S121). That is, an AR image that moves together with the target area Ptar in the captured display image Ppre is displayed. Then, the receiver 200 determines whether or not to maintain the display of that AR image (step S122). If it determines not to maintain the display (N in step S122) and acquires a new light ID by imaging, the receiver 200 displays a new AR image corresponding to that light ID superimposed on the captured display image Ppre (step S123).
  • if the receiver 200 determines that the display of the AR image is to be maintained (Y in step S122), it repeatedly executes the processing from step S121. At this time, the receiver 200 does not display another AR image even if another AR image has been acquired. Alternatively, even when the receiver 200 has acquired a new decoding image Pdec, it does not acquire an optical ID by decoding that decoding image Pdec; in this case, the power consumption for decoding can be suppressed.
  • by maintaining the display of the AR image in this way, it is possible to prevent the displayed AR image from being erased, or from becoming difficult to see because another AR image is displayed. That is, the displayed AR image can be kept easy for the user to see.
  • for example, the receiver 200 determines to maintain the display of the AR image until a predetermined period (a certain period) elapses after the AR image is displayed. That is, when displaying the captured display image Ppre, the receiver 200 displays the first AR image, which is the AR image superimposed in step S121, only during the predetermined display period, while suppressing the display of a second AR image different from the first AR image. The receiver 200 may also prohibit decoding of a newly acquired decoding image Pdec during this display period.
  • the receiver 200 may include a face camera, and may determine to maintain the display of the AR image when it detects, from the imaging result of the face camera, that the user's face is approaching. That is, when displaying the captured display image Ppre, the receiver 200 further determines, by imaging with the face camera provided in the receiver 200, whether or not the user's face is approaching the receiver 200. When the receiver 200 determines that the face is approaching, it displays the first AR image while suppressing the display of the second AR image different from the first AR image, which is the AR image superimposed in step S121.
  • alternatively, the receiver 200 may include an acceleration sensor, and may determine to maintain the display of the AR image when it detects, from the measurement result of the acceleration sensor, that the user's face is approaching. That is, when displaying the captured display image Ppre, the receiver 200 further determines whether or not the user's face is approaching the receiver 200 based on the acceleration of the receiver 200 measured by the acceleration sensor. For example, when the measured acceleration shows a positive value in the direction perpendicular to the display 201, the receiver 200 determines that it is approaching the user's face. When the receiver 200 determines that the face is approaching, it displays the first AR image while suppressing the display of the second AR image different from the first AR image superimposed in step S121.
  • the first AR image can be prevented from being replaced with a different second AR image.
  • the receiver 200 may determine that the display of the AR image is maintained when a lock button provided in the receiver 200 is pressed.
  • in step S122, the receiver 200 determines that the display of the AR image is not to be maintained when the above-described certain period (that is, the display period) has elapsed. The receiver 200 also determines that the display is not to be maintained when an acceleration equal to or greater than the threshold value is measured by the acceleration sensor, even if the certain period has not yet elapsed. That is, when displaying the captured display image Ppre, the receiver 200 further measures its acceleration with the acceleration sensor during the display period and determines whether or not the measured acceleration is equal to or greater than a threshold value. When the receiver 200 determines that the acceleration is equal to or greater than the threshold, it cancels the suppression of the display of the second AR image, thereby displaying the second AR image instead of the first AR image in step S123.
  • that is, when a large acceleration is measured, the suppression of the display of the second AR image is released. Therefore, for example, when the user moves the receiver 200 greatly so as to point the image sensor at another subject, the second AR image can be displayed immediately.
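  • A minimal sketch of this display-maintenance rule (the hold period and acceleration threshold are illustrative assumptions, and the class name is hypothetical):

```python
import time

class ARDisplayHold:
    """Holds the first AR image for a display period; a large acceleration
    releases the hold early. Both constants are illustrative."""
    def __init__(self, hold_seconds=10.0, accel_threshold=2.0):
        self.hold_seconds = hold_seconds
        self.accel_threshold = accel_threshold
        self.shown_at = None

    def on_ar_displayed(self):
        self.shown_at = time.monotonic()

    def may_replace(self, current_accel):
        if self.shown_at is None:
            return True
        expired = time.monotonic() - self.shown_at >= self.hold_seconds
        big_move = current_accel >= self.accel_threshold  # pointing elsewhere
        return expired or big_move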
  • FIG. 57 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
  • the transmitter 100 is configured as a lighting device, and transmits a light ID by changing the luminance while illuminating a stage 111 for a small doll. Since the stage 111 is illuminated by the light from the transmitter 100, the luminance changes similarly to the transmitter 100, and the optical ID is transmitted.
  • the two receivers 200 image the stage 111 illuminated by the transmitter 100 from the left and right.
  • the left receiver 200 of the two receivers 200 acquires the captured display image Pf and the decoding image in the same manner as described above by imaging, from the left, the stage 111 illuminated by the transmitter 100.
  • the receiver 200 on the left side acquires the optical ID by decoding the decoding image. That is, the left receiver 200 receives the optical ID from the stage 111.
  • the left receiver 200 transmits the optical ID to the server.
  • the left-side receiver 200 acquires a three-dimensional AR image and recognition information corresponding to the optical ID from the server.
  • This three-dimensional AR image is an image for displaying a doll three-dimensionally, for example.
  • the left receiver 200 recognizes an area corresponding to the recognition information in the captured display image Pf as a target area. For example, the left receiver 200 recognizes the upper area at the center of the stage 111 as the target area.
  • the left-side receiver 200 generates a two-dimensional AR image P6a corresponding to the orientation from the three-dimensional AR image based on the orientation of the stage 111 displayed in the captured display image Pf. Then, the receiver 200 on the left side superimposes the two-dimensional AR image P6a on the target area, and displays the captured display image Pf on which the AR image P6a is superimposed on the display 201. In this case, since the two-dimensional AR image P6a is superimposed on the target area of the captured display image Pf, the left receiver 200 displays the captured display image Pf so that the doll actually exists on the stage 111. can do.
  • the right receiver 200 of the two receivers 200 acquires the captured display image Pg and the decoding image in the same manner as described above by imaging, from the right, the stage 111 illuminated by the transmitter 100.
  • the right receiver 200 acquires the optical ID by decoding the decoding image. That is, the right receiver 200 receives the optical ID from the stage 111.
  • the right receiver 200 transmits the optical ID to the server.
  • the right-side receiver 200 acquires a three-dimensional AR image and recognition information corresponding to the optical ID from the server.
  • the right receiver 200 recognizes an area corresponding to the recognition information in the captured display image Pg as a target area. For example, the right receiver 200 recognizes the upper area at the center of the stage 111 as the target area.
  • the right-side receiver 200 generates a two-dimensional AR image P6b corresponding to the orientation from the three-dimensional AR image based on the orientation of the stage 111 displayed in the captured display image Pg. Then, the receiver 200 on the right side superimposes the two-dimensional AR image P6b on the target region, and displays the captured display image Pg on which the AR image P6b is superimposed on the display 201. In this case, since the two-dimensional AR image P6b is superimposed on the target region of the captured display image Pg, the right-side receiver 200 displays the captured display image Pg so that the doll actually exists on the stage 111. can do.
  • the two receivers 200 display the AR images P6a and P6b at the same position on the stage 111.
  • the AR images P6a and P6b are generated according to the orientation of each receiver 200 so that the virtual doll appears to face a fixed, predetermined direction. Therefore, the captured display image can be displayed so that the doll appears to actually exist on the stage 111, from whatever direction the stage 111 is imaged.
  • in the above example, the receiver 200 generates the two-dimensional AR image corresponding to the positional relationship between the receiver 200 and the stage 111 from the three-dimensional AR image; however, the two-dimensional AR image may instead be obtained from the server. That is, the receiver 200 transmits information indicating the positional relationship together with the optical ID to the server, and acquires the two-dimensional AR image from the server instead of the three-dimensional AR image. Thereby, the burden on the receiver 200 can be reduced.
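  • The patent leaves the rendering method open; as one hedged sketch, an orientation-dependent two-dimensional view could be produced by rotating the three-dimensional AR model by the estimated viewing angle and projecting it with a pinhole model (numpy only; the focal length, depth, and toy model are assumptions):

```python
import numpy as np

def project_view(points_3d, yaw_rad, focal=800.0, depth=2.0):
    """Rotate the 3-D model about the vertical axis, then pinhole-project."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rot_y = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    rotated = points_3d @ rot_y.T
    z = rotated[:, 2] + depth                  # place the model before the camera
    return np.stack([focal * rotated[:, 0] / z,
                     focal * rotated[:, 1] / z], axis=1)

doll = np.array([[0.0, 0.0, 0.0], [0.1, 0.2, 0.0], [-0.1, 0.2, 0.05]])
left_view = project_view(doll, yaw_rad=np.deg2rad(30.0))    # left-side receiver
right_view = project_view(doll, yaw_rad=np.deg2rad(-30.0))  # right-side receiver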
  • FIG. 58 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
  • the transmitter 100 is configured as an illumination device, and transmits a light ID by changing the luminance while illuminating a cylindrical structure 112. Since the structure 112 is illuminated by the light from the transmitter 100, the luminance is changed similarly to the transmitter 100, and the light ID is transmitted.
  • the receiver 200 acquires the captured display image Ph and the decoding image in the same manner as described above by imaging the structure 112 illuminated by the transmitter 100.
  • the receiver 200 acquires the optical ID by decoding the decoding image. That is, the receiver 200 receives the optical ID from the structure 112.
  • the receiver 200 transmits the optical ID to the server.
  • the receiver 200 acquires the AR image P7 corresponding to the optical ID and the recognition information from the server.
  • the receiver 200 recognizes an area corresponding to the recognition information in the captured display image Ph as a target area. For example, the receiver 200 recognizes an area where the central portion of the structure 112 is projected as a target area.
  • the receiver 200 superimposes the AR image P7 on the target area, and displays the captured display image Ph on which the AR image P7 is superimposed on the display 201.
  • the AR image P7 is an image including the character string “ABCD”, and the character string is distorted in accordance with the curved surface at the center of the structure 112.
  • since the AR image P7 is superimposed on the target area, the receiver 200 can display the captured display image Ph as if the character string were actually drawn on the structure 112.
  • FIG. 59 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
  • the transmitter 100 transmits the light ID by changing the luminance while illuminating the menu 113 of the restaurant. Since the menu 113 is illuminated by the light from the transmitter 100, the luminance changes in the same manner as the transmitter 100, and the light ID is transmitted.
  • the menu 113 indicates names of a plurality of dishes such as “ABC soup”, “XYZ salad”, and “KLM lunch”.
  • the receiver 200 acquires the captured display image Pi and the decoding image in the same manner as described above by capturing an image of the menu 113 illuminated by the transmitter 100.
  • the receiver 200 acquires the optical ID by decoding the decoding image. That is, the receiver 200 receives the optical ID from the menu 113.
  • the receiver 200 transmits the optical ID to the server.
  • the receiver 200 acquires the AR image P8 corresponding to the optical ID and the recognition information from the server.
  • the receiver 200 recognizes an area corresponding to the recognition information in the captured display image Pi as a target area. For example, the receiver 200 recognizes an area where the menu 113 is displayed as the target area.
  • the receiver 200 superimposes the AR image P8 on the target region, and displays the captured display image Pi on which the AR image P8 is superimposed on the display 201.
  • the AR image P8 is an image that shows the ingredients used for each of a plurality of dishes by marks.
  • for example, the AR image P8 shows a mark imitating an egg for the dish “XYZ salad”, which uses eggs, and a mark imitating a pig for the dish “KLM lunch”, which uses pork.
  • since the AR image P8 is superimposed on the target area, the receiver 200 can display the captured display image Pi as if a menu 113 bearing the ingredient marks actually existed. Thereby, without providing a special display device in the menu 113, the ingredients of each dish can be conveyed to the user of the receiver 200 simply and plainly.
  • the receiver 200 may acquire a plurality of AR images, select from them an AR image suitable for the user based on user information set by the user, and superimpose the selected AR image. For example, if the user information indicates that the user has an allergic reaction to eggs, the receiver 200 selects an AR image in which an egg mark is attached to each dish that uses eggs. If the user information indicates that the intake of pork is prohibited, the receiver 200 selects an AR image in which a pork mark is attached to each dish that uses pork. Alternatively, the receiver 200 may transmit the user information together with the optical ID to the server and acquire from the server an AR image corresponding to the optical ID and the user information. Thereby, a menu that calls each user's attention to the relevant dishes can be displayed.
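  • A minimal sketch of selecting which ingredient marks to superimpose from the user information (dish names taken from the example above; the ingredient sets and function names are illustrative):

```python
MENU_INGREDIENTS = {
    "ABC soup": {"shrimp"},
    "XYZ salad": {"egg"},
    "KLM lunch": {"pork"},
}

def select_marks(user_avoids):
    """Return {dish: ingredients to mark} for this user's restrictions."""
    return {dish: ingredients & user_avoids
            for dish, ingredients in MENU_INGREDIENTS.items()
            if ingredients & user_avoids}

print(select_marks({"egg"}))           # {'XYZ salad': {'egg'}}
print(select_marks({"egg", "pork"}))   # marks both XYZ salad and KLM lunch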
  • FIG. 60 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
  • the transmitter 100 is configured as a television, and transmits an optical ID by changing luminance while displaying an image on a display.
  • a normal television 114 is disposed in the vicinity of the transmitter 100. The television 114 displays an image on the display, but does not transmit an optical ID.
  • the receiver 200 acquires the captured display image Pj and the decoding image in the same manner as described above, for example, by imaging the television 114 together with the transmitter 100.
  • the receiver 200 acquires the optical ID by decoding the decoding image. That is, the receiver 200 receives the optical ID from the transmitter 100.
  • the receiver 200 transmits the optical ID to the server.
  • the receiver 200 acquires the AR image P9 and the recognition information corresponding to the optical ID from the server.
  • the receiver 200 recognizes an area corresponding to the recognition information in the captured display image Pj as a target area.
  • the receiver 200 uses the bright line pattern area of the image for decoding, so that the lower part of the area where the transmitter 100 transmitting the optical ID is displayed in the captured display image Pj is the first target area. Recognize as At this time, the reference information included in the recognition information indicates that the position of the reference area in the captured display image Pj is the same as the position of the bright line pattern area in the decoding image. Furthermore, the target information included in the recognition information indicates that there is a target area below the reference area. The receiver 200 recognizes the first target area described above using such recognition information.
  • the receiver 200 recognizes an area whose position is fixed in advance below the captured display image Pj as the second target area.
  • the second target area is larger than the first target area.
  • the target information included in the recognition information further indicates not only the position of the first target area but also the position and size of the second target area as described above.
  • the receiver 200 recognizes the second target area described above using such recognition information.
  • the receiver 200 superimposes the AR image P9 on the first target area and the second target area, and displays the captured display image Pj on which the AR image P9 is superimposed on the display 201.
  • the receiver 200 matches the size of the AR image P9 with the size of the first target area, and superimposes the AR image P9 whose size has been adjusted on the first target area.
  • the receiver 200 matches the size of the AR image P9 with the size of the second target area, and superimposes the AR image P9 whose size has been adjusted on the second target area.
  • the AR image P9 indicates a caption for the video of the transmitter 100.
  • the language of the caption of the AR image P9 follows the user information set and registered in the receiver 200. That is, when transmitting the optical ID to the server, the receiver 200 also transmits the user information (for example, information indicating the user's nationality or the language the user uses) to the server, and acquires an AR image P9 showing a caption in the language corresponding to that user information.
  • the receiver 200 acquires a plurality of AR images P9 indicating subtitles in different languages, and uses the AR images used for superimposition from the plurality of AR images P9 according to the user information registered and registered. P9 may be selected.
  • the receiver 200 acquires a captured display image Pj and a decoding image by capturing a plurality of displays each displaying an image as a subject. Then, when the receiver 200 recognizes the target area, an area in which the transmission display (that is, the transmitter 100) that is the display that transmits the light ID among the plurality of displays appears in the captured display image Pj. Is recognized as a target area. Next, the receiver 200 superimposes the first subtitle corresponding to the image displayed on the transmission display as an AR image on the target area. Furthermore, the receiver 200 superimposes a second subtitle, which is a subtitle obtained by enlarging the first subtitle, on a region larger than the target region in the captured display image Pj.
  • Thereby, the receiver 200 can display the captured display image Pj as if captions actually existed in the video of the transmitter 100. Furthermore, since the receiver 200 also superimposes a large caption on the lower part of the captured display image Pj, the caption can be viewed easily even if the caption attached to the video of the transmitter 100 is small. If no caption were attached to the video of the transmitter 100 and only a large caption were superimposed on the lower part of the captured display image Pj, it would be difficult to determine whether the superimposed caption is for the video of the transmitter 100 or for the video of the television 114. In the present embodiment, however, since a caption is attached to the video of the transmitter 100 that transmits the light ID, the user can easily determine which video the superimposed caption belongs to.
  • the receiver 200 may further determine whether or not audio information is included in the information acquired from the server. When the receiver 200 determines that audio information is included, it outputs the audio indicated by the audio information with priority over the first and second subtitles. Since sound is output preferentially, the burden on the user of reading subtitles can be reduced.
  • In the above example, the subtitle language is changed according to the user information (that is, the user attribute), but the video (that is, the content) itself displayed on the transmitter 100 may also be changed.
  • For example, when the video displayed on the transmitter 100 is a news video and the user information indicates that the user is Japanese, the receiver 200 acquires the news video broadcast in Japan as an AR image and superimposes it on the area in which the display of the transmitter 100 appears (that is, the target area). Conversely, when the user information indicates that the user is American, the receiver 200 acquires a news video broadcast in the United States as an AR image and superimposes it on the target area. Thereby, an image suitable for the user can be displayed.
  • the user information indicates, for example, nationality or language used as the user attribute, and the receiver 200 acquires the above-described AR image based on the attribute.
  • FIG. 61 is a diagram showing an example of recognition information in the present embodiment.
  • Like the transmitter 100, the transmitters 100a and 100b are each configured as a station name sign. Even though the transmitters 100a and 100b bear different station names, they may be misrecognized for each other if they look similar and are installed close to each other.
  • In such a case, the recognition information of each of the transmitters 100a and 100b need not indicate the feature points and feature amounts of the entire image of the transmitter 100a or 100b; it may instead indicate the feature points and feature amounts of only a characteristic part of that image.
  • For example, the part a1 of the transmitter 100a and the part b1 of the transmitter 100b differ greatly from each other, as do the part a2 of the transmitter 100a and the part b2 of the transmitter 100b. Therefore, if the transmitters 100a and 100b are installed within a predetermined range (that is, at a short distance from each other), the server holds the feature points and feature amounts of the images of the parts a1 and a2 as the recognition information corresponding to the transmitter 100a. Similarly, the server holds the feature points and feature amounts of the images of the parts b1 and b2 as the recognition information corresponding to the transmitter 100b.
  • Thereby, the receiver 200 can appropriately recognize the target area using the recognition information.
  • FIG. 62 is a flowchart showing another example of the processing operation of the receiver 200 in the present embodiment.
  • the receiver 200 first determines, based on the user information set and registered in the receiver 200, whether or not the user has a visual impairment (step S131). If the receiver 200 determines that the user has a visual impairment (Y in step S131), it outputs by voice the characters of the AR image displayed in a superimposed manner (step S132). On the other hand, if the receiver 200 determines that the user has no visual impairment (N in step S131), it further determines, based on the user information, whether the user has a hearing impairment (step S133). If the receiver 200 determines that the user has a hearing impairment (Y in step S133), it stops sound output (step S134). At this time, the receiver 200 stops sound output by all functions.
  • the receiver 200 may also perform the process of step S133 when it determines in step S131 that the user has a visual impairment (Y in step S131). That is, when it is determined that the user has a visual impairment but no hearing impairment, the receiver 200 may output by voice the characters of the AR image displayed in a superimposed manner. A minimal sketch of this decision flow follows.
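  • A sketch of steps S131 to S134, assuming simple tts (text-to-speech) and audio interfaces that are not named in the patent:

```python
# Decision flow of FIG. 62 (steps S131-S134), sketched with assumed helpers.
def apply_accessibility_policy(user_info: dict, ar_text: str, tts, audio) -> None:
    if user_info.get("visual_impairment"):       # step S131
        tts.speak(ar_text)                       # step S132: read AR text aloud
    elif user_info.get("hearing_impairment"):    # step S133
        audio.stop_all()                         # step S134: stop all sound output
```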
  • FIG. 63 is a diagram illustrating an example in which the receiver 200 according to the present embodiment identifies bright line pattern regions.
  • For example, as shown in FIG. 63, the receiver 200 obtains a decoding image by imaging two transmitters each transmitting a light ID, and acquires the light IDs by decoding the decoding image.
  • That is, the receiver 200 acquires the light ID of the transmitter corresponding to the bright line pattern region X and the light ID of the transmitter corresponding to the bright line pattern region Y.
  • the light ID of the transmitter corresponding to the bright line pattern region X is made up of numerical values (that is, data) corresponding to addresses 0 to 9, for example "5, 2, 8, 4, 3, 6, 1, 9, 4, 3".
  • the light ID of the transmitter corresponding to the bright line pattern region Y is likewise made up of numerical values corresponding to addresses 0 to 9, for example "5, 2, 7, 7, 1, 5, 3, 2, 7, 4".
  • Even after the receiver 200 has once acquired these light IDs, that is, even when these light IDs are known, the receiver 200 may not know, at the time of a new capture, from which bright line pattern region each light ID was obtained. In such a case, by performing the processes shown in FIG. 63, the receiver 200 can easily and quickly determine from which bright line pattern region each known light ID is obtained.
  • the receiver 200 first acquires the decoding image Pdec11 as shown in (a) of FIG. 63 and, by decoding it, acquires the numerical value at address 0 of the light ID for each of the bright line pattern regions X and Y.
  • For example, the numerical value at address 0 of the light ID of the bright line pattern region X is "5", and the numerical value at address 0 of the light ID of the bright line pattern region Y is also "5". Since both values are "5", it cannot yet be determined from which bright line pattern region each known light ID was obtained.
  • Next, the receiver 200 acquires the decoding image Pdec12 and, by decoding it, acquires the numerical value at address 1 of the light ID for each of the bright line pattern regions X and Y. For example, the numerical value at address 1 of the light ID in the bright line pattern region X is "2", and the numerical value at address 1 of the light ID in the bright line pattern region Y is also "2". Since both values are "2", it still cannot be determined from which bright line pattern region each known light ID was obtained.
  • Next, the receiver 200 acquires the decoding image Pdec13 and, by decoding it, acquires the numerical value at address 2 of the light ID for each of the bright line pattern regions X and Y. For example, the numerical value at address 2 of the light ID in the bright line pattern region X is "8", while the numerical value at address 2 of the light ID in the bright line pattern region Y is "7".
  • Since these values differ, it can be determined that the known light ID "5, 2, 8, 4, 3, 6, 1, 9, 4, 3" was obtained from the bright line pattern region X, and that the known light ID "5, 2, 7, 7, 1, 5, 3, 2, 7, 4" was obtained from the bright line pattern region Y.
  • For greater certainty, the receiver 200 may further acquire the numerical value at address 3 of each light ID, as shown in FIG. 63. That is, the receiver 200 acquires the decoding image Pdec14 and, by decoding it, acquires the numerical value at address 3 of the light ID for each of the bright line pattern regions X and Y.
  • For example, the numerical value at address 3 of the light ID in the bright line pattern region X is "4", and the numerical value at address 3 of the light ID in the bright line pattern region Y is "7". These values are consistent with the known light IDs, confirming that the known light ID "5, 2, 8, 4, 3, 6, 1, 9, 4, 3" was obtained from the bright line pattern region X. In this way, the receiver 200 reacquires the numerical value of at least one address instead of reacquiring the numerical values (that is, data) of all addresses of each light ID. This makes it possible to easily and quickly determine from which bright line pattern region each known light ID was obtained.
  • In the above example, the numerical value acquired for a given address matches the numerical value of the known light ID exactly, but an exact match is not required.
  • For example, suppose the receiver 200 acquires "6" as the numerical value at address 3 of the light ID of the bright line pattern region Y. This value "6" differs from the value "7" at address 3 of the known light ID "5, 2, 7, 7, 1, 5, 3, 2, 7, 4". Even so, since "6" is close to "7", the receiver 200 may still associate the known light ID "5, 2, 7, 7, 1, 5, 3, 2, 7, 4" with the bright line pattern region Y. Whether "6" is close to "7" may be determined by, for example, checking whether "6" falls within the range "7" ± n (where n is, for example, a number of 1 or more), as in the sketch below.
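  • A minimal sketch (not from the patent) of this per-address identification; the decode_address callback and the known-ID keys are assumptions, and the tolerance n reproduces the ± n check described above:

```python
# Per-address identification of FIG. 63, sketched. decode_address(addr) is an
# assumed callback returning {region_label: value} for one light-ID address,
# obtained from one decoding image.
KNOWN_IDS = {
    "id1": [5, 2, 8, 4, 3, 6, 1, 9, 4, 3],  # previously acquired light ID
    "id2": [5, 2, 7, 7, 1, 5, 3, 2, 7, 4],  # previously acquired light ID
}

def identify_regions(decode_address, regions=("X", "Y"), n=0):
    """Narrow down which known light ID belongs to which bright line region."""
    candidates = {r: set(KNOWN_IDS) for r in regions}
    for addr in range(10):                   # light-ID addresses 0..9
        values = decode_address(addr)        # decode one more address
        for r in regions:
            candidates[r] = {k for k in candidates[r]
                             if abs(values[r] - KNOWN_IDS[k][addr]) <= n}
        if all(len(c) == 1 for c in candidates.values()):
            return {r: next(iter(c)) for r, c in candidates.items()}
    return None  # the regions could not be disambiguated

# With n = 0 an exact match is required; n >= 1 accepts a nearby value such
# as "6" for an expected "7", as in the tolerance check described above.
```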
  • FIG. 64 is a diagram illustrating another example of the receiver 200 in the present embodiment.
  • the receiver 200 is configured as a smartphone in the above example, but may be configured as a head-mounted display (also referred to as glass) including an image sensor.
  • Because power consumption increases if the processing circuit for displaying an AR image as described above (hereinafter referred to as the AR processing circuit) is always active, such a receiver 200 may activate the AR processing circuit only when a predetermined signal is detected.
  • the receiver 200 includes a touch sensor 202.
  • the touch sensor 202 outputs a touch signal when touched by a user's finger or the like.
  • the receiver 200 activates the AR processing circuit when detecting the touch signal.
  • the receiver 200 may activate the AR processing circuit when detecting a radio signal such as Bluetooth (registered trademark) or Wi-Fi (registered trademark).
  • the receiver 200 may include an acceleration sensor, and may activate the AR processing circuit when the acceleration sensor measures an acceleration equal to or greater than a threshold value in a direction opposite to the direction of gravity. That is, the receiver 200 activates the AR processing circuit when detecting the signal indicating the acceleration. For example, when the user pushes up the nose pad portion of the receiver 200 configured as a glass upward with a fingertip from below, the receiver 200 detects a signal indicating the acceleration and activates the AR processing circuit.
  • the receiver 200 may also activate the AR processing circuit when it detects, by GPS and the 9-axis sensor, that the image sensor is directed toward the transmitter 100. That is, the receiver 200 activates the AR processing circuit when it detects a signal indicating that the receiver 200 is directed in a predetermined direction. In this case, if the transmitter 100 is the above-mentioned station name sign written in Japanese, the receiver 200 displays an AR image indicating the English station name superimposed on the station name sign. A minimal sketch of these activation triggers follows.
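  • A sketch combining the trigger conditions above, assuming abstract sensor inputs; the acceleration threshold is a placeholder value, not one from the patent:

```python
# Trigger-based activation of the AR processing circuit, sketched.
ACCEL_UP_THRESHOLD = 3.0  # m/s^2 against gravity (assumed value)

def should_activate_ar(touched: bool, radio_detected: bool,
                       accel_up: float, facing_transmitter: bool) -> bool:
    return (touched                            # touch signal from touch sensor 202
            or radio_detected                  # Bluetooth / Wi-Fi signal detected
            or accel_up >= ACCEL_UP_THRESHOLD  # e.g. nose pad pushed upward
            or facing_transmitter)             # GPS + 9-axis direction check
```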
  • FIG. 65 is a flowchart showing another example of the processing operation of the receiver 200 in the present embodiment.
  • When the receiver 200 acquires the light ID from the transmitter 100 (step S141), it switches the noise cancellation mode by receiving mode designation information corresponding to the light ID (step S142). The receiver 200 then determines whether or not the mode switching process should be terminated (step S143); when it determines that the process should not be terminated (N in step S143), it repeats the processing from step S141.
  • The noise cancellation mode is switched, for example, between a mode (ON) that cancels noise such as engine noise inside an airplane and a mode (OFF) that does not cancel it.
  • For example, a user carrying the receiver 200 is listening to sound such as music output from the receiver 200 through earphones connected to the receiver 200.
  • When such a user boards the airplane, the receiver 200 acquires a light ID and, as a result, switches the noise cancellation mode from OFF to ON. The user can thus hear audio free of noise such as engine noise even in the cabin.
  • the receiver 200 also acquires the light ID when the user leaves the airplane.
  • the receiver 200 that has acquired this optical ID switches the noise cancellation mode from ON to OFF.
  • the noise subject to noise cancellation is not limited to engine noise and may be any sound, such as a human voice. A minimal sketch of this mode switching follows.
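  • A sketch of light-ID-driven noise cancellation switching; the light ID names and the headset interface are assumptions, and only the mapping idea comes from the text:

```python
# Light-ID-driven noise cancellation switching, sketched.
MODE_BY_LIGHT_ID = {
    "ID_BOARDING": True,   # acquired when boarding: noise cancellation ON
    "ID_LEAVING": False,   # acquired when leaving: noise cancellation OFF
}

def on_light_id(light_id: str, headset) -> None:
    if light_id in MODE_BY_LIGHT_ID:                       # step S142
        headset.set_noise_cancelling(MODE_BY_LIGHT_ID[light_id])
```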
  • FIG. 66 is a diagram illustrating an example of a transmission system including a plurality of transmitters in the present embodiment.
  • This transmission system includes a plurality of transmitters 120 arranged in a predetermined order. Like the transmitter 100, these transmitters 120 are transmitters in any one of the first to third embodiments, and include one or a plurality of light emitting elements (for example, LEDs).
  • the leading transmitter 120 transmits the optical ID by changing the luminance of one or a plurality of light emitting elements according to a predetermined frequency (carrier frequency). Further, the first transmitter 120 outputs a signal indicating the change in luminance to the subsequent transmitter 120 as a synchronization signal.
  • When the subsequent transmitter 120 receives the synchronization signal, it transmits the light ID by changing the luminance of one or more light emitting elements in accordance with the synchronization signal. The subsequent transmitter 120 likewise outputs a signal indicating its luminance change as a synchronization signal to the next transmitter 120. Thereby, all the transmitters 120 included in the transmission system transmit the light ID in synchronization.
  • The synchronization signal is transferred from the first transmitter 120 to the subsequent transmitter 120, and so on, until it reaches the last transmitter 120. Each transfer of the synchronization signal takes, for example, about 1 μs. Therefore, if the transmission system includes N transmitters 120 (N is an integer of 2 or more), it takes N × 1 μs for the synchronization signal to travel from the first transmitter 120 to the last. As a result, the transmission timing of the light ID is shifted by a maximum of N μs.
  • Consequently, even if the N transmitters 120 transmit light IDs according to a frequency of 9.6 kHz and the receiver 200 attempts to receive a light ID at 9.6 kHz, the receiver 200 receives a light ID shifted by up to N μs and may not receive it correctly.
  • To address this, the first transmitter 120 transmits the light ID at a higher frequency according to the number of transmitters 120 included in the transmission system.
  • the first transmitter 120 transmits an optical ID according to a frequency of 9.605 kHz.
  • the receiver 200 receives the light ID at a frequency of 9.6 kHz. At this time, even if the receiver 200 receives a light ID shifted by up to N μs, the frequency of the first transmitter 120 is 0.005 kHz higher than that of the receiver 200, so reception errors caused by the shift can be suppressed.
  • Alternatively, the first transmitter 120 may control the amount of frequency adjustment by having the synchronization signal fed back from the last transmitter 120. For example, the first transmitter 120 measures the time from when it outputs the synchronization signal until it receives the synchronization signal fed back from the last transmitter 120. The first transmitter 120 then transmits the light ID at a frequency that exceeds a reference frequency (for example, 9.6 kHz) by an amount that increases with the measured time, as in the sketch below.
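  • A minimal sketch of this compensation arithmetic; the gain that maps accumulated delay to a frequency offset is an assumption, chosen here so that 100 transmitters reproduce the 9.605 kHz example above:

```python
# Frequency compensation for synchronization-signal skew, sketched.
REFERENCE_HZ = 9600.0    # receiver-side reception frequency (9.6 kHz)
TRANSFER_DELAY_US = 1.0  # ~1 us per synchronization-signal transfer

def head_transmitter_frequency(n_transmitters: int,
                               gain_hz_per_us: float = 0.05) -> float:
    """Raise the first transmitter's frequency with the total sync delay (N us)."""
    skew_us = n_transmitters * TRANSFER_DELAY_US
    return REFERENCE_HZ + gain_hz_per_us * skew_us

print(head_transmitter_frequency(100))  # -> 9605.0, i.e. 9.605 kHz
```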
  • FIG. 67 is a diagram illustrating an example of a transmission system including a plurality of transmitters and receivers in the present embodiment.
  • This transmission system includes, for example, two transmitters 120 and a receiver 200.
  • One of the two transmitters 120 transmits an optical ID according to a frequency of 9.599 kHz.
  • the other transmitter 120 transmits an optical ID according to a frequency of 9.601 kHz.
  • each of the two transmitters 120 notifies the receiver 200 of the frequency of its own optical ID with a radio wave signal.
  • When the receiver 200 receives the notification of these frequencies, it attempts decoding according to each notified frequency. That is, the receiver 200 first attempts to decode the decoding image according to the frequency of 9.599 kHz; if the light ID cannot be received, it attempts to decode according to the frequency of 9.601 kHz. In this way, the receiver 200 attempts to decode the decoding image according to every notified frequency, that is, by brute force over the notified frequencies. Alternatively, the receiver 200 may attempt decoding according to the average of all the notified frequencies, that is, 9.6 kHz, the average of 9.599 kHz and 9.601 kHz. A minimal sketch of this reception strategy follows.
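  • A sketch of the brute-force and averaging strategies; try_decode is an assumed function returning a light ID on success and None on failure:

```python
# Brute-force reception over notified frequencies, sketched.
from statistics import mean

def receive_light_id(decode_image, notified_hz, try_decode, use_average=False):
    if use_average:                       # single attempt at the average
        return try_decode(decode_image, mean(notified_hz))   # e.g. 9.6 kHz
    for hz in notified_hz:                # e.g. 9.599 kHz, then 9.601 kHz
        light_id = try_decode(decode_image, hz)
        if light_id is not None:
            return light_id
    return None
```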
  • FIG. 68A is a flowchart illustrating an example of the processing operation of the receiver 200 in the present embodiment.
  • the receiver 200 starts imaging (step S151) and initializes the parameter N to 1 (step S152).
  • the receiver 200 decodes the decoding image obtained by the imaging according to the frequency corresponding to the parameter N, and calculates an evaluation value for the decoding result (step S153).
  • the evaluation value indicates a higher numerical value as the decoding result is more similar to the correct optical ID.
  • the receiver 200 determines whether or not the numerical value of the parameter N is equal to Nmax that is a predetermined integer of 1 or more (step S154).
  • If the receiver 200 determines that the parameter N is not equal to Nmax (N in step S154), it increments the parameter N (step S155) and repeats the processing from step S153.
  • If the receiver 200 determines that the parameter N is equal to Nmax (Y in step S154), it registers, in association with location information indicating the location of the receiver 200, the frequency for which the maximum evaluation value was calculated as the optimum frequency.
  • The optimum frequency and location information registered in this way are used for light ID reception by a receiver 200 that moves to the location indicated by the location information after the registration.
  • the location information may be information indicating a position measured by GPS, for example, or may be identification information (for example, SSID: Service Set Identifier) of an access point in a wireless LAN (Local Area Network).
  • the receiver 200 that has registered with the server displays, for example, the AR image as described above according to the optical ID obtained by decoding at the optimum frequency.
  • FIG. 68B is a flowchart illustrating an example of the processing operation of the receiver 200 in the present embodiment.
  • After the registration with the server shown in FIG. 68A has been performed, the receiver 200 transmits location information indicating its current location to the server (step S161). Next, the receiver 200 acquires from the server the optimum frequency registered in association with that location information (step S162).
  • the receiver 200 starts imaging (step S163), and decodes the decoding image obtained by the imaging according to the optimum frequency acquired in step S162 (step S164).
  • the receiver 200 displays an AR image as described above, for example, according to the optical ID obtained by this decoding.
  • the receiver 200 can acquire the optimum frequency and receive the optical ID without executing the processing shown in FIG. 68A.
  • Note that when the optimum frequency cannot be obtained in step S162, the receiver 200 may acquire it by executing the process shown in FIG. 68A. A combined sketch of this registration and lookup follows.
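  • A sketch of the FIG. 68A registration and FIG. 68B reuse, assuming a server API (register/lookup) and an evaluation function that are not named in the patent:

```python
# Registration (FIG. 68A) and reuse (FIG. 68B) of the optimum frequency, sketched.
def register_optimum_frequency(images, frequencies, evaluate, server, location):
    scores = {hz: evaluate(images, hz) for hz in frequencies}  # steps S152-S155
    best_hz = max(scores, key=scores.get)  # frequency with max evaluation value
    server.register(location, best_hz)     # associate with the location info
    return best_hz

def decode_at_location(image, server, location, try_decode):
    hz = server.lookup(location)           # keyed by e.g. GPS position or SSID
    return try_decode(image, hz) if hz is not None else None  # steps S161-S164
```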
  • FIG. 69A is a flowchart showing a display method in the present embodiment.
  • the display method in the present embodiment is a display method in which the display device that is the above-described receiver 200 displays an image, and includes steps SL11 to SL16.
  • In step SL11, the image sensor acquires a captured display image and a decoding image by capturing a subject.
  • In step SL12, a light ID is acquired by decoding the decoding image.
  • In step SL13, the light ID is transmitted to the server.
  • In step SL14, an AR image and recognition information corresponding to the light ID are acquired from the server.
  • In step SL15, an area corresponding to the recognition information in the captured display image is recognized as a target area.
  • In step SL16, the captured display image with the AR image superimposed on the target area is displayed.
  • Since the AR image is displayed superimposed on the captured display image, an image useful to the user can be displayed. Furthermore, the AR image can be superimposed on an appropriate target area while the processing load is suppressed. A minimal sketch of this flow follows.
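  • A sketch of steps SL11 to SL16 as one pipeline, assuming camera, decoder, server, and display objects whose methods are illustrative, not from the patent:

```python
# Steps SL11-SL16 as one pipeline, sketched with assumed object interfaces.
def display_with_ar(camera, decoder, server, display):
    captured, decode_img = camera.capture()          # SL11: two kinds of images
    light_id = decoder.decode(decode_img)            # SL12: acquire light ID
    server.send(light_id)                            # SL13: transmit to server
    ar_image, recognition = server.fetch(light_id)   # SL14: AR + recognition info
    target = recognition.locate(captured)            # SL15: recognize target area
    display.show(captured.superimpose(ar_image, target))  # SL16: display result
```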
  • In typical augmented reality (that is, AR), a huge number of pre-stored recognition target images are compared with the captured display image to determine whether any of the recognition target images is included in it. If a recognition target image is determined to be included, the AR image corresponding to that recognition target image is superimposed on the captured display image, aligned based on the recognition target image.
  • In such typical AR, comparing a huge number of recognition target images with the captured display image, and additionally detecting the position of the recognition target image within the captured display image for alignment, requires a large amount of calculation and imposes a high processing load.
  • In contrast, in the present embodiment, the light ID is acquired by decoding the decoding image obtained by imaging the subject. That is, the light ID transmitted from the transmitter that is the subject is received, and the AR image and recognition information corresponding to this light ID are acquired from the server. Therefore, the server does not need to compare a huge number of recognition target images with the captured display image; it can simply select and transmit to the display device the AR image associated in advance with the light ID. Thereby, the amount of calculation can be reduced and the processing load can be significantly suppressed.
  • the recognition information corresponding to this light ID is acquired from the server.
  • the recognition information is information for recognizing a target area that is an area in which an AR image is superimposed in a captured display image.
  • This recognition information may be information indicating that a white square is the target area, for example.
  • In this case, the target area can be easily recognized, and the processing load can be further suppressed according to the content of the recognition information.
  • the server can arbitrarily set the content of the recognition information according to the optical ID, the balance between the processing load and the recognition accuracy can be appropriately maintained.
  • Further, the recognition information may be reference information for specifying a reference area in the captured display image. In this case, the reference area is specified from the captured display image based on the reference information, and the target area may be recognized based on the position of the reference area.
  • the recognition information may include reference information for specifying a reference area in the captured display image and target information indicating a relative position of the target area with respect to the reference area.
  • In this case, the reference area is specified from the captured display image based on the reference information, and the area at the relative position indicated by the target information, with the position of the reference area as a reference, is recognized as the target area.
  • The reference information may indicate that the position of the reference area in the captured display image is the same as the position, in the decoding image, of the bright line pattern area composed of a plurality of bright lines that appear through the exposure of the exposure lines of the image sensor.
  • the target area can be recognized with reference to the area corresponding to the bright line pattern area in the captured display image.
  • Alternatively, the reference information may indicate that the reference area in the captured display image is the area in which a display appears in the captured display image.
  • the target area can be recognized with reference to the area where the display is displayed.
  • Further, the first AR image, which is the above-described AR image, may be displayed for a predetermined display period while the display of a second AR image different from the first AR image is suppressed.
  • decoding of a newly acquired decoding image may be prohibited during the display period.
  • Since decoding a newly acquired decoding image is wasted processing while the display of the second AR image is suppressed, prohibiting it can reduce power consumption.
  • Further, the acceleration of the display device may be measured by an acceleration sensor during the display period, and it may be determined whether or not the measured acceleration is equal to or greater than a threshold value. When the acceleration is determined to be equal to or greater than the threshold value, the suppression of the display of the second AR image may be canceled and the second AR image may be displayed instead of the first AR image.
  • Accordingly, for example, when the user moves the display device sharply to direct the image sensor toward another subject, the suppression is canceled and the second AR image can be displayed immediately.
  • In the display of the captured display image, it may further be determined whether or not the user's face is approaching the display device, based on imaging by a face camera provided in the display device. When it is determined that the face is approaching, the first AR image may be displayed while the display of a second AR image different from the first AR image is suppressed.
  • Alternatively, whether or not the user's face is approaching the display device may be determined based on the acceleration of the display device measured by the acceleration sensor. When it is determined that the face is approaching, the first AR image may be displayed while the display of a second AR image different from the first AR image is suppressed.
  • This can prevent the first AR image from being replaced with a different second AR image while the user is looking at it closely.
  • Further, the captured display image and the decoding image may be acquired by capturing, as subjects, a plurality of displays each displaying an image.
  • In this case, in recognizing the target area, the area of the captured display image in which the transmission display, that is, the display that transmits the light ID among the plurality of displays, appears is recognized as the target area.
  • The first subtitle corresponding to the image displayed on the transmission display is superimposed on the target area as an AR image, and the second subtitle, an enlarged version of the first subtitle, is further superimposed on an area of the captured display image larger than the target area.
  • Since the first subtitle is superimposed on the image of the transmission display, the user can easily grasp which display's image the subtitle belongs to among the plurality of displays.
  • Since the second subtitle, an enlarged version of the first subtitle, is also displayed, the subtitle can be read easily even when the first subtitle is small and hard to read.
  • In the display of the captured display image, it may further be determined whether or not audio information is included in the information acquired from the server. When it is determined that audio information is included, the audio indicated by the audio information may be output with priority over the first and second subtitles.
  • FIG. 69B is a block diagram illustrating a configuration of the display device in the present embodiment.
  • The display device 10 is a display device that displays an image, and includes an image sensor 11, a decoding unit 12, a transmission unit 13, an acquisition unit 14, a recognition unit 15, and a display unit 16.
  • the display device 10 corresponds to the receiver 200 described above.
  • the image sensor 11 acquires a captured display image and a decoding image by imaging a subject.
  • the decoding unit 12 acquires the optical ID by decoding the decoding image.
  • the transmission unit 13 transmits the optical ID to the server.
  • the acquisition unit 14 acquires the AR image corresponding to the optical ID and the recognition information from the server.
  • the recognition unit 15 recognizes a region corresponding to the recognition information in the captured display image as a target region.
  • the display unit 16 displays a captured display image in which an AR image is superimposed on the target area.
  • Since the AR image is displayed superimposed on the captured display image, an image useful to the user can be displayed. Furthermore, the AR image can be superimposed on an appropriate target area while the processing load is suppressed.
  • each component may be configured by dedicated hardware or may be realized by executing a software program suitable for each component.
  • Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
  • Software that realizes the receiver 200 or the display device 10 according to the present embodiment is a program that causes a computer to execute the processing shown in the flowcharts of FIGS. 45, 52, 56, 62, 65, and 68A to 69A.
  • FIG. 70 is a diagram illustrating an example in which the receiver in the first modification of the fourth embodiment displays an AR image.
  • the receiver 200 acquires the above-described captured display image Pk, which is the normal captured image, and the above-described decoding image, which is the visible light communication image or the bright line image, by capturing the subject with the image sensor.
  • the image sensor of the receiver 200 images the transmitter 100c configured as a robot and the person 21 adjacent to the transmitter 100c.
  • the transmitter 100c is the transmitter according to any one of the first to third embodiments, and includes one or a plurality of light emitting elements (for example, LEDs) 131.
  • the transmitter 100c changes its luminance by blinking the one or more light emitting elements 131, and transmits an optical ID (optical identification information) by the luminance change.
  • This light ID is the above-mentioned visible light signal.
  • the receiver 200 acquires the captured display image Pk on which the transmitter 100c and the person 21 are imaged by the normal exposure time. Furthermore, the receiver 200 acquires the decoding image by capturing the transmitter 100c and the person 21 with the communication exposure time shorter than the normal exposure time.
  • the receiver 200 acquires the optical ID by decoding the decoding image. That is, the receiver 200 receives the optical ID from the transmitter 100c. The receiver 200 transmits the optical ID to the server. Then, the receiver 200 acquires the AR image P10 corresponding to the optical ID and the recognition information from the server. The receiver 200 recognizes an area corresponding to the recognition information in the captured display image Pk as a target area. For example, the receiver 200 recognizes an area on the right side of the area where the robot that is the transmitter 100c is projected as the target area. Specifically, the receiver 200 specifies the distance between the two markers 132a and 132b of the transmitter 100c displayed in the captured display image Pk.
  • the receiver 200 then recognizes, as the target area, an area having a width and a height corresponding to that distance. That is, the recognition information indicates the shape of the markers 132a and 132b and the position and size of the target area relative to the markers 132a and 132b, as in the sketch below.
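  • A minimal sketch of deriving the target area from the two markers; the scale factors standing in for the recognition information are assumed values:

```python
# Deriving the target area from markers 132a and 132b, sketched.
import math

def target_area_from_markers(m_a, m_b, w_scale=1.5, h_scale=2.0):
    """m_a, m_b: (x, y) pixel positions of markers 132a and 132b."""
    d = math.dist(m_a, m_b)             # marker distance sets the scale
    width, height = w_scale * d, h_scale * d
    x = max(m_a[0], m_b[0]) + 0.25 * d  # area to the right of the robot (assumed)
    y = min(m_a[1], m_b[1])
    return (x, y, width, height)        # target area for AR image P10
```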
  • the receiver 200 superimposes the AR image P10 on the target area, and displays the captured display image Pk on which the AR image P10 is superimposed on the display 201.
  • the receiver 200 acquires an AR image P10 indicating another robot different from the transmitter 100c.
  • Thereby, the captured display image Pk can be displayed as if another robot actually existed next to the transmitter 100c. Accordingly, the person 21 can be photographed together with that other robot and the transmitter 100c, even though no such robot actually exists.
  • FIG. 71 is a diagram illustrating another example in which the receiver 200 according to the first modification of the fourth embodiment displays an AR image.
  • the transmitter 100 is configured as an image display device having a display panel, and transmits a light ID by changing the luminance while displaying a still image PS on the display panel.
  • the display panel is, for example, a liquid crystal display or an organic EL (electroluminescence) display.
  • the receiver 200 acquires the captured display image Pm and the decoding image by imaging the transmitter 100 in the same manner as described above.
  • the receiver 200 acquires the optical ID by decoding the decoding image. That is, the receiver 200 receives the optical ID from the transmitter 100.
  • the receiver 200 transmits the optical ID to the server.
  • the receiver 200 acquires the AR image P11 and the recognition information corresponding to the optical ID from the server.
  • the receiver 200 recognizes an area corresponding to the recognition information in the captured display image Pm as a target area. For example, the receiver 200 recognizes the area where the display panel of the transmitter 100 is displayed as the target area.
  • the receiver 200 superimposes the AR image P11 on the target area, and displays the captured display image Pm on which the AR image P11 is superimposed on the display 201.
  • the AR image P11 is a moving image having the same or substantially the same picture as the still image PS displayed on the display panel of the transmitter 100 as the first picture in the display order. That is, the AR image P11 is a moving image that starts to move from the still image PS.
  • Thereby, the receiver 200 can display the captured display image Pm as if an image display device displaying a moving image actually existed.
  • FIG. 72 is a diagram illustrating another example in which the receiver 200 according to the first modification of the fourth embodiment displays an AR image.
  • the transmitter 100 is configured as a station name sign, and transmits a light ID by changing the luminance.
  • the receiver 200 images the transmitter 100 from a position away from the transmitter 100, as shown in (a) of FIG. 72. Thereby, the receiver 200 acquires the captured display image Pn and the decoding image in the same manner as described above.
  • the receiver 200 acquires the optical ID by decoding the decoding image. That is, the receiver 200 receives the optical ID from the transmitter 100.
  • the receiver 200 transmits the optical ID to the server.
  • the receiver 200 acquires the AR images P12 to P14 corresponding to the optical ID and the recognition information from the server.
  • the receiver 200 recognizes two regions corresponding to the recognition information in the captured display image Pn as first and second target regions. For example, the receiver 200 recognizes the area around the transmitter 100 as the first target area.
  • the receiver 200 superimposes the AR image P12 on the first target area, and displays the captured display image Pn on which the AR image P12 is superimposed on the display 201.
  • the AR image P12 is an arrow that prompts the user of the receiver 200 to approach the transmitter 100.
  • When the AR image P12 is displayed superimposed on the first target area of the captured display image Pn, the user approaches the transmitter 100 while keeping the receiver 200 facing the transmitter 100.
  • As the receiver 200 approaches, the area of the transmitter 100 (corresponding to the reference area described above) appearing in the captured display image Pn becomes larger. The receiver 200 then further superimposes the AR image P13 on the second target area, which is the area in which the transmitter 100 appears, for example as shown in (b) of FIG. 72. That is, the receiver 200 displays the captured display image Pn on which the AR images P12 and P13 are superimposed on the display 201.
  • the AR image P13 is a message that informs the user of the outline of the vicinity of the station indicated by the station name sign.
  • the size of the AR image P13 is equal to the size of the area of the transmitter 100 appearing in the captured display image Pn.
  • The user approaches the transmitter 100 further while keeping the receiver 200 facing it. As the receiver 200 approaches the transmitter 100, the area of the transmitter 100 (corresponding to the reference area described above) appearing in the captured display image Pn becomes even larger.
  • The receiver 200 then changes the AR image superimposed on the second target area from the AR image P13 to the AR image P14, for example as illustrated in (c) of FIG. 72. Furthermore, the receiver 200 deletes the AR image P12 superimposed on the first target area.
  • the receiver 200 displays the captured display image Pn on which the AR image P14 is superimposed on the display 201.
  • the AR image P14 is a message that informs the user of the details around the station indicated by the station name sign.
  • the size of the AR image P14 is equal to the size of the area of the transmitter 100 appearing in the captured display image Pn.
  • the area of the transmitter 100 is larger as the receiver 200 is closer to the transmitter 100. Therefore, the AR image P14 is larger than the AR image P13.
  • the receiver 200 enlarges the AR image and displays more information as it approaches the transmitter 100.
  • Moreover, since an arrow prompting the user to approach, such as the AR image P12, is displayed, the user can easily grasp that more information will be displayed upon approaching the transmitter 100. A minimal sketch of this distance-dependent selection follows.
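  • A sketch of selecting AR images from the apparent size of the reference area, which grows as the receiver approaches; the pixel thresholds are assumptions:

```python
# Distance-dependent AR selection, sketched.
def choose_ar_images(reference_area_px: int) -> list:
    if reference_area_px < 10_000:     # far away: arrow prompting approach
        return ["P12_arrow"]
    if reference_area_px < 40_000:     # closer: arrow plus outline message
        return ["P12_arrow", "P13_outline"]
    return ["P14_details"]             # close: detailed message, arrow removed
```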
  • FIG. 73 is a diagram illustrating another example in which the receiver 200 in the first modification of the fourth embodiment displays an AR image.
  • In the example of FIG. 72, the receiver 200 displays more information as it approaches the transmitter 100; however, the receiver 200 may instead display a large amount of information in the form of, for example, a balloon regardless of the distance to the transmitter 100.
  • the receiver 200 acquires the captured display image Po and the decoding image by imaging the transmitter 100 as described above.
  • the receiver 200 acquires the optical ID by decoding the decoding image. That is, the receiver 200 receives the optical ID from the transmitter 100.
  • the receiver 200 transmits the optical ID to the server.
  • the receiver 200 acquires the AR image P15 and the recognition information corresponding to the optical ID from the server.
  • the receiver 200 recognizes an area corresponding to the recognition information in the captured display image Po as a target area. For example, the receiver 200 recognizes an area around the transmitter 100 as a target area.
  • the receiver 200 superimposes the AR image P15 on the target area, and displays the captured display image Po on which the AR image P15 is superimposed on the display 201.
  • the AR image P15 is a message that informs the user of the details of the vicinity of the station indicated by the station name in a balloon form.
  • the user of the receiver 200 can display a lot of information on the receiver 200 without approaching the transmitter 100.
  • FIG. 74 is a diagram illustrating another example of the receiver 200 in the first modification of the fourth embodiment.
  • the receiver 200 is configured as a smartphone in the above-described example, but may be configured as a head-mounted display (also referred to as glass) including an image sensor, as in the example illustrated in FIG.
  • Such a receiver 200 acquires the light ID by decoding only a partial area of the decoding image, namely the decoding target area.
  • the receiver 200 includes a line-of-sight detection camera 203, as shown in FIG. 74.
  • the line-of-sight detection camera 203 images the eyes of the user wearing the head-mounted display that is the receiver 200.
  • the receiver 200 detects the user's line of sight based on the eye image obtained by the imaging by the line-of-sight detection camera 203.
  • the receiver 200 displays a line-of-sight frame 204 so that it appears in the area of the user's field of view to which the detected line of sight is directed. Accordingly, the line-of-sight frame 204 moves following the movement of the user's line of sight.
  • the receiver 200 treats the area of the decoding image corresponding to the line-of-sight frame 204 as the decoding target area. In other words, even if a bright line pattern area exists outside the decoding target area, the receiver 200 does not decode it; it performs decoding only on bright line pattern areas inside the decoding target area.
  • Since decoding is not performed on all of the bright line pattern areas, the processing load can be reduced and the display of unneeded AR images can be suppressed.
  • Alternatively, the receiver 200 may decode only the bright line pattern area inside the decoding target area and output only the sound corresponding to that bright line pattern area. Or, the receiver 200 may decode each of the plurality of bright line pattern areas included in the decoding image, output the sound corresponding to the bright line pattern area inside the decoding target area at high volume, and output the sound corresponding to bright line pattern areas outside the decoding target area at low volume.
  • Further, the receiver 200 may output the sound corresponding to a bright line pattern area at a higher volume as that area is closer to the decoding target area, as in the sketch below.
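  • A minimal sketch of this gaze-dependent audio mixing: full volume inside the decoding target area, falling off with distance from it; the falloff distance and floor volume are assumptions:

```python
# Gaze-dependent audio volume, sketched. distance_px is the pixel distance
# from a bright line pattern area to the decoding target (line-of-sight) area.
def region_volume(distance_px: float, falloff_px: float = 300.0) -> float:
    if distance_px <= 0:               # inside the decoding target area
        return 1.0                     # full volume
    return max(0.1, 1.0 - distance_px / falloff_px)  # quieter when farther
```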
  • FIG. 75 is a diagram illustrating another example in which the receiver 200 in the first modification of the fourth embodiment displays an AR image.
  • the transmitter 100 is configured as an image display device having a display panel, and transmits an optical ID by changing luminance while displaying an image on the display panel.
  • the receiver 200 acquires the captured display image Pp and the decoding image in the same manner as described above by imaging the transmitter 100.
  • the receiver 200 specifies, from the captured display image Pp, an area at the same position and of the same size as the bright line pattern area in the decoding image.
  • the receiver 200 may display the scanning line P100 that repeatedly moves from one end of the region to the other end.
  • the receiver 200 acquires the optical ID by decoding the decoding image and transmits the optical ID to the server. Then, the receiver 200 acquires the AR image corresponding to the optical ID and the recognition information from the server. The receiver 200 recognizes an area corresponding to the recognition information in the captured display image Pp as a target area.
  • When the receiver 200 recognizes such a target area, it ends the display of the scanning line P100, superimposes the AR image on the target area, and displays the captured display image Pp on which the AR image is superimposed on the display 201.
  • Since the moving scanning line P100 is displayed from when the transmitter 100 is imaged until the AR image is displayed, the user can be notified that processing such as reading the light ID and acquiring the AR image is in progress.
  • FIG. 76 is a diagram illustrating another example in which the receiver 200 in the first modification of the fourth embodiment displays an AR image.
  • Each of the two transmitters 100 is configured as an image display device having a display panel, for example as shown in FIG. 76, and transmits a light ID by changing the luminance while displaying the same still image PS on its display panel.
  • the two transmitters 100 transmit different optical IDs (for example, optical IDs “01” and “02”) by changing the luminance in different manners.
  • the receiver 200 acquires the captured display image Pq and the decoding image by imaging the two transmitters 100 in the same manner as described above.
  • the receiver 200 acquires the optical IDs “01” and “02” by decoding the decoding image. That is, the receiver 200 receives the optical ID “01” from one of the two transmitters 100 and receives the optical ID “02” from the other.
  • the receiver 200 transmits those optical IDs to the server.
  • the receiver 200 acquires the AR image P16 corresponding to the optical ID “01” and the recognition information from the server.
  • the receiver 200 acquires the AR image P17 corresponding to the optical ID “02” and the recognition information from the server.
  • the receiver 200 recognizes an area corresponding to the recognition information in the captured display image Pq as a target area. For example, the receiver 200 recognizes an area where the display panels of the two transmitters 100 are displayed as the target area. Then, the receiver 200 superimposes the AR image P16 on the target area corresponding to the light ID “01”, and superimposes the AR image P17 on the target area corresponding to the light ID “02”. Then, the receiver 200 displays the captured display image Pq on which the AR images P16 and P17 are superimposed on the display 201.
  • The AR image P16 is a moving image whose first picture in display order is the same as or substantially the same as the still image PS displayed on the display panel of the transmitter 100 corresponding to the light ID "01".
  • Likewise, the AR image P17 is a moving image whose first picture in display order is the same as or substantially the same as the still image PS displayed on the display panel of the transmitter 100 corresponding to the light ID "02". That is, the first pictures of the moving images P16 and P17 are the same, but the AR images P16 and P17 are different moving images whose subsequent pictures differ.
  • Thereby, the receiver 200 can display the captured display image Pq as if image display devices actually existed, each playing a different moving image that starts from the same picture.
  • FIG. 77 is a flowchart illustrating an example of a processing operation of the receiver 200 in the first modification of the fourth embodiment.
  • The processing operation shown in the flowchart of FIG. 77 is an example of the processing operation of the receiver 200 when the receiver 200 individually images each of the two transmitters 100 shown in FIG. 76.
  • the receiver 200 acquires the first light ID by imaging the first transmitter 100 as the first subject (step S201).
  • the receiver 200 recognizes the first subject from the captured display image (step S202). That is, the receiver 200 acquires the first AR image and the first recognition information corresponding to the first light ID from the server, and recognizes the first subject based on the first recognition information.
  • the receiver 200 starts reproduction of the first moving image that is the first AR image from the beginning (step S203). That is, the receiver 200 starts reproduction from the first picture of the first moving image.
  • the receiver 200 determines whether or not the first subject is out of the captured display image (step S204). That is, the receiver 200 determines whether or not the first subject cannot be recognized from the captured display image. Here, if it is determined that the first subject has deviated from the captured display image (Y in step S204), the receiver 200 interrupts the reproduction of the first moving image that is the first AR image (step S205).
  • Next, the receiver 200 determines whether or not a second light ID different from the first light ID acquired in step S201 has been acquired by imaging, as a second subject, a second transmitter 100 different from the first transmitter 100 (step S206).
  • If the receiver 200 determines that the second light ID has been acquired (Y in step S206), it performs the same processing as steps S202 to S203 for the second light ID. That is, the receiver 200 recognizes the second subject from the captured display image (step S207).
  • the receiver 200 starts reproduction of the second moving image that is the second AR image corresponding to the second optical ID from the beginning (step S208). That is, the receiver 200 starts playback from the first picture of the second moving image.
  • If the receiver 200 determines in step S206 that the second light ID has not been acquired (N in step S206), it determines whether or not the first subject has entered the captured display image again (step S209). That is, the receiver 200 determines whether or not the first subject is recognized again from the captured display image.
  • If the receiver 200 determines that the first subject has entered the captured display image again (Y in step S209), it further determines whether or not a predetermined time has passed (step S210). That is, the receiver 200 determines whether or not the predetermined time has elapsed between when the first subject left the captured display image and when it entered again.
  • When it is determined that the predetermined time has not elapsed (N in step S210), the receiver 200 resumes reproduction of the interrupted first moving image from the middle (step S211).
  • The reproduction-resume top picture, that is, the picture displayed first when reproduction resumes from the middle, may be the picture next in display order after the picture that was displayed last when reproduction of the first moving image was interrupted. Alternatively, the reproduction-resume top picture may be a picture that precedes the last displayed picture by n pictures (n is an integer of 1 or more) in display order.
  • On the other hand, when it is determined that the predetermined time has elapsed (Y in step S210), the receiver 200 starts reproduction of the interrupted first moving image from the beginning (step S212). A minimal sketch of this resume rule follows.
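  • A sketch of the resume rule; the timeout and rewind count correspond to the "predetermined time" and "n" of the text, and their default values here are assumptions:

```python
# Resume rule of FIG. 77, sketched.
def resume_index(last_shown: int, gap_seconds: float,
                 rewind_n: int = 0, timeout_s: float = 5.0) -> int:
    if gap_seconds >= timeout_s:  # Y in step S210: restart from the beginning
        return 0
    if rewind_n:                  # resume n pictures before the last shown one
        return max(0, last_shown - rewind_n)
    return last_shown + 1         # resume from the next picture (step S211)
```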
  • When the receiver 200 superimposes the AR image on the target area of the captured display image, it may adjust the brightness of the AR image. That is, the receiver 200 determines whether or not the brightness of the AR image acquired from the server matches the brightness of the target area of the captured display image. If it determines that they do not match, the receiver 200 adjusts the brightness of the AR image so that it matches the brightness of the target area, and superimposes the brightness-adjusted AR image on the target area of the captured display image. This brings the superimposed AR image closer to the image of a real object and reduces the user's sense of incongruity with respect to the AR image.
  • Here, the brightness of the AR image is the spatial average luminance of the AR image, and the brightness of the target area is likewise the spatial average luminance of the target area. A minimal sketch of this adjustment follows.
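  • A sketch of matching the AR image's spatial average luminance to the target area, using NumPy on grayscale images; the simple gain model is an assumption:

```python
# Matching spatial average luminance of the AR image to the target area.
import numpy as np

def match_brightness(ar_image: np.ndarray, target_area: np.ndarray) -> np.ndarray:
    ar_mean = ar_image.mean()          # spatial average luminance of AR image
    target_mean = target_area.mean()   # spatial average luminance of target area
    if ar_mean == 0:
        return ar_image                # avoid division by zero for a black image
    gain = target_mean / ar_mean       # scale AR pixels toward the target level
    return np.clip(ar_image.astype(float) * gain, 0, 255).astype(np.uint8)
```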
  • the receiver 200 may enlarge the AR image and display it on the entire display 201.
  • In the above example, the receiver 200 switches the tapped AR image to another AR image, but it may also switch the AR image automatically regardless of tapping. For example, when an AR image has been displayed for a predetermined time, the receiver 200 switches it to another AR image and displays that. Also, when the current time reaches a predetermined time, the receiver 200 switches the AR image displayed so far to another AR image and displays it. Thereby, the user can easily see a new AR image without performing any operation.
  • FIG. 78 is a diagram illustrating an example of a problem when displaying an AR image assumed in the receiver 200 according to the fourth embodiment or the modification 1 thereof.
  • the receiver 200 in the fourth embodiment or its modification example 1 captures the subject at time t1.
  • the above-described subject is a transmitter such as a television that transmits a light ID according to a change in luminance, or a poster, a guide board, or a signboard illuminated by light from the transmitter.
  • At this time, the receiver 200 displays, as the captured display image on the display 201, the entire image obtained by the effective pixel area of the image sensor (hereinafter referred to as the entire captured image).
  • the receiver 200 recognizes, in the captured display image, an area corresponding to the recognition information acquired based on the light ID as a target area on which the AR image is superimposed.
  • the target area is an area indicating an image of a transmitter such as a television or an image of a poster, for example. Then, the receiver 200 superimposes the AR image on the target area of the captured display image, and displays the captured display image on which the AR image is superimposed on the display 201.
  • the AR image may be a still image or a moving image, or a character string including one or more characters or symbols.
  • However, at time t2, the area in the image sensor corresponding to the target area, namely the recognition area, may protrude from the effective pixel area. Here, the recognition area is the area within the effective pixel area of the image sensor onto which the image of the target area in the captured display image is projected. That is, the effective pixel area and the recognition area in the image sensor correspond respectively to the captured display image and the target area on the display 201.
  • When the recognition area protrudes from the effective pixel area, the receiver 200 cannot recognize the target area from the captured display image and therefore cannot display the AR image.
  • To address this, the receiver 200 in the present modification acquires, as the entire captured image, an image having a wider angle of view than the captured display image displayed on the entire display 201.
  • FIG. 79 is a diagram illustrating an example in which the receiver 200 in the second modification of the fourth embodiment displays an AR image.
  • The angle of view of the entire captured image of the receiver 200 according to this modification, that is, the angle of view of the effective pixel area of the image sensor, is wider than the angle of view of the captured display image displayed on the entire display 201.
  • In the effective pixel area, the area corresponding to the image range displayed on the display 201 is hereinafter referred to as the display area.
  • the receiver 200 images the subject at time t1.
  • At this time, the receiver 200 displays on the display 201, as the captured display image, only the image obtained by the display area, which is narrower than the effective pixel area, out of the entire captured image obtained by the effective pixel area of the image sensor.
  • The receiver 200 recognizes the area in the entire captured image corresponding to the recognition information acquired based on the light ID as the target area on which the AR image is to be superimposed. Then, the receiver 200 superimposes the AR image on the target area of the captured display image and displays the captured display image with the AR image superimposed on the display 201.
  • When the receiver 200 approaches the subject, the recognition area in the image sensor expands.
  • At time t2, the recognition area protrudes from the display area in the image sensor. That is, the image of the target area (for example, a poster image) protrudes from the captured display image displayed on the display 201.
  • However, the recognition area in the image sensor does not protrude from the effective pixel area. That is, even at time t2, the receiver 200 acquires an entire captured image that includes the target area.
  • Therefore, the receiver 200 can still recognize the target area from the entire captured image, superimposes the corresponding part of the AR image only on the part of the target area within the captured display image, and displays the result on the display 201.
  • As a result, the display of the AR image can be continued.
  • FIG. 80 is a flowchart illustrating an example of a processing operation of the receiver 200 in the second modification of the fourth embodiment.
  • First, the receiver 200 acquires the entire captured image and the decoding image by imaging the subject with the image sensor (step S301). Next, the receiver 200 acquires a light ID by decoding the decoding image (step S302). Next, the receiver 200 transmits the light ID to the server (step S303). Next, the receiver 200 acquires the AR image and the recognition information corresponding to the light ID from the server (step S304). Next, the receiver 200 recognizes the area in the entire captured image corresponding to the recognition information as the target area (step S305).
  • Next, the receiver 200 determines whether or not the recognition area, that is, the area in the effective pixel area of the image sensor corresponding to the image of the target area, protrudes from the display area (step S306). If it determines that the recognition area protrudes (Yes in step S306), the receiver 200 superimposes only the corresponding part of the AR image on the part of the target area within the captured display image and displays it (step S307). On the other hand, if it determines that the recognition area does not protrude (No in step S306), the receiver 200 superimposes the AR image on the target area of the captured display image and displays the captured display image with the AR image superimposed (step S308).
  • Then, the receiver 200 determines whether or not the AR image display process should be terminated (step S309), and if it determines that the process should not be terminated (No in step S309), it repeatedly executes the processing from step S305.
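  • The flow of FIG. 80 can be summarized by the following sketch. The receiver and server helper methods (capture, decode_light_id, and so on) are hypothetical stand-ins for the operations described above, not an API defined by this disclosure.

    def ar_display_loop(receiver, server):
        entire_image, decode_image = receiver.capture()              # step S301
        light_id = receiver.decode_light_id(decode_image)            # step S302
        server.send_light_id(light_id)                               # step S303
        ar_image, recognition_info = server.fetch_ar(light_id)       # step S304
        while True:
            target = receiver.recognize_target(entire_image, recognition_info)  # S305
            if receiver.recognition_area_protrudes(target):          # step S306
                # Step S307: superimpose only the part of the AR image whose
                # corresponding part of the target area is inside the display area.
                receiver.superimpose_partial(ar_image, target)
            else:
                receiver.superimpose_whole(ar_image, target)         # step S308
            if receiver.should_end_display():                        # step S309
                break
            entire_image, _ = receiver.capture()                     # repeat from S305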
  • FIG. 81 is a diagram illustrating another example in which the receiver 200 in the second modification of the fourth embodiment displays an AR image.
  • the receiver 200 may switch the screen display of the AR image according to the ratio of the size of the recognition area to the display area.
  • Specifically, where w1 and h1 are the horizontal and vertical widths of the display area and w2 and h2 are the horizontal and vertical widths of the recognition area, the receiver 200 compares the larger of the ratios (h2/h1) and (w2/w1) with a threshold.
  • While displaying the captured display image with the AR image superimposed on the target area, as shown in (screen display 1) of FIG. 81, the receiver 200 compares the larger ratio with a first threshold (for example, 0.9).
  • When the larger ratio becomes equal to or greater than the first threshold, the receiver 200 enlarges the AR image and displays it on the entire display 201, as shown in (screen display 2) of FIG. 81. Note that even when the recognition area becomes larger than the display area, and further when the recognition area becomes larger than the effective pixel area, the receiver 200 continues to display the enlarged AR image on the entire display 201.
  • While the enlarged AR image is displayed on the entire display 201, the receiver 200 compares the larger ratio with a second threshold (for example, 0.7).
  • the second threshold is smaller than the first threshold.
  • When the larger ratio falls below the second threshold, the receiver 200 again displays the captured display image with the AR image superimposed on the target area, as shown in (screen display 1) of FIG. 81.
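  • Since the first threshold (for example, 0.9) is larger than the second threshold (for example, 0.7), this switching behaves as a hysteresis comparator. A minimal sketch of that decision, assuming the widths w1, h1 of the display area and w2, h2 of the recognition area are available:

    FIRST_THRESHOLD = 0.9   # switch to full-screen AR at or above this ratio
    SECOND_THRESHOLD = 0.7  # switch back to superimposed display below this ratio

    def update_screen_display(full_screen: bool, w1, h1, w2, h2) -> bool:
        """Return True for (screen display 2: full-screen AR image),
        False for (screen display 1: AR image superimposed on the target area)."""
        ratio = max(w2 / w1, h2 / h1)    # the larger of the two ratios is compared
        if not full_screen and ratio >= FIRST_THRESHOLD:
            return True                  # recognition area nearly fills the display
        if full_screen and ratio < SECOND_THRESHOLD:
            return False                 # recognition area has shrunk enough
        return full_screen               # inside the hysteresis band: keep state

  • Because the state changes only at the outer thresholds, ratios fluctuating between 0.7 and 0.9 do not cause the display to flip back and forth, which is the stabilizing effect described below.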
  • FIG. 82 is a flowchart illustrating another example of the processing operation of the receiver 200 in the second modification of the fourth embodiment.
  • First, the receiver 200 performs the light ID process (step S301a). This light ID process includes steps S301 to S304 shown in FIG. 80.
  • Next, the receiver 200 recognizes the area in the captured display image corresponding to the recognition information as the target area (step S311). Then, the receiver 200 superimposes the AR image on the target area of the captured display image and displays the captured display image with the AR image superimposed (step S312).
  • Then, the receiver 200 repeatedly executes the processing from step S314.
  • When the larger ratio falls below the second threshold, the receiver 200 superimposes the AR image on the target area of the captured display image and displays the captured display image with the AR image superimposed (step S316).
  • Then, the receiver 200 determines whether or not the AR image display process should be terminated (step S317). If it determines that the process should not be terminated (No in step S317), the receiver 200 repeatedly executes the processing from step S313.
  • In this way, frequent switching of the screen display of the receiver 200 between (screen display 1) and (screen display 2) can be prevented, and the state of the screen display can be stabilized.
  • the display area and the effective pixel area may be the same or different.
  • In the example described above, the ratio of the size of the recognition area to the display area is used; however, the ratio of the size of the recognition area to the effective pixel area may be used instead of the display area.
  • FIG. 83 is a diagram illustrating another example in which the receiver 200 according to the second modification of the fourth embodiment displays an AR image.
  • the image sensor of the receiver 200 has an effective pixel area wider than the display area.
  • the receiver 200 images the subject at time t1.
  • At this time, the receiver 200 displays on the display 201, as the captured display image, only the image obtained by the display area, which is narrower than the effective pixel area, out of the entire captured image obtained by the effective pixel area of the image sensor.
  • The receiver 200 recognizes the area in the entire captured image corresponding to the recognition information acquired based on the light ID as the target area on which the AR image is to be superimposed. Then, the receiver 200 superimposes the AR image on the target area of the captured display image and displays the captured display image with the AR image superimposed on the display 201.
  • the recognition area in the image sensor moves, for example, in the upper left direction in FIG. 83, and protrudes from the display area at time t2. That is, an image of the target region (for example, a poster image) protrudes from the captured display image displayed on the display 201.
  • However, the recognition area in the image sensor does not protrude from the effective pixel area. That is, even at time t2, the receiver 200 acquires an entire captured image that includes the target area.
  • Therefore, the receiver 200 can recognize the target area from the entire captured image, superimposes the corresponding part of the AR image only on the part of the target area within the captured display image, and displays the result on the display 201.
  • At this time, the receiver 200 changes the size and position of the displayed part of the AR image in accordance with the movement of the recognition area in the image sensor, that is, the movement of the target area in the entire captured image.
  • When the recognition area protrudes from the display area as described above, the receiver 200 compares with a threshold the numbers of pixels corresponding to the distances between the edge of the recognition area and the edge of the effective pixel area (hereinafter referred to as inter-area distances). Specifically, let dh be the number of pixels corresponding to the shorter (hereinafter referred to as the first distance) of the distance between the upper side of the recognition area and the upper side of the effective pixel area and the distance between the lower side of the recognition area and the lower side of the effective pixel area. Similarly, let dw be the number of pixels corresponding to the shorter (hereinafter referred to as the second distance) of the distance between the left side of the recognition area and the left side of the effective pixel area and the distance between the right side of the recognition area and the right side of the effective pixel area. The inter-area distance used for the comparison is the shorter of the first and second distances.
  • The receiver 200 thus compares the smaller of the pixel numbers dw and dh with the threshold N. Then, for example, when the smaller number of pixels becomes equal to or smaller than the threshold N at time t2, the receiver 200 fixes the size and position of the displayed part of the AR image, without changing them according to the position of the recognition area in the image sensor. That is, the receiver 200 switches the screen display of the AR image. For example, the receiver 200 fixes the size and position of the displayed part of the AR image to the size and position of the part of the AR image displayed on the display 201 at the moment the smaller number of pixels reached the threshold N.
  • Even when the recognition area moves further at time t3 and protrudes from the effective pixel area, the receiver 200 continues to display the part of the AR image as at time t2. That is, as long as the smaller of the pixel numbers dw and dh is equal to or smaller than the threshold N, the receiver 200 continues to superimpose, on the captured display image, the part of the AR image whose size and position are fixed as at time t2.
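  • A sketch of this edge-distance test follows, with the recognition area and effective pixel area modeled as axis-aligned rectangles; the Rect type and helper names are illustrative assumptions.

    from typing import NamedTuple

    class Rect(NamedTuple):
        left: int
        top: int
        right: int
        bottom: int

    def inter_area_pixels(recognition: Rect, effective: Rect) -> tuple[int, int]:
        """Pixel counts dw and dh: each is the shorter of the two opposing gaps
        between the recognition area's edges and the effective pixel area's edges."""
        dh = min(recognition.top - effective.top,
                 effective.bottom - recognition.bottom)   # first distance (vertical)
        dw = min(recognition.left - effective.left,
                 effective.right - recognition.right)     # second distance (horizontal)
        return dw, dh

    def should_fix_ar_image(recognition: Rect, effective: Rect, n: int) -> bool:
        """Fix the displayed AR image's size and position once the recognition
        area comes within N pixels of the effective pixel area's edge."""
        dw, dh = inter_area_pixels(recognition, effective)
        return min(dw, dh) <= n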
  • In the example of FIG. 83, the receiver 200 changes the size and position of the displayed part of the AR image in accordance with the movement of the recognition area in the image sensor; however, the display magnification and position of the entire AR image may be changed instead.
  • FIG. 84 is a diagram illustrating another example in which the receiver 200 according to the second modification of the fourth embodiment displays an AR image. Specifically, FIG. 84 shows an example in which the display magnification of the AR image is changed.
  • In the example of FIG. 84, the recognition area in the image sensor moves, for example, in the upper left direction, and protrudes from the display area at time t2. That is, the image of the target area (for example, a poster image) protrudes from the captured display image displayed on the display 201.
  • However, the recognition area in the image sensor does not protrude from the effective pixel area. That is, even at time t2, the receiver 200 acquires an entire captured image that includes the target area. As a result, the receiver 200 can recognize the target area from the entire captured image.
  • In this case, the receiver 200 changes the display magnification of the AR image so that the size of the entire AR image matches the size of the part of the target area remaining in the captured display image. That is, the receiver 200 reduces the AR image. Then, the receiver 200 superimposes the AR image whose display magnification has been changed (that is, the reduced AR image) on that part of the target area and displays it on the display 201.
  • The receiver 200 then changes the display magnification and position of the displayed AR image in accordance with the movement of the recognition area in the image sensor, that is, the movement of the target area in the entire captured image.
  • Here, as in the example of FIG. 83, the receiver 200 compares the smaller of the pixel numbers dw and dh with the threshold N. Then, for example, when the smaller number of pixels becomes equal to or smaller than the threshold N at time t2, the receiver 200 fixes the display magnification and position of the AR image, without changing them according to the position of the recognition area in the image sensor. That is, the receiver 200 switches the screen display of the AR image. For example, the receiver 200 fixes the display magnification and position of the displayed AR image to the display magnification and position of the AR image displayed on the display 201 at the moment the smaller number of pixels reached the threshold N.
  • Therefore, even when the recognition area moves further at time t3 and protrudes from the effective pixel area, the receiver 200 continues to display the AR image as at time t2. That is, as long as the smaller of the pixel numbers dw and dh is equal to or smaller than the threshold N, the receiver 200 continues to superimpose the AR image, with its display magnification and position fixed, on the captured display image as at time t2.
  • In the examples of FIGS. 83 and 84, the smaller of the pixel numbers dw and dh is compared with the threshold; however, the ratio corresponding to the smaller number of pixels may be compared with a threshold instead.
  • the ratio of the number of pixels dw is, for example, the ratio of the number of pixels dw to the number of pixels w0 in the horizontal direction of the effective pixel region (dw / w0).
  • the ratio of the number of pixels dh is, for example, the ratio of the number of pixels dh to the number of pixels h0 in the vertical direction of the effective pixel region (dh / h0).
  • Note that the ratios of the pixel numbers dw and dh may instead be expressed using the number of pixels in the horizontal or vertical direction of the display area, rather than that of the effective pixel area.
  • the threshold value compared with the ratio of the number of pixels dw and dh is, for example, 0.05.
  • Alternatively, the angle of view corresponding to the smaller of the pixel numbers dw and dh may be compared with a threshold. For example, where θ is the angle of view corresponding to m pixels of the image sensor, the angle of view corresponding to the number of pixels dw is θ × dw / m, and the angle of view corresponding to the number of pixels dh is θ × dh / m.
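  • The three interchangeable criteria described above (raw pixel counts, ratios, and angles of view) might be expressed as follows. Only the 0.05 default for the ratio criterion comes from the text; the other thresholds are placeholders.

    def below_pixel_threshold(dw: int, dh: int, n: int) -> bool:
        # Compare the smaller of dw and dh with a pixel-count threshold N.
        return min(dw, dh) <= n

    def below_ratio_threshold(dw: int, dh: int, w0: int, h0: int, r: float = 0.05) -> bool:
        # Ratios of dw and dh to the effective pixel area's width w0 and height h0.
        return min(dw / w0, dh / h0) <= r

    def below_angle_threshold(dw: int, dh: int, theta: float, m: int, a: float) -> bool:
        # Angle of view subtended by dw or dh pixels, assuming theta (degrees)
        # spans m pixels of the image sensor (a linear approximation).
        return min(theta * dw / m, theta * dh / m) <= a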
  • In the examples of FIGS. 83 and 84, the receiver 200 switches the screen display of the AR image based on the inter-area distance between the effective pixel area and the recognition area; however, the screen display of the AR image may instead be switched based on the relationship between the display area and the recognition area.
  • FIG. 85 is a diagram illustrating another example in which the receiver 200 in the second modification of the fourth embodiment displays an AR image. Specifically, FIG. 85 shows an example in which the screen display of the AR image is switched based on the relationship between the display area and the recognition area. In the example shown in FIG. 85, as in the example shown in FIG. 79, the image sensor of the receiver 200 has an effective pixel area wider than the display area.
  • the receiver 200 images the subject at time t1.
  • At this time, the receiver 200 displays on the display 201, as the captured display image, only the image obtained by the display area, which is narrower than the effective pixel area, out of the entire captured image obtained by the effective pixel area of the image sensor.
  • The receiver 200 recognizes the area in the entire captured image corresponding to the recognition information acquired based on the light ID as the target area on which the AR image is to be superimposed. Then, the receiver 200 superimposes the AR image on the target area of the captured display image and displays the captured display image with the AR image superimposed on the display 201.
  • the receiver 200 changes the position of the displayed AR image according to the movement of the recognition area in the image sensor.
  • the recognition area in the image sensor moves in the upper left direction in FIG. 85, for example, and at time t2, a part of the edge of the recognition area coincides with a part of the edge of the display area.
  • At this time, the image of the target area (for example, a poster image) is located at the corner of the captured display image displayed on the display 201. The receiver 200 then superimposes the AR image on the target area at the corner of the captured display image and displays it on the display 201.
  • After time t2, the receiver 200 fixes the AR image displayed at time t2, without changing its size and position. That is, the receiver 200 switches the screen display of the AR image.
  • Therefore, even when the recognition area moves further at time t3 and protrudes from the effective pixel area, the receiver 200 continues to display the AR image as at time t2. In other words, as long as the recognition area extends beyond the display area, the receiver 200 continues to superimpose the AR image, at the same size and position as at time t2, on the captured display image.
  • the receiver 200 switches the screen display of the AR image depending on whether or not the recognition area protrudes from the display area.
  • Note that the receiver 200 may use, instead of the display area, a determination area that includes the display area and is larger than the display area but smaller than the effective pixel area. In this case, the receiver 200 switches the screen display of the AR image depending on whether or not the recognition area protrudes from the determination area.
  • the screen display of the AR image has been described with reference to FIGS. 79 to 85.
  • When the receiver 200 can no longer recognize the target area from the entire captured image, it may superimpose on the captured display image an AR image having the size of the target area that was recognized until immediately before recognition was lost.
  • FIG. 86 is a diagram illustrating another example in which the receiver 200 according to the second modification of the fourth embodiment displays an AR image.
  • The receiver 200 acquires the captured display image Ppre and the decoding image, in the same manner as described above, by imaging the guide plate 107 illuminated by the transmitter 100.
  • The receiver 200 then acquires the light ID by decoding the decoding image. That is, the receiver 200 receives the light ID from the guide plate 107.
  • However, because the surface of the guide plate 107 is dark, the receiver 200 may not be able to receive the light ID correctly even when the guide plate is illuminated by the transmitter 100. In such a case, a reflecting plate 109 may be arranged near the guide plate 107.
  • This allows the receiver 200 to receive the light from the transmitter 100 reflected by the reflecting plate 109, that is, the visible light (specifically, the light ID) transmitted from the transmitter 100.
  • As a result, the receiver 200 can appropriately receive the light ID and display the AR image P5.
  • FIG. 87A is a flowchart illustrating a display method according to one embodiment of the present invention.
  • the display method according to one aspect of the present invention includes steps S41 to S43.
  • In step S41, a captured image is acquired by imaging, with an imaging sensor, a subject that is an object illuminated by a transmitter transmitting a signal according to changes in the luminance of light.
  • In step S42, the signal is decoded from the captured image.
  • In step S43, a moving image corresponding to the decoded signal is read from a memory, superimposed on the target area corresponding to the subject in the captured image, and displayed on the display.
  • Here, the display of the moving image is started from one of the plurality of images included in the moving image, namely an image that is within a predetermined number of frames, in display order, from the image showing the target object.
  • the predetermined number is 10 frames.
  • For example, when the object is a still image, the display of the moving image is started from the image identical to that still image.
  • However, the image from which the display of the moving image is started is not limited to the image identical to the still image; it may be any image within the predetermined number of frames, in display order, from that image.
  • the object is not limited to a still image, and may be a doll or the like.
  • The imaging sensor and the captured image correspond to, for example, the image sensor and the entire captured image in the fourth embodiment.
  • the still image to be lit up may be a still image displayed on the display panel of the image display device, or may be a poster, a guide board, a signboard, or the like illuminated by light from the transmitter.
  • Such a display method may further include a transmission step of transmitting a signal to the server and a reception step of receiving a moving image corresponding to the signal from the server.
  • In this way, the moving image can be displayed so that the still image virtually appears to start moving, and an image useful for the user can be displayed.
  • Further, the still image may have an outer frame of a predetermined color, and the display method according to one aspect of the present invention may further include a recognition step of recognizing the target area from the captured image based on that predetermined color.
  • In this case, in step S43, the moving image is resized to the size of the recognized target area, and the resized moving image is superimposed on the target area in the captured image and displayed on the display.
  • the outer frame of the predetermined color is a white or black rectangular frame surrounding a still image, and is indicated by the recognition information in the fourth embodiment.
  • the AR image in the fourth embodiment is resized and superimposed as a moving image.
  • As a result, the moving image can be displayed more realistically, as if it actually existed as the subject.
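  • One plausible realization of this frame-based recognition and resizing is sketched below with OpenCV. The color-thresholding approach and the BGR bounds are assumptions for illustration, not the recognition method defined by this disclosure.

    import cv2
    import numpy as np

    def find_target_area(captured_bgr: np.ndarray, lo, hi):
        """Locate the target area by its outer frame of a predetermined color;
        lo and hi are BGR bounds for that color (e.g., near-white)."""
        mask = cv2.inRange(captured_bgr, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        return x, y, w, h

    def superimpose_picture(captured_bgr, picture, rect):
        """Resize one picture of the moving image to the target area and paste it."""
        x, y, w, h = rect
        out = captured_bgr.copy()
        out[y:y + h, x:x + w] = cv2.resize(picture, (w, h))
        return out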
  • In step S43, if the projection area onto which the subject is projected within the imaging area is larger than the display area, the image obtained from the portion of the projection area that exceeds the display area need not be displayed on the display.
  • Here, the imaging area and the projection area correspond to the effective pixel area and the recognition area of the image sensor, respectively.
  • When the imaging sensor approaches the still image that is the subject, the entire still image may still be projected onto the imaging area even though a part of the image obtained from the projection area (the recognition area in FIG. 79) is not displayed on the display. In this case, therefore, the still image that is the subject can be appropriately recognized, and the moving image can be appropriately superimposed on the target area corresponding to the subject in the captured image.
  • Further, let w1 and h1 be the horizontal and vertical widths of the display area, and let w2 and h2 be the horizontal and vertical widths of the projection area. In step S43, when the larger of h2/h1 and w2/w1 is greater than a predetermined value, the moving image may be displayed on the entire screen of the display; when the larger of h2/h1 and w2/w1 is smaller than the predetermined value, the moving image may be superimposed on the target area in the captured image and displayed on the display.
  • That is, when the imaging sensor comes close to the still image, the moving image is displayed on the full screen, so the user does not need to bring the imaging sensor even closer to the still image in order to enlarge the moving image. This prevents the situation in which the signal can no longer be decoded because the imaging sensor is too close to the still image and the projection area (the recognition area in FIG. 81) protrudes from the imaging area (the effective pixel area).
  • the display method according to one aspect of the present invention may further include a control step of turning off the operation of the imaging sensor when a moving image is displayed on the entire screen of the display.
  • For example, in step S314 of FIG. 82, the power consumption of the imaging sensor can be suppressed by turning off the operation of the imaging sensor.
  • In step S43, if the target area can no longer be recognized from the captured image due to movement of the imaging sensor, the moving image may be displayed at the same size as the target area recognized immediately before recognition was lost.
  • The situation in which the target area cannot be recognized from the captured image is, for example, one in which at least a part of the target area corresponding to the still image that is the subject is no longer included in the captured image.
  • That is, a moving image having the same size as the target area recognized immediately before is displayed, for example, at time t3 described above. This prevents a part of the moving image from being lost from the display because the imaging sensor has moved.
  • In step S43, when only a part of the target area is included in the area of the captured image displayed on the display due to movement of the imaging sensor, the part of the spatial area of the moving image corresponding to that part of the target area may be superimposed on it and displayed on the display.
  • Here, a part of the spatial area of the moving image is the corresponding part of each picture constituting the moving image.
  • In step S43, if the target area can no longer be recognized from the captured image due to movement of the imaging sensor, the part of the spatial area of the moving image that was displayed immediately before recognition was lost may continue to be displayed.
  • For example, at time t3 in FIG. 83, the part of the spatial area of the moving image (the AR image in FIG. 83) that was displayed immediately before continues to be displayed.
  • In step S43, where w0 and h0 are the horizontal and vertical widths of the imaging area of the imaging sensor, and dw and dh are the numbers of pixels corresponding to the horizontal and vertical distances between the projection area onto which the subject is projected and the edge of the imaging area, it may be determined that the target area cannot be recognized when the smaller of dw/w0 and dh/h0 is equal to or less than a predetermined value. Here, the projection area is, for example, the recognition area shown in FIG. 83. Alternatively, it may be determined that the target area cannot be recognized when the angle of view corresponding to the shorter of the horizontal and vertical distances between the projection area and the edge of the imaging area is equal to or less than a predetermined value.
  • FIG. 87B is a block diagram illustrating a structure of a display device according to one embodiment of the present invention.
  • the display device A10 includes an imaging sensor A11, a decoding unit A12, and a display control unit A13.
  • The imaging sensor A11 acquires a captured image by imaging, as a subject, a still image illuminated by a transmitter that transmits a signal according to changes in the luminance of light.
  • the decoding unit A12 is a decoding unit that decodes a signal from the captured image.
  • the display control unit A13 reads out the moving image corresponding to the decoded signal from the memory, and displays the moving image on the display by superimposing the moving image on the target area corresponding to the subject in the captured image.
  • At this time, the display control unit A13 displays the plurality of images included in the moving image in order, starting from the first image, which is the image identical to the still image.
  • the imaging sensor A11 may include a plurality of micromirrors and a photosensor
  • the display device A10 may further include an imaging control unit that controls the imaging sensor.
  • The imaging control unit identifies, in the captured image, the region containing the signal as a signal region, and controls the angles of the micromirrors corresponding to the identified signal region among the plurality of micromirrors.
  • Then, the imaging control unit causes the photosensor to receive only the light reflected by those micromirrors whose angles have been controlled, among the plurality of micromirrors.
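  • Structurally, display device A10 is a composition of the three units above. The following sketch merely mirrors that structure with hypothetical method names; it is not an implementation of the device described here.

    class DisplayDeviceA10:
        """Imaging sensor A11, decoding unit A12, and display control unit A13."""

        def __init__(self, sensor, decoder, display_control, memory):
            self.sensor = sensor                    # A11: images the illuminated still image
            self.decoder = decoder                  # A12: decodes the signal (light ID)
            self.display_control = display_control  # A13: superimposes the moving image
            self.memory = memory                    # maps decoded signals to moving images

        def step(self):
            captured = self.sensor.capture()
            signal = self.decoder.decode(captured)
            video = self.memory[signal]
            target = self.display_control.find_target_area(captured)
            # Display in order from the first image, which equals the still image.
            for picture in video:
                self.display_control.superimpose(picture, target, captured)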
  • each component may be configured by dedicated hardware or may be realized by executing a software program suitable for each component.
  • Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
  • the program causes the computer to execute the display method shown by the flowcharts of FIGS. 77, 80, 82, and 87A.
  • the display method according to one or a plurality of aspects has been described based on the above-described embodiments and modifications.
  • However, the present invention is not limited to these embodiments. Forms obtained by applying various modifications conceived by those skilled in the art to the embodiments, and forms constructed by combining components of different embodiments and modifications, may also be included within the scope of the present invention, as long as they do not depart from the gist of the present invention.
  • FIG. 88 is a diagram showing an example of expansion and movement of the AR image.
  • the receiver 200 superimposes the AR image P21 on the target area of the captured display image Ppre, as in the fourth embodiment or the first or second modification thereof. Then, the receiver 200 displays the captured display image Ppre on which the AR image P21 is superimposed on the display 201.
  • the AR image P21 is a moving image.
  • When receiving an instruction to change the size, the receiver 200 changes the size of the AR image P21 in accordance with the instruction. For example, when receiving an enlargement instruction, the receiver 200 enlarges the AR image P21 according to the instruction.
  • the size change instruction is given by, for example, a pinch operation, a double tap, or a long press on the AR image P21 by the user. Specifically, when receiving an enlargement instruction performed by pinching out, the receiver 200 enlarges the AR image P21 according to the instruction. Conversely, when receiving an instruction for reduction performed by pinch-in, the receiver 200 reduces the AR image P21 in accordance with the instruction.
  • Similarly, when the receiver 200 receives an instruction to change the position, it changes the position of the AR image P21 in accordance with the instruction.
  • the instruction to change the position is given by, for example, swiping the AR image by the user.
  • When receiving an instruction to change the position performed by swiping, the receiver 200 changes the position of the AR image P21 according to the instruction. That is, the AR image P21 moves.
  • FIG. 89 is a diagram illustrating an example of the enlargement of the AR image.
  • the receiver 200 superimposes the AR image P22 on the target area of the captured display image Ppre, as in the fourth embodiment or the first or second modification thereof. Then, the receiver 200 displays the captured display image Ppre on which the AR image P22 is superimposed on the display 201.
  • the AR image P22 is a still image in which a character string is described.
  • When receiving an instruction to change the size, the receiver 200 changes the size of the AR image P22 in accordance with the instruction. For example, when receiving an enlargement instruction, the receiver 200 enlarges the AR image P22 according to the instruction.
  • the size change instruction is performed by, for example, a pinch operation, double tap, or long press on the AR image P22 by the user, as described above. Specifically, when receiving an enlargement instruction performed by pinching out, the receiver 200 enlarges the AR image P22 according to the instruction. By enlarging the AR image P22, the character string described in the AR image P22 can be easily read by the user.
  • When the receiver 200 further receives a size change instruction, it changes the size of the AR image P22 according to the instruction. For example, when receiving a further enlargement instruction, the receiver 200 further enlarges the AR image P22 according to the instruction, making the character string described in the AR image P22 even easier for the user to read.
  • the receiver 200 may acquire a high-resolution AR image if the enlargement ratio of the AR image corresponding to the instruction is equal to or greater than a threshold value.
  • the receiver 200 may enlarge and display the high-resolution AR image up to the above-described enlargement factor instead of the original AR image that has already been displayed.
  • For example, the receiver 200 displays an AR image of 1920 × 1080 pixels instead of the AR image of 640 × 480 pixels.
  • In this way, the AR image can be enlarged, and a high-resolution image that could not be obtained even by optical zoom can be displayed as if it were actually captured as the subject.
  • FIG. 90 is a flowchart illustrating an example of processing operations related to enlargement and movement of an AR image by the receiver 200.
  • the receiver 200 starts imaging based on the normal exposure time and the communication exposure time as in step S101 shown in the flowchart of FIG. 45 (step S401).
  • a captured display image Ppre based on the normal exposure time and a decoding image (that is, a bright line image) Pdec based on the communication exposure time are periodically obtained.
  • Then, the receiver 200 acquires the light ID by decoding the decoding image Pdec.
  • the receiver 200 performs an AR image superimposition process including the processes of steps S102 to S106 shown in the flowchart of FIG. 45 (step S402).
  • the AR image is displayed superimposed on the captured display image Ppre.
  • Here, the receiver 200 decreases the light ID acquisition rate (step S403). The light ID acquisition rate is the proportion of decoding images (that is, bright line images) Pdec among the captured images obtained per unit time by the imaging started in step S401. As the light ID acquisition rate decreases, the number of decoding images Pdec obtained per unit time becomes smaller relative to the number of captured display images Ppre obtained per unit time.
  • the receiver 200 determines whether or not a size change instruction has been received (step S404). If it is determined that the size change instruction has been received (Yes in step S404), the receiver 200 further determines whether the size change instruction is an enlargement instruction (step S405). If it is determined that the size change instruction is an enlargement instruction (Yes in step S405), the receiver 200 further determines whether it is necessary to reacquire the AR image (step S406). For example, when the receiver 200 determines that the AR image enlargement rate according to the enlargement instruction is equal to or greater than a threshold, the receiver 200 determines that the AR image needs to be reacquired.
  • If the receiver 200 determines that reacquisition is necessary (Yes in step S406), it acquires a high-resolution AR image from, for example, the server and replaces the superimposed AR image with that high-resolution AR image (step S407).
  • the receiver 200 changes the size of the AR image in accordance with the received size change instruction (step S408). That is, when a high-resolution AR image is acquired in step S407, the receiver 200 enlarges the high-resolution AR image. Further, when it is determined in step S406 that the re-acquisition of the AR image is unnecessary (No in step S406), the receiver 200 enlarges the superimposed AR image. If it is determined in step S405 that the size change instruction is a reduction instruction (No in step S405), the receiver 200 performs superimposition according to the received size change instruction, that is, the reduction instruction. Reduce the displayed AR image.
  • If it is determined in step S404 that a size change instruction has not been received (No in step S404), the receiver 200 determines whether a position change instruction has been received (step S409). If it determines that a position change instruction has been received (Yes in step S409), the receiver 200 changes the position of the superimposed AR image in accordance with the instruction (step S410). That is, the receiver 200 moves the AR image. If it determines that a position change instruction has not been received (No in step S409), the receiver 200 repeatedly executes the processing from step S404.
  • Then, the receiver 200 determines whether or not the light ID that has been periodically acquired since step S401 can no longer be acquired (step S411). If it determines that the light ID can no longer be acquired (Yes in step S411), the receiver 200 ends the processing operation related to the enlargement and movement of the AR image. On the other hand, if it determines that the light ID is still being acquired (No in step S411), the receiver 200 repeatedly executes the processing from step S404.
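  • The gesture-handling branch of FIG. 90 (steps S404 to S410) might look as follows. The reacquisition threshold and the helper methods are illustrative assumptions rather than values defined by this disclosure.

    REACQUIRE_THRESHOLD = 2.0  # assumed: re-fetch a high-resolution AR image above 2x

    def handle_size_change(receiver, server, ar_image, scale):
        """Steps S405-S408: enlarge or reduce the AR image, reacquiring a
        high-resolution version when the enlargement ratio is large enough."""
        if scale > 1.0 and scale >= REACQUIRE_THRESHOLD:          # step S406
            # Step S407: e.g., 1920x1080 instead of the displayed 640x480 image.
            ar_image = server.fetch_high_resolution(ar_image.light_id)
        ar_image.resize(scale)                                    # step S408
        return ar_image

    def handle_position_change(ar_image, dx, dy):
        """Steps S409-S410: a swipe moves the superimposed AR image."""
        ar_image.move(dx, dy)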
  • FIG. 91 is a diagram illustrating an example of superimposition of AR images by the receiver 200.
  • the receiver 200 superimposes the AR image P23 on the target area in the captured display image Ppre.
  • the AR image P23 is configured such that the transmittance of each part of the AR image P23 increases as the part of the AR image P23 is closer to the end of the AR image P23.
  • Here, the transmittance is the degree to which the image beneath a superimposed image shows through. For example, an overall transmittance of 100% for the AR image means that even if the AR image is superimposed on the target area of the captured display image, only the target area is displayed on the display 201 and the AR image is not displayed. Conversely, an overall transmittance of 0% means that the target area of the captured display image is not displayed on the display 201 and only the AR image superimposed on it is displayed.
  • the transmittance of each part in the AR image P23 is higher as the part is closer to the upper end, lower end, left end, or right end of the rectangle. More specifically, the transmittance at those ends is 100%.
  • At the central portion of the AR image P23, there is a rectangular area, smaller than the AR image P23, whose transmittance is 0%. In this rectangular area, for example, “Kyoto Station” is written in English. That is, at the peripheral portion of the AR image P23, the transmittance changes stepwise from 0% to 100%, like a gradation.
  • The receiver 200 superimposes such an AR image P23 on the target area in the captured display image Ppre, as shown in FIG. 91. At this time, the receiver 200 matches the size of the AR image P23 with the size of the target area and superimposes the resized AR image P23 on the target area.
  • In the target area, the station name sign has “Kyoto” written in Japanese.
  • As described above, the transmittance of each part of the AR image P23 is higher the closer the part is to the end of the AR image P23. Therefore, when the AR image P23 is superimposed on the target area, the rectangular area at the center of the AR image P23 is displayed, but the end of the AR image P23 is not; instead, the end of the target area, that is, the edge of the station name sign image, is displayed.
  • This makes any deviation between the AR image P23 and the target area inconspicuous. That is, even when the AR image P23 is superimposed on the target area, a shift may occur between the AR image P23 and the target area due to movement of the receiver 200 or the like.
  • If the overall transmittance of the AR image P23 were 0%, both the end of the AR image P23 and the end of the target area would be displayed when such a shift occurs, making the shift conspicuous.
  • In the AR image P23 of the present modification, however, the transmittance of each part is higher the closer the part is to the end, so the end of the AR image P23 is unlikely to be displayed. As a result, the shift between the AR image P23 and the target area can be made inconspicuous.
  • Furthermore, since the transmittance changes like a gradation at the peripheral portion of the AR image P23, it is difficult to notice that the AR image P23 is superimposed on the target area.
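  • The edge fading of AR image P23 amounts to an alpha mask that is opaque at the center and fully transparent at the borders (transmittance being 1 − alpha). A minimal sketch, with the border width as an assumed parameter:

    import numpy as np

    def edge_fade_alpha(h: int, w: int, border: int) -> np.ndarray:
        """Alpha mask: 1.0 (transmittance 0%) in the interior, falling linearly
        to 0.0 (transmittance 100%) at the image edges over `border` pixels."""
        ys = np.minimum(np.arange(h), np.arange(h)[::-1])[:, None]
        xs = np.minimum(np.arange(w), np.arange(w)[::-1])[None, :]
        dist = np.minimum(ys, xs)            # distance to the nearest image edge
        return np.clip(dist / border, 0.0, 1.0)

    def blend_ar(ar_image: np.ndarray, target_area: np.ndarray, border: int = 32):
        """Blend the AR image over the target area with edge fading."""
        alpha = edge_fade_alpha(*ar_image.shape[:2], border)[..., None]
        return (alpha * ar_image + (1.0 - alpha) * target_area).astype(np.uint8)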
  • FIG. 92 is a diagram illustrating an example of AR image superimposition performed by the receiver 200.
  • the receiver 200 superimposes the AR image P24 on the target area in the captured display image Ppre.
  • the imaged subject is, for example, a restaurant menu. This menu is surrounded by a white frame, and the white frame is surrounded by a black frame. That is, the subject includes a menu, a white frame surrounding the menu, and a black frame surrounding the white frame.
  • In this case, the receiver 200 recognizes, as the target area, an area in the captured display image Ppre that is larger than the white frame image and smaller than the black frame image. Then, the receiver 200 matches the size of the AR image P24 with the size of the target area and superimposes the resized AR image P24 on the target area.
  • As a result, even if the AR image P24 shifts slightly relative to the target area, the AR image P24 continues to be displayed surrounded by the black frame, so the shift between the AR image P24 and the target area can be made inconspicuous.
  • the color of the frame is black or white, but is not limited to these colors and may be any color.
  • FIG. 93 is a diagram illustrating an example of superimposition of AR images by the receiver 200.
  • the receiver 200 images a poster on which a castle illuminated in the night sky is drawn as a subject.
  • the poster is illuminated by the above-described transmitter 100 configured as a backlight, and a visible light signal (ie, a light ID) is transmitted by the backlight.
  • the receiver 200 acquires the captured display image Ppre including the image of the subject that is the poster and the AR image P25 corresponding to the light ID by the imaging.
  • The AR image P25 has the same shape as the poster image, but with the region where the castle is drawn cut out. That is, the area of the AR image P25 corresponding to the castle in the poster image is masked.
  • the AR image P25 is configured so that the transmittance of each part of the AR image P25 is higher as the part of the AR image P25 is closer to the end of the AR image P25, similarly to the AR image P23 described above.
  • In the area of the AR image P25 where the transmittance is 0%, fireworks launched into the night sky are displayed as a moving image.
  • the receiver 200 matches the size of the AR image P25 with the size of the target area that is the image of the subject, and superimposes the resized AR image P25 on the target area.
  • As a result, the castle drawn on the poster is displayed as the image of the subject, not as an AR image, while the moving image of the fireworks is displayed as the AR image.
  • the captured display image Ppre can be displayed as if fireworks are actually being launched in the poster.
  • the transmittance of each part of the AR image P25 is higher as the part is closer to the end of the AR image P25. Therefore, when the AR image P25 is superimposed on the target area, even if the center portion of the AR image P25 is displayed, the end of the AR image P25 is not displayed, but the end of the target area is displayed. As a result, the deviation between the AR image P25 and the target region can be made inconspicuous. Furthermore, since the transmittance changes like a gradation at the peripheral portion of the AR image P25, it is difficult to notice that the AR image P25 is superimposed on the target region.
  • FIG. 94 is a diagram illustrating an example of superposition of AR images by the receiver 200.
  • the receiver 200 images the transmitter 100 configured as a television as a subject. Specifically, the transmitter 100 displays a castle illuminated in the night sky on a display and transmits a visible light signal (that is, a light ID).
  • By this imaging, the receiver 200 acquires the captured display image Ppre, which shows the image displayed by the transmitter 100, and the AR image P26 corresponding to the light ID.
  • the receiver 200 first displays the captured display image Ppre on the display 201.
  • At this time, the receiver 200 also displays on the display 201 a message m prompting the user to turn off the room lighting.
  • the message m is, for example, “Turn off the room lighting and darken the room”.
  • the receiver 200 displays the AR image P26 superimposed on the captured display image Ppre.
  • The AR image P26 has the same size as the captured display image Ppre, and the area of the AR image P26 corresponding to the castle in the captured display image Ppre is cut out. That is, that area is masked. Therefore, the castle in the captured display image Ppre is visible to the user through that area.
  • Note that, at the periphery of the cut-out area of the AR image P26, the transmittance may change stepwise from 0% to 100%, like a gradation. In this case, the shift between the captured display image Ppre and the AR image P26 can be made inconspicuous.
  • As described above, an AR image whose peripheral portion has high transmittance is superimposed on the target area of the captured display image Ppre, so that the shift between the AR image and the target area becomes less noticeable.
  • Alternatively, by superimposing on the captured display image Ppre an AR image that is the same size as the captured display image Ppre and is entirely translucent (that is, has a transmittance of 50%), the shift between the AR image and the target area can likewise be made inconspicuous.
  • For example, when the captured display image Ppre is bright overall, an AR image with uniformly low transmittance may be superimposed on the captured display image Ppre; when the captured display image Ppre is dark overall, an AR image with uniformly high transmittance may be superimposed on the captured display image Ppre.
  • In the example shown in FIG. 94, the receiver 200 displays the message m prompting the user to turn off the lighting, but the lighting may be turned off automatically without displaying such a message.
  • That is, the receiver 200 outputs a turn-off signal, by Bluetooth (registered trademark), ZigBee, a specified low-power radio station, or the like, to the lighting device of the room in which the transmitter 100, which is a television, is installed. The lighting device is thereby turned off automatically.
  • FIG. 95A is a diagram illustrating an example of a captured display image Ppre obtained by imaging by the receiver 200.
  • the transmitter 100 is configured as a large display installed in a stadium. Then, the transmitter 100 displays a message indicating that, for example, fast food and drinks can be ordered with the light ID, and transmits a visible light signal (that is, a light ID). When such a message is displayed, the user images the receiver 200 toward the transmitter 100. That is, the receiver 200 images the transmitter 100 configured as a large display installed in the stadium as a subject.
  • Then, the receiver 200 acquires the captured display image Ppre and the decoding image Pdec by the imaging. The receiver 200 acquires the light ID by decoding the decoding image Pdec and transmits the light ID and the captured display image Ppre to the server.
  • The server then specifies, from among the stored installation information associated with light IDs, the installation information of the imaged large display that is associated with the light ID transmitted from the receiver 200.
  • the installation information indicates the position and orientation where the large display is installed, the size of the large display, and the like.
  • Further, the server identifies the number of the seat in the stadium from which the captured display image Ppre was captured, based on the size and orientation of the large display shown in the captured display image Ppre and on the installation information. Then, the server causes the receiver 200 to display a menu screen including that seat number.
  • FIG. 95B is a diagram showing an example of a menu screen displayed on the display 201 of the receiver 200.
  • The menu screen m1 includes, for example, for each product, an input field ma1 into which the order quantity for the product is entered, a seat field mb1 in which the stadium seat number specified by the server is shown, and an order button mc1.
  • the user operates the receiver 200 to input the order quantity of the product in the input field ma1 corresponding to the desired product, and selects the order button mc1. As a result, the order is confirmed, and the receiver 200 transmits the order contents corresponding to the input result to the server.
  • When the server receives the order contents, it instructs the stadium staff to deliver the ordered quantity of products to the seat with the number specified as described above.
  • FIG. 96 is a flowchart showing an example of processing operation between the receiver 200 and the server.
  • the receiver 200 first images the transmitter 100 configured as a large stadium display (step S421).
  • Next, the receiver 200 acquires the light ID transmitted from the transmitter 100 by decoding the decoding image Pdec obtained by the imaging (step S422).
  • Next, the receiver 200 transmits the light ID acquired in step S422 and the captured display image Ppre obtained by the imaging in step S421 to the server (step S423).
  • When the server receives the light ID and the captured display image Ppre (step S424), it specifies the installation information of the large display installed in the stadium based on the light ID (step S425). For example, the server holds, for each light ID, a table indicating the installation information of the large display associated with that light ID, and specifies the installation information by searching the table for the installation information associated with the light ID transmitted from the receiver 200.
  • Next, based on the specified installation information and on the size and orientation of the large display shown in the captured display image Ppre, the server specifies the number of the seat in the stadium from which the captured display image Ppre was acquired (that is, captured) (step S426).
  • the server transmits the URL (Uniform Resource Locator) of the menu screen m1 including the identified seat number to the receiver 200 (step S427).
  • When the receiver 200 receives the URL of the menu screen m1 transmitted from the server (step S428), it accesses the URL and displays the menu screen m1 (step S429).
  • the user operates the receiver 200 to input the order contents into the menu screen m1, and selects the order button mc1, thereby confirming the order.
  • the receiver 200 transmits the order details to the server (step S430).
  • Upon receiving the order details transmitted from the receiver 200, the server performs an order acceptance process according to the order details (step S431). At this time, for example, the server instructs the stadium staff to deliver the ordered quantity of products to the seat with the number specified in step S426.
  • Since the seat number is specified based on the captured display image Ppre obtained by the receiver 200, the user of the receiver 200 does not need to bother entering the seat number when ordering a product. The user can therefore place an order easily, without entering the seat number.
  • In the above example, the server specifies the seat number, but the receiver 200 may specify the seat number instead.
  • In this case, the receiver 200 acquires the installation information from the server and specifies the seat number based on the installation information and on the size and orientation of the large display shown in the captured display image Ppre.
  • FIG. 97 is a diagram for explaining the volume of sound reproduced by the receiver 1800a.
  • the receiver 1800a receives the light ID (visible light signal) transmitted from the transmitter 1800b configured as, for example, street digital signage. Then, the receiver 1800a reproduces sound at the same timing as the image reproduction by the transmitter 1800b. That is, the receiver 1800a reproduces sound so as to be synchronized with the image reproduced by the transmitter 1800b. Note that the receiver 1800a may reproduce the same image as the image reproduced by the transmitter 1800b (reproduced image) or an AR image (AR moving image) related to the reproduced image together with the sound.
  • the receiver 1800a adjusts the volume of the sound according to the distance to the transmitter 1800b. Specifically, the receiver 1800a adjusts the volume smaller as the distance to the transmitter 1800b is longer, and conversely adjusts the volume larger as the distance to the transmitter 1800b is shorter.
  • The receiver 1800a may specify the distance to the transmitter 1800b using a GPS (Global Positioning System) or the like. Specifically, the receiver 1800a acquires position information of the transmitter 1800b associated with the light ID from a server or the like, and further specifies its own position by GPS. Then, the receiver 1800a specifies, as the distance to the transmitter 1800b, the distance between the position of the transmitter 1800b indicated by the position information acquired from the server and the specified position of the receiver 1800a. Note that the receiver 1800a may specify the distance to the transmitter 1800b by using Bluetooth (registered trademark) instead of GPS.
  • the receiver 1800a may specify the distance to the transmitter 1800b based on the size of the bright line pattern region of the above-described decoding image Pdec obtained by imaging.
  • the bright line pattern region is a region formed of a plurality of bright line patterns that appear by exposure at the exposure time for communication of a plurality of exposure lines included in the image sensor of the receiver 1800a.
  • This bright line pattern area corresponds to the display area of the transmitter 1800b displayed in the captured display image Ppre.
  • The receiver 1800a specifies a shorter distance as the distance to the transmitter 1800b the larger the bright line pattern area is, and conversely specifies a longer distance the smaller the bright line pattern area is.
  • Specifically, the receiver 1800a may use distance data indicating the relationship between the size of the bright line pattern area and the distance, and specify, as the distance to the transmitter 1800b, the distance associated in the distance data with the size of the bright line pattern area obtained by imaging. Note that the receiver 1800a may transmit the received light ID to the server and acquire the distance data associated with that light ID from the server.
  • In this way, the user of the receiver 1800a can hear the sound reproduced by the receiver 1800a as if it were the sound actually reproduced by the transmitter 1800b.
  • FIG. 98 is a diagram showing the relationship between the distance from the receiver 1800a to the transmitter 1800b and the sound volume.
  • As shown in FIG. 98, the volume increases or decreases in proportion to the distance within the range from Vmin to Vmax [dB]. Specifically, the receiver 1800a decreases the volume linearly from Vmax [dB] to Vmin [dB] as the distance to the transmitter 1800b increases from L1 [m] to L2 [m]. When the distance to the transmitter 1800b is shorter than L1 [m], the receiver 1800a maintains the volume at Vmax [dB]; when the distance is longer than L2 [m], it maintains the volume at Vmin [dB].
  • That is, the receiver 1800a stores the maximum volume Vmax, the longest distance L1 at which sound at the maximum volume Vmax is output, the minimum volume Vmin, and the shortest distance L2 at which sound at the minimum volume Vmin is output.
  • Here, the receiver 1800a may change the maximum volume Vmax, the minimum volume Vmin, the longest distance L1, and the shortest distance L2 according to attributes set in the receiver 1800a. For example, when the attribute is the user's age and indicates an advanced age, the receiver 1800a may set the maximum volume Vmax higher than a reference maximum volume and the minimum volume Vmin higher than a reference minimum volume. The attribute may also be information indicating whether audio is output from a speaker or from earphones.
  • since the minimum volume Vmin is set in the receiver 1800a, the sound can be prevented from becoming inaudible when the receiver 1800a is too far from the transmitter 1800b. Furthermore, since the maximum volume Vmax is set in the receiver 1800a, an excessively loud sound can be suppressed when the receiver 1800a is too close to the transmitter 1800b.
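  • a minimal sketch of the volume control of FIG. 98 follows, using the four stored values Vmax, Vmin, L1, and L2 described above; the concrete numbers are illustrative only.

```python
# Hypothetical sketch: Vmax at or below L1, Vmin at or beyond L2, and a
# linear decrease in between, as in FIG. 98.

def playback_volume(distance_m: float,
                    vmax_db: float = 80.0, vmin_db: float = 40.0,
                    l1_m: float = 1.0, l2_m: float = 10.0) -> float:
    if distance_m <= l1_m:
        return vmax_db                        # too close: clamp to Vmax
    if distance_m >= l2_m:
        return vmin_db                        # too far: clamp to Vmin
    t = (distance_m - l1_m) / (l2_m - l1_m)   # 0 at L1, 1 at L2
    return vmax_db + t * (vmin_db - vmax_db)  # linear decrease

print(playback_volume(5.5))  # midway between L1 and L2 -> 60.0 dB
```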
  • FIG. 99 is a diagram illustrating an example of AR image superimposition performed by the receiver 200.
  • the receiver 200 images the illuminated signboard.
  • the signboard is lit up by the illumination device which is the above-described transmitter 100 that transmits the optical ID. Therefore, the receiver 200 acquires the captured display image Ppre and the decoding image Pdec by the imaging. Then, the receiver 200 acquires the optical ID by decoding the decoding image Pdec, and acquires a plurality of AR images P27a to P27c associated with the optical ID and the recognition information from the server. Based on the recognition information, the receiver 200 recognizes the periphery of the area m2 in which the signboard is displayed in the captured display image Ppre as a target area.
  • the receiver 200 recognizes an area in contact with the left side of the area m2 as the first target area, and superimposes the AR image P27a on the first target area.
  • the receiver 200 recognizes an area including the lower side of the area m2 as the second target area, and superimposes the AR image P27b on the second target area.
  • the receiver 200 recognizes an area in contact with the upper side of the area m2 as the third target area, and superimposes the AR image P27c on the third target area.
  • each of the AR images P27a to P27c is, for example, an image of a snowman character and may be a moving image.
  • the receiver 200 may switch the recognized target area to any one of the first to third target areas in a predetermined order and at a predetermined timing while continuously acquiring the optical ID. That is, the receiver 200 may switch the recognized target area in the order of the first target area, the second target area, and the third target area. Alternatively, the receiver 200 may switch the recognized target area to any one of the first to third target areas in a predetermined order each time the above-described optical ID is acquired. That is, the receiver 200 first acquires the light ID, and while continuously acquiring the light ID, the receiver 200 recognizes the first target area as shown in (a) of FIG. 99, and superimposes the AR image P27a on the first target area. Then, when the receiver 200 can no longer acquire the optical ID, the receiver 200 hides the AR image P27a.
  • the receiver 200 recognizes the second target area as shown in FIG. 99 (b) while continuously acquiring the light ID. Then, the AR image P27b is superimposed on the second target area. Then, when the receiver 200 cannot acquire the optical ID again, the receiver 200 hides the AR image P27b.
  • the receiver 200 recognizes the third target area as shown in (c) of FIG. 99 while continuously acquiring the light ID. Then, the AR image P27c is superimposed on the third target area.
  • the receiver 200 may change the AR image that is displayed once every N times (N is an integer of 2 or more).
  • N may be, for example, 200. That is, while the AR images P27a to P27c are images of the same white character, an AR image of, for example, a pink character is displayed at a frequency of once every 200 times.
  • the receiver 200 may give points to the user when receiving an operation on the AR image by the user.
  • the user's interest can be directed to the imaging of the signboard lit up by the transmitter 100.
  • the user can repeatedly obtain the optical ID.
  • FIG. 100 is a diagram illustrating an example of superimposition of AR images by the receiver 200.
  • the receiver 200 functions as a so-called way finder (Way Finder) that presents a route to be followed by the user, for example, by imaging the mark M4 drawn on the floor surface at a position where a plurality of passages intersect in the building.
  • the mark M4 is lit up by the illumination device that is the above-described transmitter 100 that transmits the light ID by a change in luminance. Therefore, the receiver 200 acquires the captured display image Ppre and the decoding image Pdec by capturing the mark M4. Then, the receiver 200 acquires the optical ID by decoding the decoding image Pdec, and transmits the optical ID and the terminal information of the receiver 200 to the server.
  • the receiver 200 acquires a plurality of AR images P28 and recognition information associated with the optical ID and terminal information from the server.
  • the optical ID and terminal information are stored in the server in association with a plurality of AR images P28 and recognition information at the time of user check-in.
  • based on the recognition information, the receiver 200 recognizes a plurality of target areas around the area m4 in which the mark M4 is displayed in the captured display image Ppre. Then, as shown in FIG. 100, the receiver 200 superimposes and displays an AR image P28, such as an animal footprint, on each of the plurality of target regions.
  • the recognition information indicates the course of turning to the right at the position of the mark M4.
  • the receiver 200 identifies a route in the captured display image Ppre and recognizes a plurality of target regions arranged along the route.
  • This route is a route that goes from the lower side of the display 201 to the region m4 and turns right in the region m4.
  • the receiver 200 arranges the AR image P28 in each of the recognized plurality of target regions as if the animal walked along the route.
  • the receiver 200 may use the geomagnetism detected by the 9-axis sensor provided in the receiver 200.
  • the recognition information indicates the direction to proceed at the position of the mark M4 with reference to the direction of geomagnetism.
  • the recognition information indicates west as the direction to proceed at the position of the mark M4.
  • the receiver 200 specifies a route from the lower side of the display 201 toward the area m4 and toward the west in the area m4 in the captured display image Ppre. Then, the receiver 200 recognizes a plurality of target areas arranged along the route. Note that the receiver 200 identifies the lower side of the display 201 by detecting gravitational acceleration using a nine-axis sensor.
  • since the route to be followed is presented by the receiver 200, the user can easily reach the destination by following the route.
  • since the course is displayed as an AR image in the captured display image Ppre, the course can be presented to the user in an easy-to-understand manner.
  • the illuminating device which is the transmitter 100 can appropriately transmit the light ID while suppressing the brightness by illuminating the mark M4 with a short pulse of light.
  • the receiver 200 images the mark M4.
  • the receiver 200 may image the illumination device using a camera (a so-called self-taking camera) disposed on the display 201 side. The receiver 200 may capture both the mark M4 and the illumination device.
  • FIG. 101 is a diagram for explaining an example of how the receiver 200 obtains the line scan time.
  • the receiver 200 performs decoding using the line scan time when decoding the decoding image Pdec.
  • This line scan time is the time from the start of exposure of one exposure line included in the image sensor to the start of exposure of the next exposure line. If the line scan time is known, the receiver 200 decodes the decoding image Pdec using the known line scan time. However, when the line scan time is not known, the receiver 200 obtains the line scan time from the decoding image Pdec.
  • the receiver 200 finds a line having the minimum width from among a plurality of bright lines and a plurality of dark lines constituting a bright line pattern in the decoding image Pdec.
  • the bright line is a line on the decoding image Pdec generated when each of one or a plurality of continuous exposure lines is exposed when the luminance of the transmitter 100 is high.
  • the dark line is a line on the decoding image Pdec generated by exposure of each of one or a plurality of continuous exposure lines when the luminance of the transmitter 100 is low.
  • when the receiver 200 finds the line with the minimum width, the receiver 200 specifies the number of exposure lines corresponding to that line, that is, the number of pixels.
  • the carrier frequency at which the luminance changes so that the transmitter 100 transmits the optical ID is 9.6 kHz.
  • accordingly, the period during which the luminance of the transmitter 100 is high or low is 104 µs at the shortest. Therefore, the receiver 200 calculates the line scan time by dividing 104 µs by the specified number of pixels of the minimum width.
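  • a minimal sketch of the calculation of FIG. 101 follows: with a 9.6 kHz carrier, the shortest high/low luminance period is 1/9600 s ≈ 104 µs, and dividing it by the pixel width of the narrowest line gives the line scan time; the function name and sample widths are illustrative.

```python
# Hypothetical sketch of the minimum-width method of FIG. 101.

CARRIER_HZ = 9600.0
SHORTEST_PULSE_US = 1e6 / CARRIER_HZ  # ~104.2 us

def line_scan_time_us(line_widths_px: list[int]) -> float:
    """line_widths_px: widths (in exposure lines / pixels) of the bright and
    dark lines that make up the bright line pattern of the decoding image."""
    min_width = min(line_widths_px)           # narrowest line found
    return SHORTEST_PULSE_US / min_width      # time per exposure line

# Example: if the narrowest line spans 4 exposure lines, each exposure line
# starts ~26 us after the previous one.
print(line_scan_time_us([12, 4, 8, 6]))  # -> ~26.0 us
```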
  • FIG. 102 is a diagram for explaining an example of how the receiver 200 obtains the line scan time.
  • the receiver 200 may perform a Fourier transform on the bright line pattern of the decoding image Pdec and obtain the line scan time based on the spatial frequency obtained by the Fourier transform.
  • the receiver 200 derives a spectrum indicating the relationship between the spatial frequency and the intensity of the component of the spatial frequency in the decoding image Pdec by the Fourier transform described above.
  • the receiver 200 sequentially selects each of the plurality of peaks indicated in the spectrum.
  • the receiver 200 calculates a line scan time such that the spatial frequency of the selected peak (for example, the spatial frequency f2 in FIG. 102) is obtained by a time frequency of 9.6 kHz.
  • 9.6 kHz is the carrier frequency of the luminance change of the transmitter 100 as described above.
  • the receiver 200 selects the most likely candidate among the plurality of line scan time candidates as the line scan time.
  • the receiver 200 calculates the allowable range of the line scan time based on the frame rate in imaging and the number of exposure lines included in the image sensor. That is, the receiver 200 calculates the maximum value of the line scan time as 1 × 10^6 [µs] / {(frame rate) × (number of exposure lines)}. Then, the receiver 200 determines the range from (the maximum value × a constant K (K < 1)) to the maximum value as the allowable range of the line scan time.
  • K is, for example, 0.9 or 0.8.
  • the receiver 200 selects a candidate within this allowable range from among a plurality of line scan time candidates as a maximum likelihood candidate, that is, a line scan time.
  • the receiver 200 may evaluate the reliability of the calculated line scan time depending on whether or not the line scan time calculated according to the example illustrated in FIG. 101 is within the above-described allowable range.
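  • a minimal sketch of this candidate selection follows, assuming the spatial frequencies are expressed in cycles per exposure line (so a candidate line scan time is f_spatial / 9600 seconds) and that the first in-range candidate is taken as the maximum-likelihood one; all names are illustrative.

```python
# Hypothetical sketch of the Fourier-based method of FIG. 102 combined with
# the allowable-range filtering described above.

CARRIER_HZ = 9600.0

def candidates_from_peaks(peak_spatial_freqs: list[float]) -> list[float]:
    """Each spectral peak [cycles/line] yields a candidate line scan time [us]
    such that the 9.6 kHz temporal carrier produces that spatial frequency."""
    return [1e6 * f / CARRIER_HZ for f in peak_spatial_freqs]

def allowable_range(frame_rate: float, num_exposure_lines: int,
                    k: float = 0.8) -> tuple[float, float]:
    """Maximum line scan time is 1e6 [us] / (frame rate * exposure lines);
    the allowable range is [K * maximum, maximum] with K < 1."""
    t_max = 1e6 / (frame_rate * num_exposure_lines)
    return k * t_max, t_max

def select_line_scan_time(peaks: list[float], frame_rate: float,
                          lines: int) -> float | None:
    lo, hi = allowable_range(frame_rate, lines)
    for t in candidates_from_peaks(peaks):
        if lo <= t <= hi:
            return t          # candidate within the allowable range
    return None

# Example: 30 fps and 1080 exposure lines give t_max ~= 30.9 us, so the
# allowable range is roughly [24.7, 30.9] us; only the 0.25 cycles/line peak
# (-> ~26.0 us) survives the filter.
print(select_line_scan_time([0.1, 0.25], 30.0, 1080))
```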
  • FIG. 103 is a flowchart showing an example of how to obtain the line scan time by the receiver 200.
  • the receiver 200 may obtain the line scan time by trying to decode the decoding image Pdec. Specifically, first, the receiver 200 starts imaging (step S441). Next, the receiver 200 determines whether or not the line scan time is known (step S442). For example, the receiver 200 may determine whether or not the line scan time is known by notifying the server of the type and model of the receiver 200 and inquiring about the line scan time according to that type and model. If it is determined that the line scan time is known (Yes in step S442), the receiver 200 sets the reference acquisition count of the optical ID to n (n is an integer equal to or larger than 2, for example, 4) (step S443).
  • the receiver 200 acquires the optical ID by decoding the decoding image Pdec using the known line scan time (step S444). At this time, the receiver 200 acquires a plurality of optical IDs by performing decoding on each of the plurality of decoding images Pdec obtained sequentially by the imaging started in step S441.
  • the receiver 200 determines whether or not the same optical ID has been acquired the reference acquisition times (that is, n times) (step S445). If it is determined that it has been acquired n times (Yes in step S445), the receiver 200 trusts the optical ID and starts processing using the optical ID (for example, superimposition of an AR image) (step S446). On the other hand, if it is determined that it has not been acquired n times (No in step S445), the receiver 200 does not trust the optical ID and ends the process.
  • if it is determined in step S442 that the line scan time is not known (No in step S442), the receiver 200 sets the optical ID reference acquisition count to n + k (k is an integer equal to or greater than 1) (step S447). That is, when the line scan time is not known, the receiver 200 sets a larger reference acquisition count than when the line scan time is known. Next, the receiver 200 determines a provisional line scan time (step S448). Then, the receiver 200 acquires the optical ID by decoding the decoding image Pdec using the provisional line scan time (step S449).
  • the receiver 200 acquires a plurality of optical IDs by performing decoding on each of the plurality of decoding images Pdec obtained sequentially by the imaging started in step S441, as described above.
  • the receiver 200 determines whether or not the same optical ID has been acquired the reference acquisition times (that is, (n + k) times) (step S450).
  • if it is determined that the same optical ID has been acquired (n + k) times (Yes in step S450), the receiver 200 determines that the provisional line scan time is the correct line scan time. Then, the receiver 200 notifies the server of the type and model of the receiver 200 and the line scan time (step S451). As a result, the server stores the type and model of the receiver in association with the line scan time suitable for that receiver. Therefore, when another receiver of the same type and model starts imaging, that other receiver can specify its own line scan time by making an inquiry to the server. That is, the other receiver can determine that the line scan time is known in the determination in step S442.
  • the receiver 200 trusts the optical ID acquired (n + k) times, and starts processing using the optical ID (for example, superimposition of an AR image) (step S446).
  • if it is determined in step S450 that the same optical ID has not been acquired (n + k) times (No in step S450), the receiver 200 determines whether or not an end condition is satisfied (step S452).
  • the end condition is, for example, that a predetermined time has elapsed since the start of imaging, or that the optical ID has been acquired more than the maximum number of acquisitions. If it is determined that such an end condition is satisfied (Yes in step S452), the receiver 200 ends the process. On the other hand, when determining that the termination condition is not satisfied (No in step S452), the receiver 200 changes the provisional line scan time (step S453). Then, the receiver 200 repeatedly executes the processing from step S449 using the changed provisional line scan time.
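  • a minimal sketch of the flow of FIG. 103 follows; `decode_image` (returning an optical ID or None for one decoding image and one line scan time) and the set of trial times are hypothetical placeholders, and the step S451 server report is reduced to a comment.

```python
# Hypothetical sketch of the trial-decoding flow of FIG. 103.

from collections import Counter

def most_common_id(images, line_scan_time, decode_image):
    """Decode each decoding image Pdec with the given line scan time and
    return (light_id, count) for the most frequently obtained ID."""
    ids = Counter(i for i in (decode_image(im, line_scan_time) for im in images)
                  if i is not None)
    return ids.most_common(1)[0] if ids else (None, 0)

def acquire_light_id(images, decode_image, known_time=None, n=4, k=2,
                     trial_times=(10.0, 15.0, 20.0, 26.0, 31.0)):
    if known_time is not None:                       # step S442: time is known
        light_id, count = most_common_id(images, known_time, decode_image)
        return light_id if count >= n else None      # steps S444-S446

    for t in trial_times:                            # steps S448/S453
        light_id, count = most_common_id(images, t, decode_image)
        if count >= n + k:                           # step S450: stricter count
            # step S451: the receiver would report t to the server here
            return light_id
    return None                                      # step S452: end condition
```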
  • in this way, the receiver 200 can obtain the line scan time as in the examples shown in FIGS. 101 to 103. Accordingly, regardless of the type and model of the receiver 200, the receiver 200 can appropriately decode the decoding image Pdec and obtain the optical ID.
  • FIG. 104 is a diagram illustrating an example of AR image superimposition performed by the receiver 200.
  • the receiver 200 images the transmitter 100 configured as a television.
  • the transmitter 100 periodically transmits an optical ID and a time code by changing luminance while displaying a television program, for example.
  • the time code is information indicating the time at the time of transmission each time it is transmitted, and may be, for example, a time packet shown in FIG.
  • the receiver 200 periodically acquires the captured display image Ppre and the decoding image Pdec through the above-described imaging.
  • the receiver 200 acquires the above-described optical ID and time code by decoding the decoding image Pdec while displaying the captured display image Ppre acquired periodically on the display 201.
  • the receiver 200 transmits the optical ID to the server 300.
  • when the server 300 receives the optical ID, the server 300 transmits the audio data, the AR start time information, the AR image P29, and the recognition information associated with the optical ID to the receiver 200.
  • when the receiver 200 acquires the audio data, the receiver 200 reproduces the audio data in synchronization with the video of the TV program displayed on the transmitter 100. That is, the audio data is composed of a plurality of audio unit data, and each of the plurality of audio unit data includes a time code.
  • the receiver 200 starts reproduction of a plurality of audio unit data from the audio unit data including the time code indicating the same time as the time code acquired from the transmitter 100 together with the optical ID in the audio data. Thereby, the reproduction of the audio data is synchronized with the video of the television program. It should be noted that such synchronization between audio and video may be performed by the same method as the audio synchronous reproduction shown in each of the drawings after FIG.
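  • a minimal sketch of this time-code synchronization follows, assuming the audio data is a list of audio unit data each carrying a time code; all names are illustrative.

```python
# Hypothetical sketch: find the audio unit whose time code matches the one
# received with the light ID, and start playback from there.

def start_index(audio_units: list[dict], received_time_code: int) -> int:
    """Return the index of the audio unit from which playback should start."""
    for i, unit in enumerate(audio_units):
        if unit["time_code"] == received_time_code:
            return i
    return 0  # fallback: no matching unit, start from the beginning

units = [{"time_code": t, "samples": b""} for t in (100, 101, 102, 103)]
print(start_index(units, 102))  # -> 2: playback begins mid-stream, in sync
```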
  • when the receiver 200 acquires the AR image P29 and the recognition information, the receiver 200 recognizes an area corresponding to the recognition information in the captured display image Ppre as a target area, and superimposes the AR image P29 on the target area.
  • the AR image P29 is an image showing a crack in the display 201 of the receiver 200
  • the target area is an area that crosses the image of the transmitter 100 in the captured display image Ppre.
  • the receiver 200 displays the captured display image Ppre on which the AR image P29 as described above is superimposed at a timing according to the AR start time information.
  • the AR start time information is information indicating the time at which the AR image P29 is displayed. That is, the receiver 200 displays the captured display image Ppre on which the above-described AR image P29 is superimposed at the timing of receiving, among the time codes transmitted from the transmitter 100 as needed, the time code indicating the same time as the AR start time information.
  • for example, the time indicated by the AR start time information is the time at which a scene in which a magician girl casts ice magic appears in the television program. At this time, the receiver 200 may output, from the speaker of the receiver 200 through the reproduction of the audio data, a cracking sound corresponding to the AR image P29.
  • the receiver 200 may vibrate a vibrator provided in the receiver 200 at the time indicated by the AR start time information, may cause the light source to emit light like a flash, or may instantaneously brighten or flash the display 201.
  • the AR image P29 may include not only an image showing a crack but also an image showing condensation on the display 201 freezing.
  • FIG. 105 is a diagram illustrating an example of AR image superimposition performed by the receiver 200.
  • the receiver 200 images the transmitter 100 configured as a toy cane, for example.
  • the transmitter 100 includes a light source, and transmits an optical ID by changing the luminance of the light source.
  • the receiver 200 periodically acquires the captured display image Ppre and the decoding image Pdec through the above-described imaging.
  • the receiver 200 acquires the above-described optical ID by decoding the decoding image Pdec while displaying the captured display image Ppre acquired periodically on the display 201.
  • the receiver 200 transmits the optical ID to the server 300.
  • when the server 300 receives the optical ID, the server 300 transmits the AR image P30 and the recognition information associated with the optical ID to the receiver 200.
  • the recognition information further includes gesture information indicating a gesture (that is, an action) by a person holding the transmitter 100.
  • the gesture information indicates, for example, a gesture in which a person moves the transmitter 100 from right to left.
  • the receiver 200 compares the gesture by the person holding the transmitter 100 displayed in each captured display image Ppre with the gesture indicated by the gesture information. Then, when the gestures coincide with each other, the receiver 200 superimposes on the captured display image Ppre, for example, many star-shaped AR images P30 arranged along the trajectory of the transmitter 100 moved by the gesture.
  • FIG. 106 is a diagram illustrating an example of AR image superimposition performed by the receiver 200.
  • the receiver 200 images the transmitter 100 configured as, for example, a toy cane, as described above.
  • the receiver 200 periodically acquires the captured display image Ppre and the decoding image Pdec by the imaging.
  • the receiver 200 acquires the above-described optical ID by decoding the decoding image Pdec while displaying the captured display image Ppre acquired periodically on the display 201.
  • the receiver 200 transmits the optical ID to the server 300.
  • when the server 300 receives the optical ID, the server 300 transmits the AR image P31 and the recognition information associated with the optical ID to the receiver 200.
  • the recognition information includes gesture information indicating a gesture by a person holding the transmitter 100 as described above.
  • the gesture information indicates, for example, a gesture in which a person moves the transmitter 100 from right to left.
  • the receiver 200 compares the gesture by the person holding the transmitter 100 displayed in each captured display image Ppre with the gesture indicated by the gesture information. Then, when the gestures match, the receiver 200 superimposes, for example, an AR image P31 indicating a dress costume on a target area in the captured display image Ppre, that is, an area in which the person holding the transmitter 100 is projected.
  • in other words, gesture information corresponding to the light ID is acquired from the server. Next, it is determined whether or not the movement of the subject indicated by the periodically acquired captured display images matches the movement indicated by the gesture information acquired from the server. When it is determined that they match, the captured display image Ppre on which the AR image is superimposed is displayed.
  • an AR image can be displayed according to the movement of a subject such as a person. That is, the AR image can be displayed at an appropriate timing.
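  • a minimal sketch of the gesture comparison follows, reducing the transmitter's trajectory across successive captured display images to a dominant horizontal direction; the trajectory extraction itself (e.g., by tracking the bright line pattern region) is assumed to be done elsewhere, and the gesture labels are illustrative.

```python
# Hypothetical sketch: compare the tracked trajectory with the gesture
# indicated by the gesture information ("rtl" = right to left).

def matches_gesture(trajectory_x: list[float], gesture: str,
                    min_travel_px: float = 50.0) -> bool:
    """trajectory_x: x coordinates of the transmitter in successive frames."""
    travel = trajectory_x[-1] - trajectory_x[0]
    if gesture == "rtl":                 # transmitter moved right to left
        return travel <= -min_travel_px
    if gesture == "ltr":                 # transmitter moved left to right
        return travel >= min_travel_px
    return False

# Example: x decreases by 180 px over the frames -> a right-to-left gesture.
print(matches_gesture([420.0, 350.0, 300.0, 240.0], "rtl"))  # -> True
```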
  • FIG. 107 is a diagram illustrating an example of the decoding image Pdec acquired according to the attitude of the receiver 200.
  • the receiver 200, in the landscape orientation, images the transmitter 100 that transmits the optical ID by a change in luminance.
  • the landscape orientation is a posture in which the longitudinal direction of the display 201 of the receiver 200 is along the horizontal direction.
  • each exposure line of the image sensor provided in the receiver 200 is orthogonal to the longitudinal direction of the display 201.
  • the user changes the attitude of the receiver 200 from landscape to portrait.
  • the vertical orientation is a posture in which the longitudinal direction of the display 201 of the receiver 200 is along the vertical direction.
  • the receiver 200 having such an attitude can acquire the decoding image Pdec including the bright line pattern region Y having a large number of bright lines when the transmitter 100 that transmits the light ID is imaged.
  • as described above, the optical ID may not be appropriately acquired depending on the attitude of the receiver 200. Therefore, when the receiver 200 acquires the optical ID, it is preferable to appropriately change the attitude of the receiver 200 during imaging. When the attitude is changed, the receiver 200 can appropriately acquire the light ID at the timing when the attitude becomes one in which the light ID can be easily acquired.
  • FIG. 108 is a diagram illustrating another example of the decoding image Pdec acquired according to the attitude of the receiver 200.
  • the transmitter 100 is configured as a digital signage of a coffee shop, displays a video relating to a coffee shop advertisement during the video display period, and transmits a light ID by a change in luminance during the light ID transmission period. That is, the transmitter 100 alternately and repeatedly performs video display during the video display period and transmission of the optical ID during the optical ID transmission period.
  • the receiver 200 periodically acquires the captured display image Ppre and the decoding image Pdec by imaging of the transmitter 100.
  • when the repetition cycle of the video display period and the optical ID transmission period of the transmitter 100 is synchronized with the repetition cycle at which the receiver 200 acquires the captured display image Ppre and the decoding image Pdec, the decoding image Pdec including the bright line pattern region may not be acquired.
  • the decoding image Pdec including the bright line pattern region may not be acquired depending on the attitude of the receiver 200.
  • for example, the receiver 200 images the transmitter 100 in a posture as shown in FIG. 108. That is, the receiver 200 approaches the transmitter 100 and images the transmitter 100 so that the image of the transmitter 100 is projected on the entire image sensor of the receiver 200.
  • in this case, the receiver 200 appropriately acquires the captured display image Ppre showing the video displayed by the transmitter 100.
  • even when the timing at which the receiver 200 acquires the decoding image Pdec extends across the video display period and the optical ID transmission period of the transmitter 100, the receiver 200 can acquire the decoding image Pdec including the bright line pattern region Z1.
  • the exposure of each exposure line included in the image sensor is started sequentially from the exposure line at the upper end in the vertical direction downward. Therefore, even if the receiver 200 starts exposure of the image sensor to acquire the decoding image Pdec during the video display period, it is not possible to obtain the bright line pattern region. However, when the video display period is switched to the light ID transmission period, a bright line pattern region corresponding to each exposure line that is exposed in the light ID transmission period can be obtained.
  • alternatively, the receiver 200 images the transmitter 100 in another posture shown in FIG. 108. That is, the receiver 200 is separated from the transmitter 100 and images the transmitter 100 so that the image of the transmitter 100 is projected only on the upper area of the image sensor of the receiver 200.
  • in this case, if the timing at which the receiver 200 acquires the captured display image Ppre is within the video display period of the transmitter 100, the receiver 200 appropriately acquires the captured display image Ppre on which the transmitter 100 is projected.
  • however, in this posture, the receiver 200 may not be able to acquire the decoding image Pdec including the bright line pattern region.
  • that is, since the exposure of the upper exposure lines on which the transmitter 100 is projected may already be completed when the video display period switches to the light ID transmission period, the decoding image Pdec having the bright line pattern region cannot be acquired in such a case.
  • alternatively, the receiver 200, while separated from the transmitter 100, images the transmitter 100 so that the image of the transmitter 100 is projected only on the lower region of the image sensor of the receiver 200. At this time, as described above, if the timing at which the receiver 200 acquires the captured display image Ppre is within the video display period of the transmitter 100, the receiver 200 appropriately acquires the captured display image Ppre on which the transmitter 100 is projected. Furthermore, even when the timing at which the receiver 200 acquires the decoding image Pdec extends across the video display period and the optical ID transmission period of the transmitter 100, the receiver 200 may be able to acquire the decoding image Pdec including the bright line pattern region.
  • that is, since the lower exposure lines on which the transmitter 100 is projected are exposed later, they may be exposed during the light ID transmission period, and in that case the decoding image Pdec having the bright line pattern region Z2 can be acquired.
  • as described above, the light ID may not be acquired appropriately depending on the attitude of the receiver 200, and therefore the receiver 200 may encourage the user to change the attitude of the receiver 200 when acquiring the optical ID. That is, when imaging starts, the receiver 200 displays, for example, a message such as "Please move" or "Shake", or outputs a corresponding sound, so that the attitude of the receiver 200 changes. Thereby, since the receiver 200 performs imaging while its attitude changes, it can appropriately acquire the light ID.
  • FIG. 109 is a flowchart illustrating an example of processing operation of the receiver 200.
  • the receiver 200 determines whether or not the receiver 200 is shaken during imaging (step S461). Specifically, the receiver 200 determines whether or not it is shaken based on the output of the 9-axis sensor provided in the receiver 200.
  • if the receiver 200 determines that it is shaken (Yes in step S461), the receiver 200 increases the above-described optical ID acquisition rate (step S462). Specifically, the receiver 200 acquires all the captured images obtained per unit time during imaging as decoding images (that is, bright line images) Pdec, and decodes all the acquired decoding images.
  • note that if all the captured images have been acquired as captured display images Ppre, that is, if the acquisition and decoding of the decoding image Pdec have been stopped, the receiver 200 starts that acquisition and decoding.
  • if the receiver 200 determines that it is not shaken during imaging (No in step S461), the receiver 200 acquires the decoding image Pdec at a low optical ID acquisition rate (step S463). Specifically, if the optical ID acquisition rate was increased in step S462 and is still high, the receiver 200 lowers the optical ID acquisition rate. Thereby, since the frequency of the decoding process of the decoding image Pdec by the receiver 200 is reduced, power consumption can be suppressed.
  • the receiver 200 determines whether or not an end condition for ending the adjustment process of the optical ID acquisition rate is satisfied (step S464).
  • if the receiver 200 determines that the end condition is not satisfied (No in step S464), the receiver 200 repeatedly executes the processing from step S461.
  • if the receiver 200 determines that the end condition is satisfied (Yes in step S464), the receiver 200 ends the optical ID acquisition rate adjustment process.
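  • a minimal sketch of the adjustment loop of FIG. 109 follows; the shake detection, decoding, and end-condition checks are passed in as hypothetical callbacks, and the concrete low-rate divisor is illustrative.

```python
# Hypothetical sketch of the light ID acquisition rate adjustment (FIG. 109).
# At the high rate every captured image is treated as a decoding image Pdec
# and decoded; at the low rate only a fraction is, saving power.

HIGH_RATE = 1      # decode every frame
LOW_RATE = 4       # decode every 4th frame (illustrative value)

def adjust_acquisition_loop(frames, is_shaken, decode, done):
    for i, frame in enumerate(frames):
        # steps S461-S463: pick the rate from the 9-axis sensor output
        decode_every = HIGH_RATE if is_shaken() else LOW_RATE
        if i % decode_every == 0:
            decode(frame)            # treat this frame as a decoding image Pdec
        if done():                   # step S464: end condition
            return
```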
  • FIG. 110 is a diagram illustrating an example of a camera lens switching process by the receiver 200.
  • the receiver 200 may include a wide-angle lens 211 and a telephoto lens 212 as camera lenses.
  • a captured image obtained by imaging using the wide-angle lens 211 is an image with a wide angle of view, and a subject is projected to be small in the image.
  • a captured image obtained by imaging using the telephoto lens 212 is an image with a narrow angle of view, and a subject is projected greatly in the image.
  • the receiver 200 as described above may switch the camera lens used for imaging by any one of the methods A to E shown in FIG. 110.
  • in method A, the receiver 200 always uses the telephoto lens 212 when imaging, whether in normal imaging or when receiving an optical ID.
  • the case of normal imaging is a case where all captured images are acquired as captured display images Ppre by imaging.
  • the case where the optical ID is received is a case where the captured display image Ppre and the decoding image Pdec are periodically acquired by imaging.
  • in method B, the receiver 200 uses the wide-angle lens 211 in the case of normal imaging.
  • when receiving an optical ID, the receiver 200 first uses the wide-angle lens 211.
  • the receiver 200 switches the camera lens from the wide-angle lens 211 to the telephoto lens 212 if the bright line pattern region is included in the decoding image Pdec acquired when the wide-angle lens 211 is used. After this switching, the receiver 200 can acquire a decoding image Pdec with a narrow angle of view, that is, a bright line pattern region appearing large.
  • in method C, the receiver 200 uses the wide-angle lens 211 in the case of normal imaging.
  • when receiving an optical ID, the receiver 200 switches the camera lens between the wide-angle lens 211 and the telephoto lens 212. That is, the receiver 200 acquires the captured display image Ppre using the wide-angle lens 211 and acquires the decoding image Pdec using the telephoto lens 212.
  • in method D, the receiver 200 switches the camera lens between the wide-angle lens 211 and the telephoto lens 212 in accordance with an operation by the user, regardless of whether it is performing normal imaging or receiving an optical ID.
  • in method E, when receiving an optical ID, the receiver 200 decodes the decoding image Pdec acquired using the wide-angle lens 211, and if the decoding image Pdec cannot be correctly decoded, switches the camera lens from the wide-angle lens 211 to the telephoto lens 212. Alternatively, the receiver 200 decodes the decoding image Pdec acquired using the telephoto lens 212, and switches the camera lens from the telephoto lens 212 to the wide-angle lens 211 if it cannot be decoded correctly. Note that when determining whether or not the decoding image Pdec has been correctly decoded, the receiver 200 first transmits the optical ID obtained by decoding the decoding image Pdec to the server.
  • if the optical ID matches an ID registered in the server, the server notifies the receiver 200 of matching information indicating the match; if it does not match, the server transmits mismatch information to the receiver 200.
  • the receiver 200 determines that the decoding image Pdec has been correctly decoded if the information notified from the server is matching information, and determines that the decoding image Pdec could not be correctly decoded if the information notified from the server is mismatch information. Alternatively, the receiver 200 determines that the decoding image Pdec has been correctly decoded when the optical ID obtained by decoding the decoding image Pdec satisfies a predetermined condition, and determines that the decoding image Pdec has not been correctly decoded when the condition is not satisfied.
  • FIG. 111 is a diagram illustrating an example of camera switching processing by the receiver 200.
  • the receiver 200 includes an in camera 213 and an out camera (not shown in FIG. 111) as cameras.
  • the in-camera 213 is also referred to as a face camera or a self-portrait camera, and is a camera arranged on the same surface as the display 201 in the receiver 200.
  • the out camera is a camera arranged on the surface of the receiver 200 opposite to the surface of the display 201.
  • such a receiver 200 images the transmitter 100, configured as a lighting device, with the in-camera 213 facing upward. By this imaging, the receiver 200 acquires the decoding image Pdec, and acquires the optical ID transmitted from the transmitter 100 by decoding the decoding image Pdec.
  • the receiver 200 acquires the AR image and the recognition information associated with the optical ID from the server by transmitting the acquired optical ID to the server.
  • the receiver 200 starts a process of recognizing a target area corresponding to the recognition information from the captured display images Ppre obtained by the out camera and the in camera 213, respectively.
  • the receiver 200 prompts the user to move the receiver 200 when the target area cannot be recognized from any of the captured display images Ppre obtained by the out camera and the in camera 213 respectively.
  • the user who is prompted by the receiver 200 moves the receiver 200. Specifically, the user moves the receiver 200 so that the in-camera 213 and the out-camera face the front-rear direction of the user.
  • the receiver 200 recognizes the target area from the captured display image Ppre acquired by the out camera. That is, the receiver 200 recognizes a region in which a person is projected as the target region, superimposes the AR image on the target region in the captured display image Ppre, and displays the captured display image Ppre on which the AR image is superimposed.
  • FIG. 112 is a flowchart showing an example of processing operation between the receiver 200 and the server.
  • the receiver 200 acquires an optical ID transmitted from the transmitter 100 by capturing an image of the transmitter 100, which is a lighting device, with the in-camera 213, and transmits the optical ID to the server (step S471).
  • the server receives the optical ID from the receiver 200 (step S472), and estimates the position of the receiver 200 based on the optical ID (step S473).
  • the server stores, for each light ID, a table indicating a room, a building, a space, or the like in which the transmitter 100 that transmits the light ID is arranged. Then, the server estimates the room associated with the optical ID transmitted from the receiver 200 as the position of the receiver 200 in the table. Further, the server transmits the AR image and recognition information associated with the estimated position to the receiver 200 (step S474).
  • the receiver 200 acquires the AR image and the recognition information transmitted from the server (Step S475).
  • the receiver 200 starts a process of recognizing a target area corresponding to the recognition information from each captured display image Ppre obtained by each of the out camera and the in camera 213.
  • the receiver 200 recognizes the target area from, for example, the captured display image Ppre acquired by the out-camera (step S476).
  • the receiver 200 superimposes the AR image on the target area in the captured display image Ppre, and displays the captured display image Ppre on which the AR image is superimposed (step S477).
  • in the above example, when the receiver 200 acquires the AR image and the recognition information transmitted from the server, the receiver 200 starts, in step S476, the process of recognizing the target area from the captured display images Ppre obtained by each of the out camera and the in camera 213.
  • however, the receiver 200 may start the process of recognizing the target area in step S476 from the captured display image Ppre obtained only by the out-camera. That is, the camera for acquiring the light ID (the in-camera 213 in the above example) and the camera for acquiring the captured display image Ppre on which the AR image is superimposed (the out-camera in the above example) may always be different.
  • the receiver 200 images the transmitter 100 that is an illumination device with the in-camera 213, but the floor illuminated by the transmitter 100 may be captured with the out-camera. Even with such an out-camera imaging, the receiver 200 can acquire the optical ID transmitted from the transmitter 100.
  • FIG. 113 is a diagram illustrating an example of AR image superimposition performed by the receiver 200.
  • the receiver 200 images the transmitter 100 configured as a microwave oven installed in a store such as a convenience store.
  • the transmitter 100 includes a camera for imaging the inside of the microwave oven and an illumination device that illuminates the inside of the oven. Then, the transmitter 100 recognizes, with the camera, the food or drink (that is, the object to be warmed) stored in the cabinet.
  • the transmitter 100 transmits the light ID indicating the recognized food or drink by causing the lighting device to emit light and changing the luminance of the lighting device.
  • this illuminating device illuminates the inside of the cabinet.
  • the user purchases food and drink at a convenience store, and puts the food and drink into the transmitter 100, which is a microwave oven, in order to warm the food and drink.
  • the transmitter 100 recognizes the food and drink with the camera, and starts warming the food and drink while transmitting the optical ID indicating the recognized food and drink.
  • the receiver 200 acquires the optical ID transmitted from the transmitter 100 by capturing an image of the transmitter 100 that has started the warming, and transmits the optical ID to the server. Next, the receiver 200 acquires an AR image, audio data, and recognition information associated with the optical ID from the server.
  • the above-mentioned AR images include an AR image P32a that is a moving image showing a virtual state inside the transmitter 100, an AR image P32b that shows in detail the food and drink stored in the cabinet, an AR image P32c that shows, as a moving image, a state in which steam rises from the transmitter 100 as it heats, and an AR image P32d that shows, as a moving image, the remaining time until the warming of the food and drink is completed.
  • specifically, the AR image P32a is a moving image in which a turntable with a pizza is rotating and a plurality of dwarfs are dancing around the pizza.
  • the AR image P32b is, for example, an image showing the product name “pizza” and the material of the pizza if the food and drink stored in the warehouse is a pizza.
  • based on the recognition information, the receiver 200 recognizes the area in which the window of the transmitter 100 is projected in the captured display image Ppre as the target area of the AR image P32a, and superimposes the AR image P32a on that target area. Further, based on the recognition information, the receiver 200 recognizes the area above the area where the transmitter 100 is projected in the captured display image Ppre as the target area of the AR image P32b, and superimposes the AR image P32b on that target area. Furthermore, based on the recognition information, the receiver 200 recognizes the area between the target area of the AR image P32a and the target area of the AR image P32b in the captured display image Ppre as the target area of the AR image P32c, and superimposes the AR image P32c on that target area. Further, based on the recognition information, the receiver 200 recognizes the area below the area where the transmitter 100 is projected in the captured display image Ppre as the target area of the AR image P32d, and superimposes the AR image P32d on that target area.
  • the receiver 200 outputs sound generated when food or drink is heated by reproducing audio data.
  • since the AR images P32a to P32d as described above are displayed by the receiver 200, and sound is also output, the user's interest can be held by the receiver 200 until the warming of the food and drink is completed. As a result, the burden on the user who is waiting for the completion of warming can be reduced. Further, since the AR image P32c indicating steam and the like is displayed and a sound generated when the food or drink is heated is output, a sense of sizzle can be conveyed to the user. In addition, the display of the AR image P32d allows the user to easily know the remaining time until the completion of the warming of the food and drink. Therefore, the user can, for example, browse books displayed in the store, away from the transmitter 100 that is the microwave oven, until the warming is completed. The receiver 200 may notify the user that the warming is completed when the remaining time reaches zero.
  • although the AR image P32a is a moving image in which a turntable on which a pizza is placed is rotating and a plurality of dwarfs are dancing around the pizza, it may instead be an image that virtually represents the actual state inside the cabinet.
  • similarly, although the AR image P32b is an image showing the product name and ingredients of the food or drink stored in the cabinet, the AR image P32b may instead be an image showing a discount ticket.
  • in other words, in this example, the subject is a microwave oven provided with an illumination device, and the illumination device illuminates the interior of the microwave oven and transmits the light ID of the microwave oven by changing in luminance.
  • the captured display image Ppre and the decoding image Pdec are acquired by imaging the microwave oven that transmits the optical ID.
  • the window portion of the microwave oven displayed in the captured display image Ppre is recognized as the target area.
  • the captured display image Ppre on which the AR image indicating the state change in the warehouse is superimposed is displayed.
  • the state of the oven can be easily communicated to the user of the microwave oven.
  • FIG. 114 is a sequence diagram showing the processing operation of the system including the receiver 200, the microwave oven, the relay server, and the electronic settlement server.
  • the microwave oven includes a camera and a lighting device as described above, and transmits the light ID by changing the luminance of the lighting device. That is, the microwave oven has a function as the transmitter 100.
  • the microwave oven recognizes food and drink stored in the cabinet with the camera (step S481).
  • the microwave oven transmits a light ID indicating the recognized food and drink to the receiver 200 by a luminance change of the lighting device.
  • the receiver 200 receives the optical ID transmitted from the microwave oven by imaging the microwave oven (step S483), and transmits the optical ID and the card information to the relay server.
  • the card information is information such as a credit card stored in advance in the receiver 200 and is information necessary for electronic payment.
  • the relay server holds a table indicating an AR image, recognition information, and product information corresponding to each optical ID. This merchandise information indicates the price of food and drink indicated by the light ID.
  • when the relay server receives the optical ID and card information transmitted from the receiver 200 (step S485), it finds the product information associated with the optical ID from the above table. Then, the relay server transmits the merchandise information and the card information to the electronic settlement server (step S486).
  • when the electronic settlement server receives the merchandise information and the card information transmitted from the relay server (step S487), the electronic settlement server performs an electronic settlement process based on the merchandise information and the card information (step S488). Then, when the electronic settlement process is completed, the electronic settlement server notifies the relay server of the completion (step S489).
  • when the relay server confirms the payment completion notification from the electronic settlement server (step S490), the relay server instructs the microwave oven to start warming the food and drink (step S491). Further, the relay server transmits, to the receiver 200, the AR image and the recognition information associated in the above table with the optical ID received in step S485 (step S493).
  • when the microwave oven receives the warming start instruction from the relay server, it starts warming the food and drink stored in the cabinet (step S492). Further, when the receiver 200 receives the AR image and the recognition information transmitted from the relay server, it recognizes the target area according to the recognition information from the captured display images Ppre periodically acquired by the imaging started in step S483. Then, the receiver 200 superimposes the AR image on the target area (step S494).
  • thereby, the user of the receiver 200 can easily complete the payment and start the warming of the food and drink simply by placing the food and drink in the microwave oven and capturing an image of it. Moreover, when the payment cannot be performed, the warming of the food and drink by the user can be prohibited. Furthermore, when the warming is started, the AR image P32a shown in FIG. 113 can be displayed, and the user can be informed of the state inside the cabinet.
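  • a minimal sketch of the relay server's role in the sequence of FIG. 114 follows; the product table, settlement call, microwave control, and AR delivery are reduced to placeholder callables, since none of these interfaces are specified in the disclosure.

```python
# Hypothetical sketch of steps S485 to S493 as seen by the relay server.

def handle_receiver_request(light_id, card_info, table,
                            settle, start_warming, send_ar):
    product = table[light_id]                 # step S485: look up product info
    # steps S486-S489: hand price and card info to the settlement server
    if not settle(product["price"], card_info):
        return False                          # payment failed: no warming
    start_warming(light_id)                   # step S491: instruct microwave
    # step S493: send the AR image and recognition info to the receiver
    send_ar(product["ar_image"], product["recognition_info"])
    return True
```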
  • FIG. 115 is a sequence diagram showing processing operations of a system including a POS terminal, a server, a receiver 200, and a microwave oven.
  • the microwave oven includes a camera and a lighting device as described above, and transmits the light ID by changing the luminance of the lighting device. That is, the microwave oven has a function as the transmitter 100.
  • a POS (point-of-sale) terminal is a terminal installed in a store such as the same convenience store as the microwave oven.
  • the user of the receiver 200 selects a food or drink as a product at a store and heads to a place where a POS terminal is installed in order to purchase the food or drink.
  • the store clerk operates the POS terminal and receives the price of food and drink from the user.
  • the POS terminal acquires operation input data and sales information (step S501).
  • the sales information indicates, for example, the name, number, and price of the product, the sales location, and the sales date and time.
  • the operation input data indicates, for example, the sex and age of the user input by the store clerk.
  • the POS terminal transmits the operation input data and sales information to the server (step S502).
  • the server receives operation input data and sales information transmitted from the POS terminal (step S503).
  • the user of the receiver 200 pays the clerk for the food and drink
  • the user puts the food and drink in the microwave oven to warm the food and drink.
  • the microwave oven recognizes the food and drink stored in the cabinet with the camera (step S504).
  • the microwave oven transmits the light ID indicating the recognized food and drink to the receiver 200 by the luminance change of the lighting device (step S505).
  • a microwave oven starts warming of food and drink (step S507).
  • the receiver 200 receives the optical ID transmitted from the microwave oven by imaging the microwave oven (step S508), and transmits the optical ID and the terminal information to the server (step S509).
  • the terminal information is information stored in advance in the receiver 200 and indicates, for example, the language type (for example, English or Japanese) displayed on the display 201 of the receiver 200.
  • the server determines whether the access from the receiver 200 is the first access (step S510).
  • the first access is the first access performed within a predetermined time from the time when the process of step S503 is performed. If the server determines that the access from the receiver 200 is the first access (Yes in step S510), the server associates and stores the operation input data and the terminal information (step S511).
  • in the above example, the server determines whether or not the access from the receiver 200 is the first access, but it may instead determine whether or not the product indicated by the sales information matches the food or drink indicated by the light ID.
  • the server may store not only the operation input data and the terminal information but also the sales information in association with them.
  • FIG. 116 is a diagram showing a situation of indoor use such as an underground shopping mall.
  • the receiver 200 receives the light ID transmitted from the transmitter 100 configured as a lighting device, and estimates its current position. Further, the receiver 200 displays the current position on a map to provide route guidance, or displays information on nearby stores.
  • further, if disaster information and evacuation information are transmitted from the transmitter 100, this information can be obtained even when communication is congested, when a communication base station fails, or when radio waves from the communication base station do not reach. This is effective for a person who has missed an emergency broadcast, or for a hearing-impaired person who cannot hear an emergency broadcast.
  • the receiver 200 acquires the optical ID transmitted from the transmitter 100 by taking an image, and further acquires the AR image P33 and the recognition information associated with the optical ID from the server. Then, the receiver 200 recognizes the target area corresponding to the recognition information from the captured display image Ppre obtained by the above-described imaging, and superimposes the arrow-shaped AR image P33 on the target area. Thereby, the receiver 200 can be used as the above-mentioned way finder (see FIG. 100).
  • FIG. 117 is a diagram illustrating a state in which an augmented reality object is displayed.
  • the stage 2718e that displays the augmented reality is configured as the above-described transmitter 100, and transmits augmented reality object information and a reference position for displaying the augmented reality object by the light emission patterns and position patterns of the light emitting units 2718a, 2718b, 2718c, and 2718d.
  • the receiver 200 displays the augmented reality object 2718f, which is an AR image, superimposed on the captured image based on the received information.
  • note that this embodiment may be implemented as an apparatus, a system, a method, an integrated circuit, a computer program, or a recording medium such as a computer-readable CD-ROM.
  • for example, a computer program for executing the method according to one embodiment may be stored in such a recording medium.
  • FIG. 118 is a diagram illustrating a configuration of a display system according to the fourth modification of the fourth embodiment.
  • This display system 500 performs object recognition and augmented reality (Augmented Reality / Mixed Reality) display using visible light signals.
  • the receiver 200 performs imaging, receives a visible light signal, and extracts feature quantities for object recognition or space recognition.
  • the feature amount extraction is extraction of an image feature amount from a captured image obtained by imaging.
  • the visible light signal may be a signal on a carrier adjacent to visible light, such as infrared light or ultraviolet light.
  • the receiver 200 is configured as a recognition device that recognizes an object on which an augmented reality image (that is, an AR image) is displayed.
  • the target object is, for example, the AR target object 501.
  • the transmitter 100 transmits information such as an ID for identifying itself or the AR object 501 as a visible light signal or a radio wave signal.
  • the ID is identification information such as the above-described optical ID, for example, and the AR object 501 is the above-described target area.
  • the visible light signal is a signal transmitted by a change in luminance of a light source included in the transmitter 100.
  • the receiver 200 or the server 300 holds the identification information transmitted from the transmitter 100 in association with the AR recognition information and the AR display information.
  • the association may be one-to-one or one-to-many.
  • the AR recognition information is the above-described recognition information, and is information for recognizing the AR object 501 for performing AR display. Specifically, the AR recognition information is the image feature amount (SIFT feature amount, SURF feature amount, ORB feature amount, etc.), color, shape, size, reflectance, transmittance, or three-dimensional of the AR object 501. Model etc.
  • the AR recognition information may include identification information or a recognition algorithm indicating which recognition method is used for recognition.
  • the AR display information is information for performing AR display, and is an image (that is, the above-described AR image), video, audio, three-dimensional model, motion data, display coordinates, display size, transmittance, or the like. Further, the AR display information may be the absolute value or change ratio of each of hue, saturation, and brightness.
  • the transmitter 100 may also function as the server 300. That is, the transmitter 100 may hold the AR recognition information and the AR display information and transmit the information by wired or wireless communication.
  • the receiver 200 captures an image with a camera (specifically, an image sensor).
  • further, the receiver 200 receives a visible light signal or a radio wave signal such as WiFi or Bluetooth (registered trademark). Furthermore, the receiver 200 acquires position information obtained by GPS or the like, information obtained by a gyro sensor or an acceleration sensor, and information such as sound from a microphone, and may recognize AR objects existing in the vicinity by integrating all or some of these pieces of information. The receiver 200 may also recognize the AR object using only one piece of the information, without integration.
  • FIG. 119 is a flowchart showing the processing operation of the display system according to the fourth modification of the fourth embodiment.
  • the receiver 200 determines whether or not a visible light signal has already been received (step S521). That is, for example, the receiver 200 determines whether or not the visible light signal indicating the identification information is acquired by photographing the transmitter 100 that transmits the visible light signal according to the luminance change of the light source. At this time, a captured image of the transmitter 100 is acquired by the shooting.
• if the receiver 200 determines that the visible light signal has already been received (Y in step S521), the AR object, a reference point, or spatial coordinates in the space are determined from the received information, and the position and orientation of the receiver 200 are specified.
  • the receiver 200 recognizes the relative position of the AR object. This relative position is represented by the distance and direction from the receiver 200 to the AR object.
• for example, the receiver 200 identifies the AR object (that is, a target area that is a bright line pattern region) based on the size and position of the bright line pattern region shown in FIG. 50, and recognizes the relative position of the AR object.
• the receiver 200 transmits information such as the ID included in the visible light signal, together with the relative position, to the server 300, and acquires the AR recognition information and the AR display information registered in the server 300, using that information and the relative position as a key (step S522).
• the receiver 200 may acquire not only the information on the recognized AR object but also information on other AR objects existing in the vicinity of that AR object (that is, their AR recognition information and AR display information). Thereby, when another AR object existing in the vicinity is imaged by the receiver 200, the receiver 200 can recognize it quickly and without error. The other AR objects existing in the vicinity are objects different from the first recognized AR object.
  • the receiver 200 may acquire these pieces of information from the database in the receiver 200 instead of accessing the server 300.
• the receiver 200 may discard these pieces of information after a certain period of time has elapsed from their acquisition, or after a specific process (for example, turning off the screen, pressing a button, ending or stopping an application, displaying an AR image, or recognizing another AR object).
• the receiver 200 may decrease the reliability of the information each time a certain period of time elapses after its acquisition, and may use only highly reliable information among the plurality of pieces of information, as sketched below.
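• The discarding and reliability-decay behavior described in the preceding two paragraphs can be sketched as a small cache, for example as follows; the TTL and decay rate are illustrative assumptions, not values from this disclosure.

```python
import time

class ARInfoCache:
    """Holds acquired AR information, discarding or down-weighting stale entries."""

    def __init__(self, ttl_sec=300.0, decay_per_sec=0.01):
        self.ttl_sec = ttl_sec            # assumed time-to-live
        self.decay_per_sec = decay_per_sec  # assumed reliability decay rate
        self._entries = {}                # id -> (info, acquired_at)

    def put(self, ar_id, info):
        self._entries[ar_id] = (info, time.monotonic())

    def reliability(self, ar_id):
        """Reliability decreases as time elapses since acquisition."""
        _, t0 = self._entries[ar_id]
        return max(0.0, 1.0 - self.decay_per_sec * (time.monotonic() - t0))

    def evict_expired(self):
        """Discard entries after the TTL; this could also be called on specific
        events (screen off, app stop, AR image displayed, another object
        recognized), as described above."""
        now = time.monotonic()
        self._entries = {k: v for k, v in self._entries.items()
                         if now - v[1] < self.ttl_sec}
```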
• the receiver 200 may preferentially acquire the AR recognition information of AR objects that are useful in view of their relative positions. For example, the receiver 200 acquires a plurality of visible light signals (that is, pieces of identification information) by photographing a plurality of transmitters 100 in step S521, and acquires a plurality of pieces of AR recognition information (that is, image feature amounts) corresponding to the plurality of visible light signals in step S522. At this time, in step S522, the receiver 200 selects the image feature amount of the AR object closest to the receiver 200 among the plurality of AR objects. The selected image feature amount is then used to specify the one AR object (that is, the first object) identified using the visible light signal. Thereby, even if a plurality of image feature amounts are acquired, an appropriate image feature amount can be used for specifying the first object (see the sketch below).
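• The selection of the image feature amount of the closest AR object described above can be sketched as follows; the helper is hypothetical, with the distance taken to be the relative distance recognized from, for example, the bright line pattern region.

```python
def select_feature_for_first_object(candidates):
    """candidates: (relative_distance, image_features) pairs, one per received
    visible light signal. Returns the feature amount of the AR object closest
    to the receiver, to be used for specifying the first object (step S522)."""
    _, features = min(candidates, key=lambda c: c[0])
    return features
```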
• the receiver 200 further determines whether or not the AR recognition information has already been acquired (step S523). If it is determined that the AR recognition information has not been acquired (N in step S523), the receiver 200 recognizes an AR object candidate without using identification information such as the ID indicated by the visible light signal, for example by image processing or by using other information such as position information or radio wave information (step S524). This process may be performed by the receiver 200 alone. Alternatively, the receiver 200 may transmit information such as the captured image or an image feature amount of the captured image to the server 300, and the server 300 may recognize the AR object candidate. As a result, the receiver 200 acquires the AR recognition information and the AR display information corresponding to the recognized candidate from the server 300 or from its own database.
• next, the receiver 200 determines whether or not the AR object is also detected by another method that does not use the identification information such as the ID indicated by the visible light signal, for example image recognition (step S525). That is, the receiver 200 determines whether or not the AR object is recognized by a plurality of methods. Specifically, the receiver 200 specifies the AR object (that is, the first object) from the captured image using the image feature amount acquired based on the identification information indicated by the visible light signal. The receiver 200 then determines whether or not an AR object (that is, a second object) is also specified from the captured image by image processing that does not use the identification information.
• the recognition result based on the visible light signal is prioritized. That is, the receiver 200 confirms whether or not the AR objects recognized by the respective methods match. If they do not match, the receiver 200 determines the AR object recognized by the visible light signal as the one AR object on which the AR image is superimposed in the captured image (step S526). In other words, when the first object is different from the second object, the receiver 200 gives priority to the first object and recognizes it as the object on which the AR image is displayed.
  • the object on which the AR image is displayed is an object on which the AR image is superimposed.
• alternatively, the receiver 200 may give precedence to the method with the highest priority, based on a priority order assigned to each of the plurality of methods. That is, from among the AR objects recognized by the respective methods, the receiver 200 determines the AR object recognized by, for example, the method with the highest priority as the one AR object on which the AR image is superimposed in the captured image. Alternatively, the receiver 200 may determine the one AR object on which the AR image is superimposed by a majority decision, or by a majority decision weighted by priority. If the recognition result obtained up to that point is overturned by this process, the receiver 200 performs error handling (the arbitration strategies are sketched below).
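• The arbitration among recognition methods described above (priority to the visible light signal, a fixed priority order, or a majority decision) might look like the following sketch; the method names and the priority table are assumptions for illustration.

```python
from collections import Counter

def arbitrate_by_priority(results):
    """results: mapping from method name to the recognized AR object (or None).
    The visible light result comes first, so it takes priority; the remaining
    order is an assumed example of a per-method priority table."""
    PRIORITY = ["visible_light", "image_processing", "radio", "position"]
    for method in PRIORITY:
        obj = results.get(method)
        if obj is not None:
            return obj
    return None

def arbitrate_by_majority(results):
    """Alternative: a plain majority decision over the per-method results."""
    votes = Counter(obj for obj in results.values() if obj is not None)
    return votes.most_common(1)[0][0] if votes else None
```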
• the receiver 200 recognizes the state of the AR object in the captured image (specifically, its absolute position, its relative position from the receiver 200, its size, its angle, the illumination status, occlusion, or the like) based on the acquired AR recognition information (step S527). Then, the receiver 200 displays the AR display information (that is, the AR image) superimposed on the captured image in accordance with the recognition result (step S528). That is, the receiver 200 superimposes the AR display information on the recognized AR object in the captured image. Alternatively, the receiver 200 displays only the AR display information.
• recognition or detection that is difficult by image processing alone includes, for example, identification of AR objects that are visually similar (for example, differing only in text content), detection of AR objects with few patterns, detection of AR objects with high reflectance or transmittance, detection of AR objects whose shape or pattern changes (for example, an animal), and detection of AR objects over a wide range of angles (in various directions). That is, in this modification, recognition of these AR objects and the corresponding AR display are possible.
• with image processing that does not use a visible light signal, as the number of AR objects to be recognized increases, the nearest-neighbor search over image feature amounts takes longer, so the recognition processing slows down and the recognition rate also deteriorates.
• in contrast, with the present method, the increase in recognition time and the deterioration of the recognition rate caused by the increase in recognition targets are absent or extremely small, and effective AR object recognition is possible.
• efficient recognition is also possible by using the relative position of the AR object. For example, by using the approximate distance to the AR object, processing for making the calculation of the image feature amount independent of the AR object's size can be omitted, or a size-dependent feature can be used.
• similarly, although it is usually necessary to evaluate image feature amounts for many angles, by using the angle of the AR object, only the image feature amount corresponding to that angle needs to be held and calculated, so that the calculation speed or the memory efficiency can be improved, as sketched below.
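• For example, holding image feature amounts indexed by quantized viewing angle and matching at the scale expected from the known distance could be sketched as follows; the 15-degree bin width and the scale heuristic are assumptions, not values from this disclosure.

```python
def features_for_pose(feature_bank, distance_m, angle_deg):
    """feature_bank: precomputed image feature amounts keyed by a quantized
    viewing angle. Because the approximate distance and angle to the AR
    object are known from the visible light signal, only the features for
    that angle bin need to be held and evaluated, and size normalization
    can be skipped in favor of a size-dependent match."""
    angle_bin = int(angle_deg // 15) * 15      # assumed 15-degree bins
    features = feature_bank[angle_bin]
    expected_scale = 1.0 / max(distance_m, 0.1)  # avoid division by zero
    return features, expected_scale
```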
  • FIG. 120 is a flowchart illustrating a recognition method according to an aspect of the present invention.
• this recognition method is a method for recognizing an object on which an augmented reality image (AR image) is displayed, and includes steps S531 to S535.
  • step S531 the receiver 200 acquires the identification information by photographing the transmitter 100 that transmits a visible light signal by a change in luminance of the light source.
  • the identification information is, for example, an optical ID.
• in step S532, the receiver 200 transmits the identification information to the server 300 and acquires an image feature amount corresponding to the identification information from the server 300.
• this image feature amount is what is referred to above as the AR recognition information or the recognition information.
• in step S533, the receiver 200 specifies the first object from the captured image of the transmitter 100 using the image feature amount.
• in step S534, the receiver 200 specifies the second object from the captured image of the transmitter 100 by image processing, without using the identification information (that is, the optical ID).
• in step S535, when the first object specified in step S533 is different from the second object specified in step S534, the receiver 200 gives priority to the first object and recognizes it as the object on which the augmented reality image is displayed.
  • the augmented reality image, the captured image, and the target object correspond to the AR image, the captured display image, and the target region in the fourth embodiment and the modifications thereof, respectively.
• in this way, when the first object, specified using the identification information indicated by the visible light signal, differs from the second object, specified by image processing without using the identification information, the first object is preferentially recognized as the object on which the augmented reality image is displayed. Therefore, the object on which the augmented reality image is displayed can be appropriately recognized from the captured image. A sketch of this flow follows.
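• Steps S531 to S535 can be summarized in the following sketch; the four callables are assumed interfaces standing in for the camera, the server 300, and the two recognition paths, not an actual API.

```python
def recognize_ar_target(capture, get_features, match_by_features,
                        recognize_by_image):
    """Sketch of the recognition method of FIG. 120 (steps S531-S535)."""
    captured, light_id = capture()                  # S531: photograph and decode
    features = get_features(light_id)               # S532: ask the server for features
    first = match_by_features(captured, features)   # S533: first object via light ID
    second = recognize_by_image(captured)           # S534: second object, image only
    # S535: when the two differ, the first object takes priority.
    if first is not None and first != second:
        return first
    return first if first is not None else second
```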
• the acquired image feature amount may include the image feature amount of a third object that is located in the vicinity of the first object and is different from the first object.
• in this case, as in step S522 of FIG. 119, not only the image feature amount of the first object but also the image feature amount of the third object is acquired.
• thereby, the third object can be identified or recognized quickly.
• the receiver 200 may acquire a plurality of pieces of identification information by photographing a plurality of transmitters in step S531, and may acquire a plurality of image feature amounts corresponding to the plurality of pieces of identification information in step S532.
• in this case, the receiver 200 may use, for specifying the first object, the image feature amount of the object closest to the receiver 200 among the plurality of objects corresponding to the plurality of transmitters.
• thereby, as in step S522 of FIG. 119, even if a plurality of image feature amounts are acquired, an appropriate image feature amount can be used to specify the first object.
• the recognition device in this modification is, for example, a device provided in the receiver 200 described above, and includes a processor and a recording medium. On the recording medium, a program for causing the processor to execute the recognition method shown in FIG. 120 is recorded. The program in this modification is a program that causes a computer to execute the recognition method shown in FIG. 120.
  • FIG. 121 is a diagram showing an example of an operation mode of a visible light signal according to this embodiment.
• as shown in FIG. 121, the physical (PHY) layer of the visible light signal has two operation modes.
  • the first operation mode is a mode in which packet PWM (Pulse Width Modulation) is performed, and the second operation mode is a mode in which packet PPM (Pulse-Position Modulation) is performed.
  • the transmitter according to each of the above-described embodiments or modifications thereof generates and transmits a visible light signal by modulating a signal to be transmitted according to any one of these operation modes.
• here, RLL stands for Run-Length Limited coding, and FEC stands for forward error correction.
• in the packet PWM, the pulse width is modulated, and a pulse is represented by one of two brightness states.
• the two brightness states are a bright state (Bright or High) and a dark state (Dark or Low); typically, these correspond to the light being turned on and off.
  • a chunk of a physical layer signal called a packet corresponds to a MAC (medium access control) frame.
• the transmitter can repeatedly transmit PHY packets, and can transmit a set of a plurality of PHY packets in no particular order.
  • the packet PWM is used to generate a visible light signal transmitted from a normal transmitter.
• in the packet PPM, a pulse is one of the two brightness states, that is, a bright pulse (High) or a dark pulse (Low), and the position of this pulse is modulated.
  • the position of this pulse is indicated by the interval between the pulse and the next pulse.
  • Packet PPM realizes deep dimming.
  • the format, waveform, and characteristics of the packet PPM not described in each embodiment and its modification are the same as those of the packet PWM.
  • the packet PPM is used to generate a visible light signal transmitted from a transmitter having a light source that emits very bright light.
  • dimming in the physical layer of the visible light signal is controlled by the average luminance of the optional field.
  • FIG. 122A is a flowchart showing another visible light signal generation method according to Embodiment 5.
  • This visible light signal generation method is a method for generating a visible light signal transmitted by a change in luminance of a light source provided in a transmitter, and includes steps SE1 to SE3.
• in step SE1, a preamble is generated, which is data in which first and second luminance values, which are different from each other, appear alternately along the time axis.
• in step SE2, a first payload is generated by determining, according to a method corresponding to the signal to be transmitted, each interval from when the first luminance value appears until the next first luminance value appears, in data in which the first and second luminance values appear alternately along the time axis.
• in step SE3, a visible light signal is generated by combining the preamble and the first payload.
  • FIG. 122B is a block diagram illustrating a configuration of another signal generation device according to Embodiment 5.
• the signal generation device E10 is a signal generation device that generates a visible light signal transmitted by the luminance change of a light source provided in a transmitter, and includes a preamble generation unit E11, a payload generation unit E12, and a combining unit E13.
  • the signal generation device E10 executes the processing of the flowchart shown in FIG. 122A.
  • the preamble generation unit E11 generates a preamble that is data in which the first and second luminance values, which are different luminance values, appear alternately on the time axis.
• the payload generation unit E12 generates a first payload by determining each interval from when the first luminance value appears until the next first luminance value appears, according to a method corresponding to the signal to be transmitted.
  • the combining unit E13 generates a visible light signal by combining the preamble and the first payload.
  • the first and second luminance values are Bright (High) and Dark (Low), and the first payload is a PHY payload.
• the time length of the first luminance value in each of the preamble and the first payload is 10 μsec or less.
• when the preamble is a header for the first payload, the time length of the header includes three intervals, each from when the first luminance value appears until the next first luminance value appears, and each of the three intervals is 160 μsec. That is, this defines the pattern of intervals between the pulses included in the header (SHR) in mode 1 of the packet PPM.
  • Each of the pulses is a pulse having a first luminance value, for example.
• alternatively, when the preamble is a header for the first payload, the time length of the header includes three intervals, each from when the first luminance value appears until the next first luminance value appears, where the first interval is 160 μsec, the second interval is 180 μsec, and the third interval is 160 μsec. That is, this defines the pattern of intervals between the pulses included in the header (SHR) in mode 2 of the packet PPM.
• alternatively, when the preamble is a header for the first payload, the time length of the header includes three intervals, each from when the first luminance value appears until the next first luminance value appears, where the first interval is 80 μsec, the second interval is 90 μsec, and the third interval is 80 μsec. That is, this defines the pattern of intervals between the pulses included in the header (SHR) in mode 3 of the packet PPM.
  • the receiver can appropriately receive the first payload in the visible light signal.
• when the signal to be transmitted consists of 6 bits, from the first bit x_0 to the sixth bit x_5, the time length of the first payload includes two intervals, each from when the first luminance value appears until the next first luminance value appears. Each interval is determined as P_k = 180 + 30 × y_k [μsec], where y_k = x_(3k) + x_(3k+1) × 2 + x_(3k+2) × 4 (k is 0 or 1); this is the above-described method. That is, in mode 1 of the packet PPM, the signal to be transmitted is modulated as the intervals between the pulses included in the first payload (PHY payload).
• when the signal to be transmitted consists of 12 bits, from the first bit x_0 to the twelfth bit x_11, the time length of the first payload includes four intervals, each from when the first luminance value appears until the next first luminance value appears. Each interval is determined as P_k = 180 + 30 × y_k [μsec], where y_k = x_(3k) + x_(3k+1) × 2 + x_(3k+2) × 4 (k is 0, 1, 2, or 3); this is the above-described method. That is, in mode 2 of the packet PPM, the signal to be transmitted is modulated as the intervals between the pulses included in the first payload (PHY payload).
• when the signal to be transmitted consists of 3n bits, from the first bit x_0 to the 3n-th bit x_(3n-1) (n is an integer of 2 or more), the time length of the first payload includes n intervals, each from when the first luminance value appears until the next first luminance value appears, and the first payload is generated accordingly.
• since the signal to be transmitted is modulated as the intervals between the pulses, the receiver can appropriately demodulate the visible light signal into the intended signal based on those intervals (see the sketch below).
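• The header (SHR) interval patterns and the payload interval modulation described above can be sketched as follows; extending the P_k formula to mode 3 is an assumption here, since only the bit count is specified for that mode in the text.

```python
# Header (SHR) interval patterns in microseconds, from the three modes above.
SHR_INTERVALS = {
    1: (160, 160, 160),
    2: (160, 180, 160),
    3: (80, 90, 80),
}

def payload_intervals(bits):
    """Modulate the transmission-target bits as the pulse intervals of the
    first payload (PHY payload): y_k = x_(3k) + x_(3k+1)*2 + x_(3k+2)*4,
    P_k = 180 + 30*y_k [usec]."""
    if len(bits) % 3 != 0:
        raise ValueError("bit count must be a multiple of 3")
    return [180 + 30 * (bits[3*k] + 2*bits[3*k+1] + 4*bits[3*k+2])
            for k in range(len(bits) // 3)]

def ppm_packet_intervals(mode, bits):
    """Header intervals followed by payload intervals. Modes 1 and 2 carry
    6 and 12 bits respectively; mode 3 carries 3n bits (using the same P_k
    formula for mode 3 is an assumption)."""
    expected = {1: 6, 2: 12}.get(mode)
    if expected is not None and len(bits) != expected:
        raise ValueError(f"mode {mode} expects {expected} bits")
    return list(SHR_INTERVALS[mode]) + payload_intervals(bits)

# Example (mode 1): bits x0..x5 = (1, 0, 1, 0, 1, 1) give y0 = 5 and y1 = 6,
# so the payload intervals are 330 usec and 360 usec.
```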
  • a footer for the first payload may be further generated.
• in the generation of the visible light signal, the footer may be combined next to the first payload. That is, in mode 3 of the packet PWM and the packet PPM, the footer (SFT) is transmitted following the first payload (PHY payload). Since the end of the first payload can then be clearly identified by the footer, visible light communication can be performed efficiently.
• alternatively, instead of the footer, a header for the signal following the signal to be transmitted may be combined. That is, in mode 3 of the packet PWM and the packet PPM, instead of the footer (SFT), the header (SHR) for the next first payload is transmitted following the first payload (PHY payload).
  • each component may be configured by dedicated hardware or may be realized by executing a software program suitable for each component.
  • Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
  • the program causes the computer to execute the visible light signal generation method shown by the flowchart of FIG. 122A.
  • the visible light signal generation method has been described based on the above-described embodiments and modifications.
• however, the present invention is not limited to these embodiments. Forms obtained by applying various modifications conceived by those skilled in the art to the present embodiment, and forms constructed by combining components in different embodiments and modifications, may also be included within the scope of the present invention, as long as they do not deviate from the gist of the present invention.
  • FIG. 123 is a diagram showing a format of a MAC frame in MPM.
• the MAC (medium access control) frame in MPM (Mirror Pulse Modulation) consists of an MHR (MAC header) and an MSDU (MAC service data unit).
  • the MHR field includes a sequence number subfield.
  • the MSDU includes a frame payload and has a variable length.
• the bit length of the MPDU (MAC protocol data unit), which includes the MHR and the MSDU, is set as macMpmMpduLength.
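• A minimal sketch of this frame layout follows; the field names are illustrative assumptions, not names taken from this disclosure.

```python
from dataclasses import dataclass

@dataclass
class MpmMacFrame:
    """MPM MAC frame: an MHR containing the sequence number subfield,
    followed by the variable-length MSDU (frame payload)."""
    sequence_number_subfield: list  # bits; the first bit is the final frame flag
    frame_payload: bytes            # MSDU

    @property
    def mpdu_length_bits(self) -> int:
        # The bit length of the MPDU (MHR + MSDU) is set as macMpmMpduLength.
        return len(self.sequence_number_subfield) + 8 * len(self.frame_payload)
```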
  • the MPM is a modulation method in the fifth embodiment, for example, a method for modulating information or signals to be transmitted as shown in FIG.
  • FIG. 124 is a flowchart showing the processing operation of the encoding device for generating a MAC frame in MPM. Specifically, FIG. 124 is a diagram illustrating how to determine the bit length of the sequence number subfield. Note that the encoding device is provided in, for example, the above-described transmitter or transmission device that transmits a visible light signal.
  • the sequence number subfield includes a frame sequence number (also called a sequence number).
  • the bit length of the sequence number subfield is set as macMpmSnLength.
  • the first bit in the sequence number subfield is used as the last frame flag. That is, in this case, the sequence number subfield includes a final frame flag and a bit string indicating the sequence number.
  • the final frame flag is set to 1 for the final frame, and is set to 0 for the other frames. That is, the final frame flag indicates whether or not the processing target frame is the final frame. This final frame flag corresponds to the above-described stop bit.
  • the sequence number corresponds to the above address.
• the encoding device determines whether or not SN is set to a variable length (step S101a).
• here, SN denotes the bit length of the sequence number subfield. That is, the encoding apparatus determines whether macMpmSnLength indicates 0xf: when macMpmSnLength indicates 0xf, SN has a variable length, and when macMpmSnLength indicates any other value, SN has a fixed length.
• if SN is set to a fixed length (N in step S101a), the encoding apparatus determines SN to be the value indicated by macMpmSnLength (step S102a). At this time, the encoding apparatus does not use the final frame flag (that is, the LFF).
• if SN is set to a variable length (Y in step S101a), the encoding apparatus determines whether the processing target frame is the final frame (step S103a).
• in step S104a, the encoding apparatus determines SN to be 5 bits. At this time, the encoding apparatus sets the final frame flag, which is the first bit in the sequence number subfield, to 1.
• otherwise, the encoding apparatus determines which of the values 1-15 the sequence number of the final frame takes (step S105a).
• the sequence number is an integer assigned to each frame in ascending order from 0. In the case of N in step S103a, the number of frames is 2 or more; therefore, the sequence number of the final frame can take any of the values 1-15, but not 0.
• if the encoding apparatus determines in step S105a that the sequence number of the final frame is 1, it determines SN to be 1 bit (step S106a). At this time, the encoding apparatus sets the value of the final frame flag, which is the first bit in the sequence number subfield, to 0.
• in this case, the sequence number subfield of the final frame is represented as (1, 1), consisting of the final frame flag (1) and the sequence number value (1).
• the encoding apparatus thus determines the bit length of the sequence number subfield of the processing target frame to be 1 bit. That is, it determines a sequence number subfield consisting only of the final frame flag (0).
• if the encoding apparatus determines in step S105a that the sequence number of the final frame is 2, it determines SN to be 2 bits (step S107a). At this time as well, the encoding apparatus sets the value of the final frame flag to 0.
• in this case, the sequence number subfield of the final frame is represented as (1, 0, 1), consisting of the final frame flag (1) and the sequence number value (2).
• the sequence number is indicated by a bit string in which the leftmost bit is the LSB (least significant bit) and the rightmost bit is the MSB (most significant bit); therefore, the sequence number value (2) is expressed as the bit string (0, 1).
• the encoding apparatus thus determines the bit length of the sequence number subfield of the processing target frame to be 2 bits. That is, it determines a sequence number subfield consisting of the final frame flag (0) and one bit, (0) or (1), indicating the sequence number.
• if the encoding apparatus determines in step S105a that the sequence number of the final frame is 3 or 4, it determines SN to be 3 bits (step S108a). At this time as well, the encoding apparatus sets the value of the final frame flag to 0.
• if the encoding apparatus determines in step S105a that the sequence number of the final frame is any integer from 5 to 8, it determines SN to be 4 bits (step S109a). At this time as well, the encoding apparatus sets the value of the final frame flag to 0.
• if the encoding apparatus determines in step S105a that the sequence number of the final frame is any integer from 9 to 15, it determines SN to be 5 bits (step S110a). At this time as well, the encoding apparatus sets the value of the final frame flag to 0.
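• The variable-length determination of SN in steps S105a to S110a reduces to a simple table on the final frame's sequence number, as in the following sketch.

```python
def sn_bits_for_non_final_frame(final_seq_number: int) -> int:
    """Bit length of the sequence number subfield (SN) of a non-final frame,
    given the final frame's sequence number (1-15), per steps S105a-S110a."""
    if final_seq_number == 1:
        return 1                      # step S106a: only the final frame flag (0)
    if final_seq_number == 2:
        return 2                      # step S107a
    if final_seq_number in (3, 4):
        return 3                      # step S108a
    if 5 <= final_seq_number <= 8:
        return 4                      # step S109a
    if 9 <= final_seq_number <= 15:
        return 5                      # step S110a
    raise ValueError("final frame sequence number must be 1-15")

# Equivalently: 1 flag bit plus ceil(log2(final_seq_number)) bits, which is
# enough to encode the sequence numbers 0..final_seq_number-1 of the
# non-final frames.
```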
  • FIG. 125 is a flowchart showing the processing operation of the decoding device for decoding the MAC frame in MPM. Specifically, FIG. 125 is a diagram showing how to determine the bit length of the sequence number subfield. Note that the decoding device is provided, for example, in the above-described receiver or receiving device that receives a visible light signal.
• the decoding apparatus determines whether or not SN is set to a variable length (step S201a). That is, the decoding apparatus determines whether macMpmSnLength indicates 0xf. If the decoding apparatus determines that SN is not set to a variable length, that is, that SN is set to a fixed length (N in step S201a), it determines SN to be the value indicated by macMpmSnLength (step S202a). At this time, the decoding apparatus does not use the final frame flag (that is, the LFF).

Abstract

Provided is a communication method that enables communication between various apparatuses. This communication method includes: determining whether or not a terminal can perform visible light communication (step SG11); when it is determined that the terminal can perform visible light communication (Yes in step SG11), acquiring an image for decoding by imaging a luminance-varying subject using an image sensor, and acquiring first identification information transmitted by the subject from a stripe pattern appearing in the image for decoding (step SG12); and, when it is determined in the visible light communication determination that the terminal cannot perform visible light communication (No in step SG11), acquiring a captured image by imaging the subject using the image sensor, specifying a prescribed specific region by detecting edges of the captured image, and acquiring second identification information transmitted by the subject from a line pattern in the specific region (step SG13).

Description

COMMUNICATION METHOD, COMMUNICATION DEVICE, TRANSMITTER, AND PROGRAM

The present invention relates to a communication method, a communication device, a transmitter, a program, and the like.

In recent home networks, in addition to the cooperation of AV home appliances through IP (Internet Protocol) connections over Ethernet (registered trademark) or wireless LAN (Local Area Network), home appliance linkage functions are increasingly being introduced, in which a home energy management system (HEMS) connects various home appliances to a network, manages power consumption in response to environmental concerns, and allows the power to be turned on and off from outside the home. However, some home appliances have insufficient computing power for a communication function, and for others the addition of a communication function is difficult in terms of cost.

To solve such problems, Patent Literature 1 describes a technology for efficiently realizing communication between devices with a limited set of transmission apparatuses, in an optical space transmission device that transmits information to free space using light, by performing communication using a plurality of monochromatic light sources of illumination light.

JP 2002-290335 A

However, this conventional method is limited to cases where the applied device has a three-color light source, such as a luminaire. Moreover, a receiver that receives the transmitted information cannot display an image useful to the user.

The present invention solves such problems and provides a communication method and the like that enable communication between various devices.

A communication method according to one aspect of the present invention is a communication method using a terminal including an image sensor, and includes: determining whether or not the terminal can perform visible light communication; when it is determined that the terminal can perform visible light communication, acquiring an image for decoding by using the image sensor to capture a subject whose luminance changes, and acquiring first identification information transmitted by the subject from a striped pattern appearing in the image for decoding; and, when it is determined in the visible light communication determination that the terminal cannot perform visible light communication, acquiring a captured image by using the image sensor to capture the subject, extracting at least one contour by performing edge detection on the captured image, specifying a predetermined specific region from among the at least one contour, and acquiring second identification information transmitted by the subject from a line pattern in the specific region.

Note that these comprehensive or specific aspects may be realized as a system, a method, an integrated circuit, a computer program, or a recording medium such as a computer-readable CD-ROM, or as any combination of a system, a method, an integrated circuit, a computer program, and a recording medium. A computer program that executes the method according to one embodiment may also be stored on a recording medium of a server and delivered from the server to a terminal in response to a request from the terminal.

According to the present invention, communication between various devices can be realized.
FIG. 1 is a diagram illustrating an example of a method of observing the luminance of a light emitting unit in Embodiment 1.
FIG. 2 is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in Embodiment 1.
FIG. 3 is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in Embodiment 1.
FIG. 4 is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in Embodiment 1.
FIG. 5A is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in Embodiment 1.
FIG. 5B is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in Embodiment 1.
FIG. 5C is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in Embodiment 1.
FIG. 5D is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in Embodiment 1.
FIG. 5E is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in Embodiment 1.
FIG. 5F is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in Embodiment 1.
FIG. 5G is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in Embodiment 1.
FIG. 5H is a diagram illustrating an example of a method of observing the luminance of the light emitting unit in Embodiment 1.
FIG. 6A is a flowchart of an information communication method in Embodiment 1.
FIG. 6B is a block diagram of an information communication device in Embodiment 1.
FIG. 7 is a diagram illustrating an example of a photographing operation of a receiver in Embodiment 2.
FIG. 8 is a diagram illustrating another example of the photographing operation of the receiver in Embodiment 2.
FIG. 9 is a diagram illustrating another example of the photographing operation of the receiver in Embodiment 2.
FIG. 10 is a diagram illustrating an example of a display operation of the receiver in Embodiment 2.
FIG. 11 is a diagram illustrating an example of a display operation of the receiver in Embodiment 2.
FIG. 12 is a diagram illustrating an example of an operation of the receiver in Embodiment 2.
FIG. 13 is a diagram illustrating another example of an operation of the receiver in Embodiment 2.
FIG. 14 is a diagram illustrating another example of an operation of the receiver in Embodiment 2.
FIG. 15 is a diagram illustrating another example of an operation of the receiver in Embodiment 2.
FIG. 16 is a diagram illustrating another example of an operation of the receiver in Embodiment 2.
FIG. 17 is a diagram illustrating another example of an operation of the receiver in Embodiment 2.
FIG. 18A is a diagram illustrating an example of operations of a transmitter and a receiver in Embodiment 2.
FIG. 18B is a diagram illustrating an example of operations of a transmitter and a receiver in Embodiment 2.
FIG. 18C is a diagram illustrating an example of operations of a transmitter and a receiver in Embodiment 2.
FIG. 19 is a diagram for explaining an example of application to route guidance in Embodiment 2.
FIG. 20 is a diagram for explaining an example of application to usage log accumulation and analysis in Embodiment 2.
FIG. 21 is a diagram illustrating an example of application of a transmitter and a receiver in Embodiment 2.
FIG. 22 is a diagram illustrating an example of application of a transmitter and a receiver in Embodiment 2.
FIG. 23 is a diagram illustrating an example of an application in Embodiment 3.
FIG. 24 is a diagram illustrating an example of an application in Embodiment 3.
FIG. 25 is a diagram illustrating an example of a transmission signal and an example of an audio synchronization method in Embodiment 3.
FIG. 26 is a diagram illustrating an example of a transmission signal in Embodiment 3.
FIG. 27 is a diagram illustrating an example of a processing flow of the receiver in Embodiment 3.
FIG. 28 is a diagram illustrating an example of a user interface of the receiver in Embodiment 3.
FIG. 29 is a diagram illustrating an example of a processing flow of the receiver in Embodiment 3.
FIG. 30 is a diagram illustrating another example of a processing flow of the receiver in Embodiment 3.
FIG. 31A is a diagram for explaining a specific method of synchronized playback in Embodiment 3.
FIG. 31B is a block diagram illustrating a configuration of a playback device (receiver) that performs synchronized playback in Embodiment 3.
FIG. 31C is a flowchart illustrating a processing operation of the playback device (receiver) that performs synchronized playback in Embodiment 3.
FIG. 32 is a diagram for explaining advance preparation for synchronized playback in Embodiment 3.
FIG. 33 is a diagram illustrating an example of application of the receiver in Embodiment 3.
FIG. 34A is a front view of the receiver held by a holder in Embodiment 3.
FIG. 34B is a rear view of the receiver held by the holder in Embodiment 3.
FIG. 35 is a diagram for describing a use case of the receiver held by the holder in Embodiment 3.
FIG. 36 is a flowchart showing a processing operation of the receiver held by the holder in Embodiment 3.
FIG. 37 is a diagram illustrating an example of an image displayed by the receiver in Embodiment 3.
FIG. 38 is a diagram showing another example of the holder in Embodiment 3.
FIG. 39A is a diagram illustrating an example of a visible light signal in Embodiment 3.
FIG. 39B is a diagram illustrating an example of a visible light signal in Embodiment 3.
FIG. 39C is a diagram illustrating an example of a visible light signal in Embodiment 3.
FIG. 39D is a diagram illustrating an example of a visible light signal in Embodiment 3.
FIG. 40 is a diagram illustrating a configuration of a visible light signal in Embodiment 3.
FIG. 41 is a diagram illustrating an example in which the receiver in Embodiment 4 displays an AR image.
FIG. 42 is a diagram illustrating an example of a display system in Embodiment 4.
FIG. 43 is a diagram illustrating another example of the display system in Embodiment 4.
FIG. 44 is a diagram illustrating another example of the display system in Embodiment 4.
FIG. 45 is a flowchart illustrating an example of a processing operation of the receiver in Embodiment 4.
FIG. 46 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
FIG. 47 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
FIG. 48 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
FIG. 49 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
FIG. 50 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
FIG. 51 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
FIG. 52 is a flowchart illustrating another example of a processing operation of the receiver in Embodiment 4.
FIG. 53 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
FIG. 54 is a diagram illustrating a captured display image Ppre and a decoding image Pdec acquired by imaging by the receiver in Embodiment 4.
FIG. 55 is a diagram illustrating an example of the captured display image Ppre displayed on the receiver in Embodiment 4.
FIG. 56 is a flowchart illustrating another example of a processing operation of the receiver in Embodiment 4.
FIG. 57 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
FIG. 58 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
FIG. 59 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
FIG. 60 is a diagram illustrating another example in which the receiver in Embodiment 4 displays an AR image.
FIG. 61 is a diagram showing an example of recognition information in Embodiment 4.
FIG. 62 is a flowchart illustrating another example of a processing operation of the receiver in Embodiment 4.
FIG. 63 is a diagram illustrating an example in which the receiver in Embodiment 4 identifies a bright line pattern region.
FIG. 64 is a diagram illustrating another example of the receiver in Embodiment 4.
FIG. 65 is a flowchart illustrating another example of a processing operation of the receiver in Embodiment 4.
FIG. 66 is a diagram illustrating an example of a transmission system including a plurality of transmitters in Embodiment 4.
FIG. 67 is a diagram illustrating an example of a transmission system including a plurality of transmitters and receivers in Embodiment 4.
FIG. 68A is a flowchart illustrating an example of a processing operation of the receiver in Embodiment 4.
FIG. 68B is a flowchart illustrating an example of a processing operation of the receiver in Embodiment 4.
FIG. 69A is a flowchart illustrating a display method according to Embodiment 4.
FIG. 69B is a block diagram illustrating a configuration of a display device according to Embodiment 4.
FIG. 70 is a diagram illustrating an example in which the receiver in Modification 1 of Embodiment 4 displays an AR image.
FIG. 71 is a diagram illustrating another example in which the receiver in Modification 1 of Embodiment 4 displays an AR image.
FIG. 72 is a diagram illustrating another example in which the receiver in Modification 1 of Embodiment 4 displays an AR image.
FIG. 73 is a diagram illustrating another example in which the receiver in Modification 1 of Embodiment 4 displays an AR image.
FIG. 74 is a diagram illustrating another example of the receiver in Modification 1 of Embodiment 4.
FIG. 75 is a diagram illustrating another example in which the receiver in Modification 1 of Embodiment 4 displays an AR image.
FIG. 76 is a diagram illustrating another example in which the receiver in Modification 1 of Embodiment 4 displays an AR image.
FIG. 77 is a flowchart illustrating an example of a processing operation of the receiver in Modification 1 of Embodiment 4.
FIG. 78 is a diagram illustrating an example of a problem assumed to arise when the receiver in Embodiment 4 or Modification 1 thereof displays an AR image.
FIG. 79 is a diagram illustrating an example in which the receiver in Modification 2 of Embodiment 4 displays an AR image.
FIG. 80 is a flowchart illustrating an example of a processing operation of the receiver in Modification 2 of Embodiment 4.
FIG. 81 is a diagram illustrating another example in which the receiver in Modification 2 of Embodiment 4 displays an AR image.
FIG. 82 is a flowchart illustrating another example of a processing operation of the receiver in Modification 2 of Embodiment 4.
FIG. 83 is a diagram illustrating another example in which the receiver in Modification 2 of Embodiment 4 displays an AR image.
FIG. 84 is a diagram illustrating another example in which the receiver in Modification 2 of Embodiment 4 displays an AR image.
FIG. 85 is a diagram illustrating another example in which the receiver in Modification 2 of Embodiment 4 displays an AR image.
FIG. 86 is a diagram illustrating another example in which the receiver in Modification 2 of Embodiment 4 displays an AR image.
FIG. 87A is a flowchart illustrating a display method according to one aspect of the present invention.
FIG. 87B is a block diagram illustrating a configuration of a display device according to one aspect of the present invention.
FIG. 88 is a diagram illustrating an example of enlargement and movement of an AR image in Modification 3 of Embodiment 4.
FIG. 89 is a diagram illustrating an example of enlargement of an AR image in Modification 3 of Embodiment 4.
FIG. 90 is a flowchart illustrating an example of a processing operation related to enlargement and movement of an AR image by the receiver in Modification 3 of Embodiment 4.
FIG. 91 is a diagram illustrating an example of superimposition of an AR image in Modification 3 of Embodiment 4.
FIG. 92 is a diagram illustrating an example of superimposition of an AR image in Modification 3 of Embodiment 4.
FIG. 93 is a diagram illustrating an example of superimposition of an AR image in Modification 3 of Embodiment 4.
FIG. 94 is a diagram illustrating an example of superimposition of an AR image in Modification 3 of Embodiment 4.
FIG. 95A is a diagram illustrating an example of a captured display image obtained by imaging by the receiver in Modification 3 of Embodiment 4.
FIG. 95B is a diagram illustrating an example of a menu screen displayed on the display of the receiver in Modification 3 of Embodiment 4.
FIG. 96 is a flowchart illustrating an example of processing operations of the receiver and the server in Modification 3 of Embodiment 4.
FIG. 97 is a diagram for explaining the volume of sound reproduced by the receiver in Modification 3 of Embodiment 4.
FIG. 98 is a diagram illustrating a relationship between the distance from the receiver to the transmitter and the sound volume in Modification 3 of Embodiment 4.
FIG. 99 is a diagram illustrating an example of superimposition of an AR image by the receiver in Modification 3 of Embodiment 4.
FIG. 100 is a diagram illustrating an example of superimposition of an AR image by the receiver in Modification 3 of Embodiment 4.
FIG. 101 is a diagram for describing an example of how the receiver in Modification 3 of Embodiment 4 obtains a line scan time.
FIG. 102 is a diagram for describing an example of how the receiver in Modification 3 of Embodiment 4 obtains the line scan time.
FIG. 103 is a flowchart illustrating an example of how the receiver in Modification 3 of Embodiment 4 obtains the line scan time.
FIG. 104 is a diagram illustrating an example of superimposition of an AR image by the receiver in Modification 3 of Embodiment 4.
FIG. 105 is a diagram illustrating an example of superimposition of an AR image by the receiver in Modification 3 of Embodiment 4.
FIG. 106 is a diagram illustrating an example of superimposition of an AR image by the receiver in Modification 3 of Embodiment 4.
FIG. 107 is a diagram illustrating an example of a decoding image acquired in accordance with the attitude of the receiver in Modification 3 of Embodiment 4.
FIG. 108 is a diagram illustrating another example of the decoding image acquired in accordance with the attitude of the receiver in Modification 3 of Embodiment 4.
FIG. 109 is a flowchart illustrating an example of a processing operation of the receiver in Modification 3 of Embodiment 4.
FIG. 110 is a diagram illustrating an example of camera lens switching processing by the receiver in Modification 3 of Embodiment 4.
FIG. 111 is a diagram illustrating an example of camera switching processing by the receiver in Modification 3 of Embodiment 4.
FIG. 112 is a flowchart illustrating an example of processing operations of the receiver and the server in Modification 3 of Embodiment 4.
FIG. 113 is a diagram illustrating an example of superimposition of an AR image by the receiver in Modification 3 of Embodiment 4.
FIG. 114 is a sequence diagram illustrating processing operations of a system including the receiver, a microwave oven, a relay server, and an electronic settlement server in Modification 3 of Embodiment 4.
FIG. 115 is a sequence diagram illustrating processing operations of a system including a POS terminal, a server, the receiver, and a microwave oven in Modification 3 of Embodiment 4.
FIG. 116 is a diagram illustrating an example of indoor use in Modification 3 of Embodiment 4.
FIG. 117 is a diagram illustrating an example of display of an augmented reality object in Modification 3 of Embodiment 4.
FIG. 118 is a diagram illustrating a configuration of a display system in Modification 4 of Embodiment 4.
FIG. 119 is a flowchart illustrating a processing operation of the display system in Modification 4 of Embodiment 4.
FIG. 120 is a flowchart illustrating a recognition method according to one aspect of the present invention.
FIG. 121 is a diagram illustrating an example of an operation mode of a visible light signal according to Embodiment 5.
FIG. 122A is a flowchart illustrating a visible light signal generation method according to Embodiment 5.
FIG. 122B is a block diagram illustrating a configuration of a signal generation device according to Embodiment 5.
FIG. 123 is a diagram illustrating the format of a MAC frame in MPM in Embodiment 6.
FIG. 124 is a flowchart illustrating a processing operation of an encoding device that generates a MAC frame in MPM in Embodiment 6.
FIG. 125 is a flowchart illustrating a processing operation of a decoding device that decodes a MAC frame in MPM in Embodiment 6.
FIG. 126 is a diagram showing attributes of the MAC PIB in Embodiment 6.
FIG. 127 is a diagram for explaining a dimming method in MPM in Embodiment 6.
FIG. 128 is a diagram showing attributes of the PHY PIB in Embodiment 6.
FIG. 129 is a diagram for explaining MPM in Embodiment 6.
FIG. 130 is a diagram illustrating the PLCP header subfield in Embodiment 6.
FIG. 131 is a diagram illustrating the PLCP center subfield in Embodiment 6.
FIG. 132 is a diagram illustrating the PLCP footer subfield in Embodiment 6.
FIG. 133 is a diagram illustrating a waveform of the PWM mode of the PHY in MPM in Embodiment 6.
FIG. 134 is a diagram illustrating a waveform of the PPM mode of the PHY in MPM in Embodiment 6.
FIG. 135 is a flowchart illustrating an example of a decoding method in Embodiment 6.
FIG. 136 is a flowchart illustrating an example of an encoding method in Embodiment 6.
FIG. 137 is a diagram illustrating an example in which the receiver in Embodiment 7 displays an AR image.
FIG. 138 is a diagram illustrating an example of a captured display image on which an AR image is superimposed in Embodiment 7.
FIG. 139 is a diagram illustrating another example in which the receiver in Embodiment 7 displays an AR image.
FIG. 140 is a flowchart illustrating an operation of the receiver in Embodiment 7.
FIG. 141 is a diagram for explaining an operation of the transmitter in Embodiment 7.
FIG. 142 is a diagram for explaining another operation of the transmitter in Embodiment 7.
FIG. 143 is a diagram for explaining another operation of the transmitter in Embodiment 7.
FIG. 144 is a diagram illustrating a comparative example for describing the ease of receiving an optical ID in Embodiment 7.
FIG. 145A is a flowchart illustrating an operation of the transmitter in Embodiment 7.
FIG. 145B is a block diagram illustrating a configuration of the transmitter in Embodiment 7.
FIG. 146 is a diagram illustrating another example in which the receiver in Embodiment 7 displays an AR image.
FIG. 147 is a diagram for explaining an operation of the transmitter in Embodiment 8.
FIG. 148A is a flowchart illustrating a transmission method according to Embodiment 8.
FIG. 148B is a block diagram illustrating a configuration of the transmitter in Embodiment 8.
FIG. 149 is a diagram illustrating an example of a detailed configuration of a visible light signal in Embodiment 8.
FIG. 150 is a diagram illustrating another example of a detailed configuration of a visible light signal in Embodiment 8.
FIG. 151 is a diagram illustrating another example of a detailed configuration of a visible light signal in Embodiment 8.
FIG. 152 is a diagram illustrating another example of a detailed configuration of a visible light signal in Embodiment 8.
FIG. 153 is a diagram illustrating a relationship between the sum of variables y0 to y3, the total time length, and the effective time length in Embodiment 8.
FIG. 154A is a flowchart illustrating a transmission method according to Embodiment 8.
FIG. 154B is a block diagram illustrating a configuration of the transmitter in Embodiment 8.
FIG. 155 is a diagram illustrating a configuration of a display system in Embodiment 9.
FIG. 156 is a sequence diagram illustrating processing operations of the receiver and the server in Embodiment 9.
FIG. 157 is a flowchart illustrating a processing operation of the server in Embodiment 9.
FIG. 158 is a diagram illustrating an example of communication in a case where the transmitter and the receiver in Embodiment 9 are each mounted on a vehicle.
FIG. 159 is a flowchart illustrating a processing operation of the vehicle in Embodiment 9.
FIG. 160 is a diagram illustrating an example in which the receiver in Embodiment 9 displays an AR image.
FIG. 161 is a diagram illustrating another example in which the receiver in Embodiment 9 displays an AR image.
FIG. 162 is a diagram illustrating a processing operation of the receiver in Embodiment 9.
FIG. 163 is a diagram illustrating an example of an operation on the receiver in Embodiment 9.
FIG. 164 is a diagram illustrating an example of an AR image displayed on the receiver in Embodiment 9.
FIG. 165 is a diagram illustrating an example of an AR image superimposed on a captured display image in Embodiment 9.
FIG. 166 is a diagram illustrating an example of an AR image superimposed on a captured display image in Embodiment 9.
FIG. 167 is a diagram illustrating an example of a transmitter in Embodiment 9.
FIG. 168 is a diagram illustrating another example of a transmitter in Embodiment 9.
FIG. 169 is a diagram illustrating another example of a transmitter in Embodiment 9.
FIG. 170 is a diagram illustrating an example of a system using a receiver compatible with optical communication and a receiver not compatible with optical communication in Embodiment 9.
FIG. 171 is a flowchart illustrating a processing operation of the receiver in Embodiment 9.
FIG. 172 is a diagram illustrating an example of display of an AR image in Embodiment 9.
FIG. 173A is a flowchart illustrating a display method according to one aspect of the present invention.
FIG. 173B is a block diagram illustrating a configuration of a display device according to one aspect of the present invention.
FIG. 174 is a diagram illustrating an example of an image drawn on a transmitter in Embodiment 10.
図175は、実施の形態10における送信機に描かれる画像の他の例を示す図である。175 is a diagram illustrating another example of an image drawn on a transmitter in Embodiment 10. FIG. 図176は、実施の形態10における送信機および受信機の例を示す図である。176 is a diagram illustrating an example of a transmitter and a receiver in Embodiment 10. FIG. 図177は、実施の形態10におけるラインパターンの基本周波数を説明するための図である。FIG. 177 is a diagram for describing the fundamental frequency of a line pattern in the tenth embodiment. 図178Aは、実施の形態10における符号化装置の処理動作を示すフローチャートである。FIG. 178A is a flowchart showing a processing operation of the encoding apparatus according to the tenth embodiment. 図178Bは、実施の形態10における符号化装置の処理動作を説明するための図である。FIG. 178B is a diagram for describing a processing operation of the encoding device according to the tenth embodiment. 図179は、実施の形態10における復号装置である受信機の処理動作を示すフローチャートである。FIG. 179 is a flowchart illustrating processing operations of a receiver which is a decoding device according to Embodiment 10. 図180は、実施の形態10における受信機の処理動作を示すフローチャートである。FIG. 180 is a flowchart illustrating processing operations of a receiver in Embodiment 10. 図181Aは、実施の形態10におけるシステムの構成の一例を示す図である。FIG. 181A is a diagram illustrating an example of a system configuration in Embodiment 10. 図181Bは、実施の形態10におけるカメラの処理を示す図である。FIG. 181B is a diagram illustrating processing of the camera according to Embodiment 10. 図182は、実施の形態10におけるシステムの構成の他の例を示す図である。FIG. 182 is a diagram illustrating another example of the configuration of the system according to the tenth embodiment. 図183は、実施の形態10における送信機に描かれる画像の他の例を示す図である。FIG. 183 is a diagram illustrating another example of an image drawn on the transmitter in Embodiment 10. 図184は、実施の形態10におけるフレームIDを構成するMACフレームのフォーマットの一例を示す図である。FIG. 184 is a diagram illustrating an example of a format of a MAC frame constituting the frame ID in the tenth embodiment. 図185は、実施の形態10におけるMACヘッダの構成の一例を示す図である。FIG. 185 is a diagram illustrating an example of a MAC header configuration in the tenth embodiment. 図186は、実施の形態10における、パケット分割数を導出するためのテーブルの一例を示す図である。FIG. 186 is a diagram illustrating an example of a table for deriving the number of packet divisions according to the tenth embodiment. 図187は、実施の形態10におけるPHY符号化を示す図である。FIG. 187 is a diagram illustrating PHY coding according to the tenth embodiment. 図188は、実施の形態10におけるPHYシンボルを有する送信画像Im3の一例を示す図である。FIG. 188 is a diagram illustrating an example of a transmission image Im3 having a PHY symbol in Embodiment 10. 図189は、実施の形態10における2つのPHYバージョンを説明するための図である。FIG. 189 is a diagram for explaining two PHY versions in the tenth embodiment. 図190は、実施の形態10におけるグレイコードを説明するための図である。FIG. 190 is a diagram for explaining the Gray code in the tenth embodiment. 図191は、実施の形態10における受信機による復号処理の一例を示す図である。FIG. 191 is a diagram illustrating an example of decoding processing by the receiver in Embodiment 10. 図192は、実施の形態10における受信機による送信画像の不正検知の方法を説明するための図である。FIG. 192 is a diagram for describing a transmission image fraud detection method by the receiver in the tenth embodiment. 図193は、実施の形態10における受信機による送信画像の不正検知を含む復号処理の一例を示すフローチャートである。FIG. 193 is a flowchart illustrating an example of a decoding process including fraud detection of a transmission image by a receiver in the tenth embodiment. 図194Aは、実施の形態10の変形例に係る表示方法を示すフローチャートである。FIG. 194A is a flowchart showing a display method according to a modification of the tenth embodiment. 図194Bは、実施の形態10の変形例に係る表示装置の構成を示すブロック図である。FIG. 194B is a block diagram illustrating a structure of the display device according to the modification of the tenth embodiment. 図194Cは、実施の形態10の変形例に係る通信方法を示すフローチャートである。FIG. 
194C is a flowchart illustrating a communication method according to a modification of the tenth embodiment. 図194Dは、実施の形態10の本変形例に係る通信装置の構成を示すブロック図である。FIG. 194D is a block diagram showing a configuration of a communication apparatus according to this variation of the tenth embodiment. 図194Eは、実施の形態10およびその変形例に係る送信機の構成を示すブロック図である。FIG. 194E is a block diagram illustrating a configuration of the transmitter according to Embodiment 10 and its modifications. 図195は、実施の形態11におけるサーバを含む通信システムの構成の一例を示す図である。FIG. 195 is a diagram illustrating an example of a configuration of a communication system including a server in the eleventh embodiment. 図196は、実施の形態11における第1のサーバによる管理方法を示すフローチャートである。FIG. 196 is a flowchart illustrating a management method by the first server in the eleventh embodiment. 図197は、実施の形態12における照明システムを示す図である。FIG. 197 is a diagram illustrating an illumination system in Embodiment 12. 図198は、実施の形態12における照明装置の配置および復号用画像の一例を示す図である。FIG. 198 is a diagram illustrating an example of arrangement of illumination devices and a decoding image in Embodiment 12. 図199は、実施の形態12における照明装置の配置および復号用画像の他の例を示す図である。FIG. 199 is a diagram illustrating another example of arrangement of illumination devices and a decoding image in Embodiment 12. 図200は、実施の形態12における照明装置を用いた位置推定を説明するための図である。FIG. 200 is a diagram for describing position estimation using the illumination device in Embodiment 12. 図201は、実施の形態12における受信機の処理動作を示すフローチャートである。FIG. 201 is a flowchart illustrating processing operation of a receiver in Embodiment 12. 図202は、実施の形態12における通信システムの一例を示す図である。FIG. 202 is a diagram illustrating an example of a communication system in Embodiment 12. 図203は、実施の形態12における受信機による自己位置推定の処理を説明するための図である。FIG. 203 is a diagram for describing self-position estimation processing by a receiver in Embodiment 12. 図204は、実施の形態12における受信機による自己位置推定の処理を示すフローチャートである。FIG. 204 is a flowchart illustrating self-position estimation processing by a receiver in Embodiment 12. 図205は、実施の形態12における受信機の自己位置推定の処理の概略を示すフローチャートである。FIG. 205 is a flowchart illustrating an outline of receiver self-position estimation processing according to the twelfth embodiment. 図206は、実施の形態12における電波のIDと光IDとの関係を示す図である。FIG. 206 is a diagram illustrating the relationship between radio wave IDs and optical IDs in the twelfth embodiment. 図207は、実施の形態12における受信機による撮像の一例を説明するための図である。207 is a diagram for describing an example of imaging by a receiver in Embodiment 12. FIG. 図208は、実施の形態12における受信機による撮像の他の例を説明するための図である。FIG. 208 is a diagram for describing another example of imaging by a receiver in Embodiment 12. 図209は、実施の形態12における受信機によって用いられるカメラを説明するための図である。FIG. 209 is a diagram for describing a camera used by a receiver in Embodiment 12. 図210は、実施の形態12における受信機が送信機の可視光信号を変更させる処理の一例を示すフローチャートである。FIG. 210 is a flowchart illustrating an example of processing in which the receiver in Embodiment 12 changes the visible light signal of the transmitter. 図211は、実施の形態12における受信機が送信機の可視光信号を変更させる処理の他の例を示すフローチャートである。FIG. 211 is a flowchart illustrating another example of processing in which the receiver in Embodiment 12 changes the visible light signal of the transmitter. 図212は、実施の形態13における受信機によるナビゲーションを説明するための図である。FIG. 212 is a diagram for describing navigation by a receiver in Embodiment 13. 図213は、実施の形態13における受信機による自己位置推定の一例を示すフローチャートである。FIG. 213 is a flowchart illustrating an example of self-position estimation by the receiver in Embodiment 13. 図214は、実施の形態13における受信機によって受信される可視光信号を説明するための図である。FIG. 214 is a diagram for describing a visible light signal received by the receiver in Embodiment 13. 
図215は、実施の形態13における受信機による自己位置推定の他の例を示すフローチャートである。FIG. 215 is a flowchart illustrating another example of self-position estimation by the receiver in the thirteenth embodiment. 図216は、実施の形態13における受信機による反射光の判定の例を示すフローチャートである。FIG. 216 is a flowchart illustrating an example of determination of reflected light by the receiver in Embodiment 13. 図217は、実施の形態13における受信機によるナビゲーションの一例を示すフローチャートである。FIG. 217 is a flowchart illustrating an example of navigation by a receiver in Embodiment 13. 図218は、実施の形態13におけるプロジェクタとして構成されている送信機100の例を示す図である。FIG. 218 is a diagram illustrating an example of the transmitter 100 configured as a projector in Embodiment 13. In FIG. 図219は、実施の形態13における受信機による自己位置推定の他の例を示すフローチャートである。FIG. 219 is a flowchart illustrating another example of self-position estimation by the receiver in Embodiment 13. 図220は、実施の形態13における送信機による処理の一例を示すフローチャートである。FIG. 220 is a flowchart illustrating an example of processing performed by a transmitter in the thirteenth embodiment. 図221は、実施の形態13における受信機によるナビゲーションの他の例を示すフローチャートである。FIG. 221 is a flowchart illustrating another example of navigation by a receiver in Embodiment 13. 図222は、実施の形態13における受信機による処理の一例を示すフローチャートである。FIG. 222 is a flowchart illustrating an example of processing by a receiver in Embodiment 13. 図223は、実施の形態13における受信機のディスプレイに表示される画面の一例を示す図である。223 is a diagram illustrating an example of a screen which is displayed on the display of the receiver in Embodiment 13. FIG. 図224は、実施の形態13における受信機によるキャラクターの表示例を示す図である。224 is a diagram illustrating an example of character display by the receiver in Embodiment 13. FIG. 図225は、実施の形態13における受信機のディスプレイに表示される画面の他の例を示す図である。225 is a diagram illustrating another example of a screen displayed on the display of the receiver in Embodiment 13. FIG. 図226は、実施の形態13における、待ち合わせ場所へのナビゲーションを行うためのシステム構成を示す図である。FIG. 226 is a diagram showing a system configuration for performing navigation to a meeting place in the thirteenth embodiment. 図227は、実施の形態13における受信機のディスプレイに表示される画面の他の例を示す図である。227 is a diagram illustrating another example of a screen displayed on the display of the receiver in Embodiment 13. FIG. 図228は、コンサートホールの内部を示す図である。FIG. 228 is a diagram showing the inside of the concert hall. 図229は、本発明の第1の態様における通信方法の一例を示すフローチャートである。FIG. 229 is a flowchart illustrating an example of a communication method according to the first aspect of the present invention.
A communication method according to one aspect of the present invention is a communication method using a terminal including an image sensor. The method determines whether the terminal is capable of visible light communication. When it is determined that the terminal is capable of visible light communication, the image sensor captures a subject whose luminance changes, thereby acquiring a decoding image, and first identification information transmitted by the subject is acquired from a striped pattern appearing in the decoding image. When it is determined in the visible light communication determination that the terminal is not capable of visible light communication, the image sensor captures the subject to acquire a captured image, edge detection is performed on the captured image to extract at least one contour, a predetermined specific region is identified from the at least one contour, and second identification information transmitted by the subject is acquired from a line pattern in the specific region.
With this, as in Embodiment 10 for example, a terminal such as a receiver can acquire the first identification information or the second identification information from a subject such as a transmitter regardless of whether visible light communication is possible. That is, when the terminal is capable of visible light communication, it acquires, for example, a light ID from the subject as the first identification information. On the other hand, even when the terminal is not capable of visible light communication, it can acquire, for example, an image ID or a frame ID from the subject as the second identification information. Specifically, for example, the transmission image illustrated in FIGS. 183 and 188 is captured as the subject, the region of the transmission image is identified as the specific region (that is, the selected region), and the second identification information is acquired from the line pattern of the transmission image. Therefore, even when visible light communication is impossible, the second identification information can be acquired appropriately. Note that the striped pattern is also called a bright line pattern or a bright line pattern region.
In identifying the specific region, a region having a quadrangular contour of at least a predetermined size, or a region having a rounded-quadrangular contour of at least a predetermined size, may be identified as the specific region.
With this, as shown in FIG. 179 for example, a quadrangular or rounded-quadrangular region can be appropriately identified as the specific region.
In the determination of visible light communication, it may be determined that visible light communication is possible when the terminal is identified as a terminal capable of changing the exposure time to a predetermined value or less, and it may be determined that visible light communication is not possible when the terminal is identified as a terminal incapable of changing the exposure time to the predetermined value or less.
With this, as shown in FIG. 180 for example, whether visible light communication is possible can be determined appropriately.
In the determination of visible light communication, when it is determined that the terminal is capable of visible light communication, the exposure time of the image sensor may be set to a first exposure time when the subject is captured, and the decoding image may be acquired by capturing the subject with the first exposure time. When it is determined that the terminal is not capable of visible light communication, the exposure time of the image sensor may be set to a second exposure time when the subject is captured, and the captured image may be acquired by capturing the subject with the second exposure time. The first exposure time may be shorter than the second exposure time.
With this, a decoding image having a striped pattern is acquired by imaging with the first exposure time, and the first identification information can be acquired appropriately by decoding the striped pattern. Furthermore, a normal captured image is acquired as the captured image by imaging with the second exposure time, and the second identification information can be acquired appropriately from the line pattern appearing in the normal captured image. Thus, by using the first exposure time and the second exposure time selectively, the terminal can acquire whichever of the first identification information and the second identification information suits the terminal.
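As an illustration only, this branch can be sketched in Python. Every name used here (the camera interface, the injected decoding helpers, and the 1/60-second normal exposure) is a hypothetical placeholder and not part of this disclosure; the 1/480-second value anticipates the exposure-time discussion in Embodiment 1 below.

    # A minimal sketch of the dual-mode ID acquisition described above.
    # All helpers (supports_exposure_at_most, capture, decode_*, ...) are
    # hypothetical placeholders, not part of the original disclosure.

    FIRST_EXPOSURE_S = 1 / 480   # short exposure for visible light communication
    SECOND_EXPOSURE_S = 1 / 60   # normal-shooting exposure (assumed value)

    def acquire_identification(camera, decode_bright_lines, decode_line_pattern,
                               find_specific_region):
        if camera.supports_exposure_at_most(FIRST_EXPOSURE_S):
            # Visible light communication is possible: capture the decoding
            # image with the first (short) exposure time and decode the
            # striped (bright line) pattern into the first identification
            # information (the light ID).
            decoding_image = camera.capture(exposure_s=FIRST_EXPOSURE_S)
            return decode_bright_lines(decoding_image)
        # Visible light communication is not possible: capture a normal image
        # with the second (longer) exposure time, find the specific region
        # (e.g. a large quadrangular or rounded-quadrangular contour), and
        # decode the barcode-like line pattern into the second identification
        # information.
        captured_image = camera.capture(exposure_s=SECOND_EXPOSURE_S)
        region = find_specific_region(captured_image)
        return decode_line_pattern(captured_image, region)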
The subject may have a rectangular shape as viewed from the image sensor, may transmit the first identification information by changing the luminance of its central region, and may have a barcode-like line pattern arranged along its periphery. In the determination of visible light communication, when it is determined that the terminal is capable of visible light communication, the decoding image including a bright line pattern composed of a plurality of bright lines corresponding to the plurality of exposure lines of the image sensor may be acquired when the subject is captured, and the first identification information may be acquired by decoding the bright line pattern. When it is determined that the terminal is not capable of visible light communication, the second identification information may be acquired from the line pattern of the captured image when the subject is captured.
With this, the first identification information and the second identification information can be acquired appropriately from a subject whose central region changes in luminance.
The first identification information obtained from the decoding image and the second identification information obtained from the line pattern may be the same information.
With this, the same information can be acquired from the subject both by a terminal capable of visible light communication and by a terminal incapable of visible light communication.
In the determination of visible light communication, when it is determined that the terminal is capable of visible light communication, a first moving image associated with the first identification information may be displayed, and when an operation of sliding the first moving image is received, a second moving image associated with the first identification information may be displayed next after the first moving image.
For example, the first moving image and the second moving image are the first AR image P46 and the second AR image P46c shown in FIG. 162, respectively. The first identification information is, for example, a light ID as described above. In the communication method according to the above aspect, when an operation of sliding the first moving image, that is, a swipe, is received, the second moving image associated with the first identification information is displayed next after the first moving image. Therefore, an image useful to the user can be displayed easily. Furthermore, as shown in FIG. 194A, since whether visible light communication is possible is determined in advance, wasteful processing of attempting to acquire a visible light signal when it is impossible can be omitted, and the processing load can be reduced.
In the display of the second moving image, the second moving image may be displayed when an operation of sliding the first moving image in the horizontal direction is received, and a still image associated with the first identification information may be displayed when an operation of sliding the first moving image in the vertical direction is received.
With this, as shown in FIG. 162 for example, the second moving image is displayed by a horizontal slide of the first moving image, that is, a swipe. Furthermore, as shown in FIGS. 163 and 164 for example, a still image associated with the first identification information is displayed by a vertical slide of the first moving image. The still image is, for example, the AR image P47 shown in FIG. 164. Therefore, a wide variety of images useful to the user can be displayed easily.
In each of the first moving image and the second moving image, the object in the picture displayed first may be at the same position.
With this, as shown in FIG. 162 for example, when the second moving image is displayed in place of the first moving image, the objects displayed first are at the same position, so the user can easily grasp that the first moving image and the second moving image are related to each other.
When the first identification information is acquired again through imaging by the image sensor, the next moving image associated with the first identification information may be displayed next after the moving image being displayed.
With this, as shown in FIG. 162 for example, the next moving image is displayed when the light ID, which is the first identification information, is captured again, even without an operation such as a slide or a swipe. Therefore, moving images useful to the user can be displayed even more easily.
In each of the moving image being displayed and the next moving image, the object in the picture displayed first may be at the same position.
With this, as shown in FIG. 162 for example, when the next moving image is displayed in place of the moving image being displayed, the objects displayed first are at the same position, so the user can easily grasp that these moving images are related to each other.
At least one of the first moving image and the second moving image may be formed so that the closer a position in the moving image is to an edge of the moving image, the higher the transparency at that position.
With this, as shown in FIG. 93 or FIG. 166 for example, when the moving image is displayed superimposed on a normal captured image, the captured display image can be displayed as if an object with a blurred outline actually existed in the environment indicated by the normal captured image.
An image may be displayed outside the region in which at least one of the first moving image and the second moving image is displayed.
With this, like the sub-image Ps46 shown in FIG. 161 for example, an image is displayed outside the region in which the moving image is displayed, so a wide variety of images even more useful to the user can be displayed easily.
A normal captured image may be acquired by imaging with a first exposure time by the image sensor, and the decoding image including a bright line pattern region, which is a region composed of a pattern of a plurality of bright lines, may be acquired by imaging with a second exposure time shorter than the first exposure time, and the first identification information may be acquired by decoding the decoding image. In the display of at least one of the first moving image and the second moving image, a reference region located at the same position as the bright line pattern region in the decoding image may be identified in the normal captured image, a region of the normal captured image on which the moving image is to be superimposed may be recognized as a target region based on the reference region, and the moving image may be superimposed on the target region. For example, in the display of at least one of the first moving image and the second moving image, a region above, below, to the left of, or to the right of the reference region in the normal captured image may be recognized as the target region.
With this, as shown in FIGS. 50 to 52 and 172 for example, the target region is recognized based on the reference region and the moving image is superimposed on that target region, so the degree of freedom of the region on which the moving image is superimposed can be increased easily.
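This positional relationship between the reference region and the target region can be made concrete. The following Python sketch is ours, not from the disclosure; it assumes a simple axis-aligned rectangle representation and returns a same-size region adjacent on a chosen side, matching the "above, below, left, or right" example.

    from dataclasses import dataclass

    @dataclass
    class Rect:
        x: int  # left edge (pixels)
        y: int  # top edge (pixels)
        w: int  # width
        h: int  # height

    def target_region(reference: Rect, side: str) -> Rect:
        """Return a region of the same size as the reference region,
        adjacent to it on the given side."""
        if side == "above":
            return Rect(reference.x, reference.y - reference.h, reference.w, reference.h)
        if side == "below":
            return Rect(reference.x, reference.y + reference.h, reference.w, reference.h)
        if side == "left":
            return Rect(reference.x - reference.w, reference.y, reference.w, reference.h)
        if side == "right":
            return Rect(reference.x + reference.w, reference.y, reference.w, reference.h)
        raise ValueError(side)

    # Example: bright line pattern region found at (100, 200), 80x40 pixels;
    # superimpose the moving image immediately above it.
    print(target_region(Rect(100, 200, 80, 40), "above"))  # Rect(x=100, y=160, w=80, h=40)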
In the display of at least one of the first moving image and the second moving image, the size of the moving image may be increased as the size of the bright line pattern region increases.
With this, as shown in FIG. 172, the size of the moving image changes in accordance with the size of the bright line pattern region, so compared with the case where the size of the moving image is fixed, the moving image can be displayed so that the object it shows appears to exist more realistically.
A transmitter according to one aspect of the present invention includes an illumination plate, a light source that emits light from the back side of the illumination plate, and a microcontroller that changes the luminance of the light source. The microcontroller transmits first identification information from the light source through the illumination plate by changing the luminance of the light source. A barcode-like line pattern is arranged along the periphery of the front side of the illumination plate, second identification information is encoded in the line pattern, and the first identification information and the second identification information are the same information. For example, the illumination plate has a rectangular shape.
With this, the same information can be transmitted both to terminals capable of visible light communication and to terminals incapable of it.
Note that these general or specific aspects may be implemented as a device, a system, a method, an integrated circuit, a computer program, or a recording medium such as a computer-readable CD-ROM, or as any combination of devices, systems, methods, integrated circuits, computer programs, and recording media.
Hereinafter, embodiments will be described specifically with reference to the drawings.
Note that each of the embodiments described below shows a general or specific example. The numerical values, shapes, materials, constituent elements, the arrangement positions and connection forms of the constituent elements, the steps, the order of the steps, and the like shown in the following embodiments are mere examples and are not intended to limit the present invention. Among the constituent elements in the following embodiments, constituent elements not recited in the independent claims indicating the broadest concept are described as optional constituent elements.
(Embodiment 1)
Embodiment 1 will be described below.
(Observation of the Luminance of the Light Emitting Unit)
We propose an imaging method in which, when one image is captured, the imaging elements do not all expose at the same timing; instead, exposure starts and ends at a different time for each imaging element. FIG. 1 shows an example in which the imaging elements arranged in one line are exposed simultaneously, and imaging is performed while shifting the exposure start time in order of line proximity. Here, a line of imaging elements exposed simultaneously is called an exposure line, and the line of pixels on the image corresponding to those imaging elements is called a bright line.
When a blinking light source is imaged across the entire surface of the imaging elements using this imaging method, bright lines (lines of brightness and darkness in the pixel values) along the exposure lines appear in the captured image, as shown in FIG. 2. By recognizing this bright line pattern, a change in light source luminance at a speed exceeding the imaging frame rate can be estimated. Hence, by transmitting a signal as a change in light source luminance, communication can be performed at a speed equal to or higher than the imaging frame rate. When the light source expresses a signal by taking two luminance values, the lower luminance value is called low (LO) and the higher luminance value is called high (HI). Low may be a state in which the light source emits no light, or a state in which it emits light more weakly than high.
By this method, information is transmitted at a speed exceeding the imaging frame rate.
When one captured image contains 20 exposure lines whose exposure times do not overlap and the imaging frame rate is 30 fps, a luminance change with a period of 1.67 milliseconds can be recognized. When there are 1000 exposure lines whose exposure times do not overlap, a luminance change with a period of 1/30000 second (about 33 microseconds) can be recognized. Note that the exposure time is set shorter than, for example, 10 milliseconds.
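As a quick numeric check of these figures (a sketch under the stated assumption that each non-overlapping exposure line contributes one sample per frame; the helper name is ours):

    def recognizable_period_s(frame_rate_fps: float, exposure_lines: int) -> float:
        """Period of the fastest luminance change that can be recognized,
        the reciprocal of the number of line samples per second."""
        return 1.0 / (frame_rate_fps * exposure_lines)

    print(recognizable_period_s(30, 20))    # 0.00166... s, i.e. 1.67 milliseconds
    print(recognizable_period_s(30, 1000))  # 3.33...e-05 s, about 33 microseconds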
FIG. 2 shows the case where the exposure of the next exposure line starts after the exposure of one exposure line ends.
In this case, when the number of frames per second (the frame rate) is f and the number of exposure lines constituting one image is l, transmitting information according to whether each exposure line receives at least a certain amount of light allows information to be transmitted at a rate of at most fl bits per second.
Note that when exposure is performed with a time difference for each pixel rather than for each line, even faster communication is possible.
In this case, when the number of pixels per exposure line is m and information is transmitted according to whether each pixel receives at least a certain amount of light, the transmission rate is at most flm bits per second.
As shown in FIG. 3, if the exposure state of each exposure line caused by the light emission of the light emitting unit can be recognized at a plurality of levels, more information can be transmitted by controlling the light emission time of the light emitting unit in units of time shorter than the exposure time of each exposure line.
When the exposure state can be recognized at Elv levels, information can be transmitted at a rate of at most flElv bits per second.
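The bounds fl and flm follow directly from multiplying the number of binary decisions per frame by the frame rate. A minimal sketch (the function name and the 1920-pixel line width are our assumptions for illustration):

    def max_bits_per_second(f: float, l: int, m: int = 1) -> float:
        """Upper bound on the transmission rate when each of the l exposure
        lines (or each of the m pixels per line) yields one binary decision
        per frame at frame rate f."""
        return f * l * m

    print(max_bits_per_second(30, 1000))        # fl  = 30,000 bits per second
    print(max_bits_per_second(30, 1000, 1920))  # flm = 57,600,000 bits per second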
Furthermore, the fundamental period of the transmission can be recognized by causing the light emitting unit to emit light at timings slightly shifted from the exposure timing of each exposure line.
FIG. 4 shows the case where the exposure of the next exposure line starts before the exposure of one exposure line ends. That is, the exposure times of adjacent exposure lines partially overlap in time. This configuration has the following advantages. (1) The number of samples within a predetermined time can be increased compared with waiting for the exposure time of one exposure line to end before starting the exposure of the next exposure line. The larger number of samples within the predetermined time makes it possible to detect the optical signal generated by the optical transmitter, which is the subject, more appropriately; that is, the error rate in detecting the optical signal can be reduced. (2) The exposure time of each exposure line can be made longer than when waiting for the exposure time of one exposure line to end before starting the exposure of the next exposure line, so a brighter image can be acquired even when the subject is dark; that is, the S/N ratio can be improved. Note that it is not necessary for the exposure times of adjacent exposure lines to partially overlap in time in all of the exposure lines; some of the exposure lines may be configured not to overlap partially in time. Configuring some of the exposure lines not to overlap partially in time suppresses the generation of intermediate colors caused by overlapping exposure times on the imaging screen, so bright lines can be detected more appropriately.
In this case, the exposure time is calculated from the brightness of each exposure line, and the light emission state of the light emitting unit is recognized.
Note that when the brightness of each exposure line is judged as a binary value indicating whether the luminance is at least a threshold, the light emitting unit must keep the non-emitting state for at least the exposure time of each line in order for the non-emitting state to be recognized.
FIG. 5A shows the influence of differences in exposure time when the exposure start time of each exposure line is equal. In 7500a, the exposure end time of one exposure line equals the exposure start time of the next exposure line; in 7500b, the exposure time is longer than that. A configuration in which the exposure times of adjacent exposure lines partially overlap in time, as in 7500b, allows a longer exposure time. That is, more light enters the imaging element, and a bright image can be obtained. In addition, since the imaging sensitivity needed to capture an image of the same brightness can be kept low, an image with little noise is obtained, which suppresses communication errors.
FIG. 5B shows the influence of differences in the exposure start time of each exposure line when the exposure times are equal. In 7501a, the exposure end time of one exposure line equals the exposure start time of the next exposure line; in 7501b, the exposure of the next exposure line starts before the exposure of the previous exposure line ends. A configuration in which the exposure times of adjacent exposure lines partially overlap in time, as in 7501b, increases the number of lines that can be exposed per unit time. This yields a higher resolution and a larger amount of information. Since the sample interval (that is, the difference between exposure start times) becomes denser, the change in light source luminance can be estimated more accurately, the error rate can be reduced, and changes in light source luminance over shorter times can be recognized. Giving the exposure times an overlap makes it possible to recognize blinking of the light source shorter than the exposure time, using the difference in exposure amount between adjacent exposure lines.
When the number of samples is small as described above, that is, when the sample interval (the time difference tD shown in FIG. 5B) is long, there is a higher possibility that the change in light source luminance cannot be detected accurately. In that case, this possibility can be reduced by shortening the exposure time; that is, the change in light source luminance can be detected accurately. The exposure time desirably satisfies exposure time > (sample interval - pulse width), where the pulse width is the width of the light pulse during which the luminance of the light source is high. This allows the high luminance to be detected appropriately. (This condition also appears in the timing-check sketch following the description of FIG. 5G below.)
As described with reference to FIGS. 5A and 5B, in a configuration in which the exposure lines are exposed sequentially so that the exposure times of adjacent exposure lines partially overlap in time, the communication speed can be improved dramatically by setting the exposure time shorter than in the normal shooting mode and using the resulting bright line pattern for signal transmission. Setting the exposure time during visible light communication to 1/480 second or less makes it possible to generate an appropriate bright line pattern. Here, with frame frequency f, the exposure time must be set so that exposure time < 1/(8 × f). Blanking that occurs during shooting is at most half the size of one frame; that is, since the blanking time is at most half the shooting time, the actual shooting time is 1/(2f) at the shortest. Furthermore, since four-valued information must be received within the time 1/(2f), the exposure time must be at least shorter than 1/(2f × 4). Since the normal frame rate is 60 frames per second or less, setting the exposure time to 1/480 second or less makes it possible to generate an appropriate bright line pattern in the image data and to perform high-speed signal transmission.
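A worked check of this arithmetic (a sketch; the function name and the default of four values per half frame are ours, following the assumptions stated above):

    def max_exposure_time_s(frame_rate_fps: float, values_per_half_frame: int = 4) -> float:
        """Upper bound on the exposure time: the shortest actual shooting time
        is half a frame (worst-case blanking), and values_per_half_frame
        pieces of information must be received within it."""
        return 1.0 / (2 * frame_rate_fps * values_per_half_frame)

    print(max_exposure_time_s(60))  # 1/480 second, about 2.08 milliseconds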
FIG. 5C shows the advantage of a short exposure time in the case where the exposure times of the exposure lines do not overlap. When the exposure time is long, even if the light source changes luminance in a binary fashion as in 7502a, intermediate-color portions appear in the captured image as in 7502e, and it tends to become difficult to recognize the luminance change of the light source. However, by providing a predetermined non-exposure idle time (a predetermined waiting time) tD2 from the end of the exposure of one exposure line to the start of the exposure of the next exposure line, as in 7502d, the luminance change of the light source becomes easier to recognize. That is, a more appropriate bright line pattern, such as 7502f, can be detected. The configuration of providing a predetermined non-exposure idle time, as in 7502d, can be realized by making the exposure time tE smaller than the time difference tD between the exposure start times of the exposure lines. When the normal shooting mode is configured so that the exposure times of adjacent exposure lines partially overlap in time, this can be realized by setting the exposure time shorter than in the normal shooting mode until a predetermined non-exposure idle time occurs. Even when the normal shooting mode is configured so that the exposure end time of one exposure line equals the exposure start time of the next exposure line, it can be realized by setting the exposure time short until a predetermined non-exposure time occurs. Alternatively, a predetermined non-exposure idle time (a predetermined waiting time) tD2 from the end of the exposure of one exposure line to the start of the exposure of the next exposure line can also be provided by increasing the interval tD between the exposure start times of the exposure lines, as in 7502g. In this configuration, since the exposure time can be lengthened, a bright image can be captured, and since noise decreases, error tolerance is high. On the other hand, in this configuration, the number of exposure lines that can be exposed within a certain time decreases, so there is the disadvantage that the number of samples decreases, as in 7502h; it is therefore desirable to use these configurations selectively depending on the situation. For example, by using the former configuration when the imaging target is bright and the latter configuration when it is dark, the estimation error of the light source luminance change can be reduced.
Note that it is not necessary for the exposure times of adjacent exposure lines to partially overlap in time in all of the exposure lines; some of the exposure lines may be configured not to overlap partially in time. Likewise, it is not necessary for all of the exposure lines to be configured with a predetermined non-exposure idle time (a predetermined waiting time) from the end of the exposure of one exposure line to the start of the exposure of the next exposure line; some of the exposure lines may be configured to partially overlap in time. Such configurations make it possible to take advantage of the merits of each configuration. Furthermore, in the normal shooting mode, in which shooting is performed at a normal frame rate (30 fps, 60 fps), and in the visible light communication mode, in which shooting is performed with an exposure time of 1/480 second or less for visible light communication, signals may be read out by the same readout method or circuit. Reading out signals by the same readout method or circuit eliminates the need to use separate circuits for the normal shooting mode and the visible light communication mode, making it possible to reduce the circuit scale.
FIG. 5D shows the relationship among the minimum change time tS of the light source luminance, the exposure time tE, the time difference tD between the exposure start times of the exposure lines, and the captured image. When tE + tD < tS, one or more exposure lines are always imaged in a state where the light source does not change from the start to the end of the exposure, so an image with clear luminance, such as 7503d, is obtained, and the luminance change of the light source is easy to recognize. When 2tE > tS, bright lines with a pattern different from the luminance change of the light source may be obtained, making it difficult to recognize the luminance change of the light source from the captured image.
FIG. 5E shows the relationship between the transition time tT of the light source luminance and the time difference tD between the exposure start times of the exposure lines. The larger tD is compared with tT, the fewer exposure lines become intermediate colors, and the easier the estimation of the light source luminance becomes. When tD > tT, at most two consecutive exposure lines become intermediate colors, which is desirable. Since tT is 1 microsecond or less when the light source is an LED and about 5 microseconds when the light source is an organic EL device, setting tD to 5 microseconds or more makes the estimation of the light source luminance easy.
FIG. 5F shows the relationship between the high-frequency noise tHT of the light source luminance and the exposure time tE. The larger tE is compared with tHT, the less the captured image is affected by high-frequency noise, and the easier the estimation of the light source luminance becomes. When tE is an integer multiple of tHT, the influence of high-frequency noise disappears, and the estimation of the light source luminance becomes easiest. For the estimation of the light source luminance, tE > tHT is desirable. The main cause of high-frequency noise derives from switching power supply circuits, and since tHT is 20 microseconds or less in many switching power supplies for lamps, setting tE to 20 microseconds or more makes the estimation of the light source luminance easy.
FIG. 5G is a graph showing the relationship between the exposure time tE and the magnitude of high-frequency noise when tHT is 20 microseconds. Considering that tHT varies depending on the light source, the graph confirms that efficiency is good when tE is set to a value equal to a value at which the noise amount takes a local maximum: 15 microseconds or more, 35 microseconds or more, 54 microseconds or more, or 74 microseconds or more. From the viewpoint of reducing high-frequency noise, a larger tE is desirable; however, as described above, a smaller tE also has the property of making the estimation of the light source luminance easier in that intermediate-color portions are less likely to occur. Therefore, tE should be set to 15 microseconds or more when the period of the light source luminance change is 15 to 35 microseconds, to 35 microseconds or more when the period is 35 to 54 microseconds, to 54 microseconds or more when the period is 54 to 74 microseconds, and to 74 microseconds or more when the period is 74 microseconds or more.
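The timing conditions of FIGS. 5B and 5D to 5F can be collected into a single check. The sketch below is ours, not from the disclosure; the parameter names are ours and the thresholds are those stated in the text.

    def check_timing(tE, tD, tS, tT, tHT, pulse_width):
        """Check the desirable relations among exposure time tE, exposure
        start interval tD, minimum luminance change time tS, luminance
        transition time tT, and high-frequency noise period tHT
        (all in seconds)."""
        return {
            # FIG. 5B: detect a high pulse reliably
            "tE > tD - pulse_width": tE > tD - pulse_width,
            # FIG. 5D: at least one line sees a constant light source
            "tE + tD < tS": tE + tD < tS,
            # FIG. 5E: at most two consecutive intermediate-color lines
            "tD > tT": tD > tT,
            # FIG. 5F: suppress high-frequency noise
            "tE > tHT": tE > tHT,
        }

    # Example: tE = 40 us, tD = 50 us, tS = 100 us (pulse width 100 us),
    # an LED light source (tT <= 1 us), and tHT = 20 us.
    for name, ok in check_timing(40e-6, 50e-6, 100e-6, 1e-6, 20e-6, 100e-6).items():
        print(name, ok)  # all four conditions hold for these values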
FIG. 5H shows the relation between the exposure time t_E and the recognition success rate. Since the exposure time t_E is meaningful only relative to the time during which the light source luminance is constant, the horizontal axis represents the relative exposure time, obtained by dividing the period t_S of the light source luminance change by the exposure time t_E. The graph shows that to bring the recognition success rate to nearly 100%, the relative exposure time should be set to 1.2 or less. For example, when the transmission signal is 1 kHz, the exposure time may be set to approximately 0.83 milliseconds or less. Similarly, the relative exposure time may be set to 1.25 or less when a recognition success rate of 95% or more is desired, and to 1.4 or less when a recognition success rate of 80% or more is desired. Since the recognition success rate drops sharply when the relative exposure time is around 1.5 and becomes almost 0% at 1.6, the relative exposure time should be set so as not to exceed 1.5. It can also be seen that after the recognition rate becomes 0 at 7507c, it rises again at 7507d, 7507e, and 7507f. Accordingly, when a longer exposure time is desired, for example in order to capture a bright image, an exposure time whose relative exposure time is 1.9 to 2.2, 2.4 to 2.6, or 2.8 to 3.0 may be used. For example, these exposure times may be used in the intermediate mode.
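The recognition-rate thresholds above amount to a small decision rule. A minimal sketch, assuming the band edges read off FIG. 5H; the labels returned are illustrative only.

    def recognition_band(relative_exposure_time: float) -> str:
        # Band edges taken from the description of FIG. 5H above.
        t = relative_exposure_time
        if t <= 1.2:
            return "recognition success rate approximately 100%"
        if t <= 1.25:
            return "recognition success rate 95% or more"
        if t <= 1.4:
            return "recognition success rate 80% or more"
        if 1.9 <= t <= 2.2 or 2.4 <= t <= 2.6 or 2.8 <= t <= 3.0:
            return "usable again (candidate for the intermediate mode)"
        return "not recommended (rate drops sharply around 1.5, ~0% at 1.6)"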
FIG. 6A is a flowchart of the information communication method in this embodiment.
The information communication method in this embodiment is an information communication method for acquiring information from a subject, and includes steps SK91 to SK93.
That is, this information communication method includes: a first exposure time setting step SK91 of setting a first exposure time of an image sensor so that, in an image obtained by photographing the subject with the image sensor, a plurality of bright lines corresponding to a plurality of exposure lines included in the image sensor appear according to a change in luminance of the subject; a first image acquisition step SK92 in which the image sensor photographs the subject whose luminance changes, with the set first exposure time, to acquire a bright line image including the plurality of bright lines; and an information acquisition step SK93 of acquiring information by demodulating data specified by the pattern of the plurality of bright lines included in the acquired bright line image. In the first image acquisition step SK92, the plurality of exposure lines start exposure at sequentially different times, and each exposure line starts exposure after a predetermined idle time has elapsed since the exposure of its adjacent exposure line ended.
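As a rough illustration of steps SK92 and SK93, the following Python sketch recovers one bit per exposure line from a captured bright line image by thresholding the per-line brightness; real decoders would also handle preambles, synchronization, and check signals, and the thresholding scheme here is an assumption for illustration.

    import numpy as np

    def demodulate_bright_lines(bright_line_image: np.ndarray) -> list:
        # Each row of the image corresponds to one exposure line (SK92);
        # bright and dark stripes encode the transmitted data (SK93).
        line_levels = bright_line_image.mean(axis=1)
        threshold = (line_levels.max() + line_levels.min()) / 2
        return [int(level > threshold) for level in line_levels]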
FIG. 6B is a block diagram of the information communication device in this embodiment.
The information communication device K90 in this embodiment is an information communication device that acquires information from a subject, and includes constituent elements K91 to K93.
That is, this information communication device K90 includes: an exposure time setting unit K91 that sets an exposure time of an image sensor so that, in an image obtained by photographing the subject with the image sensor, a plurality of bright lines corresponding to a plurality of exposure lines included in the image sensor appear according to a change in luminance of the subject; an image acquisition unit K92 having the image sensor, which acquires a bright line image including the plurality of bright lines by photographing the subject whose luminance changes with the set exposure time; and an information acquisition unit K93 that acquires information by demodulating data specified by the pattern of the plurality of bright lines included in the acquired bright line image. The plurality of exposure lines start exposure at sequentially different times, and each exposure line starts exposure after a predetermined idle time has elapsed since the exposure of its adjacent exposure line ended.
In the information communication method and the information communication device K90 shown in FIG. 6A and FIG. 6B, as illustrated for example in FIG. 5C, each of the plurality of exposure lines starts exposure after a predetermined idle time has elapsed since the exposure of its adjacent exposure line ended, which makes the change in luminance of the subject easier to recognize. As a result, information can be appropriately acquired from the subject.
In the above embodiment, each constituent element may be configured as dedicated hardware, or may be realized by executing a software program suitable for that element. Each constituent element may be realized by a program execution unit, such as a CPU or a processor, reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory. For example, the program causes a computer to execute the information communication method shown in the flowchart of FIG. 6A.
(Embodiment 2)
In this embodiment, application examples using a receiver, such as a smartphone, that is the information communication device K90 of Embodiment 1, and a transmitter that transmits information as a blinking pattern of a light source such as an LED or an organic EL device, are described.
In the following description, the normal shooting mode, or shooting in the normal shooting mode, is referred to as normal shooting, and the visible light communication mode, or shooting in the visible light communication mode, is referred to as visible light shooting (visible light communication). Shooting in the intermediate mode may be used instead of normal shooting and visible light shooting, and an intermediate image may be used instead of the composite image described later.
FIG. 7 is a diagram showing an example of the shooting operation of the receiver in this embodiment.
The receiver 8000 switches the shooting mode in the order normal shooting, visible light communication, normal shooting, and so on. The receiver 8000 then combines the normal captured image and the visible light communication image to generate a composite image in which the bright line pattern, the subject, and its surroundings are shown clearly, and displays the composite image on its display. This composite image is generated by superimposing the bright line pattern of the visible light communication image on the portion of the normal captured image from which the signal is transmitted. The bright line pattern, the subject, and the surroundings shown in this composite image are each clear, with a sharpness the user can sufficiently recognize. By displaying such a composite image, the user can know more clearly from where, or from which position, the signal is being transmitted.
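A minimal sketch of this compositing, assuming the normal captured image and the visible light communication image are aligned grayscale arrays of equal size and that the transmitting region has already been located as a boolean mask from the bright line image; the names are illustrative.

    import numpy as np

    def compose_display_image(normal_image: np.ndarray,
                              visible_light_image: np.ndarray,
                              signal_mask: np.ndarray) -> np.ndarray:
        # Copy the normal captured image and overwrite only the pixels where
        # the bright line pattern (the transmitted signal) was detected.
        composite = normal_image.copy()
        composite[signal_mask] = visible_light_image[signal_mask]
        return composite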
FIG. 8 is a diagram showing another example of the shooting operation of the receiver in this embodiment.
The receiver 8000 includes a camera Ca1 and a camera Ca2. In such a receiver 8000, the camera Ca1 performs normal shooting and the camera Ca2 performs visible light shooting. The camera Ca1 thus acquires the normal captured image described above, and the camera Ca2 acquires the visible light communication image described above. The receiver 8000 then combines the normal captured image and the visible light communication image to generate the composite image described above and displays it on the display.
FIG. 9 is a diagram showing another example of the shooting operation of the receiver in this embodiment.
In the receiver 8000 having two cameras, the camera Ca1 switches the shooting mode in the order normal shooting, visible light communication, normal shooting, and so on, while the camera Ca2 continuously performs normal shooting. When the cameras Ca1 and Ca2 are performing normal shooting at the same time, the receiver 8000 estimates the distance from the receiver 8000 to the subject (hereinafter referred to as the subject distance) from the normal captured images acquired by these cameras, using stereo vision (the principle of triangulation). By using the subject distance estimated in this way, the receiver 8000 can superimpose the bright line pattern of the visible light communication image at an appropriate position in the normal captured image. That is, an appropriate composite image can be generated.
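For reference, the triangulation behind such a subject-distance estimate reduces, for rectified cameras, to Z = f × B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the pixel disparity. A minimal sketch under that assumption:

    def subject_distance_m(focal_length_px: float,
                           baseline_m: float,
                           disparity_px: float) -> float:
        # Depth from stereo disparity (rectified cameras assumed): Z = f * B / d.
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite distance")
        return focal_length_px * baseline_m / disparity_px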
FIG. 10 is a diagram showing an example of the display operation of the receiver in this embodiment.
As described above, the receiver 8000 switches the shooting mode in the order visible light communication, normal shooting, visible light communication, and so on. When first performing visible light communication, the receiver 8000 starts an application program and estimates its own position based on the signal received by visible light communication. Then, when performing normal shooting, the receiver 8000 displays AR (Augmented Reality) information on the normal captured image acquired by the normal shooting. This AR information is obtained based on the position estimated as described above. The receiver 8000 also estimates its own movement and change of direction based on the detection results of a 9-axis sensor, motion detection in the normal captured images, and the like, and moves the display position of the AR information in accordance with the estimated movement and change of direction. This allows the AR information to follow the subject image in the normal captured image.
When the receiver 8000 switches the shooting mode from normal shooting to visible light communication, it superimposes the AR information, during the visible light communication, on the latest normal captured image acquired during the immediately preceding normal shooting, and displays that normal captured image with the AR information superimposed. As during normal shooting, the receiver 8000 estimates its movement and change of direction based on the detection results of the 9-axis sensor, and moves the AR information and the normal captured image in accordance with the estimated movement and change of direction. This allows the AR information to follow the subject image in the normal captured image in accordance with the movement of the receiver 8000 and the like during visible light communication, just as during normal shooting. The normal image can also be enlarged and reduced in accordance with the movement of the receiver 8000 and the like.
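A minimal sketch of this tracking step, assuming the 9-axis sensor readings and image motion detection have already been converted into a per-frame screen-space shift and zoom factor; that conversion is device specific and omitted here.

    def update_ar_overlay(ar_x: float, ar_y: float, ar_scale: float,
                          dx: float, dy: float, zoom: float):
        # Shift and rescale the AR information so it keeps following the
        # subject image as the receiver moves or turns.
        return ar_x + dx, ar_y + dy, ar_scale * zoom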
FIG. 11 is a diagram showing an example of the display operation of the receiver in this embodiment.
For example, as shown in (a) of FIG. 11, the receiver 8000 may display the composite image on which the bright line pattern is shown. Alternatively, as shown in (b) of FIG. 11, the receiver 8000 may generate a composite image by superimposing, instead of the bright line pattern, a signal explicit object, which is an image having a predetermined color for notifying that a signal is being transmitted, on the normal captured image, and display that composite image.
As shown in (c) of FIG. 11, the receiver 8000 may display, as the composite image, a normal captured image in which the locations from which signals are transmitted are indicated by dotted frames and identifiers (for example, ID: 101 and ID: 102). As shown in (d) of FIG. 11, the receiver 8000 may instead generate a composite image by superimposing, in place of the bright line pattern, a signal identification object, which is an image having a predetermined color for notifying that a specific type of signal is being transmitted, on the normal captured image, and display that composite image. In this case, the color of the signal identification object differs depending on the type of signal output from the transmitter. For example, a red signal identification object is superimposed when the signal output from the transmitter is position information, and a green signal identification object is superimposed when the signal output from the transmitter is a coupon.
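A minimal sketch of this color assignment, assuming a small fixed set of signal types; the type names and RGB values are illustrative only.

    SIGNAL_OBJECT_COLORS = {
        "position_information": (255, 0, 0),  # red, as in the example above
        "coupon": (0, 255, 0),                # green, as in the example above
    }

    def signal_object_color(signal_type: str):
        # Unlisted signal types fall back to white as an assumed default.
        return SIGNAL_OBJECT_COLORS.get(signal_type, (255, 255, 255))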
FIG. 12 is a diagram showing an example of the operation of the receiver in this embodiment.
For example, when the receiver 8000 receives a signal by visible light communication, it may display the normal captured image and output a sound for notifying the user that a transmitter has been found. In this case, the receiver 8000 may vary the type of sound output, the number of outputs, or the output duration depending on the number of transmitters found, the type of signal received, the type of information specified by the signal, or the like.
FIG. 13 is a diagram showing another example of the operation of the receiver in this embodiment.
For example, when the user touches the bright line pattern shown in the composite image, the receiver 8000 generates an information notification image based on the signal transmitted from the subject corresponding to the touched bright line pattern, and displays that information notification image. This information notification image shows, for example, a store coupon or a location. The bright line pattern may instead be the signal explicit object, the signal identification object, or the dotted frame shown in FIG. 11; the same applies to the bright line patterns described below.
FIG. 14 is a diagram showing another example of the operation of the receiver in this embodiment.
For example, when the user touches the bright line pattern shown in the composite image, the receiver 8000 generates an information notification image based on the signal transmitted from the subject corresponding to the touched bright line pattern, and displays that information notification image. This information notification image shows, for example, the current location of the receiver 8000 on a map or the like.
FIG. 15 is a diagram showing another example of the operation of the receiver in this embodiment.
For example, when the user swipes on the receiver 8000 while the composite image is displayed, the receiver 8000 displays a normal captured image having dotted frames and identifiers, similar to the normal captured image shown in (c) of FIG. 11, and displays a list of information that follows the swipe operation. This list shows the information specified by the signals transmitted from the locations (transmitters) indicated by the respective identifiers. The swipe may be, for example, an operation of moving a finger inward from outside the right edge of the display of the receiver 8000, or an operation of moving a finger inward from the top, bottom, or left edge of the display.
When an item of information included in the list is tapped by the user, the receiver 8000 may display an information notification image (for example, an image showing a coupon) that shows that information in more detail.
FIG. 16 is a diagram showing another example of the operation of the receiver in this embodiment.
For example, when the user swipes on the receiver 8000 while the composite image is displayed, the receiver 8000 displays an information notification image superimposed on the composite image so as to follow the swipe operation. This information notification image shows the subject distance together with an arrow, in a form easy for the user to understand. The swipe may be, for example, an operation of moving a finger inward from outside the bottom edge of the display of the receiver 8000, or an operation of moving a finger inward from the left, top, or right edge of the display.
FIG. 17 is a diagram showing another example of the operation of the receiver in this embodiment.
For example, the receiver 8000 photographs, as the subject, a transmitter that is a signage showing a plurality of stores, and displays the normal captured image acquired by that photographing. When the user taps the signage image of one store included in the subject shown in the normal captured image, the receiver 8000 generates an information notification image based on the signal transmitted from the signage of that store, and displays that information notification image 8001. This information notification image 8001 is, for example, an image showing the availability of seats in the store.
The information communication method in this embodiment is an information communication method for acquiring information from a subject, and includes: a first exposure time setting step of setting an exposure time of an image sensor so that, in an image obtained by photographing the subject with the image sensor, bright lines corresponding to exposure lines included in the image sensor appear according to a change in luminance of the subject; a bright line image acquisition step in which the image sensor photographs the subject whose luminance changes, with the set exposure time, to acquire a bright line image, which is an image including the bright lines; an image display step of displaying, based on the bright line image, a display image in which the subject and its surroundings are shown in a form in which the spatial position of the portion where the bright lines appear can be identified; and an information acquisition step of acquiring transmission information by demodulating data specified by the pattern of the bright lines included in the acquired bright line image.
For example, a composite image or an intermediate image as shown in FIG. 7, FIG. 8, and FIG. 11 is displayed as the display image. In a display image in which the subject and its surroundings are shown, the spatial position of the portion where the bright lines appear is identified by a bright line pattern, a signal explicit object, a signal identification object, a dotted frame, or the like. By viewing such a display image, the user can therefore easily find the subject that is transmitting a signal by changing in luminance.
The information communication method may further include: a second exposure time setting step of setting an exposure time longer than the above exposure time; a normal image acquisition step in which the image sensor photographs the subject and its surroundings with the longer exposure time to acquire a normal captured image; and a composition step of identifying, based on the bright line image, the portion of the normal captured image where the bright lines appear, and generating a composite image by superimposing a signal object, which is an image indicating that portion, on the normal captured image. In the image display step, the composite image may be displayed as the display image.
For example, the signal object is a bright line pattern, a signal explicit object, a signal identification object, a dotted frame, or the like, and the composite image is displayed as the display image as shown in FIG. 7, FIG. 8, and FIG. 11. This allows the user to find the subject that is transmitting a signal by changing in luminance even more easily.
In the first exposure time setting step, the exposure time may be set to 1/3000 second; in the bright line image acquisition step, a bright line image in which the surroundings of the subject are also shown may be acquired; and in the image display step, the bright line image may be displayed as the display image.
For example, the bright line image is acquired and displayed as an intermediate image. This eliminates the need for processing such as acquiring and combining a normal captured image and a visible light communication image, so the processing can be simplified.
The image sensor may include a first image sensor and a second image sensor. In the normal image acquisition step, the normal captured image may be acquired by shooting with the first image sensor, and in the bright line image acquisition step, the bright line image may be acquired by shooting with the second image sensor at the same time as the shooting by the first image sensor.
For example, as shown in FIG. 8, the normal captured image and the visible light communication image, which is a bright line image, are acquired by separate cameras. Compared with acquiring the normal captured image and the visible light communication image with a single camera, these images can therefore be acquired sooner, and the processing can be sped up.
The information communication method may further include an information presentation step of presenting, when the portion of the display image where the bright lines appear is designated by a user operation, presentation information based on the transmission information acquired from the pattern of the bright lines in the designated portion. For example, the user operation is a tap, a swipe, an operation of holding a fingertip on the portion continuously for a predetermined time or more, an operation of keeping the line of sight directed at the portion for a predetermined time or more, an operation of moving a part of the user's body toward an arrow shown in association with the portion, an operation of placing a pen tip that changes in luminance on the portion, or an operation of touching a touch sensor to place a pointer displayed on the display image on the portion.
For example, as shown in FIG. 13 to FIG. 17, the presentation information is displayed as an information notification image. This allows the desired information to be presented to the user.
Another information communication method for acquiring information from a subject includes: a first exposure time setting step of setting an exposure time of an image sensor so that, in an image obtained by photographing the subject with the image sensor, bright lines corresponding to exposure lines included in the image sensor appear according to a change in luminance of the subject; a bright line image acquisition step in which the image sensor photographs the subject whose luminance changes, with the set exposure time, to acquire a bright line image, which is an image including the bright lines; and an information acquisition step of acquiring information by demodulating data specified by the pattern of the bright lines included in the acquired bright line image. In the bright line image acquisition step, a plurality of subjects are photographed while the image sensor is being moved, so that a bright line image including a plurality of portions where the bright lines appear is acquired. In the information acquisition step, the position of each of the plurality of subjects is acquired by demodulating, for each portion, the data specified by the pattern of the bright lines in that portion. The information communication method may further include a position estimation step of estimating the position of the image sensor based on the acquired positions of the plurality of subjects and the movement state of the image sensor.
This makes it possible to accurately estimate the position of the receiver including the image sensor from the luminance changes of subjects such as a plurality of lights.
Another information communication method for acquiring information from a subject includes: a first exposure time setting step of setting an exposure time of an image sensor so that, in an image obtained by photographing the subject with the image sensor, bright lines corresponding to exposure lines included in the image sensor appear according to a change in luminance of the subject; a bright line image acquisition step in which the image sensor photographs the subject whose luminance changes, with the set exposure time, to acquire a bright line image, which is an image including the bright lines; an information acquisition step of acquiring information by demodulating data specified by the pattern of the bright lines included in the acquired bright line image; and an information presentation step of presenting the acquired information. In the information presentation step, an image prompting the user of the image sensor to make a predetermined gesture may be presented as the information.
This makes it possible to authenticate the user, for example, according to whether the user performs the gesture as prompted, which improves convenience.
Another information communication method for acquiring information from a subject includes: an exposure time setting step of setting an exposure time of an image sensor so that, in an image obtained by photographing the subject with the image sensor, bright lines corresponding to exposure lines included in the image sensor appear according to a change in luminance of the subject; an image acquisition step in which the image sensor photographs the subject whose luminance changes, with the set exposure time, to acquire a bright line image including the bright lines; and an information acquisition step of acquiring information by demodulating data specified by the pattern of the bright lines included in the acquired bright line image. In the image acquisition step, the bright line image is acquired by photographing a plurality of subjects reflected on a reflecting surface. In the information acquisition step, the bright lines are separated, according to their intensities in the bright line image, into bright lines corresponding to each of the plurality of subjects, and for each subject, information may be acquired by demodulating the data specified by the pattern of the bright lines corresponding to that subject.
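One way to read this intensity-based separation is as multi-level thresholding: each exposure line's intensity falls into a band according to which of the reflected subjects are lit during that line. A minimal two-subject sketch, assuming subject A always contributes more reflected light than subject B and that the four bands are equally spaced; both are simplifying assumptions, not the patent's method.

    import numpy as np

    def separate_two_sources(line_intensities: np.ndarray):
        # Four bands between min and max: neither lit, B only, A only, both.
        edges = np.linspace(line_intensities.min(), line_intensities.max(), 5)
        band = np.digitize(line_intensities, edges[1:4])  # values 0..3
        bits_a = [int(b in (2, 3)) for b in band]  # A lit in the two brightest bands
        bits_b = [int(b in (1, 3)) for b in band]  # B lit in "B only" and "both"
        return bits_a, bits_b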
This makes it possible to acquire appropriate information from each of a plurality of subjects, such as a plurality of lights, even when each of them changes in luminance.
Another information communication method for acquiring information from a subject includes: an exposure time setting step of setting an exposure time of an image sensor so that, in an image obtained by photographing the subject with the image sensor, bright lines corresponding to exposure lines included in the image sensor appear according to a change in luminance of the subject; an image acquisition step in which the image sensor photographs the subject whose luminance changes, with the set exposure time, to acquire a bright line image including the bright lines; and an information acquisition step of acquiring information by demodulating data specified by the pattern of the bright lines included in the acquired bright line image. In the image acquisition step, the bright line image is acquired by photographing the subject reflected on a reflecting surface, and the information communication method may further include a position estimation step of estimating the position of the subject based on the luminance distribution in the bright line image.
This makes it possible to estimate an appropriate subject position based on the luminance distribution.
In the transmission step, when the luminance change is switched between a luminance change according to the first pattern and a luminance change according to the second pattern, the switch may be made with an intervening buffer time.
This makes it possible to suppress interference between the first signal and the second signal.
Another information communication method for transmitting a signal by a change in luminance includes: a determination step of determining a pattern of luminance change by modulating a signal to be transmitted; and a transmission step in which a light emitter transmits the signal to be transmitted by changing in luminance according to the determined pattern. The signal consists of a plurality of large blocks, each of which includes first data, a preamble for the first data, and a check signal for the first data. The first data consists of a plurality of small blocks, and each small block may include second data, a preamble for the second data, and a check signal for the second data.
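A minimal sketch of this two-level framing as a data structure; the field names and byte types are hypothetical, and the preamble and check fields stand in for whatever synchronization words and error checks an implementation would use.

    from dataclasses import dataclass

    @dataclass
    class SmallBlock:
        preamble: bytes      # synchronization word for the second data
        second_data: bytes
        check: bytes         # e.g. a checksum over the second data

    @dataclass
    class LargeBlock:
        preamble: bytes      # synchronization word for the first data
        small_blocks: list   # the first data, split into SmallBlock instances
        check: bytes         # e.g. a checksum over the first data

A receiver that misses part of a large block, for example during a blanking period, can still resynchronize on the preamble of the next small block, which matches the effect described next.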
This makes it possible for both a receiver that requires a blanking period and a receiver that does not require a blanking period to acquire the data appropriately.
Another information communication method for transmitting signals by changes in luminance includes: a determination step in which each of a plurality of transmitters determines a pattern of luminance change by modulating a signal to be transmitted; and a transmission step in which, for each transmitter, a light emitter provided in that transmitter transmits the signal to be transmitted by changing in luminance according to the determined pattern. In the transmission step, signals that differ from each other in frequency or protocol may be transmitted.
This makes it possible to suppress interference between the signals from the plurality of transmitters.
Another information communication method for transmitting signals by changes in luminance includes: a determination step in which each of a plurality of transmitters determines a pattern of luminance change by modulating a signal to be transmitted; and a transmission step in which, for each transmitter, a light emitter provided in that transmitter transmits the signal to be transmitted by changing in luminance according to the determined pattern. In the transmission step, one of the plurality of transmitters may receive a signal transmitted from another transmitter and transmit its own signal in a manner that does not interfere with the received signal.
This also makes it possible to suppress interference between the signals from the plurality of transmitters.
(Guidance at a Station)
FIG. 18A shows an example of how the present invention is used on a train platform. The user holds a mobile terminal up to an electronic bulletin board or a light, and acquires, by visible light communication, the information displayed on the electronic bulletin board, or the train information and station-premises information of the station where the electronic bulletin board is installed. Here, the information displayed on the electronic bulletin board may itself be transmitted to the mobile terminal by visible light communication, or ID information corresponding to the electronic bulletin board may be transmitted to the mobile terminal, and the mobile terminal may acquire the information displayed on the electronic bulletin board by querying a server with the acquired ID information. When the ID information is transmitted from the mobile terminal, the server transmits the content displayed on the electronic bulletin board to the mobile terminal based on that ID information. The mobile terminal compares the train ticket information stored in its memory with the information displayed on the electronic bulletin board, and when ticket information corresponding to the user's ticket is displayed on the electronic bulletin board, it displays on its display an arrow indicating the way to the platform where the train the user plans to board arrives. When the user gets off, a route to the car closest to the exit or to the transfer route may be displayed.
When a seat has been reserved, the route to that seat may be displayed. When displaying the arrow, using the same color as the color of the train line on the map or in the train guide information makes the display easier to understand. Together with the arrow, the user's reservation information (platform number, car number, departure time, seat number) can also be displayed; displaying the user's reservation information together helps prevent misrecognition. When the ticket information is stored on a server, the mobile terminal may query the server to acquire and compare the ticket information, or the server may compare the ticket information with the information displayed on the electronic bulletin board, so that information related to the ticket information can be acquired. The intended line may be estimated from the history of the user's transfer searches, and the route may be displayed accordingly. In addition to the content displayed on the electronic bulletin board, the train information and premises information of the station where the electronic bulletin board is installed may be acquired and compared. Information relevant to the user may be highlighted in, or rewritten over, the rendering of the electronic bulletin board on the display. When the user's boarding plan is unknown, arrows guiding the user to the boarding point of each line may be displayed. When station-premises information has been acquired, arrows guiding the user to shops, restrooms, and the like may be displayed. The user's behavioral characteristics may be managed in advance on a server, so that when the user often stops at shops or restrooms inside stations, arrows guiding the user to shops, restrooms, and the like are displayed. Since such guiding arrows are displayed only for users who have the behavioral characteristic of stopping at shops or restrooms, and not for other users, the amount of processing can be reduced. The arrows guiding the user to shops, restrooms, and the like may have a color different from that of the arrow guiding the user to the platform; when both arrows are displayed at the same time, using different colors prevents misrecognition. Although FIG. 18A shows a train example, a similar configuration can be used for display on airplanes, buses, and the like.
Specifically, a mobile terminal such as a smartphone (that is, a receiver such as the receiver 200 described later) receives a visible light signal from the electronic bulletin board as an optical ID or optical data by imaging the electronic bulletin board, as shown in (1) of FIG. 18A. At this point, the mobile terminal performs self-position estimation. That is, the mobile terminal acquires the position on the map of the electronic bulletin board, indicated directly or indirectly by the optical data. The mobile terminal then calculates its position relative to the electronic bulletin board based on, for example, its own attitude obtained from a 9-axis sensor and the position, shape, and size of the electronic bulletin board within the captured image. Based on the position of the electronic bulletin board on the map and this relative position, the mobile terminal estimates its self-position, which is the position of the mobile terminal on the map. The mobile terminal searches for a route from this self-position as the starting point to a destination indicated by, for example, the ticket information, and starts navigation that guides the user to the destination along that route. The mobile terminal may instead transmit information indicating the starting point and the destination to a server and acquire from the server the route searched for by the server. At this time, the mobile terminal may also acquire from the server a map that includes the route.
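A minimal sketch of this self-position computation, restricted to a 2D map for readability; the relative offset and heading are taken as inputs because deriving them from the captured image and the 9-axis sensor is device specific.

    import math

    def estimate_self_position(board_map_xy, relative_offset_m, heading_rad):
        # Self-position = bulletin board's map position + the receiver's offset
        # from the board, rotated from the receiver's frame into map coordinates.
        dx, dy = relative_offset_m
        cos_h, sin_h = math.cos(heading_rad), math.sin(heading_rad)
        return (board_map_xy[0] + dx * cos_h - dy * sin_h,
                board_map_xy[1] + dx * sin_h + dy * cos_h)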
During navigation, as shown in (2) to (4) of FIG. 18A, the mobile terminal repeatedly captures images with the camera and, while sequentially displaying the resulting normal captured images in real time, superimposes a direction indication image, such as an arrow showing the user's way, on the normal captured image. The user moves according to the displayed direction indication image while carrying the mobile terminal. The mobile terminal updates its self-position based on the movement of objects or feature points shown in the successive normal captured images. For example, the mobile terminal detects the movement of objects or feature points shown in the normal captured images and, based on that movement, estimates the movement direction and movement distance of the mobile terminal. The mobile terminal then updates its current self-position based on the estimated movement direction and movement distance and the self-position estimated in (1) of FIG. 18A. This self-position update may be performed every frame period of the normal captured images, or at intervals longer than the frame period. In particular, when the mobile terminal is on an underground floor or route, it cannot acquire GPS data; in such a case, the mobile terminal estimates or updates its self-position based on the movement of feature points and the like in the normal captured images, without using GPS data.
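A minimal dead-reckoning sketch of this update, assuming the per-frame motion of the terminal has already been estimated from tracked feature points and converted to meters; that scale recovery is omitted as an assumption.

    def dead_reckon(self_position, frame_displacements_m):
        # Accumulate the estimated per-frame displacements onto the position
        # last fixed by visible light communication.
        x, y = self_position
        for dx, dy in frame_displacements_m:
            x += dx
            y += dy
        return (x, y)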
Here, as shown in (4) of FIG. 18A, the mobile terminal may guide the user to an elevator partway along the route to the destination. As shown in (5) and (6) of FIG. 18A, when the mobile terminal images a transmitter that transmits optical data, or the reflected light of that optical data, it receives the optical data and estimates its self-position in the same way as in the example shown in (1) of FIG. 18A. For example, even when the user boards an elevator, the mobile terminal receives optical data transmitted from a transmitter (that is, a transmitter such as the transmitter 100 described later) installed as a lighting device or the like inside the elevator car. For example, the optical data directly or indirectly indicates the floor on which the elevator car is currently located, so the mobile terminal can identify the floor on which it is currently located by receiving the optical data. When the current position of the car is not directly indicated by the optical data, the mobile terminal transmits the information indicated by the optical data to the server and acquires from the server the floor-number information associated with that information. The mobile terminal thereby identifies the floor indicated by the floor-number information as the floor on which it is currently located. The floor identified in this way is treated as part of the self-position.
As a result, as shown in (7) of FIG. 18A, the terminal device resets its self-position by replacing the self-position derived from the movement of feature points and the like in the normal captured images with the self-position derived using the optical data.
Then, as shown in (8) of FIG. 18A, if the user has not yet reached the destination after leaving the elevator, the mobile terminal continues the navigation while performing the same processing as in (2) to (4) of FIG. 18A. While navigating, the mobile terminal repeatedly checks whether GPS data can be acquired. When it comes up to the ground from an underground floor or route, the mobile terminal determines that GPS data can be acquired, and switches its self-position estimation method from the estimation method based on the movement of feature points and the like to the estimation method based on GPS data. Then, as shown in (9) of FIG. 18A, the mobile terminal continues the navigation, estimating its self-position based on the GPS data, until the user arrives at the destination. When the user enters the basement again, for example, the mobile terminal can no longer acquire GPS data, so it switches its self-position estimation method from the estimation method based on GPS data back to the estimation method based on the movement of feature points and the like.
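A minimal sketch of this switching logic; gps.available(), gps.position(), and visual_odometry.step() are hypothetical interfaces standing in for the platform's location APIs and the feature-point tracking described above.

    def estimate_position(gps, visual_odometry, last_position):
        # Prefer GPS when it can be acquired (above ground); otherwise fall
        # back to dead reckoning from image feature-point motion (underground).
        if gps.available():
            return gps.position()
        return visual_odometry.step(last_position)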
The example of FIG. 18A is described in detail below.
In the example shown in FIG. 18A, a receiver implemented, for example, as a smartphone or as a wearable device such as smart glasses receives, in (1) of FIG. 18A, a visible light signal (optical data) transmitted from a transmitter. The transmitter is implemented, for example, as an illuminated signboard, a poster, or lighting that illuminates an image. The receiver starts navigation to the destination according to the received optical data, information preset in the receiver, and the user's instruction. The receiver transmits the optical data to a server and acquires the navigation information associated with this data. The navigation information includes the following first to sixth pieces of information. The first information is information indicating the position and shape of the transmitter. The second information is information indicating the route to the destination. The third information is information on other transmitters on and near the route to the destination; specifically, the information on another transmitter indicates the optical data being transmitted by that transmitter, the position and shape of the transmitter, and the position and shape of its reflected light. The fourth information is position specification information concerning the route and its vicinity; specifically, the position specification information is image feature amounts, or radio wave or sound wave information for specifying a position. The fifth information is information indicating the distance to the destination and the estimated arrival time. The sixth information is part or all of the content information for performing the AR display. The navigation information may be stored in the receiver in advance. The shape mentioned above may include a size.
The receiver estimates its self-position from the position information of the transmitter and the relative position between the transmitter and the receiver, which is calculated from how the transmitter appears in the image obtained by imaging and from the sensor values of the acceleration sensor, and uses that self-position as the starting point of the navigation. The receiver may instead estimate its self-position and start the navigation not from optical data but from image feature amounts, a barcode or two-dimensional code, radio waves, sound waves, or the like.
The receiver displays the navigation to the destination as shown in (2) of FIG. 18A. The navigation may be presented as an AR display in which another image is superimposed on the normal captured image obtained by the camera, as a map display, as instructions by voice or vibration, or as a combination of these. The display method may be selected by a setting on the receiver, in the optical data, or on the server, and one of these settings may be given priority over the others. If the destination is a boarding point for transportation, the receiver may acquire the timetable and display the reserved time, or the departure or boarding time closest to the estimated arrival time. If the destination is a theater or the like, the receiver may display the starting time or the admission deadline.
The receiver advances the navigation as the receiver moves, as shown in (3) and (4) of FIG. 18A. In a situation where absolute position information cannot be obtained, the receiver may estimate the distance and direction it moved between captures from the displacement of feature points across multiple images. The receiver may also estimate its movement distance and direction from the transition of the acceleration sensor, radio waves, or sound waves, or by SLAM (Simultaneous Localization and Mapping) or PTAM (Parallel Tracking and Mapping).
In (5) of FIG. 18A, when the receiver receives optical data different from the optical data received in (1) of FIG. 18A, for example outside an elevator, the receiver may send that optical data to the server and acquire the shape and position of the transmitter associated with it. The receiver may then estimate its self-position by the same method as in (1) of FIG. 18A. This allows the receiver to cancel the self-position estimation error accumulated in steps (3) and (4) of FIG. 18A and correct the current navigation position. If the receiver receives only part of a visible light signal and cannot obtain the complete optical data, it assumes that the nearest transmitter listed in the navigation information is the transmitter sending that visible light signal, and then estimates its self-position in the same way as above. This makes it possible to use even transmitters with poor reception conditions, such as small, distant, or dim transmitters, for the receiver's self-position estimation.
In (6) of FIG. 18A, the receiver receives optical data via reflected light. The receiver identifies that the medium of the received optical data is reflected light from the imaging direction, the light intensity, or the sharpness of the outline. In the case of reflected light, the receiver identifies the position of the reflection (that is, its position on the map) from the navigation information and takes the center of the imaged reflection area as the position of the reflected light. The receiver then estimates its self-position and corrects the current navigation position in the same way as in (5) of FIG. 18A.
When the receiver receives a positioning signal such as GPS, GLONASS, Galileo, BeiDou, or IRNSS, it determines its position from that signal and corrects the current navigation position (that is, its self-position). If the signal is sufficiently strong, that is, stronger than a predetermined strength, the receiver estimates its self-position from that signal alone; if the signal is at or below the predetermined strength, the receiver may combine it with the methods used in (3) and (4) of FIG. 18A.
When the receiver receives a visible light signal, it transmits to the server, together with the information indicated by the visible light signal, [1] a radio signal with a predetermined ID being received at the same time as the visible light signal, [2] the last-received radio signal with a predetermined ID, or [3] information indicating the most recently estimated position of the receiver. This allows the transmitter sending that visible light signal to be identified. Alternatively, the receiver may receive the visible light signal using an algorithm specified by the above radio signal or by the information indicating the receiver's position, and transmit the information indicated by the visible light signal to a server identified in the same manner.
The receiver may estimate its self-position and display information on products near that position. The receiver may also navigate to the position of a product designated by the user. Further, the receiver may present an optimal route for visiting all the locations of multiple products designated by the user; the optimal route is, for example, the route with the shortest distance, the shortest travel time, or the least walking effort. In addition to the products or places designated by the user, the receiver may route the navigation through a predetermined place, making it possible to promote that place or the products or stores located there.
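Purely as an illustration of how such an "optimal route" through the designated product locations might be computed (this disclosure does not specify the algorithm), a brute-force shortest-distance search could look like this:

```python
from itertools import permutations
import math

def best_route(start, stops):
    """Exhaustively pick the visiting order of the selected product
    locations that minimizes total walking distance. Feasible only for
    a handful of stops; start and stops are (x, y) coordinates."""
    best_order, best_len = None, float("inf")
    for order in permutations(stops):
        length, prev = 0.0, start
        for p in order:
            length += math.dist(prev, p)
            prev = p
        if length < best_len:
            best_order, best_len = order, length
    return best_order, best_len

# Example: route from the entrance through three shelf locations.
# print(best_route((0, 0), [(3, 4), (1, 1), (5, 0)]))
```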
FIG. 18B is a diagram for describing navigation by the receiver 200 in an elevator in the present embodiment.
For example, while the user is on the third basement floor (B3), a receiver configured as a smartphone performs guidance using AR display, that is, AR navigation, as shown in (1) of FIG. 18B. As shown in FIG. 18A, AR navigation is a navigation function that guides the user to the destination by superimposing a direction-indicating image such as an arrow on the normal captured image. Hereinafter, AR navigation is also referred to simply as navigation.
Then, when the user boards the elevator, the receiver receives an optical signal (that is, a visible light signal, optical data, or a light ID) from a transmitter installed in the elevator car, as shown in (2) of FIG. 18B. Based on that optical signal, the receiver acquires an elevator ID and floor information. The elevator ID is identification information for identifying the elevator, or its car, in which the transmitter is installed, and the floor information indicates the floor on which the car is currently located. For example, the receiver transmits the optical signal (or the information it indicates) to a server and acquires from the server the elevator ID and floor information associated with that optical signal. The transmitter may always transmit the same optical signal regardless of which floor the car is on, or it may transmit a different optical signal depending on the floor. The transmitter is configured, for example, as a lighting device, and its light brightly illuminates the inside of the elevator car. The receiver can therefore receive the optical signal superimposed on this light either directly from the transmitter or indirectly via reflection from the car's inner walls or floor.
Even while the car carrying the receiver is moving up, the receiver continuously identifies the floor on which it is currently located, according to the elevator ID and floor information acquired from the optical signal transmitted by the transmitter. Then, as shown in (3) of FIG. 18B, when the floor on which the receiver is currently located is the target floor, the receiver shows a message or image on its display prompting the user to get off the elevator. The receiver may also output a sound prompting the user to get off.
If the target floor is a place that GPS data cannot reach, such as the first basement floor, the receiver resumes the AR navigation while estimating its self-position by the estimation method described above that uses the movement of feature points in the normal captured images, as shown in (4) of FIG. 18B. On the other hand, if the target floor is a place that GPS data can reach, such as the first floor above ground, the receiver resumes the AR navigation while estimating its self-position by the estimation method that uses the GPS data, as shown in (4) of FIG. 18B.
FIG. 18C is a diagram illustrating an example of a system configuration provided in the elevator in the present embodiment.
The transmitter 100, which is the transmitter described above, is installed in the elevator car 420. The transmitter 100 is mounted on the ceiling of the car 420 as its lighting device. The transmitter 100 also includes a built-in camera 404 and a microphone 411. The built-in camera 404 images the interior of the car 420, and the microphone 411 picks up sound inside the car 420.
The car 420 is also equipped with a surveillance camera system 401, a floor display unit 414, and a sensor 403. The surveillance camera system 401 has at least one camera that images the interior of the car 420. The floor display unit 414 displays the floor on which the car 420 is currently located. The sensor 403 includes, for example, at least one of an air pressure sensor and an acceleration sensor.
The elevator further includes an image recognition unit 402, a current floor detection unit 405, a light modulation unit 406, a light emitting circuit 407, a radio unit 409, and a voice recognition unit 410.
The image recognition unit 402 recognizes the characters (that is, the floor number) shown on the floor display unit 414 from an image captured by the surveillance camera system 401 or the built-in camera 404, and outputs the resulting current floor data. The current floor data indicates the floor number shown on the floor display unit 414.
The voice recognition unit 410 recognizes the floor on which the car 420 is currently located based on the audio data output from the microphone 411, and outputs floor data indicating that floor.
The current floor detection unit 405 detects the floor on which the car 420 is currently located based on data output from at least one of the sensor 403, the image recognition unit 402, and the voice recognition unit 410, and outputs information indicating the detected floor to the light modulation unit 406.
The light modulation unit 406 modulates a signal indicating the floor output from the current floor detection unit 405 and the elevator ID, and outputs the modulated signal to the light emitting circuit 407. The light emitting circuit 407 changes the luminance of the transmitter 100 according to the modulated signal. As a result, the above-described visible light signal, optical signal, optical data, or light ID indicating the floor on which the car 420 is currently located and the elevator ID is transmitted from the transmitter 100.
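A minimal sketch of this modulation chain is given below; the 8-bit field widths and the single parity bit are assumptions made for illustration, not the actual packet format of this disclosure.

```python
def floor_packet_bits(elevator_id: int, floor: int) -> list:
    """Pack the elevator ID and current floor into a bit sequence for
    the light-modulation stage (illustrative layout only)."""
    payload = ((elevator_id & 0xFF) << 8) | (floor & 0xFF)
    bits = [(payload >> i) & 1 for i in range(15, -1, -1)]
    bits.append(sum(bits) % 2)        # simple error-detection bit (assumed)
    return bits

def luminance_levels(bits, low=0.4, high=1.0):
    """Map bits to the two brightness levels the light emitting circuit
    would drive, fast enough to be imperceptible to the eye."""
    return [high if b else low for b in bits]
```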
Similarly to the light modulation unit 406, the radio unit 409 modulates a signal indicating the floor output from the current floor detection unit 405 and the elevator ID, and transmits the modulated signal wirelessly, for example by Wi-Fi or Bluetooth.
Accordingly, by receiving at least one of the radio signal and the optical signal, the receiver 200 can identify the floor on which it is currently located and the elevator ID.
The elevator may also include a current floor detection unit 412 that incorporates the floor display unit 414 described above. The current floor detection unit 412 consists of an elevator control unit 413 and the floor display unit 414. The elevator control unit 413 controls the raising, lowering, and stopping of the car 420, and therefore knows the floor on which the car 420 is currently located. The elevator control unit 413 may output data indicating that floor to the light modulation unit 406 and the radio unit 409 as the current floor data.
With such a configuration, the receiver 200 can realize the AR navigation shown in FIGS. 18A and 18B.
(Application Example: Route Guidance)
FIG. 19 is a diagram illustrating an example application of the transmission/reception system in Embodiment 2.
The receiver 8955a receives the ID transmitted by a transmitter 8955b configured, for example, as a guide board, acquires from the server the data of the map shown on the guide board, and displays it. At this time, the server may transmit an advertisement suited to the user of the receiver 8955a, and the receiver 8955a may display this advertisement information as well. The receiver 8955a displays a route from the current location to a place designated by the user.
(Application Example: Usage Log Collection and Analysis)
FIG. 20 is a diagram illustrating an example application of the transmission/reception system in Embodiment 2.
The receiver 8957a receives the ID transmitted by a transmitter 8957b configured, for example, as a signboard, acquires coupon information from the server, and displays it. The receiver 8957a then records the user's subsequent behavior on a server 8957c, for example saving the coupon, moving to the store shown on the coupon, shopping at that store, or leaving without saving the coupon. This makes it possible to analyze the subsequent behavior of users who obtained information from the signboard 8957b and to estimate the advertising value of the signboard 8957b.
The information communication method in the present embodiment is an information communication method for acquiring information from a subject, and includes: a first exposure time setting step of setting a first exposure time of an image sensor so that, in an image obtained by the image sensor capturing a first subject, a plurality of bright lines corresponding to the exposure lines included in the image sensor appear according to changes in the luminance of the first subject; a first bright line image acquisition step in which the image sensor captures the first subject, whose luminance is changing, with the set first exposure time, thereby acquiring a first bright line image that is an image including the plurality of bright lines; a first information acquisition step of acquiring first transmission information by demodulating the data specified by the pattern of the plurality of bright lines included in the acquired first bright line image; and a door control step of causing a door opening/closing drive device to open a door by transmitting a control signal after the first transmission information has been acquired.
This makes it possible to use a receiver equipped with an image sensor like a door key, eliminating the need for a special electronic lock. As a result, communication can be performed between a variety of devices, including devices with little computing power.
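As an illustrative sketch of the demodulation idea only, not the coding scheme defined by this disclosure: each row of a bright line image corresponds to one exposure line, so thresholding the per-row mean brightness recovers a bit pattern. The global mean threshold below is an assumed, simplistic choice.

```python
import numpy as np

def bright_line_bits(bright_line_image: np.ndarray) -> list:
    """Recover one bit per exposure line from a bright line image
    (illustrative sketch of the first information acquisition step)."""
    row_levels = bright_line_image.mean(axis=1)   # one value per exposure line
    threshold = row_levels.mean()                 # assumed global threshold
    return [int(level > threshold) for level in row_levels]
```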
The information communication method may further include: a second bright line image acquisition step in which the image sensor captures a second subject, whose luminance is changing, with the set first exposure time, thereby acquiring a second bright line image that is an image including a plurality of bright lines; a second information acquisition step of acquiring second transmission information by demodulating the data specified by the pattern of the plurality of bright lines included in the acquired second bright line image; and an approach determination step of determining, based on the acquired first and second transmission information, whether the receiving device equipped with the image sensor is approaching the door. In the door control step, the control signal may be transmitted when it is determined that the receiving device is approaching the door.
This allows the door to be opened only when the receiving device (receiver) approaches it, that is, only at an appropriate time.
The information communication method may further include: a second exposure time setting step of setting a second exposure time longer than the first exposure time; and a normal image acquisition step in which the image sensor captures a third subject with the set second exposure time, thereby acquiring a normal image showing the third subject. In the normal image acquisition step, for each of the exposure lines in the region of the image sensor that includes the optical black, the charge may be read out a predetermined time after the charge of the adjacent exposure line was read out; and in the first bright line image acquisition step, without using the optical black for charge readout, for each of the exposure lines in the region of the image sensor other than the optical black, the charge may be read out after a time longer than that predetermined time has elapsed since the charge of the adjacent exposure line was read out.
Thus, when the first bright line image is acquired, no charge readout (exposure) is performed for the optical black, so the time available for charge readout (exposure) in the effective pixel region, the region of the image sensor other than the optical black, can be lengthened. As a result, the time over which signals are received in the effective pixel region can be extended, and more of the signal can be acquired.
The information communication method may further include: a length determination step of determining whether the length of the pattern of the plurality of bright lines included in the first bright line image, measured in the direction perpendicular to the bright lines, is less than a predetermined length; a frame rate changing step of changing the frame rate of the image sensor to a second frame rate slower than the first frame rate used when the first bright line image was acquired, when the pattern length is determined to be less than the predetermined length; a third bright line image acquisition step in which the image sensor captures the first subject, whose luminance is changing, at the second frame rate and with the set first exposure time, thereby acquiring a third bright line image that is an image including a plurality of bright lines; and a third information acquisition step of acquiring the first transmission information by demodulating the data specified by the pattern of the plurality of bright lines included in the acquired third bright line image.
Thus, when the signal length indicated by the bright line pattern (bright line region) in the first bright line image is less than, for example, one block of the transmitted signal, the frame rate is lowered and a new bright line image is acquired as the third bright line image. As a result, the bright line pattern in the third bright line image can be made longer, and one full block of the transmitted signal can be acquired.
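The length check and frame-rate fallback can be sketched as follows; all callables are hypothetical hooks into the camera and decoder, and halving the frame rate is an illustrative choice, not a value specified by this disclosure.

```python
def receive_with_framerate_fallback(capture, pattern_length, demodulate,
                                    set_frame_rate, first_rate, min_length):
    """capture() returns a bright line image, pattern_length() measures
    the bright line pattern perpendicular to the lines, demodulate()
    extracts the transmission information (all hypothetical hooks)."""
    set_frame_rate(first_rate)
    image = capture()
    if pattern_length(image) < min_length:
        # Pattern shorter than one block of the signal: retry at a
        # slower (second) frame rate so one frame spans more of it.
        set_frame_rate(first_rate / 2)    # halving factor is illustrative
        image = capture()
    return demodulate(image)
```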
The information communication method may further include a ratio setting step of setting the ratio between the height and width of the image obtained by the image sensor. The first bright line image acquisition step may then include: a clipping determination step of determining whether, at the set ratio, the edges of the image in the direction perpendicular to the exposure lines will be clipped; a ratio changing step of changing the ratio set in the ratio setting step to a non-clipping ratio, that is, a ratio at which those edges are not clipped, when it is determined that the edges will be clipped; and an acquisition step in which the image sensor captures the first subject, whose luminance is changing, thereby acquiring the first bright line image at the non-clipping ratio.
For example, when the width-to-height ratio of the effective pixel region of the image sensor is 4:3, the width-to-height ratio of the image is set to 16:9, and the bright lines appear along the horizontal direction, that is, when the exposure lines run horizontally, it is determined that the top and bottom edges of the image will be clipped, in other words, that the edges of the first bright line image would be lost. In this case, the image ratio is changed to a non-clipping ratio such as 4:3. As a result, loss of the edges of the first bright line image can be prevented, and more information can be acquired from it.
The information communication method may further include: a compression step of generating a compressed image by compressing the first bright line image in the direction parallel to the plurality of bright lines included in it; and a compressed image transmission step of transmitting the compressed image.
This allows the first bright line image to be compressed appropriately without losing the information indicated by the bright lines.
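One simple realization of compression parallel to the bright lines is to average pixel groups within each row, as in the sketch below; the averaging factor is an assumption, and the exact compression scheme is not specified by this disclosure.

```python
import numpy as np

def compress_along_bright_lines(image: np.ndarray, factor: int = 8) -> np.ndarray:
    """Average groups of pixels within each row (i.e. along each bright
    line). This shrinks the image only in the direction parallel to the
    bright lines, leaving untouched the perpendicular axis that carries
    the modulated data."""
    height, width = image.shape
    usable = (width // factor) * factor
    return image[:, :usable].reshape(height, usable // factor, factor).mean(axis=2)
```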
The information communication method may further include: a gesture determination step of determining whether the receiving device equipped with the image sensor has been moved in a predetermined manner; and an activation step of activating the image sensor when it is determined that the device has been moved in the predetermined manner.
This allows the image sensor to be activated easily only when needed, improving power efficiency.
This embodiment describes application examples that use a receiver such as the smartphone described above and a transmitter that transmits information as a blinking pattern of an LED or organic EL light source.
FIG. 21 is a diagram illustrating an application example of a transmitter and a receiver in Embodiment 2.
The robot 8970 functions, for example, as a self-propelled vacuum cleaner and as a receiver according to the above embodiments. The lighting devices 8971a and 8971b each function as a transmitter according to the above embodiments.
For example, the robot 8970 cleans while moving around a room and images the lighting device 8971a that illuminates the room. The lighting device 8971a transmits its ID by changing its luminance. The robot 8970 therefore receives the ID from the lighting device 8971a, as in the above embodiments, and estimates its own position (self-position) based on that ID. That is, the robot 8970 estimates its self-position while moving, based on the detection results of its 9-axis sensor, the relative position of the lighting device 8971a in the captured image, and the absolute position of the lighting device 8971a identified by the ID.
Further, when the robot 8970 moves away from the lighting device 8971a, it transmits a signal ordering the lighting device 8971a to turn off (a turn-off command). For example, the robot 8970 transmits the turn-off command when it has moved a predetermined distance away from the lighting device 8971a. Alternatively, the robot 8970 transmits the turn-off command to the lighting device 8971a when that device no longer appears in the captured image, or when another lighting device appears in the image. On receiving the turn-off command from the robot 8970, the lighting device 8971a turns off accordingly.
Next, while moving and cleaning, the robot 8970 detects, based on its estimated self-position, that it has approached the lighting device 8971b. That is, the robot 8970 holds information indicating the position of the lighting device 8971b, and detects the approach when the distance between its self-position and the position of the lighting device 8971b falls to or below a predetermined distance. The robot 8970 then transmits a signal ordering the lighting device 8971b to turn on (a lighting command). On receiving the lighting command, the lighting device 8971b turns on accordingly.
In this way, the robot 8970 can light up only its own surroundings as it moves, making cleaning easier.
FIG. 22 is a diagram illustrating an application example of a transmitter and a receiver in Embodiment 2.
The lighting device 8974 functions as a transmitter according to the above embodiments. The lighting device 8974 illuminates, for example, a route bulletin board 8975 at a railway station while changing its luminance. A receiver 8973, pointed at the route bulletin board 8975 by the user, images the board. The receiver 8973 thereby acquires the ID of the route bulletin board 8975 and acquires the information associated with that ID, namely detailed information on each of the lines listed on the board. The receiver 8973 then displays a guide image 8973a showing that detailed information. For example, the guide image 8973a shows the distance to each line listed on the route bulletin board 8975, the direction toward that line, and the time the next train arrives on that line.
Here, when the guide image 8973a is touched by the user, the receiver 8973 displays a supplementary guide image 8973b. The supplementary guide image 8973b is an image for displaying, according to the user's selection, for example a train timetable, information on a line other than the one indicated by the guide image 8973a, or detailed information about the station.
(Embodiment 3)
An application example of synchronized audio playback is described below.
FIG. 23 is a diagram illustrating an example of an application in Embodiment 3.
A receiver 1800a configured, for example, as a smartphone receives a signal (visible light signal) transmitted from a transmitter 1800b configured, for example, as street digital signage. That is, the receiver 1800a receives the timing of image playback on the transmitter 1800b. The receiver 1800a plays back audio at the same timing as that image playback. In other words, the receiver 1800a plays the audio synchronously so that it stays in sync with the images played by the transmitter 1800b. The receiver 1800a may also play, together with the audio, the same image as the one played by the transmitter 1800b (the playback image) or a related image. The receiver 1800a may instead have a device connected to it play the audio and the like. After receiving the visible light signal, the receiver 1800a may download from a server content such as the audio or related images associated with that visible light signal, and perform the synchronized playback after the download.
Thus, even when the audio from the transmitter 1800b cannot be heard, or when it is not being played because street audio playback is prohibited, the user can hear audio matched to the display of the transmitter 1800b. The same holds when the display is far enough away that the sound would take noticeable time to arrive.
Multilingual support through synchronized audio playback is described below.
FIG. 24 is a diagram illustrating an example of an application in Embodiment 3.
Each of the receivers 1800a and 1800c acquires from the server, and plays back, audio in the language set on that receiver, corresponding to video such as a movie displayed on the transmitter 1800d. Specifically, the transmitter 1800d transmits to the receiver a visible light signal indicating an ID that identifies the displayed video. On receiving the visible light signal, the receiver transmits to the server a request signal including the ID indicated by the visible light signal and the language set on the receiver. The receiver acquires the audio corresponding to the request signal from the server and plays it back. The user can thus enjoy the work displayed on the transmitter 1800d in the language the user has set.
The audio synchronization method is described below.
FIGS. 25 and 26 are diagrams showing examples of transmission signals and of audio synchronization methods in Embodiment 3.
Mutually different pieces of data (for example, Data 1 to 6 shown in FIG. 25) are each associated with a time at fixed intervals (every N seconds). The data may be, for example, IDs for identifying times, times themselves, or audio data (for example, 64 Kbps data). The description below assumes the data are IDs. Different IDs may differ only in the additional information attached to the ID.
It is desirable for the packets constituting the IDs to differ from one another, and hence for the IDs not to be consecutive. Alternatively, when packetizing an ID, a packetization method that places its non-consecutive portions in a single packet is desirable. Since error correction signals tend to form different patterns even for consecutive IDs, the error correction signal may be distributed over multiple packets rather than gathered into a single packet.
The transmitter 1800d transmits the ID in accordance with, for example, the playback time of the image being displayed. By detecting the timing at which the ID changes, the receiver can recognize the image playback time (synchronization time) of the transmitter 1800d.
In case (a), the transition point from ID:1 to ID:2 is received, so the synchronization time can be recognized exactly.
When the time N over which each ID is transmitted is long, such opportunities are rare, and the IDs may be received as in (b). Even in this case, the synchronization time can be recognized by the following methods.
(b1) The midpoint of the reception interval in which the ID changed is taken as the ID change point. A time an integer multiple of N after a previously estimated ID change point is also treated as an ID change point, and the midpoint of several such ID change points is taken as a more accurate ID change point. With this estimation algorithm, the ID change point can be estimated with gradually increasing accuracy.
(b2) In addition, by inferring that no ID change point lies within any reception interval in which the ID did not change, nor at times an integer multiple of N after such an interval, the span of candidate ID change points gradually shrinks, and an accurate ID change point can be estimated.
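A minimal sketch of the (b1) midpoint-averaging estimate follows; it assumes the change points lie away from the period boundary, so a plain average of the folded midpoints is valid.

```python
def estimate_change_point(change_intervals, period_n):
    """Each (start, end) pair is a reception interval in which the ID
    was observed to change. Folding the interval midpoints onto one
    period of length N and averaging them yields a progressively more
    accurate ID change point (expressed modulo N)."""
    midpoints = [((start + end) / 2.0) % period_n
                 for start, end in change_intervals]
    # NOTE: a plain average is only valid away from the wrap-around
    # boundary of the period; a circular mean would be needed otherwise.
    return sum(midpoints) / len(midpoints)

# Example: three noisy observations of a change point near phase 1.05 s
# in a period of N = 5 s.
# print(estimate_change_point([(0.8, 1.3), (5.9, 6.2), (10.7, 11.4)], 5.0))
```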
Setting N to 0.5 seconds or less allows accurate synchronization.
Setting N to 2 seconds or less allows synchronization without the user perceiving a delay.
Setting N to 10 seconds or less allows synchronization while limiting the consumption of IDs.
FIG. 26 is a diagram showing an example of a transmission signal in Embodiment 3.
In FIG. 26, synchronization is performed using time packets, which avoids consuming IDs. A time packet is a packet carrying the time at which it was transmitted. When a long time span must be expressed, the time is split into a time packet 1 representing the fine time and a time packet 2 representing the coarse time. For example, time packet 2 indicates the hour and minute of the time, and time packet 1 indicates only the second. A packet indicating the time may also be divided into three or more time packets. Because the coarse time is needed less often, the receiver can recognize the synchronization time quickly and accurately if fine time packets are transmitted more frequently than coarse time packets.
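A minimal sketch of the coarse/fine split, with an assumed bit layout that is not part of this disclosure:

```python
def split_time(hour: int, minute: int, second: int):
    """Coarse time (hour, minute) goes into time packet 2; fine time
    (second) into time packet 1, which would be sent more often."""
    packet2 = (hour << 6) | minute     # coarse: changes once a minute
    packet1 = second                   # fine: changes every second
    return packet1, packet2

def join_time(packet1: int, packet2: int):
    """Reassemble (hour, minute, second) on the receiver side."""
    return packet2 >> 6, packet2 & 0x3F, packet1
```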
That is, in the present embodiment, the visible light signal indicates the time at which it is transmitted from the transmitter 1800d by including second information (time packet 2) indicating the hour and minute of the time and first information (time packet 1) indicating the second of the time. The receiver 1800a receives the second information, and receives the first information more times than it receives the second information.
Synchronization time adjustment is described below.
FIG. 27 is a diagram showing an example of the processing flow of the receiver 1800a in Embodiment 3.
A certain amount of time elapses between the transmission of the signal, its processing by the receiver 1800a, and the playback of the audio or video; by allowing for this processing time when starting playback, accurate synchronized playback can be achieved.
First, a processing delay time is specified for the receiver 1800a (step S1801). It may be held in the processing program or specified by the user. Having the user calibrate it enables more accurate synchronization tuned to the individual receiver. Synchronization can be made more accurate still by varying this processing delay time per receiver model and according to the receiver's temperature and CPU utilization.
The receiver 1800a determines whether it has received a time packet, or an ID associated for audio synchronization (step S1802). If it determines that one has been received (Y in step S1802), it further determines whether there are images awaiting processing (step S1804). If it determines that there are (Y in step S1804), the receiver 1800a discards those pending images, or defers their processing, and performs reception processing starting from the most recently acquired image (step S1805). This avoids unpredictable delays caused by the processing backlog.
The receiver 1800a measures where in the image the visible light signal (specifically, the bright lines) appears (step S1806). That is, by measuring at which position, in the direction perpendicular to the exposure lines and counted from the first exposure line of the image sensor, the signal appears, the time difference between the start of image acquisition and the reception of the signal (the in-image delay time) can be calculated.
The receiver 1800a can perform accurate synchronized playback by playing the audio or video at the recognized synchronization time plus the processing delay time and the in-image delay time (step S1807).
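The timing computation of steps S1806 and S1807 reduces to a one-line formula; the sketch below assumes the exposure lines are read out at a uniform rate across the frame period.

```python
def playback_position(sync_time, processing_delay,
                      signal_line, total_lines, frame_period):
    """The in-image delay is the fraction of the frame period that
    elapsed before the exposure line carrying the signal was read out;
    playback starts at the sync time plus both delays."""
    in_image_delay = (signal_line / total_lines) * frame_period
    return sync_time + processing_delay + in_image_delay

# Example: sync time 12.000 s, 15 ms processing delay, signal found at
# line 540 of 1080 in a 33 ms frame -> start playback at about 12.0315 s.
# print(playback_position(12.000, 0.015, 540, 1080, 0.033))
```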
If, on the other hand, the receiver 1800a determines in step S1802 that it has received neither a time packet nor an audio synchronization ID, it receives the signal from the image obtained by imaging (step S1803).
FIG. 28 is a diagram showing an example of the user interface of the receiver 1800a in Embodiment 3.
As shown in (a) of FIG. 28, the user can adjust the processing delay time described above by pressing any of the buttons Bt1 to Bt4 displayed on the receiver 1800a. The processing delay time may also be set by a swipe gesture, as shown in (b) of FIG. 28. This allows more accurate synchronized playback based on the user's own perception.
Earphone-only playback is described below.
FIG. 29 is a diagram showing an example of the processing flow of the receiver 1800a in Embodiment 3.
The earphone-only playback shown in this processing flow allows audio to be played without disturbing the people nearby.
The receiver 1800a checks whether an earphone-only setting is in effect (step S1811). The earphone-only setting may be made on the receiver 1800a itself, may be contained in the received signal (visible light signal), or may be recorded on the server or the receiver 1800a in association with the received signal.
If the receiver 1800a confirms that playback is earphone-only (Y in step S1811), it determines whether earphones are connected to the receiver 1800a (step S1813).
If the receiver 1800a confirms that playback is not earphone-only (N in step S1811), or determines that earphones are connected (Y in step S1813), it plays the audio (step S1812). When playing the audio, the receiver 1800a adjusts the volume so that it falls within a set range. This range is set in the same way as the earphone-only setting.
If the receiver 1800a determines that no earphones are connected (N in step S1813), it issues a notification prompting the user to connect earphones (step S1814). The notification is given, for example, by screen display, audio output, or vibration.
Unless a setting prohibits forced audio playback, the receiver 1800a also provides an interface for forced playback and determines whether the user has performed a forced playback operation (step S1815). If it determines that the user has (Y in step S1815), the receiver 1800a plays the audio even though no earphones are connected (step S1812).
If, on the other hand, it determines that no forced playback operation has been performed (N in step S1815), the receiver 1800a retains the audio data it has already received and the analyzed synchronization time, so that synchronized audio playback can begin promptly when earphones are connected.
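The flow of FIG. 29 (steps S1811 to S1815) can be summarized in the following sketch; every attribute and method name on `receiver` is hypothetical and introduced only for illustration.

```python
def handle_audio(receiver, audio, sync_time):
    """Illustrative condensation of the earphone-only playback flow."""
    if receiver.earphone_only and not receiver.earphone_connected:
        receiver.notify("Connect earphones")     # screen, sound, or vibration
        if receiver.allow_forced_play and receiver.user_forced_play:
            receiver.play(audio, volume=receiver.clamped_volume())
        else:
            # Keep the received audio and the analyzed sync time so that
            # synchronized playback can start promptly once earphones
            # are connected.
            receiver.pending = (audio, sync_time)
    else:
        receiver.play(audio, volume=receiver.clamped_volume())
```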
FIG. 30 is a diagram showing another example of the processing flow of the receiver 1800a in Embodiment 3.
The receiver 1800a first receives an ID from the transmitter 1800d (step S1821). That is, the receiver 1800a receives a visible light signal indicating the ID of the transmitter 1800d or the ID of the content being displayed on the transmitter 1800d.
Next, the receiver 1800a downloads the information (content) associated with the received ID from the server (step S1822). Alternatively, the receiver 1800a reads that information from a data holding unit inside the receiver 1800a. This information is hereinafter referred to as related information.
Next, the receiver 1800a determines whether the synchronized playback flag included in the related information indicates ON (step S1823). If it determines that the flag does not indicate ON (N in step S1823), the receiver 1800a outputs the content indicated by the related information (step S1824). That is, if the content is an image, the receiver 1800a displays the image, and if the content is audio, the receiver 1800a outputs the audio.
If, on the other hand, the receiver 1800a determines that the synchronized playback flag indicates ON (Y in step S1823), it further determines whether the time adjustment mode included in the related information is set to the transmitter reference mode or to the absolute time mode (step S1825). If it determines that the absolute time mode is set, the receiver 1800a determines whether the last time adjustment was performed within a fixed time before the current time (step S1826). The time adjustment here is the process of obtaining time information by a predetermined method and using it to set the clock of the receiver 1800a to the absolute time of a reference clock. The predetermined method is, for example, a method using GPS (Global Positioning System) radio waves or NTP (Network Time Protocol) radio waves. The current time mentioned above may be the time at which the receiver 1800a, the terminal device, received the visible light signal.
If the receiver 1800a determines that the last time adjustment was performed within the fixed time (Y in step S1826), it outputs the related information based on the time of its own clock, thereby synchronizing the related information with the content displayed on the transmitter 1800d (step S1827). If the content indicated by the related information is, for example, a moving image, the receiver 1800a displays that moving image in synchronization with the content displayed on the transmitter 1800d. If the content indicated by the related information is, for example, audio, the receiver 1800a outputs that audio in synchronization with the content displayed on the transmitter 1800d. For example, when the related information represents audio, it includes the frames constituting the audio, each carrying a time stamp. The receiver 1800a outputs audio synchronized with the content of the transmitter 1800d by playing the frame whose time stamp corresponds to the time of its own clock.
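A minimal sketch of the time-stamp lookup described above, assuming the frames arrive as (timestamp, payload) pairs sorted by timestamp:

```python
def frame_at(frames, clock_time):
    """Return the payload of the most recent frame whose time stamp does
    not exceed the receiver's (synchronized) clock, keeping the output
    in step with the transmitter's content."""
    current = None
    for timestamp, payload in frames:
        if timestamp > clock_time:
            break
        current = payload
    return current
```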
 受信機1800aは、最後の時刻合わせが一定時間以内に行われていないと判定すると(ステップS1826のN)、所定の方法で時刻情報の入手を試み、その時刻情報を入手することができたか否かを判定する(ステップS1828)。ここで、時刻情報を入手することができたと判定すると(ステップS1828のY)、受信機1800aは、その時刻情報を用いて、受信機1800aの時計の時刻を更新する(ステップS1829)。そして、受信機1800aは、上述のステップS1827の処理を実行する。 If the receiver 1800a determines that the last time adjustment has not been performed within a certain time (N in step S1826), the receiver 1800a attempts to obtain the time information by a predetermined method, and whether or not the time information has been obtained. Is determined (step S1828). If it is determined that the time information has been obtained (Y in step S1828), the receiver 1800a updates the time of the clock of the receiver 1800a using the time information (step S1829). Then, the receiver 1800a executes the process of step S1827 described above.
 また、ステップS1825において、時刻合わせモードが送信機基準モードであると判定したとき、または、ステップS1828において、時刻情報を入手することができなかったと判定すると(ステップS1828のN)、受信機1800aは、送信機1800dから時刻情報を取得する(ステップS1830)。つまり、受信機1800aは、可視光通信によって同期信号である時刻情報を送信機1800dから取得する。例えば、同期信号は、図26に示す時間パケット1および時間パケット2である。または、受信機1800aは、Bluetooth(登録商標)またはWi-Fiなどの電波によって時刻情報を送信機1800dから取得する。そして、受信機1800aは、上述のステップS1829およびS1827の処理を実行する。 If it is determined in step S1825 that the time adjustment mode is the transmitter reference mode, or if it is determined in step S1828 that time information could not be obtained (N in step S1828), the receiver 1800a The time information is acquired from the transmitter 1800d (step S1830). That is, the receiver 1800a acquires time information that is a synchronization signal from the transmitter 1800d through visible light communication. For example, the synchronization signals are time packet 1 and time packet 2 shown in FIG. Alternatively, the receiver 1800a acquires time information from the transmitter 1800d by radio waves such as Bluetooth (registered trademark) or Wi-Fi. Then, the receiver 1800a executes the processes of steps S1829 and S1827 described above.
 本実施の形態では、ステップS1829,S1830のように、GPS電波またはNTP電波によって、受信機1800aである端末装置の時計と基準クロックとの間で同期をとるための処理(時刻合わせ)が行われた時刻が、端末装置が可視光信号を受信した時刻から所定の時間より前である場合、送信機1800dから送信された可視光信号が示す時刻により、端末装置の時計と、送信機の時計との間で同期をとる。これにより、端末装置は、送信機1800dで再生される送信機側コンテンツと同期するタイミングに、コンテンツ(動画または音声)を再生することができる。 In the present embodiment, as in steps S1829 and S1830, processing (time adjustment) is performed for synchronization between the clock of the terminal device that is the receiver 1800a and the reference clock by GPS radio waves or NTP radio waves. The time of the terminal device, the time of the terminal device, and the time of the transmitter according to the time indicated by the visible light signal transmitted from the transmitter 1800d. Synchronize between. Accordingly, the terminal device can reproduce the content (moving image or sound) at the timing synchronized with the transmitter-side content reproduced by the transmitter 1800d.
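The branching of steps S1823 to S1830 can be summarized in a short sketch. The following Python fragment is a minimal illustration only, not part of the disclosed embodiment; the 60-second threshold and the helpers try_get_reference_time and get_transmitter_time are assumptions standing in for the GPS/NTP query and the visible-light time packets.

```python
import time

MAX_CLOCK_AGE_S = 60.0  # assumed threshold for "within a certain time" (step S1826)

def try_get_reference_time():
    """Hypothetical stand-in for a GPS/NTP time query (step S1828); None on failure."""
    return time.time()  # placeholder: pretend the query succeeded

def get_transmitter_time():
    """Hypothetical stand-in for reading a time packet over visible light (step S1830)."""
    return time.time()

def synchronize_clock(time_mode, last_adjustment, set_clock):
    """Decide how to align the receiver clock (steps S1825 to S1830)."""
    if time_mode == "absolute":
        if time.time() - last_adjustment <= MAX_CLOCK_AGE_S:
            return  # last adjustment is recent enough (S1826: Y); clock is usable
        ref = try_get_reference_time()
        if ref is not None:
            set_clock(ref)  # S1829: update the receiver clock
            return
    # Transmitter reference mode, or reference time unavailable (S1828: N).
    set_clock(get_transmitter_time())  # S1830, then S1829
```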
FIG. 31A is a diagram for describing specific methods of synchronized playback in the third embodiment. The methods of synchronized playback are methods a to e shown in FIG. 31A.
(Method a)
In method a, the transmitter 1800d outputs a visible light signal indicating a content ID and a content playback time by changing the luminance of its display, as in the embodiments described above. The content playback time is the playback time of the data, a part of the content, being played back by the transmitter 1800d at the moment the content ID is transmitted. If the content is a moving image, the data is, for example, a picture or a sequence making up that moving image; if the content is audio, the data is, for example, a frame making up that audio. The playback time indicates, for example, the elapsed time from the beginning of the content, expressed as a time. If the content is a moving image, the playback time is included in the content as a PTS (Presentation Time Stamp). That is, the content includes, for each piece of data making up the content, the playback time (display time) of that data.
The receiver 1800a receives the visible light signal by capturing the transmitter 1800d as in the embodiments described above. The receiver 1800a then transmits a request signal including the content ID indicated by the visible light signal to the server 1800f. The server 1800f receives the request signal and transmits the content associated with the content ID included in the request signal to the receiver 1800a.
Upon receiving the content, the receiver 1800a plays it back from the position (content playback time + elapsed time since ID reception). The elapsed time since ID reception is the time that has elapsed since the content ID was received by the receiver 1800a.
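The resume-position arithmetic of method a is simple enough to state directly. A minimal sketch, with illustrative variable names that are not part of the embodiment:

```python
import time

def start_position_method_a(content_playback_time, id_received_at):
    """Method a: resume position = content playback time + elapsed time since ID reception."""
    return content_playback_time + (time.time() - id_received_at)

# Example: the transmitter reported 120.0 s into the content and the ID arrived
# 2.5 s ago, so the receiver starts playback from roughly 122.5 s.
print(start_position_method_a(120.0, time.time() - 2.5))
```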
(Method b)
In method b, the transmitter 1800d outputs a visible light signal indicating a content ID and a content playback time by changing the luminance of its display, as in the embodiments described above. The receiver 1800a receives the visible light signal by capturing the transmitter 1800d as in the embodiments described above. The receiver 1800a then transmits a request signal including the content ID indicated by the visible light signal and the content playback time to the server 1800f. The server 1800f receives the request signal and, of the content associated with the content ID included in the request signal, transmits only the portion after the content playback time to the receiver 1800a.
Upon receiving that partial content, the receiver 1800a plays it back from the position (elapsed time since ID reception).
(Method c)
In method c, the transmitter 1800d outputs a visible light signal indicating a transmitter ID and a content playback time by changing the luminance of its display, as in the embodiments described above. The transmitter ID is information for identifying the transmitter.
The receiver 1800a receives the visible light signal by capturing the transmitter 1800d as in the embodiments described above. The receiver 1800a then transmits a request signal including the transmitter ID indicated by the visible light signal to the server 1800f.
For each transmitter ID, the server 1800f holds a playback schedule, which is a timetable of the content played back by the transmitter with that transmitter ID. The server 1800f also has a clock. Upon receiving the request signal, the server 1800f identifies, from the playback schedule, the content associated with the transmitter ID included in the request signal and with the time of the server's clock (server time) as the content currently being played back. The server 1800f then transmits that content to the receiver 1800a.
Upon receiving the content, the receiver 1800a plays it back from the position (content playback time + elapsed time since ID reception).
(Method d)
In method d, the transmitter 1800d outputs a visible light signal indicating a transmitter ID and a transmitter time by changing the luminance of its display, as in the embodiments described above. The transmitter time is the time indicated by a clock provided in the transmitter 1800d.
The receiver 1800a receives the visible light signal by capturing the transmitter 1800d as in the embodiments described above. The receiver 1800a then transmits a request signal including the transmitter ID and the transmitter time indicated by the visible light signal to the server 1800f.
The server 1800f holds the playback schedule described above. Upon receiving the request signal, the server 1800f identifies, from the playback schedule, the content associated with the transmitter ID and the transmitter time included in the request signal as the content currently being played back. Furthermore, the server 1800f determines the content playback time from the transmitter time. That is, the server 1800f looks up the playback start time of the identified content in the playback schedule and determines the interval between the playback start time and the transmitter time as the content playback time. The server 1800f then transmits the content and the content playback time to the receiver 1800a.
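The server-side lookup in methods c and d amounts to a search of the playback schedule for the entry active at the given time. A minimal sketch, assuming the schedule is represented as (start time, content) pairs sorted by start time; this data layout is an assumption made for illustration:

```python
from bisect import bisect_right

# Hypothetical playback schedule for one transmitter ID: (start_time_s, content), sorted.
SCHEDULE = [(0.0, "opening"), (300.0, "parade_theme"), (900.0, "finale")]

def lookup(schedule, query_time):
    """Return (content, content_playback_time) for the entry active at query_time."""
    starts = [start for start, _ in schedule]
    i = bisect_right(starts, query_time) - 1
    if i < 0:
        return None  # nothing scheduled yet at query_time
    start, content = schedule[i]
    return content, query_time - start  # method d: transmitter time - playback start time

# Method d example: transmitter time 315 s falls in "parade_theme", 15 s after it began.
print(lookup(SCHEDULE, 315.0))  # -> ('parade_theme', 15.0)
```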
Upon receiving the content and the content playback time, the receiver 1800a plays the content back from the position (content playback time + elapsed time since ID reception).
Thus, in the present embodiment, the visible light signal indicates the time at which that visible light signal is transmitted from the transmitter 1800d. The receiver 1800a, which is a terminal device, can therefore receive the content associated with the time at which the visible light signal was transmitted (the transmitter time). For example, if the transmitter time is 5:43, the receiver can receive the content being played back at 5:43.
In the present embodiment, the server 1800f holds a plurality of contents, each associated with a time. However, content associated with the exact time indicated by the visible light signal may not exist on the server 1800f. In such a case, the receiver 1800a, as the terminal device, may receive, from among the plurality of contents, the content associated with the time that is closest to, and later than, the time indicated by the visible light signal. This makes it possible to receive appropriate content from among the plurality of contents on the server 1800f even when no content is associated with the exact time indicated by the visible light signal.
The playback method in the present embodiment thus includes: a signal receiving step of receiving, with a sensor of the receiver 1800a (terminal device), a visible light signal from the transmitter 1800d, which transmits the visible light signal by changing the luminance of a light source; a transmitting step in which the receiver 1800a transmits to the server 1800f a request signal for requesting the content associated with the visible light signal; a content receiving step in which the receiver 1800a receives the content from the server 1800f; and a playback step of playing back the content. The visible light signal indicates a transmitter ID and a transmitter time. The transmitter ID is ID information. The transmitter time is the time indicated by the clock of the transmitter 1800d, that is, the time at which the visible light signal is transmitted from the transmitter 1800d. In the content receiving step, the receiver 1800a receives the content associated with the transmitter ID and the transmitter time indicated by the visible light signal. The receiver 1800a can thereby play back content appropriate to the transmitter ID and the transmitter time.
(Method e)
In method e, the transmitter 1800d outputs a visible light signal indicating a transmitter ID by changing the luminance of its display, as in the embodiments described above.
The receiver 1800a receives the visible light signal by capturing the transmitter 1800d as in the embodiments described above. The receiver 1800a then transmits a request signal including the transmitter ID indicated by the visible light signal to the server 1800f.
The server 1800f holds the playback schedule described above and also has a clock. Upon receiving the request signal, the server 1800f identifies, from the playback schedule, the content associated with the transmitter ID included in the request signal and with the server time as the content currently being played back. The server time is the time indicated by the clock of the server 1800f. The server 1800f also looks up the playback start time of the identified content in the playback schedule. The server 1800f then transmits the content and the content playback start time to the receiver 1800a.
Upon receiving the content and the content playback start time, the receiver 1800a plays the content back from the position (receiver time - content playback start time). The receiver time is the time indicated by a clock provided in the receiver 1800a.
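The corresponding receiver-side arithmetic of method e can be sketched as follows; the names are illustrative only:

```python
def start_position_method_e(receiver_time, content_start_time):
    """Method e: resume position = receiver time - content playback start time."""
    return receiver_time - content_start_time

# Example: the schedule says playback began at t = 1000 s and the receiver clock
# reads 1012 s, so the receiver seeks 12 s into the received content.
print(start_position_method_e(1012.0, 1000.0))  # -> 12.0
```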
As described above, the playback method in the present embodiment includes: a signal receiving step of receiving, with a sensor of the receiver 1800a (terminal device), a visible light signal from the transmitter 1800d, which transmits the visible light signal by changing the luminance of a light source; a transmitting step in which the receiver 1800a transmits to the server 1800f a request signal for requesting the content associated with the visible light signal; a content receiving step in which the receiver 1800a receives, from the server 1800f, content including times and the data to be played back at each of those times; and a playback step of playing back, from that content, the data corresponding to the time of the clock provided in the receiver 1800a. The receiver 1800a can therefore play back the data in the content appropriately at the correct time indicated by the content, without playing it back at the wrong time. Moreover, if content related to that content (transmitter-side content) is being played back on the transmitter 1800d, the receiver 1800a can play back the content appropriately synchronized with that transmitter-side content.
Note that in methods c to e as well, the server 1800f may, as in method b, transmit to the receiver 1800a only the portion of the content after the content playback time.
In methods a to e above, the receiver 1800a transmits a request signal to the server 1800f and receives the necessary data from the server 1800f; however, instead of performing such transmission and reception, the receiver 1800a may hold the data stored on the server 1800f in advance.
FIG. 31B is a block diagram showing the configuration of a playback device that performs synchronized playback by method e described above.
The playback device B10 is the receiver 1800a or a terminal device that performs synchronized playback by method e described above, and includes a sensor B11, a request signal transmitting unit B12, a content receiving unit B13, a clock B14, and a playback unit B15.
The sensor B11 is, for example, an image sensor, and receives a visible light signal from the transmitter 1800d, which transmits the visible light signal by changing the luminance of a light source. The request signal transmitting unit B12 transmits to the server 1800f a request signal for requesting the content associated with the visible light signal. The content receiving unit B13 receives, from the server 1800f, content including times and the data to be played back at each of those times. The playback unit B15 plays back, from that content, the data corresponding to the time of the clock B14.
FIG. 31C is a flowchart showing the processing operation of a terminal device that performs synchronized playback by method e described above.
The playback device B10 is the receiver 1800a or a terminal device that performs synchronized playback by method e described above, and executes the processes of steps SB11 to SB15.
In step SB11, the playback device B10 receives a visible light signal from the transmitter 1800d, which transmits the visible light signal by changing the luminance of a light source. In step SB12, it transmits to the server 1800f a request signal for requesting the content associated with the visible light signal. In step SB13, it receives, from the server 1800f, content including times and the data to be played back at each of those times. In step SB15, it plays back, from that content, the data corresponding to the time of the clock B14.
As described above, with the playback device B10 and the playback method in the present embodiment, the data in the content can be played back appropriately at the correct time indicated by the content, without being played back at the wrong time.
In the present embodiment, each component may be configured by dedicated hardware or may be realized by executing a software program suited to that component. Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory. Here, the software that realizes the playback device B10 and the like of the present embodiment is a program that causes a computer to execute the steps included in the flowchart shown in FIG. 31C.
FIG. 32 is a diagram for describing advance preparations for synchronized playback in the third embodiment.
To perform synchronized playback, the receiver 1800a performs a time adjustment that sets the time of its own clock to the time of the reference clock. For this time adjustment, the receiver 1800a performs the following processes (1) to (5).
(1) The receiver 1800a receives a signal. This signal may be a visible light signal transmitted by a luminance change of the display of the transmitter 1800d, or a radio signal based on Wi-Fi or Bluetooth (registered trademark) from a wireless device. Alternatively, instead of receiving such a signal, the receiver 1800a may acquire position information indicating its own position by, for example, GPS. The receiver 1800a then recognizes from that position information that it has entered a predetermined place or building.
(2) Upon receiving the signal, or upon recognizing that it has entered the predetermined place, the receiver 1800a transmits to the server (visible light ID resolution server) 1800f a request signal for requesting the data (related information) associated with that signal or place.
(3) The server 1800f transmits to the receiver 1800a the above data together with a time-adjustment request for causing the receiver 1800a to adjust its time.
(4) Upon receiving the data and the time-adjustment request, the receiver 1800a transmits the time-adjustment request to a GPS time server, an NTP server, or a base station of a telecommunications carrier.
(5) Upon receiving the time-adjustment request, the server or base station transmits time data (time information) indicating the current time (the time of the reference clock, or the absolute time) to the receiver 1800a. The receiver 1800a performs the time adjustment by setting the time of its own clock to the current time indicated by the time data.
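Step (5) is an ordinary clock adjustment against an external time source. A minimal sketch using the third-party ntplib package; the choice of library and server host are assumptions, since the embodiment only requires some GPS or NTP time source:

```python
import ntplib  # third-party package (pip install ntplib); its use here is an assumption

def clock_offset_via_ntp(host="pool.ntp.org"):
    """Steps (4) and (5): query an NTP server and return the offset, in seconds,
    to add to the local clock (positive means the local clock is behind)."""
    response = ntplib.NTPClient().request(host, version=3)
    return response.offset

# The receiver would apply this offset to its clock before synchronized playback.
```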
Thus, in the present embodiment, the clock provided in the receiver 1800a (terminal device) and the reference clock are synchronized by GPS (Global Positioning System) radio waves or NTP (Network Time Protocol) radio waves. The receiver 1800a can therefore play back the data corresponding to a given time at the appropriate time according to the reference clock.
FIG. 33 is a diagram showing an application example of the receiver 1800a in the third embodiment.
The receiver 1800a is configured as a smartphone as described above and is used while held in a holder 1810 made of, for example, a translucent member such as resin or glass. The holder 1810 has a back plate portion 1810a and a locking portion 1810b standing upright on the back plate portion 1810a. The receiver 1800a is inserted between the back plate portion 1810a and the locking portion 1810b so as to lie along the back plate portion 1810a.
FIG. 34A is a front view of the receiver 1800a held in the holder 1810 in the third embodiment.
The receiver 1800a is held in the holder 1810 in the inserted state described above. At this time, the locking portion 1810b engages the lower part of the receiver 1800a and clamps that lower part against the back plate portion 1810a. The back surface of the receiver 1800a faces the back plate portion 1810a, and the display 1801 of the receiver 1800a remains exposed.
FIG. 34B is a rear view of the receiver 1800a held in the holder 1810 in the third embodiment.
A through hole 1811 is formed in the back plate portion 1810a, and a variable filter 1812 is attached near the through hole 1811. When the receiver 1800a is held in the holder 1810, the camera 1802 of the receiver 1800a is exposed through the through hole 1811 in the back plate portion 1810a, and the flashlight 1803 of the receiver 1800a faces the variable filter 1812.
The variable filter 1812 is formed, for example, in a disk shape and has three fan-shaped color filters of equal size (a red filter, a yellow filter, and a green filter). The variable filter 1812 is attached to the back plate portion 1810a so as to be rotatable about its center. The red filter is a filter that transmits red light, the yellow filter is a filter that transmits yellow light, and the green filter is a filter that transmits green light.
Accordingly, when the variable filter 1812 is rotated so that, for example, the red filter is positioned facing the flashlight 1803, the light emitted from the flashlight 1803 passes through the red filter and diffuses inside the holder 1810 as red light. As a result, substantially the entire holder 1810 glows red.
Similarly, when the variable filter 1812 is rotated so that the yellow filter is positioned facing the flashlight 1803, the light emitted from the flashlight 1803 passes through the yellow filter and diffuses inside the holder 1810 as yellow light. As a result, substantially the entire holder 1810 glows yellow.
Similarly, when the variable filter 1812 is rotated so that the green filter is positioned facing the flashlight 1803, the light emitted from the flashlight 1803 passes through the green filter and diffuses inside the holder 1810 as green light. As a result, substantially the entire holder 1810 glows green.
In other words, the holder 1810 lights up in red, yellow, or green like a penlight.
FIG. 35 is a diagram for describing a use case of the receiver 1800a held in the holder 1810 in the third embodiment.
For example, the receiver with holder, that is, the receiver 1800a held in the holder 1810, is used in an amusement park or the like. A number of receivers with holders pointed at a float moving through the amusement park blink in synchronization with the music playing from that float. Specifically, the float is configured as the transmitter in each of the above embodiments and transmits a visible light signal by a luminance change of a light source attached to the float. For example, the float transmits a visible light signal indicating the ID of the float. The receiver with holder then receives the visible light signal, that is, the ID, by capturing it with the camera 1802 of the receiver 1800a, as in the embodiments described above. The receiver 1800a that has received the ID acquires the program associated with that ID from, for example, a server. This program consists of instructions for turning on the flashlight 1803 of the receiver 1800a at predetermined times. These predetermined times are set to match (synchronize with) the music playing from the float. The receiver 1800a then blinks the flashlight 1803 according to the program.
As a result, the holders 1810 of all the receivers 1800a that received the ID repeatedly light up at the same timing, in time with the music playing from the float with that ID.
Here, each receiver 1800a blinks its flashlight 1803 according to the color filter that is set (hereinafter, the set filter). The set filter is the color filter facing the flashlight 1803 of the receiver 1800a. Each receiver 1800a recognizes the current set filter based on an operation by the user, or based on, for example, the color of an image obtained by capturing with the camera 1802.
That is, among the receivers 1800a that received the ID, at a given time only the holders 1810 of the receivers 1800a that recognize their set filter as the red filter light up simultaneously. At the next time, only the holders 1810 of the receivers 1800a that recognize their set filter as the green filter light up simultaneously. At the time after that, only the holders 1810 of the receivers 1800a that recognize their set filter as the yellow filter light up simultaneously.
In this way, a receiver 1800a held in a holder 1810 blinks its flashlight 1803, and hence the holder 1810, in synchronization with the music of the float and with the receivers 1800a held in other holders 1810, in the same manner as the synchronized playback shown in FIGS. 23 to 29 described above.
FIG. 36 is a flowchart showing the processing operation of the receiver 1800a held in the holder 1810 in the third embodiment.
The receiver 1800a receives the float ID indicated by the visible light signal from the float (step S1831). Next, the receiver 1800a acquires the program associated with that ID from the server (step S1832). Next, by executing the program, the receiver 1800a turns on the flashlight 1803 at the predetermined times corresponding to the set filter (step S1833).
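The program of steps S1832 and S1833 reduces to a table of lighting times per filter color. A minimal sketch in which the schedule, color names, and flash duration are invented for illustration:

```python
import time

# Hypothetical program fetched in step S1832: for each filter color, the times
# (in seconds from program start) at which the flashlight turns on.
PROGRAM = {"red": [0.0, 3.0, 6.0], "green": [1.0, 4.0, 7.0], "yellow": [2.0, 5.0, 8.0]}
FLASH_ON_S = 0.5  # assumed flash duration

def run_program(set_flashlight, color, t0):
    """Step S1833: blink the flashlight at the scheduled times for the set filter."""
    for t in PROGRAM[color]:
        time.sleep(max(0.0, t0 + t - time.time()))  # wait for the scheduled instant
        set_flashlight(True)
        time.sleep(FLASH_ON_S)
        set_flashlight(False)

# Usage: run_program(lambda on: print("ON" if on else "OFF"), "red", time.time())
```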
Here, the receiver 1800a may cause the display 1801 to display an image corresponding to the received ID or the acquired program.
FIG. 37 is a diagram showing an example of an image displayed by the receiver 1800a in the third embodiment.
For example, upon receiving an ID from a Santa Claus float, the receiver 1800a displays an image of Santa Claus, as shown in (a) of FIG. 37. Furthermore, as shown in (b) of FIG. 37, the receiver 1800a may change the background color of the Santa Claus image to the color of the set filter at the same moment the flashlight 1803 is turned on. For example, when the color of the set filter is red, turning on the flashlight 1803 lights the holder 1810 red and, at the same time, displays a Santa Claus image with a red background on the display 1801. That is, the blinking of the holder 1810 and the display on the display 1801 are synchronized.
FIG. 38 is a diagram showing another example of the holder in the third embodiment.
The holder 1820 is configured in the same manner as the holder 1810 described above, but has neither the through hole 1811 nor the variable filter 1812. Such a holder 1820 holds the receiver 1800a with the display 1801 of the receiver 1800a facing the back plate portion 1820a. In this case, the receiver 1800a causes the display 1801, instead of the flashlight 1803, to emit light, so that the light from the display 1801 diffuses over substantially the entire holder 1820. Accordingly, when the receiver 1800a causes the display 1801 to emit red light according to the program described above, the holder 1820 lights up red. Similarly, when the receiver 1800a causes the display 1801 to emit yellow light according to the program, the holder 1820 lights up yellow, and when it causes the display 1801 to emit green light, the holder 1820 lights up green. Using such a holder 1820 makes it possible to omit the setting of the variable filter 1812.
(Visible light signal)
FIGS. 39A to 39D are diagrams showing examples of the visible light signal in the third embodiment.
As described above, the transmitter generates, for example, a 4PPM visible light signal as shown in FIG. 39A and changes its luminance according to that signal. Specifically, the transmitter allocates four slots to one signal unit and generates a visible light signal made up of a plurality of such signal units. A signal unit indicates High (H) or Low (L) for each slot. The transmitter emits light brightly in an H slot and emits light dimly, or turns off, in an L slot. For example, one slot is a period corresponding to 1/9600 seconds.
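As a rough sketch of a 4PPM mapping consistent with the description above, the fragment below encodes two bits per four-slot signal unit by the position of a single Low slot, which also keeps the average brightness of every unit equal; the exact bit-to-slot mapping is an assumption, since the embodiment does not fix it:

```python
SLOT_S = 1 / 9600  # slot duration given in the embodiment

def encode_4ppm(bits):
    """Map each 2-bit symbol to a 4-slot unit whose single Low slot position
    carries the symbol (an assumed 4PPM convention, not fixed by the embodiment)."""
    assert len(bits) % 2 == 0, "4PPM carries two bits per signal unit"
    slots = []
    for i in range(0, len(bits), 2):
        symbol = (bits[i] << 1) | bits[i + 1]  # 0..3
        unit = ["H"] * 4
        unit[symbol] = "L"  # dark slot position encodes the two bits
        slots.extend(unit)
    return slots  # drive the light source: bright on "H", dim or off on "L"

units = encode_4ppm([1, 0, 0, 1])
print(units)                # -> ['H', 'H', 'L', 'H', 'H', 'L', 'H', 'H']
print(len(units) * SLOT_S)  # total transmission time in seconds
```

Because every unit has exactly one Low slot out of four, the average brightness stays constant across symbols, which ties into the flicker suppression discussed for FIG. 40 below.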
As shown in FIG. 39B, the transmitter may also generate a visible light signal in which the number of slots allocated to one signal unit is variable. In this case, a signal unit consists of a signal indicating H in one or more consecutive slots and a signal indicating L in the single slot that follows. Because the number of H slots is variable, the total number of slots in a signal unit is variable. For example, as shown in FIG. 39B, the transmitter generates a visible light signal containing a 3-slot signal unit, a 4-slot signal unit, and a 6-slot signal unit in that order. In this case as well, the transmitter emits light brightly in H slots and emits light dimly, or turns off, in L slots.
As shown in FIG. 39C, the transmitter may also allocate an arbitrary period (signal unit period) to one signal unit, rather than allocating a fixed number of slots. The signal unit period consists of an H period and the L period that follows it. The H period is adjusted according to the signal before modulation. The L period may be fixed, for example to a period corresponding to one slot. The H period and the L period are each, for example, 100 μs or longer. For example, as shown in FIG. 39C, the transmitter transmits a visible light signal containing a signal unit with a signal unit period of 210 μs, a signal unit with a signal unit period of 220 μs, and a signal unit with a signal unit period of 230 μs, in that order. In this case as well, the transmitter emits light brightly during H periods and emits light dimly, or turns off, during L periods.
As shown in FIG. 39D, the transmitter may also generate, as the visible light signal, a signal that alternates between L and H. In this case, each L period and each H period in the visible light signal is adjusted according to the signal before modulation. For example, as shown in FIG. 39D, the transmitter transmits a visible light signal that indicates H for a period of 100 μs, then L for a period of 120 μs, then H for a period of 110 μs, and then L for a period of 200 μs. In this case as well, the transmitter emits light brightly during H periods and emits light dimly, or turns off, during L periods.
FIG. 40 is a diagram showing a configuration of the visible light signal in the third embodiment.
The visible light signal includes, for example, signal 1, a brightness adjustment signal corresponding to signal 1, signal 2, and a brightness adjustment signal corresponding to signal 2. After generating signal 1 and signal 2 by modulating the signals before modulation, the transmitter generates the brightness adjustment signals for those signals and thereby produces the visible light signal described above.
The brightness adjustment signal corresponding to signal 1 compensates for the increase or decrease in brightness caused by the luminance change according to signal 1. The brightness adjustment signal corresponding to signal 2 compensates for the increase or decrease in brightness caused by the luminance change according to signal 2. Here, the luminance change according to signal 1 and its brightness adjustment signal produces a brightness B1, and the luminance change according to signal 2 and its brightness adjustment signal produces a brightness B2. The transmitter in the present embodiment generates the brightness adjustment signals of signal 1 and signal 2 as part of the visible light signal such that the brightness B1 and the brightness B2 are equal. The brightness is thereby kept constant, and flicker can be suppressed.
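The brightness adjustment can be sketched as padding each signal with enough High time to bring its average duty to a common target, so that B1 equals B2 by construction. The 50% target and the millisecond durations below are assumptions made for illustration:

```python
TARGET_DUTY = 0.5  # assumed common brightness target for signal 1 and signal 2

def brightness_padding(high_s, low_s):
    """Return the extra High time (s) to append as a brightness adjustment signal
    so that the pair (signal, adjustment) reaches TARGET_DUTY average brightness."""
    total = high_s + low_s
    pad = (TARGET_DUTY * total - high_s) / (1.0 - TARGET_DUTY)
    return max(0.0, pad)

# A signal with 2 ms High and 6 ms Low needs 4 ms of extra High time: the padded
# signal then spends 6 ms of 12 ms High, matching the 50% target brightness.
print(brightness_padding(0.002, 0.006))  # -> 0.004
```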
When generating signal 1 described above, the transmitter generates signal 1 containing data 1, a preamble (header) following that data 1, and data 1 again following the preamble. Here, the preamble is a signal corresponding to the data 1 placed before and after it; for example, it is a signal serving as an identifier for reading out data 1. Because signal 1 is composed of two copies of data 1 with the preamble placed between them, the receiver can correctly demodulate data 1 (that is, signal 1) even if it starts reading the visible light signal partway through the first copy of data 1.
A playback method according to one aspect of the present invention includes: a signal receiving step of receiving, with a sensor of a terminal device, a visible light signal from a transmitter that transmits the visible light signal by changing the luminance of a light source; a transmitting step of transmitting, from the terminal device to a server, a request signal for requesting content associated with the visible light signal; a content receiving step in which the terminal device receives, from the server, content including times and the data to be played back at each of those times; and a playback step of playing back, from that content, the data corresponding to the time of a clock provided in the terminal device.
Thereby, as shown in FIG. 31C, content including times and the data to be played back at each of those times is received by the terminal device, and the data corresponding to the time of the terminal device's clock is played back. The terminal device can therefore play back the data in the content appropriately at the correct time indicated by the content, without playing it back at the wrong time. Specifically, as in method e of FIG. 31A, the receiver serving as the terminal device plays back the content from the position (receiver time - content playback start time); the data corresponding to the time of the terminal device's clock is the data located at (receiver time - content playback start time) within the content. Moreover, if content related to that content (transmitter-side content) is being played back on the transmitter, the terminal device can play back the content appropriately synchronized with that transmitter-side content. Note that the content is audio or an image.
The clock provided in the terminal device and the reference clock may be synchronized by GPS (Global Positioning System) radio waves or NTP (Network Time Protocol) radio waves.
Thereby, as shown in FIGS. 30 and 32, because the clock of the terminal device (receiver) is synchronized with the reference clock, the data corresponding to a given time can be played back at the appropriate time according to the reference clock.
The visible light signal may indicate the time at which the visible light signal is transmitted from the transmitter.
Thereby, as shown in method d of FIG. 31A, the terminal device (receiver) can receive the content associated with the time at which the visible light signal was transmitted from the transmitter (the transmitter time). For example, if the transmitter time is 5:43, the content being played back at 5:43 can be received.
The playback method may further include synchronizing the clock of the terminal device with the clock of the transmitter using the time indicated by the visible light signal transmitted from the transmitter, when the time at which the processing for synchronizing the clock of the terminal device with the reference clock by the GPS radio waves or the NTP radio waves was performed is earlier, by more than a predetermined time, than the time at which the terminal device received the visible light signal.
For example, once a predetermined time has passed since the processing for synchronizing the terminal device's clock with the reference clock was performed, that synchronization may no longer be properly maintained, in which case the terminal device may be unable to play back content at a time synchronized with the transmitter-side content played back on the transmitter. Therefore, in the playback method according to this aspect of the present invention, as in steps S1829 and S1830 of FIG. 30, when the predetermined time has passed, the clock of the terminal device (receiver) is synchronized with the clock of the transmitter. The terminal device can therefore play back the content at a time synchronized with the transmitter-side content played back on the transmitter.
The server may hold a plurality of contents, each associated with a time, and in the content receiving step, when content associated with the time indicated by the visible light signal does not exist on the server, the terminal device may receive, from among the plurality of contents, the content associated with the time that is closest to, and later than, the time indicated by the visible light signal.
Thereby, as shown in method d of FIG. 31A, even when content associated with the exact time indicated by the visible light signal does not exist on the server, appropriate content can be received from among the plurality of contents on that server.
Alternatively, the method may include: a signal receiving step of receiving, with a sensor of a terminal device, a visible light signal from a transmitter that transmits the visible light signal by changing the luminance of a light source; a transmitting step of transmitting, from the terminal device to a server, a request signal for requesting content associated with the visible light signal; a content receiving step in which the terminal device receives the content from the server; and a playback step of playing back the content. The visible light signal indicates ID information and the time at which the visible light signal is transmitted from the transmitter, and in the content receiving step, the terminal device receives the content associated with the ID information and the time indicated by the visible light signal.
Thereby, as in method d of FIG. 31A, from among the plurality of contents associated with the ID information (transmitter ID), the content associated with the time at which the visible light signal was transmitted from the transmitter (the transmitter time) is received and played back. Content appropriate to that transmitter ID and transmitter time can therefore be played back.
The visible light signal may indicate the time at which the visible light signal is transmitted from the transmitter by including second information indicating the hour and minute of that time and first information indicating the second of that time, and in the signal receiving step, the second information may be received, and the first information may be received a greater number of times than the second information.
Thereby, for example, when notifying the terminal device, in units of seconds, of the time at which each packet included in the visible light signal is transmitted, the burden of transmitting, every second, a packet expressing the current time with all of the hour, minute, and second can be reduced. That is, as shown in FIG. 26, if the hour and minute of the time at which a packet is transmitted have not changed from the hour and minute indicated in the previously transmitted packet, it suffices to transmit only the first information, a packet indicating only the second (time packet 1). Therefore, by making the second information, the packet indicating the hour and minute (time packet 2), less frequent than the first information, the packet indicating the second (time packet 1), the transmission of packets containing redundant content can be suppressed.
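This packet economy can be sketched as follows: time packet 1 (seconds) is produced on every transmission, and time packet 2 (hour and minute) only when the hour or minute has changed since it was last sent. The packet names mirror FIG. 26, while the tuple layout is an assumption made for illustration:

```python
import datetime

def packets_to_send(now, last_sent_hm):
    """Return the time packets to transmit at `now`: time packet 1 (seconds) every
    time, time packet 2 (hour and minute) only when hour or minute has changed."""
    packets = [("time_packet_1", now.second)]
    hm = (now.hour, now.minute)
    if hm != last_sent_hm:
        packets.append(("time_packet_2", hm))
    return packets, hm

now = datetime.datetime(2019, 3, 29, 5, 43, 21)
print(packets_to_send(now, last_sent_hm=(5, 43)))  # seconds packet only
print(packets_to_send(now, last_sent_hm=(5, 42)))  # both packets: minute changed
```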
(Embodiment 4)
The present embodiment describes, among other things, a display method that realizes AR (Augmented Reality) using an optical ID.
FIG. 41 is a diagram showing an example in which the receiver in the present embodiment displays an AR image.
The receiver 200 according to the present embodiment is a receiver including the image sensor and the display 201 according to any one of Embodiments 1 to 3, and is configured, for example, as a smartphone. By capturing a subject with its image sensor, the receiver 200 acquires the captured display image Pa, which is the normal captured image described above, and the decoding image, which is the visible light communication image or bright line image described above.
Specifically, the image sensor of the receiver 200 images the transmitter 100, which is configured as a station name sign. The transmitter 100 is the transmitter according to any one of Embodiments 1 to 3 and includes one or more light emitting elements (for example, LEDs). The transmitter 100 changes in luminance by blinking the one or more light emitting elements, and transmits a light ID (light identification information) through the luminance change. This light ID is the visible light signal described above.
The receiver 200 acquires the captured display image Pa, in which the transmitter 100 appears, by imaging the transmitter 100 with the normal exposure time, and acquires the decoding image by imaging the transmitter 100 with a communication exposure time shorter than the normal exposure time. The normal exposure time is the exposure time in the normal imaging mode described above, and the communication exposure time is the exposure time in the visible light communication mode described above.
The receiver 200 acquires the light ID by decoding the decoding image. That is, the receiver 200 receives the light ID from the transmitter 100. The receiver 200 transmits the light ID to a server and acquires from the server the AR image P1 and the recognition information corresponding to that light ID. The receiver 200 recognizes, in the captured display image Pa, the area corresponding to the recognition information as a target area. For example, the receiver 200 recognizes the area in which the station name sign, that is, the transmitter 100, appears as the target area. The receiver 200 then superimposes the AR image P1 on the target area and displays the captured display image Pa, with the AR image P1 superimposed, on the display 201. For example, when the station name sign that is the transmitter 100 shows the station name in Japanese as 「京都駅」, the receiver 200 acquires the AR image P1 in which the station name is written in English, that is, "Kyoto Station". Since the AR image P1 is superimposed on the target area of the captured display image Pa, the captured display image Pa can be displayed as if a station name sign written in English actually existed. As a result, a user who understands English can easily grasp the station name on the sign that is the transmitter 100 by looking at the captured display image Pa, even without being able to read Japanese.
For example, the recognition information may be an image of the recognition target (for example, the image of the station name sign described above), or the feature points and feature amounts of that image. The feature points and feature amounts are obtained by image processing such as SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), or AKAZE (Accelerated KAZE). Alternatively, the recognition information may be a white rectangle image similar to the recognition target image, and may further indicate the aspect ratio of that rectangle. Alternatively, the recognition information may be random dots appearing in the recognition target image. Furthermore, the recognition information may indicate the orientation of the white rectangle, the random dots, or the like with respect to a predetermined direction, for example the direction of gravity.
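As a rough illustration of the feature-based matching mentioned above (a sketch only, not the embodiment's own procedure; the OpenCV usage, thresholds, and function names below are assumptions):

```python
# Sketch: locate a recognition target in a captured frame with ORB
# features (assumes OpenCV; the match-count threshold is illustrative).
import cv2

def find_target_region(reference_img, captured_img, min_matches=10):
    orb = cv2.ORB_create()
    kp_ref, des_ref = orb.detectAndCompute(reference_img, None)
    kp_cap, des_cap = orb.detectAndCompute(captured_img, None)
    if des_ref is None or des_cap is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_cap),
                     key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None  # recognition target not found in this frame
    # The matched keypoint coordinates in the captured frame give a
    # rough location of the target area (here, their bounding box).
    pts = [kp_cap[m.trainIdx].pt for m in matches[:min_matches]]
    xs, ys = zip(*pts)
    return (min(xs), min(ys), max(xs), max(ys))
```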
The receiver 200 recognizes, in the captured display image Pa, the area corresponding to such recognition information as the target area. Specifically, if the recognition information is an image, the receiver 200 recognizes an area similar to that image as the target area. If the recognition information is feature points and feature amounts obtained by image processing, the receiver 200 performs that image processing on the captured display image Pa to detect feature points and extract feature amounts. The receiver 200 then recognizes, as the target area, an area of the captured display image Pa whose feature points and feature amounts are similar to those given as the recognition information. If the recognition information indicates a white rectangle and its orientation, the receiver 200 first detects the direction of gravity with its built-in acceleration sensor, and then recognizes, as the target area, an area of the captured display image Pa, oriented with respect to the direction of gravity, that is similar to a white rectangle facing the direction indicated by the recognition information.
Here, the recognition information may include reference information for identifying a reference area in the captured display image Pa and target information indicating the relative position of the target area with respect to that reference area. The reference information is, as described above, a recognition target image, feature points and feature amounts, a white rectangle image, random dots, or the like. In this case, when recognizing the target area, the receiver 200 first identifies the reference area in the captured display image Pa based on the reference information, and then recognizes, as the target area, the area of the captured display image Pa located at the relative position indicated by the target information with respect to the position of the reference area. The target information may indicate that the target area is at the same position as the reference area. Because the recognition information includes both reference information and target information in this way, the target area can be recognized over a wide range, and the server can freely set the location where the AR image is superimposed and inform the receiver 200 of it.
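A minimal sketch of how such target information might be applied, assuming (as an illustration only; the embodiment does not specify an encoding) that the relative position is given as an offset and size in units of the reference rectangle:

```python
# Hypothetical encoding: target information gives an offset and size
# relative to the reference rectangle (x, y, w, h in pixels).
def resolve_target_area(reference_rect, target_info):
    rx, ry, rw, rh = reference_rect
    # Offsets in multiples of the reference size keep the same target
    # information valid at any imaging distance.
    tx = rx + target_info["dx"] * rw
    ty = ry + target_info["dy"] * rh
    tw = target_info["w"] * rw
    th = target_info["h"] * rh
    return (tx, ty, tw, th)

# "Target area at the same position as the reference area" is then:
same_as_reference = {"dx": 0.0, "dy": 0.0, "w": 1.0, "h": 1.0}
```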
The reference information may also indicate that the reference area in the captured display image Pa is the area in which a display screen appears in the captured display image. In this case, if the transmitter 100 is configured as a display such as a television, the target area can be recognized with reference to the area in which that display appears.
In other words, the receiver 200 according to the present embodiment identifies a reference image and an image recognition method based on the light ID. The image recognition method is a method of recognizing the captured display image Pa, for example geometric feature extraction, spectral feature extraction, or texture feature extraction. The reference image is data indicating reference feature amounts. The feature amounts are, for example, those of a white outer frame of an image, and may specifically be data expressing the features of the image as a vector. The receiver 200 extracts feature amounts from the captured display image Pa in accordance with the image recognition method and compares them with the feature amounts of the reference image, thereby finding the reference area or target area described above in the captured display image Pa.
Image recognition methods may include, for example, a location-based method, a marker-based method, and a markerless method. The location-based method uses GPS position information (that is, the position of the receiver 200), and the target area is recognized in the captured display image Pa based on that position information. The marker-based method uses a marker composed of black-and-white graphics, such as a two-dimensional barcode, as the mark for identifying the target; the target area is recognized based on the marker appearing in the captured display image Pa. The markerless method extracts feature points or feature amounts from the captured display image Pa by image analysis and identifies the position and area of the target based on the extracted feature points or feature amounts. That is, when the image recognition method is the markerless method, it is the geometric feature extraction, spectral feature extraction, or texture feature extraction described above.
Such a receiver 200 may identify the reference image and the image recognition method by receiving the light ID from the transmitter 100 and acquiring from the server the reference image and image recognition method associated with that light ID (hereinafter referred to as the received light ID). That is, the server stores a plurality of sets each including a reference image and an image recognition method, each set being associated with a different light ID. One set associated with the received light ID can thus be identified among the plurality of sets stored in the server, which improves the speed of the image processing for superimposing the AR image. The receiver 200 may acquire the reference image associated with the received light ID by querying the server, or may obtain it from a plurality of reference images that the receiver 200 itself holds in advance.
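On the server side, this association amounts to a keyed lookup; a minimal sketch follows (the data layout and names are assumptions, not part of the embodiment):

```python
# Hypothetical server-side table: each light ID maps to one set.
RECOGNITION_SETS = {
    "light_id_001": {
        "reference_image": "station_sign.png",
        "recognition_method": "markerless",
        "ar_image": "station_name_en.png",
        "relative_position": {"dx": 0.0, "dy": 0.0, "w": 1.0, "h": 1.0},
    },
    # ... one entry per light ID
}

def lookup_set(received_light_id):
    # A single keyed lookup replaces comparing the captured image
    # against every stored recognition target image.
    return RECOGNITION_SETS.get(received_light_id)
```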
The server may also hold, for each light ID, relative position information associated with that light ID, together with the reference image, the image recognition method, and the AR image. The relative position information is, for example, information indicating the relative positional relationship between the reference area and the target area described above. When the receiver 200 queries the server with the received light ID, it thus acquires the reference image, image recognition method, AR image, and relative position information associated with that light ID. In this case, the receiver 200 identifies the reference area in the captured display image Pa based on the reference image and the image recognition method, recognizes as the target area the area located at the direction and distance indicated by the relative position information from the position of the reference area, and superimposes the AR image on that target area. If there is no relative position information, the receiver 200 may recognize the reference area itself as the target area and superimpose the AR image on it. That is, instead of acquiring relative position information, the receiver 200 may hold in advance a program that displays the AR image based on the reference image, for example displaying the AR image inside a white frame serving as the reference area. In that case, relative position information is unnecessary.
There are the following four variations (1) to (4) for holding or acquiring the reference image, relative position information, AR image, and image recognition method.
(1) The server holds a plurality of sets each consisting of a reference image, relative position information, an AR image, and an image recognition method. The receiver 200 acquires, from among these sets, the one set associated with the received light ID.
(2) The server holds a plurality of sets each consisting of a reference image and an AR image. The receiver 200 uses predetermined relative position information and a predetermined image recognition method, and acquires from the server the one set associated with the received light ID. Alternatively, the receiver 200 may hold in advance a plurality of sets each consisting of relative position information and an image recognition method, and select from them the one set associated with the received light ID. In this case, the receiver 200 may query the server with the received light ID, acquire from the server information for identifying the relative position information and image recognition method corresponding to that light ID, and select one of its pre-held sets based on that information. Alternatively, the receiver 200 may select the one set associated with the received light ID from its pre-held sets without querying the server.
(3) The receiver 200 holds a plurality of sets each consisting of a reference image, relative position information, an AR image, and an image recognition method, and selects one set from among them. As in (2) above, the receiver 200 may select the set by querying the server, or may directly select the one set associated with the received light ID.
(4) The receiver 200 holds a plurality of sets each consisting of a reference image and an AR image, and selects the one set associated with the received light ID. The receiver 200 uses a predetermined image recognition method and predetermined relative position information.
FIG. 42 is a diagram showing an example of the display system according to the present embodiment.
The display system according to the present embodiment includes, for example, the transmitter 100 configured as the station name sign described above, the receiver 200, and a server 300.
To display the captured display image with the AR image superimposed as described above, the receiver 200 first receives the light ID from the transmitter 100 and then transmits that light ID to the server 300.
The server 300 holds, for each light ID, the AR image and recognition information associated with that light ID. When the server 300 receives a light ID from the receiver 200, it selects the AR image and recognition information associated with the received light ID and transmits them to the receiver 200. The receiver 200 receives the AR image and recognition information transmitted from the server 300 and displays the captured display image with the AR image superimposed.
FIG. 43 is a diagram showing another example of the display system according to the present embodiment.
The display system in this example includes, for example, the transmitter 100 configured as the station name sign described above, the receiver 200, a first server 301, and a second server 302.
To display the captured display image with the AR image superimposed as described above, the receiver 200 first receives the light ID from the transmitter 100 and then transmits that light ID to the first server 301.
When the first server 301 receives the light ID from the receiver 200, it notifies the receiver 200 of a URL (Uniform Resource Locator) and a Key associated with the received light ID. On receiving this notification, the receiver 200 accesses the second server 302 based on the URL and passes the Key to the second server 302.
The second server 302 holds, for each Key, the AR image and recognition information associated with that Key. When the second server 302 receives a Key from the receiver 200, it selects the AR image and recognition information associated with that Key and transmits them to the receiver 200. The receiver 200 receives the AR image and recognition information transmitted from the second server 302 and displays the captured display image with the AR image superimposed.
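A rough client-side sketch of this two-server exchange, assuming plain HTTP/JSON endpoints (the URL, field names, and use of the `requests` library are illustrative assumptions; the embodiment does not specify a protocol):

```python
# Illustrative client-side flow for FIG. 43.
import requests

def fetch_ar_assets(light_id):
    # Step 1: resolve the light ID at the first server.
    resp1 = requests.get("https://first-server.example/resolve",
                         params={"light_id": light_id})
    info = resp1.json()      # e.g. {"url": ..., "key": ...}

    # Step 2: present the Key to the second server named in the reply.
    resp2 = requests.get(info["url"], params={"key": info["key"]})
    assets = resp2.json()    # e.g. {"ar_image": ..., "recognition": ...}
    return assets["ar_image"], assets["recognition"]
```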
FIG. 44 is a diagram showing another example of the display system according to the present embodiment.
The display system in this example also includes, for example, the transmitter 100 configured as the station name sign described above, the receiver 200, the first server 301, and the second server 302.
To display the captured display image with the AR image superimposed as described above, the receiver 200 first receives the light ID from the transmitter 100 and then transmits that light ID to the first server 301.
When the first server 301 receives the light ID from the receiver 200, it notifies the second server 302 of the Key associated with the received light ID.
The second server 302 holds, for each Key, the AR image and recognition information associated with that Key. When the second server 302 receives the Key from the first server 301, it selects the AR image and recognition information associated with that Key and transmits them to the first server 301. When the first server 301 receives the AR image and recognition information from the second server 302, it transmits them to the receiver 200. The receiver 200 receives the AR image and recognition information transmitted from the first server 301 and displays the captured display image with the AR image superimposed.
In the above example, the second server 302 transmits the AR image and recognition information to the first server 301; however, it may instead transmit them directly to the receiver 200 without going through the first server 301.
FIG. 45 is a flowchart showing an example of the processing operation of the receiver 200 according to the present embodiment.
First, the receiver 200 starts imaging with the normal exposure time and the communication exposure time described above (step S101). The receiver 200 then acquires the light ID by decoding the decoding image obtained by imaging with the communication exposure time (step S102). Next, the receiver 200 transmits the light ID to the server (step S103).
The receiver 200 acquires from the server the AR image and recognition information corresponding to the transmitted light ID (step S104). Next, the receiver 200 recognizes, in the captured display image obtained by imaging with the normal exposure time, the area corresponding to the recognition information as the target area (step S105). The receiver 200 then superimposes the AR image on the target area and displays the captured display image with the AR image superimposed (step S106).
Next, the receiver 200 determines whether imaging and display of the captured display image should be terminated (step S107). If the receiver 200 determines that they should not be terminated (N in step S107), it further determines whether the acceleration of the receiver 200 is greater than or equal to a threshold (step S108). This acceleration is measured by the acceleration sensor provided in the receiver 200. If the receiver 200 determines that the acceleration is less than the threshold (N in step S108), it executes the processing from step S105. Thus, even when the captured display image displayed on the display 201 of the receiver 200 shifts, the AR image can be made to follow the target area of that captured display image. If the receiver 200 determines that the acceleration is greater than or equal to the threshold (Y in step S108), it executes the processing from step S102. This prevents an area showing a subject other than the transmitter 100 from being erroneously recognized as the target area when the transmitter 100 no longer appears in the captured display image.
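The flowchart of FIG. 45 can be summarized as the following loop (a sketch only; the camera, server, and display objects are placeholders standing in for the corresponding steps, and the threshold value is illustrative):

```python
# Sketch of the FIG. 45 loop; every method call is a placeholder for
# the corresponding step of the embodiment, not a real API.
ACCEL_THRESHOLD = 2.0  # m/s^2, illustrative value

def receiver_loop(camera, accel_sensor, server, display):
    camera.start_dual_exposure()                        # S101
    while True:
        light_id = camera.decode_light_id()             # S102
        ar_image, recognition = server.query(light_id)  # S103-S104
        while True:
            frame = camera.captured_display_image()
            target = frame.recognize(recognition)       # S105
            display.show(frame.overlay(ar_image, target))  # S106
            if display.should_stop():                   # S107: Y -> end
                return
            if accel_sensor.magnitude() >= ACCEL_THRESHOLD:  # S108: Y
                break  # back to S102: re-decode the light ID
            # S108: N -> back to S105: keep tracking the target area
```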
As described above, in the present embodiment the AR image is displayed superimposed on the captured display image, so an image useful to the user can be displayed. Furthermore, the AR image can be superimposed on an appropriate target area while keeping the processing load low.
That is, in general augmented reality (AR), a captured display image is compared with an enormous number of pre-stored recognition target images to determine whether it contains any of them. If a recognition target image is determined to be contained, the AR image corresponding to that recognition target image is superimposed on the captured display image, and the AR image is aligned with reference to the recognition target image. General augmented reality thus suffers from a large amount of computation and a high processing load: an enormous number of recognition target images must be compared against the captured display image, and alignment additionally requires detecting the position of the recognition target image within the captured display image.
In the display method according to the present embodiment, however, the light ID is acquired by decoding the decoding image obtained by imaging the subject. That is, the light ID transmitted from the transmitter that is the subject is received, and the AR image and recognition information corresponding to that light ID are acquired from the server. The server therefore does not need to compare an enormous number of recognition target images against the captured display image, and can simply select the AR image associated in advance with the light ID and transmit it to the display device. This reduces the amount of computation, greatly suppresses the processing load, and speeds up the AR image display processing.
In the present embodiment, the recognition information corresponding to the light ID is also acquired from the server. The recognition information is information for recognizing the target area, that is, the area of the captured display image on which the AR image is superimposed. The recognition information may, for example, simply indicate that a white rectangle is the target area; in that case the target area can be recognized easily and the processing load reduced further. In other words, the processing load can be adjusted through the content of the recognition information, and since the server can set that content freely for each light ID, the balance between processing load and recognition accuracy can be kept appropriate.
In the present embodiment, the receiver 200 acquires the AR image and recognition information corresponding to the light ID from the server after transmitting the light ID to the server, but at least one of the AR image and the recognition information may be acquired in advance. That is, the receiver 200 may acquire in bulk from the server, and store, a plurality of AR images and pieces of recognition information corresponding to a plurality of light IDs that may be received. When the receiver 200 subsequently receives a light ID, it selects the AR image and recognition information corresponding to that light ID from those it has stored. This speeds up the AR image display processing even further.
FIG. 46 is a diagram showing another example in which the receiver 200 according to the present embodiment displays an AR image.
As shown in FIG. 46, for example, the transmitter 100 is configured as a lighting device and transmits a light ID by changing its luminance while illuminating a facility guide board 101. Because the guide board 101 is illuminated by the light from the transmitter 100, its luminance changes in the same manner as that of the transmitter 100, and it thereby transmits the light ID.
By imaging the guide board 101 illuminated by the transmitter 100, the receiver 200 acquires a captured display image Pb and a decoding image in the same manner as described above. The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the guide board 101. The receiver 200 transmits the light ID to the server and acquires from the server the AR image P2 and recognition information corresponding to that light ID. The receiver 200 recognizes, in the captured display image Pb, the area corresponding to the recognition information as the target area. For example, the receiver 200 recognizes the area showing the frame 102 on the guide board 101, a frame intended to indicate the waiting time of the facility, as the target area. The receiver 200 then superimposes the AR image P2 on the target area and displays the captured display image Pb, with the AR image P2 superimposed, on the display 201. For example, the AR image P2 is an image containing the character string "30 minutes". Since the AR image P2 is superimposed on the target area of the captured display image Pb, the receiver 200 can display the captured display image Pb as if a guide board 101 on which the waiting time "30 minutes" is written actually existed. The user of the receiver 200 can thus be informed of the waiting time simply and clearly, without the guide board 101 being equipped with a special display device.
FIG. 47 is a diagram showing another example in which the receiver 200 according to the present embodiment displays an AR image.
As shown in FIG. 47, for example, the transmitter 100 consists of two lighting devices. The transmitter 100 transmits a light ID by changing its luminance while illuminating a facility guide board 104. Because the guide board 104 is illuminated by the light from the transmitter 100, its luminance changes in the same manner as that of the transmitter 100, and it thereby transmits the light ID. The guide board 104 shows the names of a plurality of facilities, for example "ABC Land" and "Adventure Land".
By imaging the guide board 104 illuminated by the transmitter 100, the receiver 200 acquires a captured display image Pc and a decoding image in the same manner as described above. The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the guide board 104. The receiver 200 transmits the light ID to the server and acquires from the server the AR image P3 and recognition information corresponding to that light ID. The receiver 200 recognizes, in the captured display image Pc, the area corresponding to the recognition information as the target area. For example, the receiver 200 recognizes the area showing the guide board 104 as the target area. The receiver 200 then superimposes the AR image P3 on the target area and displays the captured display image Pc, with the AR image P3 superimposed, on the display 201. For example, the AR image P3 is an image showing the names of the facilities; in this AR image P3, the longer a facility's waiting time, the smaller its name is displayed, and conversely, the shorter the waiting time, the larger the name. Since the AR image P3 is superimposed on the target area of the captured display image Pc, the receiver 200 can display the captured display image Pc as if a guide board 104 on which each facility name is written at a size corresponding to its waiting time actually existed. The user of the receiver 200 can thus be informed of each facility's waiting time simply and clearly, without the guide board 104 being equipped with a special display device.
FIG. 48 is a diagram showing another example in which the receiver 200 according to the present embodiment displays an AR image.
As shown in FIG. 48, for example, the transmitter 100 consists of two lighting devices. The transmitter 100 transmits a light ID by changing its luminance while illuminating a castle wall 105. Because the castle wall 105 is illuminated by the light from the transmitter 100, its luminance changes in the same manner as that of the transmitter 100, and it thereby transmits the light ID. On the castle wall 105, a small mark imitating a character's face is engraved as a hidden character 106.
By imaging the castle wall 105 illuminated by the transmitter 100, the receiver 200 acquires a captured display image Pd and a decoding image in the same manner as described above. The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the castle wall 105. The receiver 200 transmits the light ID to the server and acquires from the server the AR image P4 and recognition information corresponding to that light ID. The receiver 200 recognizes, in the captured display image Pd, the area corresponding to the recognition information as the target area. For example, the receiver 200 recognizes, as the target area, the area showing the part of the castle wall 105 that contains the hidden character 106. The receiver 200 then superimposes the AR image P4 on the target area and displays the captured display image Pd, with the AR image P4 superimposed, on the display 201. For example, the AR image P4 is an image imitating the character's face, sufficiently larger than the hidden character 106 appearing in the captured display image Pd. Since the AR image P4 is superimposed on the target area of the captured display image Pd, the receiver 200 can display the captured display image Pd as if a castle wall 105 engraved with a large mark imitating the character's face actually existed. The position of the hidden character 106 can thus be shown clearly to the user of the receiver 200.
FIG. 49 is a diagram showing another example in which the receiver 200 according to the present embodiment displays an AR image.
As shown in FIG. 49, for example, the transmitter 100 consists of two lighting devices. The transmitter 100 transmits a light ID by changing its luminance while illuminating a facility guide board 107. Because the guide board 107 is illuminated by the light from the transmitter 100, its luminance changes in the same manner as that of the transmitter 100, and it thereby transmits the light ID. In addition, an infrared-blocking paint 108 is applied at a plurality of locations around the corners of the guide board 107.
By imaging the guide board 107 illuminated by the transmitter 100, the receiver 200 acquires a captured display image Pe and a decoding image in the same manner as described above. The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the guide board 107. The receiver 200 transmits the light ID to the server and acquires from the server the AR image P5 and recognition information corresponding to that light ID. The receiver 200 recognizes, in the captured display image Pe, the area corresponding to the recognition information as the target area. For example, the receiver 200 recognizes the area showing the guide board 107 as the target area.
Specifically, the recognition information indicates that the rectangle circumscribing the plural locations of infrared-blocking paint 108 is the target area. The infrared-blocking paint 108 blocks the infrared rays contained in the light emitted from the transmitter 100, so the image sensor of the receiver 200 perceives the infrared-blocking paint 108 as an image darker than its surroundings. The receiver 200 recognizes, as the target area, the rectangle circumscribing the plural locations of infrared-blocking paint 108 that appear as dark images.
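As a rough sketch, such a circumscribing rectangle could be found by thresholding the dark marks and taking the bounding box of the detected blobs (the OpenCV usage and the threshold values are assumptions):

```python
# Sketch: rectangle circumscribing dark paint marks in a grayscale
# frame (threshold and blob-area filter are illustrative values).
import cv2
import numpy as np

def circumscribing_rect(gray_frame, dark_threshold=60, min_blob_area=20):
    # The marks appear darker than their surroundings, so keep pixels
    # below the threshold.
    _, mask = cv2.threshold(gray_frame, dark_threshold, 255,
                            cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) >= min_blob_area]
    if not blobs:
        return None
    # Bounding rectangle of all blobs together = circumscribing rectangle.
    x, y, w, h = cv2.boundingRect(np.vstack(blobs))
    return (x, y, w, h)
```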
The receiver 200 then superimposes the AR image P5 on the target area and displays the captured display image Pe, with the AR image P5 superimposed, on the display 201. For example, the AR image P5 shows the schedule of events held at the facility of the guide board 107. Since the AR image P5 is superimposed on the target area of the captured display image Pe, the receiver 200 can display the captured display image Pe as if a guide board 107 on which the event schedule is written actually existed. The user of the receiver 200 can thus be informed clearly of the facility's event schedule without the guide board 107 being equipped with a special display device.
Instead of the infrared-blocking paint 108, an infrared-reflecting paint may be applied to the guide board 107. The infrared-reflecting paint reflects the infrared rays contained in the light emitted from the transmitter 100, so the image sensor of the receiver 200 perceives the infrared-reflecting paint as an image brighter than its surroundings. In this case, the receiver 200 recognizes, as the target area, the rectangle circumscribing the plural locations of infrared-reflecting paint that appear as bright images.
FIG. 50 is a diagram showing another example in which the receiver 200 according to the present embodiment displays an AR image.
The transmitter 100 is configured as a station name sign and is placed near a station exit guide board 110. The station exit guide board 110 has a light source and emits light but, unlike the transmitter 100, does not transmit a light ID.
By imaging the transmitter 100 and the station exit guide board 110, the receiver 200 acquires a captured display image Ppre and a decoding image Pdec. Since the transmitter 100 changes in luminance and the station exit guide board 110 emits light, the decoding image Pdec contains a bright line pattern region Pdec1 corresponding to the transmitter 100 and a bright region Pdec2 corresponding to the station exit guide board 110. The bright line pattern region Pdec1 consists of the pattern of bright lines that appears when the exposure lines of the image sensor of the receiver 200 are exposed with the communication exposure time.
Here, as described above, the recognition information includes reference information for identifying a reference region Pbas in the captured display image Ppre and target information indicating the relative position of a target region Ptar with respect to the reference region Pbas. For example, the reference information indicates that the position of the reference region Pbas in the captured display image Ppre is the same as the position of the bright line pattern region Pdec1 in the decoding image Pdec. Furthermore, the target information indicates that the position of the target region is the position of the reference region.
The receiver 200 therefore identifies the reference region Pbas in the captured display image Ppre based on the reference information. That is, the receiver 200 identifies, as the reference region Pbas, the region of the captured display image Ppre located at the same position as the bright line pattern region Pdec1 in the decoding image Pdec. Furthermore, the receiver 200 recognizes, as the target region Ptar, the region of the captured display image Ppre located at the relative position indicated by the target information with respect to the position of the reference region Pbas. In the above example, the target information indicates that the position of the target region Ptar is the position of the reference region Pbas, so the receiver 200 recognizes the reference region Pbas of the captured display image Ppre as the target region Ptar.
The receiver 200 then superimposes the AR image P1 on the target region Ptar in the captured display image Ppre.
Thus, in the above example, the bright line pattern region Pdec1 is used to recognize the target region Ptar. If, on the other hand, the region showing the transmitter 100 were to be recognized as the target region Ptar from the captured display image Ppre alone, without using the bright line pattern region Pdec1, erroneous recognition could occur: the region showing the station exit guide board 110, rather than the region showing the transmitter 100, might be misrecognized as the target region Ptar, because the image of the transmitter 100 and the image of the station exit guide board 110 in the captured display image Ppre are similar. When the bright line pattern region Pdec1 is used, as in the above example, such erroneous recognition is suppressed and the target region Ptar can be recognized accurately.
FIG. 51 is a diagram showing another example in which the receiver 200 according to the present embodiment displays an AR image.
In the example shown in FIG. 50, the transmitter 100 transmits the light ID by changing the luminance of the entire station name sign, and the target information indicates that the position of the target region is the position of the reference region. In the present embodiment, however, the transmitter 100 may transmit the light ID by changing the luminance of light emitting elements arranged on part of the outer frame of the station name sign, without changing the luminance of the entire sign. The target information need only indicate the relative position of the target region Ptar with respect to the reference region Pbas; for example, it may indicate that the target region Ptar is located above the reference region Pbas (specifically, vertically upward).
In the example shown in FIG. 51, the transmitter 100 transmits the light ID by changing the luminance of a plurality of light emitting elements arranged horizontally along the lower part of the outer frame of the station name sign, and the target information indicates that the target region Ptar is located above the reference region Pbas.
In such a case, the receiver 200 identifies the reference region Pbas in the captured display image Ppre based on the reference information. That is, the receiver 200 identifies, as the reference region Pbas, the region of the captured display image Ppre located at the same position as the bright line pattern region Pdec1 in the decoding image Pdec; specifically, a rectangular reference region Pbas that is long horizontally and short vertically. The receiver 200 then recognizes, as the target region Ptar, the region of the captured display image Ppre located at the relative position indicated by the target information with respect to the position of the reference region Pbas, that is, the region above the reference region Pbas. At this time, the receiver 200 determines which direction is above the reference region Pbas based on the direction of gravity measured by its built-in acceleration sensor.
The target information may indicate not only the relative position of the target region Ptar but also its size, shape, and aspect ratio. In that case, the receiver 200 recognizes a target region Ptar having the size, shape, and aspect ratio indicated by the target information. The receiver 200 may also determine the size of the target region Ptar based on the size of the reference region Pbas; a minimal sketch of this geometry follows.
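A minimal sketch, assuming the frame has already been rotated so that gravity points down the image's y axis and that the target size is derived from the reference rectangle (the names and the scaling rule are assumptions):

```python
# Sketch: place the target region directly above the reference region,
# deriving its size from the reference rectangle (illustrative rule).
def target_above_reference(ref_rect, aspect_ratio=1.5):
    rx, ry, rw, rh = ref_rect  # reference: a wide, short rectangle
    tw = rw                    # inherit the reference width
    th = tw / aspect_ratio     # height from the indicated aspect ratio
    tx = rx
    ty = ry - th               # "above" = smaller y when gravity is +y
    return (tx, ty, tw, th)
```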
FIG. 52 is a flowchart showing another example of the processing operation of the receiver 200 according to the present embodiment.
The receiver 200 executes the processing of steps S101 to S104 as in the example shown in FIG. 45.
Next, the receiver 200 identifies the bright line pattern region Pdec1 in the decoding image Pdec (step S111). The receiver 200 then identifies, in the captured display image Ppre, the reference region Pbas corresponding to the bright line pattern region Pdec1 (step S112). Based on the recognition information (specifically, the target information) and the reference region Pbas, the receiver 200 recognizes the target region Ptar in the captured display image Ppre (step S113).
Next, as in the example shown in FIG. 45, the receiver 200 superimposes the AR image on the target region Ptar of the captured display image Ppre and displays the captured display image Ppre with the AR image superimposed (step S106). The receiver 200 then determines whether imaging and display of the captured display image Ppre should be terminated (step S107). If the receiver 200 determines that they should not be terminated (N in step S107), it further determines whether the acceleration of the receiver 200 is greater than or equal to a threshold (step S114). This acceleration is measured by the acceleration sensor provided in the receiver 200. If the receiver 200 determines that the acceleration is less than the threshold (N in step S114), it executes the processing from step S113. Thus, even when the captured display image Ppre displayed on the display 201 of the receiver 200 shifts, the AR image can be made to follow the target region Ptar of that captured display image Ppre. If the receiver 200 determines that the acceleration is greater than or equal to the threshold (Y in step S114), it executes the processing from step S111 or step S102. This prevents an area showing a subject other than the transmitter 100 (for example, the station exit guide board 110) from being erroneously recognized as the target region Ptar.
FIG. 53 is a diagram showing another example in which the receiver 200 according to the present embodiment displays an AR image.
When the AR image P1 in the displayed captured display image Ppre is tapped, the receiver 200 enlarges and displays the AR image P1. Alternatively, when tapped, the receiver 200 may display, in place of the AR image P1, a new AR image showing more detailed content than the AR image P1. When the AR image P1 shows one page's worth of information from a multi-page information magazine, the receiver 200 may display, in place of the AR image P1, a new AR image showing the information of the page following that of the AR image P1. Alternatively, when tapped, the receiver 200 may display a moving image related to the AR image P1 as a new AR image in place of the AR image P1. At this time, the receiver 200 may display, as the AR image, a moving image in which objects (autumn leaves in the example of FIG. 53) emerge from the target region Ptar.
FIG. 54 is a diagram showing the captured display image Ppre and the decoding image Pdec acquired by imaging of the receiver 200 according to the present embodiment.
While imaging, the receiver 200 acquires captured images such as the captured display image Ppre and the decoding image Pdec at a frame rate of 30 fps, for example, as shown in (a1) of FIG. 54. Specifically, the receiver 200 acquires the captured display image Ppre "A" at time t1, the decoding image Pdec at time t2, and the captured display image Ppre "B" at time t3, acquiring captured display images Ppre and decoding images Pdec alternately.
When displaying captured images, the receiver 200 displays only the captured display images Ppre and does not display the decoding images Pdec. That is, as shown in (a2) of FIG. 54, when the receiver 200 acquires a decoding image Pdec, it displays the most recently acquired captured display image Ppre in its place. Specifically, the receiver 200 displays the acquired captured display image Ppre "A" at time t1, and at time t2 displays the captured display image Ppre "A" acquired at time t1 again. The receiver 200 thus displays captured display images Ppre at a frame rate of 15 fps.
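The alternating schedule of (a1) and (a2) can be sketched as the following toy model (the camera and display interfaces are assumptions):

```python
# Toy model of FIG. 54 (a1)/(a2): frames alternate between the two
# exposures; decoding frames are never shown, the previous display
# frame is repeated instead.
def run_alternating_capture(camera, display, decoder, num_frames):
    last_display_frame = None
    for i in range(num_frames):                # 30 fps capture clock
        if i % 2 == 0:
            last_display_frame = camera.capture(exposure="normal")
        else:
            pdec = camera.capture(exposure="communication")
            decoder.feed(pdec)                 # decode the light ID
        if last_display_frame is not None:
            display.show(last_display_frame)   # new content at 15 fps
```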
 ここで、図54の(a1)に示す例では、受信機200は、撮像表示画像Ppreと復号用画像Pdecとを交互に取得するが、本実施の形態におけるこれらの画像の取得形態は、このような形態に限らない。つまり、受信機200は、N(Nは1以上の整数)枚の復号用画像Pdecを連続して取得し、その後、M(Mは1以上の整数)枚の撮像表示画像Ppreを連続して取得することを繰り返してもよい。 Here, in the example shown in (a1) of FIG. 54, the receiver 200 alternately acquires the captured display image Ppre and the decoding image Pdec, but the acquisition form of these images in the present embodiment is It is not restricted to such a form. That is, the receiver 200 continuously obtains N (N is an integer equal to or greater than 1) decoding images Pdec, and then continuously captures M (M is an integer equal to or greater than 1) captured display images Ppre. You may repeat acquiring.
The receiver 200 also needs to switch between acquiring the captured display image Ppre and acquiring the decoding image Pdec, and this switching may take time. Therefore, as shown in (b1) of FIG. 54, the receiver 200 may provide a switching period when switching between acquisition of the captured display image Ppre and acquisition of the decoding image Pdec. Specifically, upon acquiring the decoding image Pdec at time t3, the receiver 200 executes the processing for switching the captured image during the switching period from time t3 to t5 and acquires the captured display image Ppre "A" at time t5. Thereafter, the receiver 200 executes the processing for switching the captured image during the switching period from time t5 to t7 and acquires the decoding image Pdec at time t7.
When such switching periods are provided, the receiver 200 displays, during each switching period, the captured display image Ppre acquired immediately before, as shown in (b2) of FIG. 54. In this case, therefore, the display frame rate of the captured display image Ppre on the receiver 200 is low, for example 3 fps. When the frame rate is this low, the displayed captured display image Ppre may not move in response to the movement of the receiver 200 even if the user moves it; that is, the captured display image Ppre is not displayed as a live view. The receiver 200 may therefore move the captured display image Ppre according to the movement of the receiver 200.
FIG. 55 is a diagram showing an example of the captured display image Ppre displayed on the receiver 200 in the present embodiment.
The receiver 200 displays a captured display image Ppre obtained by imaging on the display 201, as shown in (a) of FIG. 55. Suppose the user then moves the receiver 200 to the left. If no new captured display image Ppre is acquired by imaging at this time, the receiver 200 moves the displayed captured display image Ppre to the right, as shown in (b) of FIG. 55. That is, the receiver 200 includes an acceleration sensor and, according to the acceleration it measures, moves the displayed captured display image Ppre so as to match the movement of the receiver 200. In this way, the receiver 200 can display the captured display image Ppre as a pseudo live view.
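A minimal sketch of this pseudo live view follows, assuming a simple one-axis motion model and a hypothetical pixels-per-meter scale; a real receiver would integrate the full three-axis accelerometer output.

```python
# Sketch: integrate lateral acceleration into displacement, then shift the
# displayed image opposite to the device motion between camera frames.

def integrate_motion(ax_mps2: float, dt_s: float, vx_mps: float):
    """One integration step: acceleration -> velocity -> displacement."""
    vx_new = vx_mps + ax_mps2 * dt_s
    dx_m = vx_new * dt_s
    return vx_new, dx_m

def display_offset_px(dx_m: float, px_per_m: float = 5000.0) -> int:
    """Device moves left -> image moves right, hence the sign flip."""
    return int(-dx_m * px_per_m)

# Example: device accelerates left (negative x) during one 1/3 s frame gap.
vx, dx = integrate_motion(ax_mps2=-0.6, dt_s=1 / 3, vx_mps=0.0)
print(display_offset_px(dx))  # positive: shift the displayed Ppre rightward
```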
FIG. 56 is a flowchart showing another example of the processing operation of the receiver 200 in the present embodiment.
First, in the same manner as described above, the receiver 200 superimposes the AR image on the target area Ptar of the captured display image Ppre and causes it to follow the target area Ptar (step S121). That is, an AR image that moves together with the target area Ptar in the captured display image Ppre is displayed. The receiver 200 then determines whether or not to maintain the display of the AR image (step S122). When determining not to maintain the display of the AR image (N in step S122), the receiver 200, upon acquiring a new light ID by imaging, superimposes on the captured display image Ppre and displays a new AR image corresponding to that light ID (step S123).
On the other hand, when determining to maintain the display of the AR image (Y in step S122), the receiver 200 repeats the processing from step S121. At this time, the receiver 200 does not display another AR image even if it has acquired one. Alternatively, even if the receiver 200 has acquired a new decoding image Pdec, it does not decode that decoding image Pdec to obtain a light ID. In this case, the power consumed by decoding can be reduced.
By maintaining the display of the AR image in this way, the displayed AR image can be prevented from being erased or from becoming hard to see because another AR image is displayed. That is, the displayed AR image can be made easier for the user to see.
For example, in step S122, the receiver 200 determines to maintain the display of the AR image until a predetermined period (a fixed period) elapses after the AR image is displayed. That is, when displaying the captured display image Ppre, the receiver 200 displays the first AR image, which is the AR image superimposed in step S121, for a predetermined display period while suppressing the display of a second AR image different from the first AR image. During this display period, the receiver 200 may prohibit decoding of any newly acquired decoding image Pdec.
This prevents the first AR image, while the user is viewing it, from being immediately replaced with a different second AR image. Furthermore, decoding a newly acquired decoding image Pdec is wasted processing while the display of the second AR image is suppressed, so prohibiting that decoding reduces power consumption.
Alternatively, in step S122, the receiver 200 may include a face camera and, upon detecting from the imaging result of the face camera that the user's face is approaching, determine to maintain the display of the AR image. That is, when displaying the captured display image Ppre, the receiver 200 further determines, from imaging by the face camera provided in the receiver 200, whether the user's face is approaching the receiver 200. When determining that the face is approaching, the receiver 200 displays the first AR image, which is the AR image superimposed in step S121, while suppressing the display of a second AR image different from the first AR image.
Alternatively, in step S122, the receiver 200 may include an acceleration sensor and, upon detecting from the measurement result of the acceleration sensor that the user's face is approaching, determine to maintain the display of the AR image. That is, when displaying the captured display image Ppre, the receiver 200 further determines, from the acceleration of the receiver 200 measured by the acceleration sensor, whether the user's face is approaching the receiver 200. For example, when the measured acceleration shows a positive value in the direction perpendicular to and outward from the display 201 of the receiver 200, the receiver 200 determines that the user's face is approaching. When determining that the face is approaching, the receiver 200 displays the first AR image, which is the AR image superimposed in step S121, while suppressing the display of a second AR image different from the first AR image.
This prevents the first AR image from being replaced with a different second AR image while the user is bringing his or her face close to the receiver 200 in order to view the first AR image.
Alternatively, in step S122, the receiver 200 may determine to maintain the display of the AR image when a lock button provided on the receiver 200 is pressed.
In step S122, the receiver 200 determines not to maintain the display of the AR image when the above-described fixed period (that is, the display period) has elapsed. The receiver 200 also determines not to maintain the display of the AR image when the acceleration sensor measures an acceleration equal to or greater than a threshold, even if the fixed period has not yet elapsed. That is, when displaying the captured display image Ppre, the receiver 200 further measures its own acceleration with the acceleration sensor during the display period and determines whether the measured acceleration is equal to or greater than the threshold. When determining that it is, the receiver 200 cancels the suppression of the display of the second AR image and, in step S123, displays the second AR image instead of the first AR image.
Thus, when an acceleration of the display device equal to or greater than the threshold is measured, the suppression of the display of the second AR image is cancelled. Therefore, for example, when the user moves the receiver 200 sharply to point the image sensor at another subject, the second AR image can be displayed immediately.
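The maintenance decision of steps S122 and S123 can be summarized in the following sketch, assuming simplified sensor inputs (elapsed time, face-camera result, lock button state, and measured acceleration); the threshold values are placeholders.

```python
# Sketch: decide whether to keep showing the first AR image or allow the
# second one, combining the conditions described above.

def should_maintain_ar(elapsed_s: float,
                       display_period_s: float,
                       face_approaching: bool,
                       lock_pressed: bool,
                       acceleration: float,
                       accel_threshold: float) -> bool:
    if acceleration >= accel_threshold:
        return False  # user re-aimed the camera: show the second AR image now
    if lock_pressed or face_approaching:
        return True   # explicit lock or approaching face keeps the first image
    return elapsed_s < display_period_s  # fixed display period not yet over

# While True is returned, the receiver may also skip decoding new Pdec frames
# to save power, since a second AR image would be suppressed anyway.
print(should_maintain_ar(2.0, 5.0, False, False, 0.3, 9.0))   # True
print(should_maintain_ar(2.0, 5.0, False, False, 12.0, 9.0))  # False
```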
FIG. 57 is a diagram showing another example in which the receiver 200 in the present embodiment displays an AR image.
For example, as shown in FIG. 57, the transmitter 100 is configured as a lighting device and transmits a light ID by changing its luminance while illuminating a small doll stage 111. Since the stage 111 is illuminated by light from the transmitter 100, its luminance changes in the same manner as the transmitter 100, and it transmits the light ID.
Two receivers 200 image the stage 111 illuminated by the transmitter 100, one from the left and one from the right.
The left one of the two receivers 200 images the stage 111 illuminated by the transmitter 100 from the left and thereby acquires, in the same manner as described above, a captured display image Pf and a decoding image. The left receiver 200 acquires a light ID by decoding the decoding image; that is, it receives the light ID from the stage 111. The left receiver 200 transmits the light ID to the server and acquires from the server a three-dimensional AR image and recognition information corresponding to the light ID. The three-dimensional AR image is, for example, an image for displaying a doll three-dimensionally. The left receiver 200 recognizes, as a target area, an area of the captured display image Pf according to the recognition information; for example, it recognizes the area above the center of the stage 111 as the target area.
Next, based on the orientation of the stage 111 shown in the captured display image Pf, the left receiver 200 generates from the three-dimensional AR image a two-dimensional AR image P6a corresponding to that orientation. The left receiver 200 then superimposes the two-dimensional AR image P6a on the target area and displays, on the display 201, the captured display image Pf on which the AR image P6a is superimposed. In this case, since the two-dimensional AR image P6a is superimposed on the target area of the captured display image Pf, the left receiver 200 can display the captured display image Pf so that the doll appears to actually exist on the stage 111.
Similarly, the right one of the two receivers 200 images the stage 111 illuminated by the transmitter 100 from the right and thereby acquires, in the same manner as described above, a captured display image Pg and a decoding image. The right receiver 200 acquires a light ID by decoding the decoding image; that is, it receives the light ID from the stage 111. The right receiver 200 transmits the light ID to the server and acquires from the server a three-dimensional AR image and recognition information corresponding to the light ID. The right receiver 200 recognizes, as a target area, an area of the captured display image Pg according to the recognition information; for example, it recognizes the area above the center of the stage 111 as the target area.
Next, based on the orientation of the stage 111 shown in the captured display image Pg, the right receiver 200 generates from the three-dimensional AR image a two-dimensional AR image P6b corresponding to that orientation. The right receiver 200 then superimposes the two-dimensional AR image P6b on the target area and displays, on the display 201, the captured display image Pg on which the AR image P6b is superimposed. In this case, since the two-dimensional AR image P6b is superimposed on the target area of the captured display image Pg, the right receiver 200 can display the captured display image Pg so that the doll appears to actually exist on the stage 111.
In this way, the two receivers 200 display the AR images P6a and P6b at the same position on the stage 111. These AR images P6a and P6b are generated according to the orientation of each receiver 200 so that the virtual doll appears to actually face a predetermined direction. Therefore, from whatever direction the stage 111 is imaged, the captured display image can be displayed so that the doll appears to actually exist on the stage 111.
In the above example, the receiver 200 generates, from the three-dimensional AR image, a two-dimensional AR image corresponding to the positional relationship between the receiver 200 and the stage 111; however, the two-dimensional AR image may instead be acquired from the server. That is, the receiver 200 transmits information indicating that positional relationship to the server together with the light ID and acquires the two-dimensional AR image from the server instead of the three-dimensional AR image. This reduces the load on the receiver 200.
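As one illustration of deriving the viewpoint-dependent two-dimensional AR image, the following sketch (not part of the original disclosure) rotates a three-dimensional model by the stage orientation observed in the image and projects it orthographically; a real renderer would use a full camera model, or the projection could be delegated to the server as described above.

```python
# Sketch: yaw-rotate 3D model points about the vertical axis, then drop depth
# to obtain a 2D AR image for the receiver's viewpoint.
import math

def project_for_viewpoint(points3d, yaw_rad: float):
    """Rotate model points by the observed stage yaw, project orthographically."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    pts2d = []
    for x, y, z in points3d:
        xr = c * x + s * z        # rotation about the vertical (y) axis
        pts2d.append((xr, y))     # keep (x, y): orthographic image-plane point
    return pts2d

doll = [(0.0, 1.0, 0.0), (0.2, 0.0, 0.1), (-0.2, 0.0, 0.1)]   # toy model
left_view = project_for_viewpoint(doll, math.radians(+30))    # left receiver
right_view = project_for_viewpoint(doll, math.radians(-30))   # right receiver
print(left_view, right_view)
```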
FIG. 58 is a diagram showing another example in which the receiver 200 in the present embodiment displays an AR image.
For example, as shown in FIG. 58, the transmitter 100 is configured as a lighting device and transmits a light ID by changing its luminance while illuminating a cylindrical structure 112. Since the structure 112 is illuminated by light from the transmitter 100, its luminance changes in the same manner as the transmitter 100, and it transmits the light ID.
The receiver 200 images the structure 112 illuminated by the transmitter 100 and thereby acquires, in the same manner as described above, a captured display image Ph and a decoding image. The receiver 200 acquires a light ID by decoding the decoding image; that is, it receives the light ID from the structure 112. The receiver 200 transmits the light ID to the server and acquires from the server an AR image P7 and recognition information corresponding to the light ID. The receiver 200 recognizes, as a target area, an area of the captured display image Ph according to the recognition information; for example, it recognizes the area where the central part of the structure 112 appears as the target area. The receiver 200 then superimposes the AR image P7 on the target area and displays, on the display 201, the captured display image Ph on which the AR image P7 is superimposed. For example, the AR image P7 is an image including the character string "ABCD", and the character string is distorted to fit the curved surface of the central part of the structure 112. In this case, since the AR image P7 including the distorted character string is superimposed on the target area of the captured display image Ph, the receiver 200 can display the captured display image Ph so that the character string appears to actually be drawn on the structure 112.
FIG. 59 is a diagram showing another example in which the receiver 200 in the present embodiment displays an AR image.
For example, as shown in FIG. 59, the transmitter 100 transmits a light ID by changing its luminance while illuminating a restaurant menu 113. Since the menu 113 is illuminated by light from the transmitter 100, its luminance changes in the same manner as the transmitter 100, and it transmits the light ID. The menu 113 shows the names of a plurality of dishes, such as "ABC soup", "XYZ salad", and "KLM lunch".
The receiver 200 images the menu 113 illuminated by the transmitter 100 and thereby acquires, in the same manner as described above, a captured display image Pi and a decoding image. The receiver 200 acquires a light ID by decoding the decoding image; that is, it receives the light ID from the menu 113. The receiver 200 transmits the light ID to the server and acquires from the server an AR image P8 and recognition information corresponding to the light ID. The receiver 200 recognizes, as a target area, an area of the captured display image Pi according to the recognition information; for example, it recognizes the area where the menu 113 appears as the target area. The receiver 200 then superimposes the AR image P8 on the target area and displays, on the display 201, the captured display image Pi on which the AR image P8 is superimposed. For example, the AR image P8 is an image that indicates, with marks, the ingredients used in each dish: a mark representing an egg for the dish "XYZ salad", which uses egg, and a mark representing a pig for the dish "KLM lunch", which uses pork. In this case, since the AR image P8 is superimposed on the target area of the captured display image Pi, the receiver 200 can display the captured display image Pi so that the menu 113 with the ingredient marks appears to actually exist. This allows the user of the receiver 200 to learn the ingredients of each dish simply and clearly, without the menu 113 being provided with a special display device.
The receiver 200 may also acquire a plurality of AR images, select from them an AR image suited to the user based on user information set by the user, and superimpose the selected AR image. For example, if the user information indicates that the user is allergic to egg, the receiver 200 selects the AR image in which an egg mark is attached to the dishes that use egg. If the user information indicates that eating pork is prohibited, the receiver 200 selects the AR image in which a pig mark is attached to the dishes that use pork. Alternatively, the receiver 200 may transmit the user information to the server together with the light ID and acquire from the server an AR image corresponding to the light ID and the user information. In this way, a menu that alerts each user can be displayed for that user.
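A minimal sketch of this user-information-based selection follows; the dish table and ingredient names are hypothetical placeholders based on the menu example above.

```python
# Sketch: from the user's restricted ingredients, decide which warning marks
# the AR overlay should show for each dish.

DISHES = {
    "XYZ salad": {"egg"},
    "KLM lunch": {"pork"},
    "ABC soup": set(),
}

def select_marks(restricted_ingredients: set) -> dict:
    """Return, per dish, the ingredient marks the AR image should show."""
    return {dish: used & restricted_ingredients for dish, used in DISHES.items()}

# User information: allergic to egg, pork prohibited.
print(select_marks({"egg", "pork"}))
# Alternatively, the receiver may send the light ID plus this user information
# to the server and receive an already-personalized AR image.
```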
FIG. 60 is a diagram showing another example in which the receiver 200 in the present embodiment displays an AR image.
For example, as shown in FIG. 60, the transmitter 100 is configured as a television and transmits a light ID by changing its luminance while displaying video on its display. A normal television 114 is placed near the transmitter 100. The television 114 displays video on its display but does not transmit a light ID.
The receiver 200 images the television 114 together with, for example, the transmitter 100 and thereby acquires, in the same manner as described above, a captured display image Pj and a decoding image. The receiver 200 acquires a light ID by decoding the decoding image; that is, it receives the light ID from the transmitter 100. The receiver 200 transmits the light ID to the server and acquires from the server an AR image P9 and recognition information corresponding to the light ID. The receiver 200 recognizes, as a target area, an area of the captured display image Pj according to the recognition information.
For example, by using the bright line pattern region of the decoding image, the receiver 200 recognizes, as a first target area, the lower part of the area of the captured display image Pj in which the transmitter 100 transmitting the light ID appears. At this time, the reference information included in the recognition information indicates that the position of the reference area in the captured display image Pj is the same as the position of the bright line pattern region in the decoding image. Furthermore, the target information included in the recognition information indicates that the target area is below the reference area. The receiver 200 recognizes the first target area described above by using such recognition information.
Furthermore, the receiver 200 recognizes, as a second target area, an area whose position is fixed in advance in the lower part of the captured display image Pj. The second target area is larger than the first target area. The target information included in the recognition information further indicates not only the position of the first target area but also the position and size of the second target area as described above. The receiver 200 recognizes the second target area by using such recognition information.
The receiver 200 then superimposes the AR image P9 on the first target area and the second target area, and displays on the display 201 the captured display image Pj on which the AR image P9 is superimposed. In this superposition, the receiver 200 adjusts the size of the AR image P9 to the size of the first target area and superimposes the size-adjusted AR image P9 on the first target area. Furthermore, the receiver 200 adjusts the size of the AR image P9 to the size of the second target area and superimposes the size-adjusted AR image P9 on the second target area.
For example, the AR image P9 shows subtitles for the video of the transmitter 100. The language of the subtitles of the AR image P9 is a language according to the user information set and registered in the receiver 200. That is, when transmitting the light ID to the server, the receiver 200 also transmits the user information (for example, information indicating the user's nationality or language) to the server, and acquires an AR image P9 showing subtitles in the language according to that user information. Alternatively, the receiver 200 may acquire a plurality of AR images P9 showing subtitles in different languages and select, from among them, the AR image P9 to be used for superposition according to the registered user information.
In other words, in the example shown in FIG. 60, the receiver 200 acquires the captured display image Pj and the decoding image by imaging, as subjects, a plurality of displays each showing an image. When recognizing the target area, the receiver 200 recognizes, as the target area, the area of the captured display image Pj in which the transmission display (that is, the transmitter 100) appears, the transmission display being the one of the plurality of displays that is transmitting the light ID. Next, the receiver 200 superimposes on the target area, as an AR image, a first subtitle corresponding to the image displayed on the transmission display. Furthermore, the receiver 200 superimposes a second subtitle, an enlarged version of the first subtitle, on an area of the captured display image Pj larger than the target area.
This allows the receiver 200 to display the captured display image Pj so that the subtitles appear to actually accompany the video of the transmitter 100. Furthermore, since the receiver 200 also superimposes large subtitles at the bottom of the captured display image Pj, the subtitles are easy to read even when the subtitles attached to the video of the transmitter 100 are small. If there were no subtitles attached to the video of the transmitter 100 and only the large subtitles were superimposed at the bottom of the captured display image Pj, it would be difficult to judge whether those subtitles belong to the video of the transmitter 100 or to the video of the television 114. In the present embodiment, however, subtitles are also attached to the video of the transmitter 100 that transmits the light ID, so the user can easily judge which video the superimposed subtitles belong to.
When displaying the captured display image Pj, the receiver 200 may further determine whether the information acquired from the server includes audio information. When determining that audio information is included, the receiver 200 outputs the audio indicated by the audio information in preference to the first and second subtitles. Since the audio is output preferentially, the burden on the user of reading the subtitles can be reduced.
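The subtitle and audio behaviour described above might be organized as in the following sketch; the response field names (audio, captions) are assumptions, not part of the disclosure.

```python
# Sketch: prefer audio when the server response contains it; otherwise pick
# the caption matching the user's language and render it twice (first subtitle
# over the transmission display, second enlarged at the bottom of the screen).

def present_content(server_response: dict, user_language: str):
    if server_response.get("audio"):
        return ("play_audio", server_response["audio"])  # audio takes priority
    captions = server_response.get("captions", {})
    text = captions.get(user_language) or captions.get("en", "")
    return ("show_subtitles", {"first": text, "second_enlarged": text})

resp = {"captions": {"en": "Breaking news...", "ja": "ニュース速報…"}}
print(present_content(resp, "en"))
print(present_content({"audio": b"...pcm..."}, "en"))
```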
In the above example, the language of the subtitles differs according to the user information (that is, the user's attributes), but the video (that is, the content) displayed on the transmitter 100 may itself be varied. For example, when the video displayed on the transmitter 100 is a news video and the user information indicates that the user is Japanese, the receiver 200 acquires, as the AR image, a news video broadcast in Japan and superimposes it on the area where the display of the transmitter 100 appears (that is, the target area). On the other hand, if the user information indicates that the user is American, the receiver 200 acquires, as the AR image, a news video broadcast in the United States and superimposes it on that area. In this way, video suited to the user can be displayed. The user information indicates, for example, nationality or language as the user's attributes, and the receiver 200 acquires the above AR image based on those attributes.
FIG. 61 is a diagram showing an example of the recognition information in the present embodiment.
Even when the recognition information consists of feature points and feature amounts as described above, misrecognition may occur. For example, the transmitters 100a and 100b are each configured as a station name sign, like the transmitter 100. Even though the transmitters 100a and 100b are signs for different stations, they may be misrecognized because they are similar, if they are located close to each other.
Therefore, the recognition information of each of the transmitters 100a and 100b may indicate the feature points and feature amounts of only a characteristic part of the image of the transmitter 100a or 100b, rather than those of the entire image.
For example, the part a1 of the transmitter 100a differs greatly from the part b1 of the transmitter 100b, and the part a2 of the transmitter 100a differs greatly from the part b2 of the transmitter 100b. Therefore, if the transmitters 100a and 100b are installed within a predetermined range of each other (that is, at a short distance), the server holds, as the recognition information corresponding to the transmitter 100a, the feature points and feature amounts of the images of the parts a1 and a2. Similarly, the server holds, as the recognition information corresponding to the transmitter 100b, the feature points and feature amounts of the images of the parts b1 and b2.
Thus, even when the mutually similar transmitters 100a and 100b are close to each other (within the predetermined range described above), the receiver 200 can appropriately recognize the target area by using this recognition information.
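As an illustration, the following sketch matches an observed feature vector against stored features of only the distinctive parts a1/a2 and b1/b2; the feature values are hypothetical placeholders, and a real system would use proper local descriptors.

```python
# Sketch: nearest-neighbour match against per-part feature vectors held by
# the server for two nearby, similar station signs.

FEATURES = {
    "sign_100a": {"part_a1": [0.91, 0.12], "part_a2": [0.08, 0.77]},
    "sign_100b": {"part_b1": [0.15, 0.88], "part_b2": [0.81, 0.09]},
}

def best_match(observed: list) -> str:
    """Return the sign whose closest distinctive part matches the observation."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(FEATURES,
               key=lambda s: min(dist(observed, f) for f in FEATURES[s].values()))

print(best_match([0.9, 0.1]))  # closest to part_a1 -> "sign_100a"
```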
FIG. 62 is a flowchart showing another example of the processing operation of the receiver 200 in the present embodiment.
The receiver 200 first determines, based on the user information set and registered in the receiver 200, whether the user has a visual impairment (step S131). When determining that the user has a visual impairment (Y in step S131), the receiver 200 outputs by voice the characters of the superimposed AR image (step S132). On the other hand, when determining that the user has no visual impairment (N in step S131), the receiver 200 further determines, based on the user information, whether the user has a hearing impairment (step S133). When determining that the user has a hearing impairment (Y in step S133), the receiver 200 stops audio output (step S134). At this time, the receiver 200 stops the output of audio by all functions.
When determining in step S131 that the user has a visual impairment (Y in step S131), the receiver 200 may also perform the processing of step S133. That is, the receiver 200 may output the characters of the superimposed AR image by voice only when determining that the user has a visual impairment and no hearing impairment.
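A minimal sketch of this accessibility branch follows, assuming a simple user-profile dictionary and hypothetical speak/mute hooks.

```python
# Sketch: the FIG. 62 branch. Hearing impairment mutes all audio; otherwise a
# visual impairment causes the AR caption to be read aloud.

def apply_accessibility(user: dict, ar_text: str):
    actions = []
    if user.get("hearing_impaired"):
        actions.append(("mute_all_audio",))   # stop audio output entirely
    elif user.get("visually_impaired"):
        actions.append(("speak", ar_text))    # speak only when audio is usable
    return actions

print(apply_accessibility({"visually_impaired": True}, "Track 5: Local line"))
print(apply_accessibility({"hearing_impaired": True}, "Track 5: Local line"))
```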
FIG. 63 is a diagram showing an example in which the receiver 200 in the present embodiment identifies bright line pattern regions.
The receiver 200 first acquires a decoding image by imaging two transmitters that each transmit a light ID, and acquires the light IDs by decoding the decoding image, as shown in (e) of FIG. 63. At this time, since the decoding image includes two bright line pattern regions X and Y, the receiver 200 acquires the light ID of the transmitter corresponding to the bright line pattern region X and the light ID of the transmitter corresponding to the bright line pattern region Y. The light ID of the transmitter corresponding to the bright line pattern region X consists of numerical values (that is, data) corresponding to addresses 0 to 9, for example "5, 2, 8, 4, 3, 6, 1, 9, 4, 3". Similarly, the light ID of the transmitter corresponding to the bright line pattern region Y consists of numerical values corresponding to addresses 0 to 9, for example "5, 2, 7, 7, 1, 5, 3, 2, 7, 4".
Even after the receiver 200 has acquired these light IDs once, that is, even when these light IDs are known, a situation may arise during imaging in which the receiver 200 does not know from which bright line pattern region each light ID was obtained. In such a case, the receiver 200 can determine easily and quickly from which bright line pattern region each known light ID was obtained by performing the processing shown in (a) to (d) of FIG. 63.
Specifically, as shown in (a) of FIG. 63, the receiver 200 first acquires a decoding image Pdec11 and, by decoding it, acquires the numerical value at address 0 of the light ID of each of the bright line pattern regions X and Y. For example, the numerical value at address 0 of the light ID of the bright line pattern region X is "5", and the numerical value at address 0 of the light ID of the bright line pattern region Y is also "5". Since the numerical value at address 0 of each light ID is "5", at this point it cannot be determined from which bright line pattern region each known light ID was obtained.
Therefore, as shown in (b) of FIG. 63, the receiver 200 acquires a decoding image Pdec12 and, by decoding it, acquires the numerical value at address 1 of the light ID of each of the bright line pattern regions X and Y. For example, the numerical value at address 1 of the light ID of the bright line pattern region X is "2", and the numerical value at address 1 of the light ID of the bright line pattern region Y is also "2". Since the numerical value at address 1 of each light ID is "2", it still cannot be determined from which bright line pattern region each known light ID was obtained.
Therefore, as shown in (c) of FIG. 63, the receiver 200 further acquires a decoding image Pdec13 and, by decoding it, acquires the numerical value at address 2 of the light ID of each of the bright line pattern regions X and Y. For example, the numerical value at address 2 of the light ID of the bright line pattern region X is "8", and the numerical value at address 2 of the light ID of the bright line pattern region Y is "7". At this point, it can be determined that the known light ID "5, 2, 8, 4, 3, 6, 1, 9, 4, 3" was obtained from the bright line pattern region X and the known light ID "5, 2, 7, 7, 1, 5, 3, 2, 7, 4" from the bright line pattern region Y.
To increase reliability, however, the receiver 200 may further acquire the numerical value at address 3 of each light ID, as shown in (d) of FIG. 63. That is, the receiver 200 acquires a decoding image Pdec14 and, by decoding it, acquires the numerical value at address 3 of the light ID of each of the bright line pattern regions X and Y. For example, the numerical value at address 3 of the light ID of the bright line pattern region X is "4", and the numerical value at address 3 of the light ID of the bright line pattern region Y is "7". It can then again be determined that the known light ID "5, 2, 8, 4, 3, 6, 1, 9, 4, 3" was obtained from the bright line pattern region X and the known light ID "5, 2, 7, 7, 1, 5, 3, 2, 7, 4" from the bright line pattern region Y. In other words, since the light IDs of the bright line pattern regions X and Y can be distinguished not only by address 2 but also by address 3, reliability can be increased.
Thus, in the present embodiment, the numerical value of at least one address is re-acquired without re-acquiring the numerical values (that is, the data) of all the addresses of the light ID. This makes it possible to determine easily and quickly from which bright line pattern region each known light ID was obtained.
In the examples shown in (c) and (d) of FIG. 63 described above, the numerical value acquired for a given address matches the numerical value of the known light ID, but they need not match. For example, in the example shown in (d) of FIG. 63, suppose the receiver 200 acquires "6" as the numerical value at address 3 of the light ID of the bright line pattern region Y. This value "6" differs from the value "7" at address 3 of the known light ID "5, 2, 7, 7, 1, 5, 3, 2, 7, 4". However, since "6" is close to "7", the receiver 200 may still determine that the known light ID "5, 2, 7, 7, 1, 5, 3, 2, 7, 4" was obtained from the bright line pattern region Y. The receiver may determine whether "6" is close to "7" by whether "6" falls within the range "7" ± n (where n is, for example, a number of 1 or more).
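The disambiguation procedure of FIG. 63, including the ±n tolerance, can be sketched as follows; scoring candidates by total deviation is one possible way to realize the comparison, not necessarily the disclosed one.

```python
# Sketch: match partial per-address readings from one bright line pattern
# region against the known light IDs, re-reading one address at a time.

KNOWN_IDS = {
    "ID_X": [5, 2, 8, 4, 3, 6, 1, 9, 4, 3],
    "ID_Y": [5, 2, 7, 7, 1, 5, 3, 2, 7, 4],
}

def identify(readings: dict, n: int = 1):
    """readings: {address: observed value}. Returns the unique best match,
    or None if the region is still ambiguous (read one more address)."""
    scores = {}
    for name, vals in KNOWN_IDS.items():
        devs = [abs(vals[addr] - v) for addr, v in readings.items()]
        if all(d <= n for d in devs):           # each digit within +/- n
            scores[name] = sum(devs)
    if not scores:
        return None
    best = min(scores, key=scores.get)
    ties = [k for k, s in scores.items() if s == scores[best]]
    return best if len(ties) == 1 else None

print(identify({0: 5, 1: 2}))                   # None: addresses 0-1 are shared
print(identify({0: 5, 1: 2, 2: 7}))             # ID_Y: address 2 separates them
print(identify({0: 5, 1: 2, 2: 7, 3: 6}))       # ID_Y: "6" accepted as close to "7"
```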
FIG. 64 is a diagram showing another example of the receiver 200 in the present embodiment.
Although the receiver 200 is configured as a smartphone in the above examples, it may instead be configured as a head-mounted display (also called glasses) including an image sensor.
Since such a receiver 200 consumes more power if the processing circuit for displaying AR images as described above (hereinafter, the AR processing circuit) is kept running at all times, it may activate the AR processing circuit only upon detecting a predetermined signal.
For example, the receiver 200 includes a touch sensor 202. The touch sensor 202 outputs a touch signal when touched by the user's finger or the like. Upon detecting the touch signal, the receiver 200 activates the AR processing circuit.
Alternatively, the receiver 200 may activate the AR processing circuit upon detecting a radio signal such as Bluetooth (registered trademark) or Wi-Fi (registered trademark).
Alternatively, the receiver 200 may include an acceleration sensor and activate the AR processing circuit when the acceleration sensor measures an acceleration equal to or greater than a threshold in the direction opposite to gravity. That is, the receiver 200 activates the AR processing circuit upon detecting a signal indicating such acceleration. For example, when the user pushes the nose pad of the glasses-type receiver 200 upward from below with a fingertip, the receiver 200 detects the signal indicating that acceleration and activates the AR processing circuit.
Alternatively, the receiver 200 may activate the AR processing circuit upon detecting, using GPS, a 9-axis sensor, or the like, that the image sensor has been pointed at the transmitter 100. That is, the receiver 200 activates the AR processing circuit upon detecting a signal indicating that the receiver 200 has been pointed in a predetermined direction. In this case, if the transmitter 100 is the Japanese station name sign described above, the receiver 200 displays an AR image indicating the English station name superimposed on the sign.
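These wake-up triggers can be summarized in the following short sketch; the threshold value is an assumption.

```python
# Sketch: start the power-hungry AR processing circuit only when one of the
# trigger signals listed above is detected.

def should_start_ar_circuit(touch: bool,
                            radio_detected: bool,
                            upward_accel: float,
                            aimed_at_transmitter: bool,
                            accel_threshold: float = 3.0) -> bool:
    return (touch                                # touch sensor signal
            or radio_detected                    # Bluetooth / Wi-Fi signal
            or upward_accel >= accel_threshold   # nose pad pushed up
            or aimed_at_transmitter)             # GPS + 9-axis orientation

print(should_start_ar_circuit(False, False, 4.2, False))  # True
```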
FIG. 65 is a flowchart showing another example of the processing operation of the receiver 200 in the present embodiment.
Upon acquiring a light ID from the transmitter 100 (step S141), the receiver 200 switches its noise cancelling mode by receiving mode designation information corresponding to that light ID (step S142). The receiver 200 then determines whether the mode switching process should end (step S143) and, when determining that it should not end (N in step S143), repeats the processing from step S141. The noise cancelling modes switched between are, for example, a mode that cancels noise such as engine noise in an aircraft (ON) and a mode that does not (OFF). Specifically, a user carrying the receiver 200 listens to audio such as music output from the receiver 200 through earphones connected to it. When such a user boards an aircraft, the receiver 200 acquires a light ID and, as a result, switches the noise cancelling mode from OFF to ON. The user can thus listen to audio free of noise such as engine noise even in the cabin. When the user leaves the aircraft, the receiver 200 again acquires a light ID and switches the noise cancelling mode from ON to OFF. The noise to be cancelled is not limited to engine noise and may be any sound, such as human voices.
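A minimal sketch of this mode switching follows; the mapping from light IDs to mode designation information is hypothetical.

```python
# Sketch: the FIG. 65 loop. A received light ID maps to mode designation
# information that toggles noise cancelling on entering or leaving the cabin.

MODE_BY_LIGHT_ID = {
    "cabin_entry": True,    # enable engine-noise cancelling
    "cabin_exit": False,    # disable it again
}

class NoiseCanceller:
    def __init__(self):
        self.enabled = False

    def on_light_id(self, light_id: str) -> bool:
        if light_id in MODE_BY_LIGHT_ID:
            self.enabled = MODE_BY_LIGHT_ID[light_id]
        return self.enabled

nc = NoiseCanceller()
print(nc.on_light_id("cabin_entry"))  # True
print(nc.on_light_id("cabin_exit"))   # False
```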
FIG. 66 is a diagram showing an example of a transmission system including a plurality of transmitters in the present embodiment.
This transmission system includes a plurality of transmitters 120 arranged in a predetermined order. Like the transmitter 100, each transmitter 120 is a transmitter according to any of Embodiments 1 to 3 and includes one or more light emitting elements (for example, LEDs). The leading transmitter 120 transmits a light ID by changing the luminance of its light emitting elements according to a predetermined frequency (carrier frequency). Furthermore, the leading transmitter 120 outputs a signal indicating that luminance change to the following transmitter 120 as a synchronization signal. Upon receiving the synchronization signal, the following transmitter 120 transmits the light ID by changing the luminance of its light emitting elements according to that synchronization signal, and in turn outputs a signal indicating its own luminance change to the next transmitter 120 as a synchronization signal. In this way, all the transmitters 120 included in the transmission system transmit the light ID in synchronization.
Here, the synchronization signal is passed from the leading transmitter 120 to the following transmitter 120, from that transmitter to the next, and so on until it reaches the last transmitter 120. Each handover of the synchronization signal takes, for example, about 1 μs. Therefore, if the transmission system includes N (N is an integer of 2 or more) transmitters 120, it takes 1 × N μs for the synchronization signal to travel from the leading transmitter 120 to the last one. As a result, the timing of the light ID transmission shifts by up to N μs. For example, even if the N transmitters 120 transmit the light ID according to a frequency of 9.6 kHz and the receiver 200 attempts to receive the light ID at 9.6 kHz, the receiver 200 receives a light ID shifted by up to N μs and therefore may not be able to receive it correctly.
Therefore, in the present embodiment, the leading transmitter 120 transmits the light ID slightly faster according to the number of transmitters 120 included in the transmission system. For example, the leading transmitter 120 transmits the light ID according to a frequency of 9.605 kHz, while the receiver 200 receives the light ID at a frequency of 9.6 kHz. In this case, even if the receiver 200 receives a light ID shifted by N μs, the frequency of the leading transmitter 120 is higher than that of the receiver 200 by 0.005 kHz, so the occurrence of reception errors due to the shift of the light ID can be suppressed.
The leading transmitter 120 may also control the amount of frequency adjustment by having the synchronization signal fed back from the last transmitter 120. For example, the leading transmitter 120 measures the time from when it outputs its own synchronization signal until it receives the synchronization signal fed back from the last transmitter 120. The longer that time, the higher above the reference frequency (for example, 9.6 kHz) the leading transmitter 120 sets the frequency at which it transmits the light ID.
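The frequency compensation can be sketched as follows; the gain constant is an assumption chosen so that a chain delay of about 1 ms reproduces the 9.605 kHz example above.

```python
# Sketch: raise the head transmitter's modulation frequency in proportion to
# the propagation delay of the daisy-chained synchronization signal.

def head_frequency_hz(base_hz: float = 9600.0,
                      n_transmitters: int = 0,
                      hop_delay_s: float = 1e-6,
                      gain_hz_per_s: float = 5.0 / 1e-3) -> float:
    """Chain delay is roughly N hops x ~1 us; the delay could equally be
    measured by timing the fed-back synchronization signal."""
    chain_delay_s = n_transmitters * hop_delay_s
    return base_hz + gain_hz_per_s * chain_delay_s

# ~1000 transmitters -> ~1 ms total delay -> 9605.0 Hz, matching the example.
print(head_frequency_hz(n_transmitters=1000))
```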
FIG. 67 is a diagram illustrating an example of a transmission system including a plurality of transmitters and a receiver in the present embodiment.
This transmission system includes, for example, two transmitters 120 and a receiver 200. One of the two transmitters 120 transmits a light ID according to a frequency of 9.599 kHz, and the other transmitter 120 transmits a light ID according to a frequency of 9.601 kHz. In such a case, each of the two transmitters 120 notifies the receiver 200 of the frequency of its own light ID by a radio signal.
On receiving these frequency notifications, the receiver 200 attempts decoding according to each of the notified frequencies. That is, the receiver 200 attempts to decode the decoding image according to the frequency of 9.599 kHz and, if the light ID cannot be received in this way, attempts to decode the decoding image according to the frequency of 9.601 kHz. The receiver 200 thus attempts to decode the decoding image according to every notified frequency; in other words, the receiver 200 tries the notified frequencies by brute force. Alternatively, the receiver 200 may attempt decoding according to the average of all the notified frequencies; that is, the receiver 200 attempts decoding according to 9.6 kHz, the average of 9.599 kHz and 9.601 kHz.
This makes it possible to reduce the rate of reception errors caused by the difference in frequency between the receiver 200 and the transmitters 120.
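A minimal sketch of this receiver-side strategy, in Python, is given below; decode_at is a hypothetical stand-in for the receiver's decoder, not an API defined by this disclosure.

    from typing import Iterable, Optional

    def decode_at(decoding_image, frequency_hz: float) -> Optional[str]:
        """Hypothetical decoder: returns the light ID on success, else None."""
        ...

    def receive_light_id(decoding_image, notified_hz: Iterable[float]) -> Optional[str]:
        freqs = list(notified_hz)
        # Brute force: try every notified frequency in turn.
        for f in freqs:
            light_id = decode_at(decoding_image, f)
            if light_id is not None:
                return light_id
        # Alternative strategy: try the average of the notified frequencies
        # (9.6 kHz for 9.599 kHz and 9.601 kHz).
        if freqs:
            return decode_at(decoding_image, sum(freqs) / len(freqs))
        return None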
FIG. 68A is a flowchart illustrating an example of the processing operation of the receiver 200 in the present embodiment.
First, the receiver 200 starts imaging (step S151) and initializes a parameter N to 1 (step S152). Next, the receiver 200 decodes the decoding image obtained by the imaging according to the frequency corresponding to the parameter N, and calculates an evaluation value for the decoding result (step S153). For example, frequencies such as 9.6 kHz, 9.601 kHz, 9.599 kHz, and 9.602 kHz are associated in advance with the parameter values N = 1, 2, 3, 4, 5, respectively. The evaluation value is higher the more similar the decoding result is to a correct light ID.
Next, the receiver 200 determines whether the value of the parameter N is equal to Nmax, which is a predetermined integer of 1 or more (step S154). If the receiver 200 determines that N is not equal to Nmax (N in step S154), it increments the parameter N (step S155) and repeats the processing from step S153. On the other hand, if the receiver 200 determines that N is equal to Nmax (Y in step S154), it registers the frequency for which the largest evaluation value was calculated with the server as the optimum frequency, in association with location information indicating the location of the receiver 200. The optimum frequency and location information registered in this way are used, after the registration, for reception of the light ID by a receiver 200 that has moved to the location indicated by the location information. The location information may be, for example, information indicating a position measured by GPS, or identification information (for example, an SSID: Service Set Identifier) of an access point in a wireless LAN (Local Area Network).
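The scan of steps S151 to S155 might be sketched as follows in Python; evaluate_decoding, the candidate frequency list, and the server interface are assumed names for illustration only.

    CANDIDATE_HZ = [9600.0, 9601.0, 9599.0, 9602.0]  # frequencies for N = 1, 2, 3, ...

    def evaluate_decoding(decoding_image, frequency_hz: float) -> float:
        """Hypothetical: decode at frequency_hz and score how similar the
        result is to a correct light ID (higher is better)."""
        ...

    def find_and_register_optimum(decoding_image, location_info, server) -> float:
        best_hz, best_score = None, float("-inf")
        for f in CANDIDATE_HZ:                            # N = 1 .. Nmax
            score = evaluate_decoding(decoding_image, f)  # step S153
            if score > best_score:
                best_hz, best_score = f, score
        # After Y in step S154: register the best-scoring frequency, keyed by
        # location (a GPS position or a wireless-LAN SSID, for example).
        server.register_optimum_frequency(location_info, best_hz)
        return best_hz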
The receiver 200 that has performed the registration with the server then displays, for example, the AR image as described above, according to the light ID obtained by decoding at the optimum frequency.
FIG. 68B is a flowchart illustrating an example of the processing operation of the receiver 200 in the present embodiment.
After the registration with the server shown in FIG. 68A has been performed, the receiver 200 transmits location information indicating its current location to the server (step S161). Next, the receiver 200 acquires from the server the optimum frequency registered in association with that location information (step S162).
Next, the receiver 200 starts imaging (step S163) and decodes the decoding image obtained by the imaging according to the optimum frequency acquired in step S162 (step S164). The receiver 200 displays, for example, the AR image as described above, according to the light ID obtained by this decoding.
In this way, after the registration with the server has been performed, the receiver 200 can acquire the optimum frequency and receive the light ID without executing the processing shown in FIG. 68A. Note that, when the receiver 200 cannot acquire the optimum frequency in step S162, it may acquire the optimum frequency by executing the processing shown in FIG. 68A.
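Reusing the hypothetical helpers sketched above, the lookup-then-fallback flow of FIG. 68B could look like this:

    def decode_with_registered_frequency(decoding_image, location_info, server):
        # Steps S161-S162: look up the optimum frequency for this location.
        optimum_hz = server.lookup_optimum_frequency(location_info)
        if optimum_hz is None:
            # No registration yet: fall back to the scan of FIG. 68A.
            optimum_hz = find_and_register_optimum(decoding_image, location_info, server)
        # Steps S163-S164: decode at the optimum frequency.
        return decode_at(decoding_image, optimum_hz)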
[Summary of Embodiment 4]
FIG. 69A is a flowchart showing the display method in the present embodiment.
The display method in the present embodiment is a display method by which a display device, which is the receiver 200 described above, displays an image, and it includes steps SL11 to SL16.
In step SL11, an image sensor captures a subject to acquire a captured display image and a decoding image. In step SL12, a light ID is acquired by decoding the decoding image. In step SL13, the light ID is transmitted to a server. In step SL14, an AR image corresponding to the light ID and recognition information are acquired from the server. In step SL15, an area of the captured display image corresponding to the recognition information is recognized as a target area. In step SL16, the captured display image with the AR image superimposed on the target area is displayed.
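Steps SL11 to SL16 can be read as one pipeline; the following Python sketch shows that reading, with the camera, server, and image helpers as hypothetical stand-ins rather than components defined by this disclosure.

    def decode(decoding_image): ...                         # hypothetical light-ID decoder
    def recognize_target_area(image, recognition_info): ... # hypothetical recognizer
    def superimpose(image, ar_image, target_area): ...      # hypothetical compositor

    def display_ar_frame(image_sensor, server, display):
        captured_image, decoding_image = image_sensor.capture()   # SL11
        light_id = decode(decoding_image)                         # SL12
        ar_image, recognition_info = server.query(light_id)       # SL13-SL14
        target_area = recognize_target_area(captured_image,
                                            recognition_info)     # SL15
        display.show(superimpose(captured_image, ar_image,
                                 target_area))                    # SL16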
In this way, the AR image is displayed superimposed on the captured display image, so an image useful to the user can be displayed. Furthermore, the AR image can be superimposed on an appropriate target area while the processing load is suppressed.
That is, in general augmented reality (AR), a huge number of recognition target images stored in advance are compared with the captured display image to determine whether any of the recognition target images is contained in the captured display image. If it is determined that a recognition target image is contained, the AR image corresponding to that recognition target image is superimposed on the captured display image, with the AR image aligned with reference to the recognition target image. General augmented reality therefore involves a large amount of computation and a high processing load: a huge number of recognition target images must be compared with the captured display image, and the position of the recognition target image within the captured display image must additionally be detected for the alignment.
In the display method according to the present embodiment, however, as shown in FIGS. 41 to 68B, the light ID is acquired by decoding the decoding image obtained by imaging the subject. That is, the light ID transmitted from the transmitter, which is the subject, is received. Furthermore, the AR image and the recognition information corresponding to this light ID are acquired from the server. The server therefore does not need to compare a huge number of recognition target images with the captured display image, and can select the AR image associated in advance with the light ID and transmit it to the display device. This reduces the amount of computation and greatly suppresses the processing load.
In the display method according to the present embodiment, the recognition information corresponding to this light ID is also acquired from the server. The recognition information is information for recognizing the target area, that is, the area of the captured display image on which the AR image is superimposed. The recognition information may be, for example, information indicating that a white rectangle is the target area. In this case, the target area can be recognized easily, and the processing load can be reduced further; that is, the processing load can be reduced further according to the content of the recognition information. Moreover, since the server can set the content of the recognition information arbitrarily according to the light ID, the balance between the processing load and the recognition accuracy can be kept appropriate.
Here, the recognition information may be reference information for specifying a reference area in the captured display image. In the recognition of the target area, the reference area may then be specified from the captured display image based on the reference information, and the target area may be recognized in the captured display image from the position of the reference area.
Alternatively, the recognition information may include reference information for specifying a reference area in the captured display image and target information indicating the relative position of the target area with respect to the reference area. In this case, in the recognition of the target area, the reference area is specified from the captured display image based on the reference information, and the area of the captured display image located at the relative position indicated by the target information, taking the position of the reference area as a reference, is recognized as the target area.
This widens, as shown in FIGS. 50 and 51, the degree of freedom in the position of the target area recognized in the captured display image.
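One way to compute a target area from a reference area and a relative position, sketched in Python under assumed conventions (the offset and size are expressed as fractions of the reference area), is the following:

    from dataclasses import dataclass

    @dataclass
    class Rect:
        x: int   # left edge in captured-display-image coordinates
        y: int   # top edge
        w: int   # width
        h: int   # height

    def target_from_reference(reference: Rect,
                              rel_dx: float, rel_dy: float,
                              rel_w: float, rel_h: float) -> Rect:
        """Place the target area at an offset from the reference area, with
        the offset and size given as fractions of the reference size."""
        return Rect(x=int(reference.x + rel_dx * reference.w),
                    y=int(reference.y + rel_dy * reference.h),
                    w=int(rel_w * reference.w),
                    h=int(rel_h * reference.h))

With rel_dx = rel_dy = 0 and rel_w = rel_h = 1, the target area coincides with the reference area, which corresponds to the first form of the recognition information described above.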
The reference information may also indicate that the position of the reference area in the captured display image is the same as the position, in the decoding image, of a bright line pattern area consisting of the pattern of a plurality of bright lines that appear through the exposure of the plurality of exposure lines of the image sensor.
This makes it possible, as shown in FIGS. 50 and 51, to recognize the target area with reference to the area of the captured display image that corresponds to the bright line pattern area.
The reference information may also indicate that the reference area in the captured display image is the area of the captured display image in which a display is shown.
Thus, as shown in FIG. 41, if a station name sign serves as the display, for example, the target area can be recognized with reference to the area in which that display is shown.
In the display of the captured display image, a first AR image, which is the AR image described above, may be displayed only for a predetermined display period, while display of a second AR image different from the first AR image is suppressed.
This prevents, as shown in FIG. 56, the first AR image from being immediately replaced with a different second AR image while the user is viewing the first AR image once it has been displayed.
In the display of the captured display image, decoding of a newly acquired decoding image may be prohibited during the display period.
As shown in FIG. 56, decoding a newly acquired decoding image is wasted processing while the display of the second AR image is suppressed; prohibiting that decoding therefore reduces power consumption.
In the display of the captured display image, the acceleration of the display device may further be measured by an acceleration sensor during the display period, and it may be determined whether the measured acceleration is equal to or greater than a threshold. When the acceleration is determined to be equal to or greater than the threshold, the second AR image may be displayed instead of the first AR image by releasing the suppression of the display of the second AR image.
Thus, as shown in FIG. 56, when an acceleration of the display device equal to or greater than the threshold is measured, the suppression of the display of the second AR image is released. Accordingly, when the user moves the display device sharply to point the image sensor at another subject, for example, the second AR image can be displayed immediately.
In the display of the captured display image, it may further be determined, from imaging by a face camera provided in the display device, whether the user's face is approaching the display device. When it is determined that the face is approaching, the first AR image may be displayed while the display of the second AR image different from the first AR image is suppressed. Alternatively, whether the user's face is approaching the display device may be determined from the acceleration of the display device measured by the acceleration sensor, and when it is determined that the face is approaching, the first AR image may be displayed while the display of the second AR image is suppressed.
This prevents, as shown in FIG. 56, the first AR image from being replaced with a different second AR image while the user is bringing his or her face closer to the display device to look at the first AR image.
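The display-period control just described might be combined as in the following Python sketch; the period, threshold, and sensor interfaces are assumptions for illustration, not values taken from this disclosure.

    import time

    DISPLAY_PERIOD_S = 5.0    # assumed predetermined display period
    ACCEL_THRESHOLD = 3.0     # assumed acceleration threshold

    class ArDisplayLock:
        """Keeps the first AR image on screen for the display period."""

        def __init__(self):
            self.locked_until = 0.0

        def on_first_ar_displayed(self):
            self.locked_until = time.monotonic() + DISPLAY_PERIOD_S

        def may_decode_new_image(self, accel: float, face_approaching: bool) -> bool:
            """Whether a newly acquired decoding image may be decoded (and
            hence whether a second AR image may replace the first)."""
            if accel >= ACCEL_THRESHOLD:
                return True     # device re-aimed at another subject: release the suppression
            if face_approaching:
                # User is looking closely: keep the first AR image up.
                self.locked_until = time.monotonic() + DISPLAY_PERIOD_S
                return False
            return time.monotonic() >= self.locked_until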
Further, as shown in FIG. 60, in the acquisition of the captured display image and the decoding image, the captured display image and the decoding image may be acquired by imaging, as the subject, a plurality of displays each showing an image. In this case, in the recognition of the target area, the area of the captured display image in which the transmission display appears, that is, the display among the plurality of displays that is transmitting the light ID, is recognized as the target area. In the display of the captured display image, a first subtitle corresponding to the image shown on the transmission display is superimposed on the target area as the AR image, and a second subtitle, which is an enlarged version of the first subtitle, is further superimposed on an area of the captured display image larger than the target area.
Since the first subtitle is superimposed on the image of the transmission display, the user can easily grasp to which of the plurality of displays the subtitle belongs. Moreover, since the second subtitle, the enlarged version of the first subtitle, is also displayed, the subtitle can be read easily from the second subtitle even when the first subtitle is too small to read.
In the display of the captured display image, it may further be determined whether the information acquired from the server described above includes audio information, and when it is determined that audio information is included, the audio indicated by the audio information may be output with priority over the first and second subtitles.
Since the audio is output preferentially, the burden on the user of reading the subtitles can be reduced.
FIG. 69B is a block diagram illustrating the configuration of the display device in the present embodiment.
The display device 10 in the present embodiment is a display device that displays an image, and includes an image sensor 11, a decoding unit 12, a transmission unit 13, an acquisition unit 14, a recognition unit 15, and a display unit 16. The display device 10 corresponds to the receiver 200 described above.
The image sensor 11 captures a subject to acquire a captured display image and a decoding image. The decoding unit 12 acquires a light ID by decoding the decoding image. The transmission unit 13 transmits the light ID to a server. The acquisition unit 14 acquires an AR image corresponding to the light ID and recognition information from the server. The recognition unit 15 recognizes an area of the captured display image corresponding to the recognition information as a target area. The display unit 16 displays the captured display image with the AR image superimposed on the target area.
In this way, the AR image is displayed superimposed on the captured display image, so an image useful to the user can be displayed. Furthermore, the AR image can be superimposed on an appropriate target area while the processing load is suppressed.
In the present embodiment, each component may be configured by dedicated hardware, or may be realized by executing a software program suited to the component. Each component may be realized by a program execution unit, such as a CPU or a processor, reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory. Here, software that realizes the receiver 200, the display device 10, or the like of the present embodiment is a program that causes a computer to execute the steps included in the flowcharts shown in FIGS. 45, 52, 56, 62, 65, and 68A to 69A.
[Modification 1 of Embodiment 4]
Modification 1 of Embodiment 4, that is, Modification 1 of the display method that realizes AR using a light ID, is described below.
FIG. 70 is a diagram illustrating an example in which the receiver in Modification 1 of Embodiment 4 displays an AR image.
The receiver 200 acquires, by imaging a subject with its image sensor, a captured display image Pk, which is the normal captured image described above, and a decoding image, which is the visible light communication image or bright line image described above.
Specifically, the image sensor of the receiver 200 images a transmitter 100c configured as a robot, and a person 21 next to the transmitter 100c. The transmitter 100c is the transmitter according to any one of Embodiments 1 to 3 above, and includes one or more light emitting elements (for example, LEDs) 131. The transmitter 100c changes in luminance by making its one or more light emitting elements 131 blink, and transmits a light ID (light identification information) by that luminance change. This light ID is the visible light signal described above.
The receiver 200 images the transmitter 100c and the person 21 with the normal exposure time to acquire the captured display image Pk in which they are shown. The receiver 200 further images the transmitter 100c and the person 21 with a communication exposure time shorter than the normal exposure time to acquire the decoding image.
The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the transmitter 100c. The receiver 200 transmits the light ID to a server, and acquires an AR image P10 and recognition information corresponding to the light ID from the server. The receiver 200 recognizes an area of the captured display image Pk corresponding to the recognition information as a target area. For example, the receiver 200 recognizes the area to the right of the area in which the robot, that is, the transmitter 100c, is shown as the target area. Specifically, the receiver 200 determines the distance between the two markers 132a and 132b of the transmitter 100c shown in the captured display image Pk, and recognizes an area whose width and height correspond to that distance as the target area. In other words, the recognition information indicates the shapes of the markers 132a and 132b and the position and size of the target area relative to the markers 132a and 132b.
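The marker-based sizing might be computed as in the following Python sketch; the scale factors and the exact placement to the right of the markers are assumptions for illustration, since the disclosure specifies only that the width and height correspond to the marker spacing.

    import math

    def target_area_from_markers(marker_a, marker_b):
        """marker_a, marker_b: (x, y) pixel positions of markers 132a and
        132b in the captured display image. Returns (x, y, w, h)."""
        spacing = math.dist(marker_a, marker_b)            # marker spacing sets the scale
        w, h = 2.0 * spacing, 3.0 * spacing                # assumed proportions
        x = max(marker_a[0], marker_b[0]) + 0.5 * spacing  # area to the right of the markers
        y = min(marker_a[1], marker_b[1]) - spacing
        return (x, y, w, h)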
The receiver 200 then superimposes the AR image P10 on the target area, and displays the captured display image Pk with the AR image P10 superimposed on the display 201. For example, the receiver 200 acquires an AR image P10 showing another robot different from the transmitter 100c. In this case, since the AR image P10 is superimposed on the target area of the captured display image Pk, the captured display image Pk can be displayed as if the other robot actually existed next to the transmitter 100c. As a result, the person 21 can appear in a photograph together with the transmitter 100c and the other robot, even though the other robot does not actually exist.
FIG. 71 is a diagram illustrating another example in which the receiver 200 in Modification 1 of Embodiment 4 displays an AR image.
For example, as shown in FIG. 71, the transmitter 100 is configured as an image display device having a display panel, and transmits a light ID by changing in luminance while displaying a still image PS on the display panel. The display panel is, for example, a liquid crystal display or an organic EL (electroluminescence) display.
The receiver 200 acquires a captured display image Pm and a decoding image by imaging the transmitter 100, in the same manner as described above. The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the transmitter 100. The receiver 200 transmits the light ID to a server, and acquires an AR image P11 and recognition information corresponding to the light ID from the server. The receiver 200 recognizes an area of the captured display image Pm corresponding to the recognition information as a target area; for example, the area in which the display panel of the transmitter 100 is shown. The receiver 200 then superimposes the AR image P11 on the target area, and displays the captured display image Pm with the AR image P11 superimposed on the display 201. For example, the AR image P11 is a moving image whose first picture in display order is identical or substantially identical to the still image PS displayed on the display panel of the transmitter 100. That is, the AR image P11 is a moving image that starts to move from the still image PS.
In this case, since the AR image P11 is superimposed on the target area of the captured display image Pm, the receiver 200 can display the captured display image Pm as if an image display device showing a moving image actually existed.
FIG. 72 is a diagram illustrating another example in which the receiver 200 in Modification 1 of Embodiment 4 displays an AR image.
The transmitter 100 is configured as a station name sign, for example, as shown in FIG. 72, and transmits a light ID by changing in luminance.
As shown in (a) of FIG. 72, the receiver 200 images the transmitter 100 from a position away from the transmitter 100, and thereby acquires a captured display image Pn and a decoding image in the same manner as described above. The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the transmitter 100. The receiver 200 transmits the light ID to a server, and acquires AR images P12 to P14 and recognition information corresponding to the light ID from the server. The receiver 200 recognizes two areas of the captured display image Pn corresponding to the recognition information as first and second target areas. For example, the receiver 200 recognizes the area around the transmitter 100 as the first target area. The receiver 200 then superimposes the AR image P12 on the first target area, and displays the captured display image Pn with the AR image P12 superimposed on the display 201. For example, the AR image P12 is an arrow that prompts the user of the receiver 200 to approach the transmitter 100.
In this case, since the AR image P12 is displayed superimposed on the first target area of the captured display image Pn, the user approaches the transmitter 100 while keeping the receiver 200 pointed at the transmitter 100. As the receiver 200 approaches the transmitter 100, the area of the captured display image Pn in which the transmitter 100 is shown (corresponding to the reference area described above) becomes larger. When the size of that area becomes equal to or greater than a first threshold, the receiver 200 further superimposes the AR image P13 on the second target area, that is, the area in which the transmitter 100 is shown, as shown in (b) of FIG. 72, for example. That is, the receiver 200 displays the captured display image Pn with the AR images P12 and P13 superimposed on the display 201. For example, the AR image P13 is a message that informs the user of an overview of the area around the station indicated by the station name sign. The AR image P13 is equal in size to the area of the captured display image Pn in which the transmitter 100 is shown.
In this case as well, since the AR image P12, the arrow, is displayed superimposed on the first target area of the captured display image Pn, the user approaches the transmitter 100 while keeping the receiver 200 pointed at the transmitter 100. As the receiver 200 approaches the transmitter 100, the area in which the transmitter 100 is shown (corresponding to the reference area described above) becomes larger still. When the size of that area becomes equal to or greater than a second threshold, the receiver 200 changes the AR image P13 superimposed on the second target area to the AR image P14, as shown in (c) of FIG. 72, for example. Furthermore, the receiver 200 deletes the AR image P12 superimposed on the first target area.
That is, the receiver 200 displays the captured display image Pn with the AR image P14 superimposed on the display 201. For example, the AR image P14 is a message that informs the user of details of the area around the station indicated by the station name sign. The AR image P14 is equal in size to the area of the captured display image Pn in which the transmitter 100 is shown. This area is larger the closer the receiver 200 is to the transmitter 100; the AR image P14 is therefore larger than the AR image P13.
In this way, the closer the receiver 200 comes to the transmitter 100, the larger the AR image it displays and the more information it presents. Moreover, since an arrow prompting the user to approach, such as the AR image P12, is displayed, the user can easily grasp that more information will be displayed by approaching the transmitter 100.
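The size-dependent switching of FIG. 72 amounts to a simple threshold test on the apparent size of the reference area; the following Python sketch shows one way to express it, with threshold values that are illustrative assumptions.

    FIRST_THRESHOLD_PX = 10_000    # assumed reference-area size in pixels
    SECOND_THRESHOLD_PX = 40_000   # assumed

    def select_ar_images(reference_area_px: int) -> list[str]:
        if reference_area_px >= SECOND_THRESHOLD_PX:
            return ["P14"]           # close: detailed message only, arrow removed
        if reference_area_px >= FIRST_THRESHOLD_PX:
            return ["P12", "P13"]    # mid-range: arrow plus overview message
        return ["P12"]               # far away: arrow prompting the user to approach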
FIG. 73 is a diagram illustrating another example in which the receiver 200 in Modification 1 of Embodiment 4 displays an AR image.
In the example shown in FIG. 72, the receiver 200 displays more information as it approaches the transmitter 100; however, it may display a large amount of information, for example in the form of a speech balloon, regardless of the distance to the transmitter 100.
Specifically, as shown in FIG. 73, the receiver 200 acquires a captured display image Po and a decoding image by imaging the transmitter 100, in the same manner as described above. The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the transmitter 100. The receiver 200 transmits the light ID to a server, and acquires an AR image P15 and recognition information corresponding to the light ID from the server. The receiver 200 recognizes an area of the captured display image Po corresponding to the recognition information as a target area; for example, the area around the transmitter 100. The receiver 200 then superimposes the AR image P15 on the target area, and displays the captured display image Po with the AR image P15 superimposed on the display 201. For example, the AR image P15 is a message that informs the user, in the form of a speech balloon, of details of the area around the station indicated by the station name sign.
In this case, since the AR image P15 is superimposed on the target area of the captured display image Po, the user of the receiver 200 can have a large amount of information displayed on the receiver 200 without approaching the transmitter 100.
FIG. 74 is a diagram illustrating another example of the receiver 200 in Modification 1 of Embodiment 4.
Although the receiver 200 is configured as a smartphone in the examples described above, it may be configured as a head mounted display (also called glass) including an image sensor, as in the example shown in FIG. 64.
Such a receiver 200 acquires the light ID by decoding only a partial decoding target area of the decoding image. For example, as shown in (a) of FIG. 74, the receiver 200 includes a line-of-sight detection camera 203. The line-of-sight detection camera 203 images the eyes of the user wearing the head mounted display, that is, the receiver 200. The receiver 200 detects the user's line of sight based on the image of the eyes obtained by the imaging by the line-of-sight detection camera 203.
As shown in (b) of FIG. 74, the receiver 200 displays a line-of-sight frame 204 so that the line-of-sight frame 204 appears, for example, in the region of the user's field of view toward which the detected line of sight is directed. The line-of-sight frame 204 therefore moves following the movement of the user's line of sight. The receiver 200 treats the area of the decoding image corresponding to the inside of the line-of-sight frame 204 as the decoding target area. That is, even if there is a bright line pattern area outside the decoding target area in the decoding image, the receiver 200 does not decode that bright line pattern area; it decodes only bright line pattern areas within the decoding target area. Consequently, even when the decoding image contains a plurality of bright line pattern areas, not all of them are decoded, so the processing load can be reduced and the display of superfluous AR images can be suppressed.
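Restricting decoding to the line-of-sight frame can be expressed as a simple filter over the bright line pattern areas found in the decoding image, as in this sketch; the rectangle test and the per-region decoder are hypothetical stand-ins.

    def rect_contains(outer, inner) -> bool:
        """Hypothetical: whether rectangle `inner` lies inside `outer`."""
        ...

    def decode_region(region_image): ...   # hypothetical per-region light-ID decoder

    def decode_in_gaze(bright_line_regions, gaze_frame):
        """bright_line_regions: (rect, region_image) pairs found in the
        decoding image; gaze_frame: rect derived from the detected gaze."""
        light_ids = []
        for rect, region_image in bright_line_regions:
            # Only regions inside the decoding target area are decoded;
            # regions outside are deliberately skipped to save processing
            # and to suppress superfluous AR images.
            if rect_contains(gaze_frame, rect):
                light_ids.append(decode_region(region_image))
        return light_ids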
When the decoding image contains a plurality of bright line pattern areas each corresponding to audio to be output, the receiver 200 may decode only the bright line pattern area within the decoding target area and output only the audio corresponding to that bright line pattern area. Alternatively, the receiver 200 may decode each of the plurality of bright line pattern areas contained in the decoding image, output the audio corresponding to the bright line pattern area within the decoding target area at a high volume, and output the audio corresponding to the bright line pattern areas outside the decoding target area at a low volume. When there are a plurality of bright line pattern areas outside the decoding target area, the receiver 200 may output the audio corresponding to a bright line pattern area at a higher volume the closer that bright line pattern area is to the decoding target area.
FIG. 75 is a diagram illustrating another example in which the receiver 200 in Modification 1 of Embodiment 4 displays an AR image.
For example, as shown in FIG. 75, the transmitter 100 is configured as an image display device having a display panel, and transmits a light ID by changing in luminance while displaying an image on the display panel.
The receiver 200 acquires a captured display image Pp and a decoding image by imaging the transmitter 100, in the same manner as described above.
At this time, the receiver 200 specifies, in the captured display image Pp, the area that is at the same position as the bright line pattern area in the decoding image and has the same size as that bright line pattern area. The receiver 200 may then display a scanning line P100 that moves repeatedly from one end of that area to the other.
While the scanning line P100 is displayed, the receiver 200 acquires the light ID by decoding the decoding image and transmits the light ID to a server. The receiver 200 then acquires an AR image and recognition information corresponding to the light ID from the server, and recognizes an area of the captured display image Pp corresponding to the recognition information as a target area.
On recognizing such a target area, the receiver 200 ends the display of the scanning line P100, superimposes the AR image on the target area, and displays the captured display image Pp with the AR image superimposed on the display 201.
In this way, the moving scanning line P100 is displayed from when the transmitter 100 is imaged until the AR image is displayed, so the user can be informed that processing such as reading of the light ID and the AR image is in progress.
FIG. 76 is a diagram illustrating another example in which the receiver 200 in Modification 1 of Embodiment 4 displays an AR image.
For example, as shown in FIG. 76, each of two transmitters 100 is configured as an image display device having a display panel, and transmits a light ID by changing in luminance while displaying the same still image PS on its display panel. Here, the two transmitters 100 change in luminance in mutually different manners, and thereby transmit mutually different light IDs (for example, light IDs "01" and "02").
As in the example shown in FIG. 71, the receiver 200 acquires a captured display image Pq and a decoding image by imaging the two transmitters 100. The receiver 200 acquires the light IDs "01" and "02" by decoding the decoding image. That is, the receiver 200 receives the light ID "01" from one of the two transmitters 100 and the light ID "02" from the other. The receiver 200 transmits these light IDs to a server, acquires an AR image P16 and recognition information corresponding to the light ID "01" from the server, and further acquires an AR image P17 and recognition information corresponding to the light ID "02" from the server.
The receiver 200 recognizes the areas of the captured display image Pq corresponding to these pieces of recognition information as target areas. For example, the receiver 200 recognizes the areas in which the display panels of the two transmitters 100 are shown as the target areas. The receiver 200 then superimposes the AR image P16 on the target area corresponding to the light ID "01" and the AR image P17 on the target area corresponding to the light ID "02", and displays the captured display image Pq with the AR images P16 and P17 superimposed on the display 201. For example, the AR image P16 is a moving image whose first picture in display order is identical or substantially identical to the still image PS displayed on the display panel of the transmitter 100 corresponding to the light ID "01". Likewise, the AR image P17 is a moving image whose first picture in display order is identical or substantially identical to the still image PS displayed on the display panel of the transmitter 100 corresponding to the light ID "02". That is, the first pictures of the moving images, the AR image P16 and the AR image P17, are the same; however, the AR image P16 and the AR image P17 are mutually different moving images whose pictures other than the first differ.
Therefore, since such an AR image P16 and AR image P17 are superimposed on the captured display image Pq, the receiver 200 can display the captured display image Pq as if image display devices showing mutually different moving images reproduced from the same picture actually existed.
FIG. 77 is a flowchart illustrating an example of the processing operation of the receiver 200 in Modification 1 of Embodiment 4. Specifically, the processing operation shown by the flowchart of FIG. 77 is an example of the processing operation of the receiver 200 when there are two transmitters 100 as shown in FIG. 71 and the receiver 200 images the transmitters 100 individually.
First, the receiver 200 acquires a first light ID by imaging a first transmitter 100 as a first subject (step S201). Next, the receiver 200 recognizes the first subject in the captured display image (step S202). That is, the receiver 200 acquires a first AR image and first recognition information corresponding to the first light ID from a server, and recognizes the first subject based on the first recognition information. The receiver 200 then starts playback of a first moving image, which is the first AR image, from the beginning (step S203); that is, the receiver 200 starts playback from the first picture of the first moving image.
Here, the receiver 200 determines whether the first subject has left the captured display image (step S204); that is, it determines whether the first subject can no longer be recognized in the captured display image. If it determines that the first subject has left the captured display image (Y in step S204), the receiver 200 suspends the playback of the first moving image, which is the first AR image (step S205).
Next, the receiver 200 determines whether it has acquired, by imaging a second transmitter 100 different from the first transmitter 100 as a second subject, a second light ID different from the first light ID acquired in step S201 (step S206). If the receiver 200 determines that it has acquired the second light ID (Y in step S206), it performs the same processing as in steps S202 and S203 following the acquisition of the first light ID. That is, the receiver 200 recognizes the second subject in the captured display image (step S207), and starts playback of a second moving image, which is the second AR image corresponding to the second light ID, from the beginning (step S208); that is, the receiver 200 starts playback from the first picture of the second moving image.
On the other hand, if the receiver 200 determines in step S206 that it has not acquired the second light ID (N in step S206), it determines whether the first subject has entered the captured display image again (step S209); that is, it determines whether it has recognized the first subject in the captured display image again. If the receiver 200 determines that the first subject has entered the captured display image (Y in step S209), it further determines whether a predetermined time has elapsed between the first subject leaving the captured display image and entering it again (step S210). If it determines that the predetermined time has not elapsed (Y in step S210), the receiver 200 resumes playback of the suspended first moving image from the middle (step S211). The playback resume start picture, that is, the picture of the first moving image displayed first when playback resumes from the middle, may be the picture next in display order after the picture displayed last when the playback of the first moving image was suspended. Alternatively, the playback resume start picture may be the picture n pictures (n being an integer of 1 or more) earlier in display order than the last displayed picture.
On the other hand, if it determines that the predetermined time has elapsed (N in step S210), the receiver 200 starts playback of the suspended first moving image from the beginning (step S212).
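The suspend-and-resume behavior of steps S204 to S212 for the first moving image might look like this in Python; the timeout value and the rewind count n are assumptions.

    import time

    RESUME_TIMEOUT_S = 10.0   # assumed "predetermined time" of step S210
    REWIND_N = 0              # resume n pictures before the last displayed one

    class MoviePlaybackState:
        def __init__(self):
            self.next_frame = 0        # index of the next picture to display
            self.suspended_at = None   # time at which playback was suspended

        def suspend(self):             # step S205: subject left the image
            self.suspended_at = time.monotonic()

        def resume(self):              # steps S210 to S212: subject re-entered
            within_time = (self.suspended_at is not None and
                           time.monotonic() - self.suspended_at <= RESUME_TIMEOUT_S)
            if within_time:
                # Resume from the middle, optionally a few pictures back.
                self.next_frame = max(0, self.next_frame - REWIND_N)
            else:
                self.next_frame = 0    # time elapsed: restart from the beginning
            self.suspended_at = None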
In the examples described above, the receiver 200 superimposes the AR image on the target area of the captured display image; in doing so, it may also adjust the brightness of the AR image. That is, the receiver 200 determines whether the brightness of the AR image acquired from the server matches the brightness of the target area of the captured display image. If it determines that they do not match, the receiver 200 adjusts the brightness of the AR image so that it matches the brightness of the target area, and superimposes the brightness-adjusted AR image on the target area of the captured display image. This makes the superimposed AR image closer to an image of a real object and reduces the user's sense of incongruity toward the AR image. Note that the brightness of the AR image is the spatial average brightness of the AR image, and the brightness of the target area is likewise the spatial average brightness of the target area.
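Matching spatial average brightness reduces to scaling the AR image by the ratio of the two averages, as in this sketch (NumPy arrays are assumed as the image representation):

    import numpy as np

    def match_brightness(ar_image: np.ndarray, target_area: np.ndarray) -> np.ndarray:
        """ar_image, target_area: uint8 arrays of shape (H, W, 3). Returns
        the AR image scaled to the target area's spatial average brightness."""
        ar_mean = ar_image.mean()
        if ar_mean == 0:
            return ar_image                    # avoid division by zero
        scale = target_area.mean() / ar_mean
        scaled = ar_image.astype(np.float32) * scale
        return np.clip(scaled, 0, 255).astype(np.uint8)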
As shown in FIG. 53, the receiver 200 may, when the AR image is tapped, enlarge the AR image and display it on the whole of the display 201. In the example shown in FIG. 53, the receiver 200 switches the tapped AR image to another AR image; however, it may switch AR images automatically, regardless of tapping. For example, when an AR image has been displayed for a predetermined length of time, the receiver 200 switches it to another AR image and displays that image. Likewise, when the current time reaches a predetermined time, the receiver 200 switches the AR image displayed until then to another AR image and displays it. This allows the user to see new AR images easily, without performing any operation.
[Modification 2 of Embodiment 4]
Modification 2 of Embodiment 4, that is, Modification 2 of the display method that realizes AR using a light ID, is described below.
 図78は、実施の形態4またはその変形例1における受信機200において想定されるAR画像を表示するときの課題の一例を示す図である。 FIG. 78 is a diagram illustrating an example of a problem when displaying an AR image assumed in the receiver 200 according to the fourth embodiment or the modification 1 thereof.
For example, the receiver 200 according to Embodiment 4 or Modification 1 thereof images a subject at time t1. The subject described above is a transmitter, such as a television, that transmits a light ID through luminance changes, or a poster, guide board, or signboard illuminated by light from such a transmitter. As a result, the receiver 200 displays the entire image obtained from the effective pixel region of the image sensor (hereinafter referred to as the whole captured image) on the display 201 as the captured display image. At this time, the receiver 200 recognizes, within the captured display image, the region corresponding to the recognition information acquired based on the light ID as the target region on which the AR image is to be superimposed. The target region is, for example, a region showing the image of a transmitter such as a television, or the image of a poster. The receiver 200 then superimposes the AR image on the target region of the captured display image and displays the captured display image on which the AR image is superimposed on the display 201. Note that the AR image may be a still image or a moving image, or may be a character string consisting of one or more characters or symbols.
Here, when the user of the receiver 200 moves closer to the subject in order to display the AR image at a larger size, at time t2 the region of the image sensor corresponding to the target region (hereinafter referred to as the recognition region) extends beyond the effective pixel region. The recognition region is the region within the effective pixel region of the image sensor onto which the image of the target region in the captured display image is projected. In other words, the effective pixel region and the recognition region of the image sensor correspond, respectively, to the captured display image and the target region on the display 201.
When the recognition region extends beyond the effective pixel region, the receiver 200 can no longer recognize the target region in the captured display image and therefore cannot display the AR image.
Therefore, the receiver 200 in this modification acquires, as the whole captured image, an image with a wider angle of view than the captured display image shown across the entire display 201.
FIG. 79 is a diagram illustrating an example in which the receiver 200 according to Modification 2 of Embodiment 4 displays an AR image.
The angle of view of the whole captured image of the receiver 200 according to this modification, that is, the angle of view of the effective pixel region of the image sensor, is wider than the angle of view of the captured display image shown across the entire display 201. In the image sensor, the region corresponding to the image range shown on the display 201 is hereinafter referred to as the display region.
For example, the receiver 200 images the subject at time t1. As a result, of the whole captured image obtained from the effective pixel region of the image sensor, the receiver 200 displays on the display 201, as the captured display image, only the image obtained from the display region, which is narrower than the effective pixel region. At this time, as described above, the receiver 200 recognizes, within the whole captured image, the region corresponding to the recognition information acquired based on the light ID as the target region on which the AR image is to be superimposed. The receiver 200 then superimposes the AR image on the target region of the captured display image and displays the captured display image on which the AR image is superimposed on the display 201.
Here, when the user of the receiver 200 moves closer to the subject in order to display the AR image at a larger size, the recognition region in the image sensor expands. Then, at time t2, the recognition region extends beyond the display region of the image sensor. That is, the image of the target region (for example, the image of a poster) extends beyond the captured display image shown on the display 201. However, the recognition region does not extend beyond the effective pixel region of the image sensor. In other words, even at time t2 the receiver 200 is still acquiring a whole captured image that contains the target region. As a result, the receiver 200 can recognize the target region in the whole captured image, and it superimposes, only on the part of the target region that lies within the captured display image, the part of the AR image corresponding to that part, and displays this on the display 201.
Thus, even when the user moves closer to the subject to display the AR image at a larger size and the target region extends beyond the captured display image, display of the AR image can continue.
FIG. 80 is a flowchart illustrating an example of the processing operation of the receiver 200 according to Modification 2 of Embodiment 4.
The receiver 200 acquires the whole captured image and a decoding image by imaging the subject with the image sensor (step S301). Next, the receiver 200 acquires a light ID by decoding the decoding image (step S302). Next, the receiver 200 transmits the light ID to the server (step S303). Next, the receiver 200 acquires the AR image and the recognition information corresponding to the light ID from the server (step S304). Next, the receiver 200 recognizes, within the whole captured image, the region corresponding to the recognition information as the target region (step S305).
Here, the receiver 200 determines whether the recognition region, which is the region within the effective pixel region of the image sensor that corresponds to the image of the target region, extends beyond the display region (step S306). If the receiver 200 determines that it does (Yes in step S306), it displays, only on the part of the target region that lies within the captured display image, the part of the AR image corresponding to that part (step S307). On the other hand, if the receiver 200 determines that it does not (No in step S306), it superimposes the AR image on the target region of the captured display image and displays the captured display image on which the AR image is superimposed (step S308).
The receiver 200 then determines whether the AR image display processing should be terminated (step S309); if it determines that the processing should not be terminated (No in step S309), it repeats the processing from step S305.
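For illustration, the flow of FIG. 80 can be organized as in the minimal Python sketch below. All helper names (`decode_light_id`, `find_target_region`, the `sensor`, `server`, and `display` interfaces) are assumptions introduced for this sketch, and re-capturing inside the loop is likewise an assumption, not something the disclosure prescribes.

```python
def ar_display_loop(sensor, server, display):
    # Steps S301-S304: capture, decode the light ID, and fetch AR data.
    whole_image, decode_image = sensor.capture()         # step S301
    light_id = decode_light_id(decode_image)             # step S302
    ar_image, recognition_info = server.query(light_id)  # steps S303-S304

    while True:
        # Step S305: locate the target region in the whole captured image.
        target = find_target_region(whole_image, recognition_info)

        # Step S306: does the recognition region extend beyond the display region?
        if target.extends_beyond(sensor.display_region):
            # Step S307: draw only the part of the AR image corresponding to
            # the part of the target region inside the captured display image.
            display.draw_partial(ar_image, target)
        else:
            # Step S308: superimpose the whole AR image on the target region.
            display.draw_full(ar_image, target)

        # Step S309: exit check; otherwise repeat from step S305.
        if display.should_stop():
            break
        whole_image, _ = sensor.capture()
```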
FIG. 81 is a diagram illustrating another example in which the receiver 200 according to Modification 2 of Embodiment 4 displays an AR image.
The receiver 200 may switch the screen display of the AR image according to the ratio of the size of the recognition region to the size of the display region described above.
When the horizontal width of the display region of the image sensor is w1 and its vertical width is h1, and the horizontal width of the recognition region is w2 and its vertical width is h2, the receiver compares the larger of the ratios (h2/h1) and (w2/w1) with a threshold.
For example, while the receiver 200 is displaying a captured display image on which the AR image is superimposed on the target region, as in (screen display 1) of FIG. 81, it compares the larger of these ratios with a first threshold (for example, 0.9). When the larger ratio becomes 0.9 or more, the receiver 200 enlarges the AR image and displays it across the entire display 201, as in (screen display 2) of FIG. 81. Note that even when the recognition region becomes larger than the display region, and further when it becomes larger than the effective pixel region, the receiver 200 continues to display the enlarged AR image across the entire display 201.
Also, while the receiver 200 is displaying the enlarged AR image across the entire display 201, as in (screen display 2) of FIG. 81, it compares the larger of these ratios with a second threshold (for example, 0.7). The second threshold is smaller than the first threshold. When the larger ratio becomes 0.7 or less, the receiver 200 displays the captured display image on which the AR image is superimposed on the target region, as in (screen display 1) of FIG. 81.
FIG. 82 is a flowchart illustrating another example of the processing operation of the receiver 200 according to Modification 2 of Embodiment 4.
First, the receiver 200 performs light ID processing (step S301a). This light ID processing comprises steps S301 to S304 shown in FIG. 80. Next, the receiver 200 recognizes, within the captured display image, the region corresponding to the recognition information as the target region (step S311). The receiver 200 then superimposes the AR image on the target region of the captured display image and displays the captured display image on which the AR image is superimposed (step S312).
Next, the receiver 200 determines whether the recognition region ratio, that is, the larger of the ratios (h2/h1) and (w2/w1), is equal to or greater than a first threshold K (for example, K = 0.9) (step S313). If it determines that the ratio is not equal to or greater than the first threshold K (No in step S313), the receiver 200 repeats the processing from step S311. On the other hand, if it determines that the ratio is equal to or greater than the first threshold K (Yes in step S313), the receiver 200 enlarges the AR image and displays it across the entire display 201 (step S314). At this time, the receiver 200 periodically switches the power of the image sensor between on and off. By periodically turning off the power of the image sensor, the power consumption of the receiver 200 can be reduced.
Next, while the image sensor is periodically powered on, the receiver 200 determines whether the recognition region ratio is equal to or less than a second threshold L (for example, L = 0.7) (step S315). If it determines that the ratio is not equal to or less than the second threshold L (No in step S315), the receiver 200 repeats the processing from step S314. On the other hand, if it determines that the ratio is equal to or less than the second threshold L (Yes in step S315), the receiver 200 superimposes the AR image on the target region of the captured display image and displays the captured display image on which the AR image is superimposed (step S316).
The receiver 200 then determines whether the AR image display processing should be terminated (step S317); if it determines that the processing should not be terminated (No in step S317), it repeats the processing from step S313.
By setting the second threshold L to a value smaller than the first threshold K in this way, the screen display of the receiver 200 is prevented from switching frequently between (screen display 1) and (screen display 2), and the state of the screen display can be stabilized.
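The two-threshold behavior of steps S313 to S316 is a form of hysteresis. The following is a minimal sketch in Python, using the example threshold values K = 0.9 and L = 0.7 given above; the function and parameter names are introduced only for this sketch.

```python
K = 0.9  # first threshold: switch to full-screen AR (example value above)
L = 0.7  # second threshold: switch back to superimposed AR (example value)

def update_screen_mode(fullscreen, w1, h1, w2, h2):
    """Return the new display mode given the display-region size (w1, h1)
    and the recognition-region size (w2, h2). Because L < K, the mode does
    not flip back and forth around a single threshold (hysteresis)."""
    ratio = max(h2 / h1, w2 / w1)
    if not fullscreen and ratio >= K:
        return True    # step S314: enlarge AR image to the entire display
    if fullscreen and ratio <= L:
        return False   # step S316: superimpose AR image on the target region
    return fullscreen  # no change while the ratio lies between L and K
```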
In the examples shown in FIG. 81 and FIG. 82, the display region and the effective pixel region may be identical or may differ. Also, these examples use the ratio of the size of the recognition region to the display region; however, when the display region and the effective pixel region differ, the ratio of the size of the recognition region to the effective pixel region may be used instead of the ratio to the display region.
FIG. 83 is a diagram illustrating another example in which the receiver 200 according to Modification 2 of Embodiment 4 displays an AR image.
In the example shown in FIG. 83, as in the example shown in FIG. 79, the image sensor of the receiver 200 has an effective pixel region wider than the display region.
For example, the receiver 200 images the subject at time t1. As a result, of the whole captured image obtained from the effective pixel region of the image sensor, the receiver 200 displays on the display 201, as the captured display image, only the image obtained from the display region, which is narrower than the effective pixel region. At this time, as described above, the receiver 200 recognizes, within the whole captured image, the region corresponding to the recognition information acquired based on the light ID as the target region on which the AR image is to be superimposed. The receiver 200 then superimposes the AR image on the target region of the captured display image and displays the captured display image on which the AR image is superimposed on the display 201.
Here, when the user changes the orientation of the receiver 200 (specifically, of the image sensor), the recognition region of the image sensor moves, for example toward the upper left in FIG. 83, and at time t2 it extends beyond the display region. That is, the image of the target region (for example, the image of a poster) extends beyond the captured display image shown on the display 201. However, the recognition region does not extend beyond the effective pixel region of the image sensor. In other words, even at time t2 the receiver 200 is still acquiring a whole captured image that contains the target region. As a result, the receiver 200 can recognize the target region in the whole captured image, and it superimposes, only on the part of the target region that lies within the captured display image, the part of the AR image corresponding to that part, and displays this on the display 201. Furthermore, the receiver 200 changes the size and position of the displayed part of the AR image according to the movement of the recognition region in the image sensor, that is, the movement of the target region in the whole captured image.
Also, when the recognition region extends beyond the display region as described above, the receiver 200 compares the number of pixels corresponding to the distance between the edge of the effective pixel region and the edge of the display region (hereinafter referred to as the inter-region distance) with a threshold.
For example, let dh be the number of pixels corresponding to the shorter of the distance between the upper side of the effective pixel region and the upper side of the display region and the distance between the lower side of the effective pixel region and the lower side of the display region (hereinafter referred to as the first distance). Likewise, let dw be the number of pixels corresponding to the shorter of the distance between the left side of the effective pixel region and the left side of the display region and the distance between the right side of the effective pixel region and the right side of the display region (hereinafter referred to as the second distance). The inter-region distance described above is then the shorter of the first and second distances.
That is, the receiver 200 compares the smaller of the pixel counts dw and dh with a threshold N. Then, for example at time t2, when that smaller pixel count becomes equal to or less than the threshold N, the receiver 200 fixes the size and position of the part of the AR image, rather than changing them according to the position of the recognition region in the image sensor. In other words, the receiver 200 switches the screen display of the AR image. For example, the receiver 200 fixes the size and position of the displayed part of the AR image to the size and position of the part of the AR image that was shown on the display 201 at the moment the smaller pixel count reached the threshold N.
Therefore, even when the recognition region moves further and comes to extend beyond the effective pixel region at time t3, the receiver 200 continues to display the part of the AR image just as at time t2. That is, as long as the smaller of the pixel counts dw and dh is equal to or less than the threshold N, the receiver 200 continues to display, superimposed on the captured display image, the part of the AR image with its size and position fixed, just as at time t2.
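A minimal sketch of this freezing behavior follows, in Python. The value of the threshold N and the structure of the state object are assumptions for illustration; the disclosure states only that the size and position are fixed once min(dw, dh) falls to N or less.

```python
N = 16  # hypothetical threshold, in pixels

def place_ar_part(state, dw, dh, current_size, current_pos):
    """Freeze the displayed part of the AR image once the inter-region
    distance falls to N pixels or less (times t2 and t3 in FIG. 83)."""
    if min(dw, dh) <= N:
        if state.frozen_size is None:
            # Remember the size and position shown at the moment the
            # smaller pixel count reached the threshold N.
            state.frozen_size, state.frozen_pos = current_size, current_pos
        return state.frozen_size, state.frozen_pos
    # Otherwise track the recognition region normally and clear the freeze.
    state.frozen_size = state.frozen_pos = None
    return current_size, current_pos
```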
In the example shown in FIG. 83, the receiver 200 changes the size and position of the displayed part of the AR image according to the movement of the recognition region in the image sensor; however, it may instead change the display magnification and position of the entire AR image.
FIG. 84 is a diagram illustrating another example in which the receiver 200 according to Modification 2 of Embodiment 4 displays an AR image. Specifically, FIG. 84 shows an example in which the display magnification of the AR image is changed.
For example, as in the example shown in FIG. 83, when the user changes the orientation of the receiver 200 (specifically, of the image sensor) from the state at time t1, the recognition region of the image sensor moves, for example toward the upper left in FIG. 84, and at time t2 it extends beyond the display region. That is, the image of the target region (for example, the image of a poster) extends beyond the captured display image shown on the display 201. However, the recognition region does not extend beyond the effective pixel region of the image sensor. In other words, even at time t2 the receiver 200 is still acquiring a whole captured image that contains the target region. As a result, the receiver 200 can recognize the target region in the whole captured image.
Therefore, in the example shown in FIG. 84, the receiver 200 changes the display magnification of the AR image so that the size of the entire AR image matches the size of the part of the target region that lies within the captured display image. That is, the receiver 200 reduces the AR image. The receiver 200 then superimposes the AR image whose display magnification has been changed (that is, the reduced AR image) on that part of the region and displays it on the display 201. Furthermore, the receiver 200 changes the display magnification and position of the displayed AR image according to the movement of the recognition region in the image sensor, that is, the movement of the target region in the whole captured image.
Also, when the recognition region extends beyond the display region as described above, the receiver 200 compares the smaller of the pixel counts dw and dh with the threshold N. Then, for example at time t2, when that smaller pixel count becomes equal to or less than the threshold N, the receiver 200 fixes the display magnification and position of the AR image, rather than changing them according to the position of the recognition region in the image sensor. In other words, the receiver 200 switches the screen display of the AR image. For example, the receiver 200 fixes the display magnification and position of the displayed AR image to the display magnification and position of the AR image that was shown on the display 201 at the moment the smaller pixel count reached the threshold N.
Therefore, even when the recognition region moves further and comes to extend beyond the effective pixel region at time t3, the receiver 200 continues to display the AR image just as at time t2. That is, as long as the smaller of the pixel counts dw and dh is equal to or less than the threshold N, the receiver 200 continues to display, superimposed on the captured display image, the AR image with its display magnification and position fixed, just as at time t2.
In the above example, the smaller of the pixel counts dw and dh is compared with a threshold, but the ratio of that smaller pixel count may be compared with a threshold instead. The ratio for the pixel count dw is, for example, the ratio (dw/w0) of the pixel count dw to the horizontal pixel count w0 of the effective pixel region. Similarly, the ratio for the pixel count dh is, for example, the ratio (dh/h0) of the pixel count dh to the vertical pixel count h0 of the effective pixel region. Alternatively, the ratios for the pixel counts dw and dh may be expressed using the horizontal or vertical pixel count of the display region instead of that of the effective pixel region. The threshold compared with the ratios of the pixel counts dw and dh is, for example, 0.05.
Alternatively, the angle of view corresponding to the smaller of the pixel counts dw and dh may be compared with a threshold. When the number of pixels along the diagonal of the effective pixel region is m and the angle of view corresponding to that diagonal is θ (for example, 55°), the angle of view corresponding to the pixel count dw is θ × dw / m, and the angle of view corresponding to the pixel count dh is θ × dh / m.
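The three interchangeable criteria just described (raw pixel count, its ratio, and the corresponding angle of view) can be computed as in the minimal Python sketch below. The example values θ = 55° and the ratio threshold 0.05 come from the text above; the pixel threshold N and the angle threshold in the usage example are hypothetical.

```python
def margin_metrics(dw, dh, w0, h0, m, theta_deg=55.0):
    """Return, for the smaller margin between recognition region and
    effective pixel region: its pixel count, its ratio to the matching
    side of the effective pixel region, and the corresponding angle of
    view theta * d / m, where m is the diagonal pixel count."""
    if dw <= dh:
        d, ratio = dw, dw / w0
    else:
        d, ratio = dh, dh / h0
    angle = theta_deg * d / m
    return d, ratio, angle

# Usage example: a 4000x3000 effective pixel region (diagonal m = 5000
# pixels) with margins dw = 120 and dh = 200 pixels.
d, ratio, angle = margin_metrics(120, 200, 4000, 3000, 5000)
# The ratio threshold 0.05 is the example value from the text; the pixel
# threshold (16) and angle threshold (1.0 degree) are hypothetical.
frozen = d <= 16 or ratio <= 0.05 or angle <= 1.0
```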
In the examples shown in FIG. 83 and FIG. 84, the receiver 200 switches the screen display of the AR image based on the inter-region distance between the effective pixel region and the recognition region; however, it may instead switch the screen display of the AR image based on the relationship between the display region and the recognition region.
FIG. 85 is a diagram illustrating another example in which the receiver 200 according to Modification 2 of Embodiment 4 displays an AR image. Specifically, FIG. 85 shows an example in which the screen display of the AR image is switched based on the relationship between the display region and the recognition region. In the example shown in FIG. 85, as in the example shown in FIG. 79, the image sensor of the receiver 200 has an effective pixel region wider than the display region.
For example, the receiver 200 images the subject at time t1. As a result, of the whole captured image obtained from the effective pixel region of the image sensor, the receiver 200 displays on the display 201, as the captured display image, only the image obtained from the display region, which is narrower than the effective pixel region. At this time, as described above, the receiver 200 recognizes, within the whole captured image, the region corresponding to the recognition information acquired based on the light ID as the target region on which the AR image is to be superimposed. The receiver 200 then superimposes the AR image on the target region of the captured display image and displays the captured display image on which the AR image is superimposed on the display 201.
Here, when the user changes the orientation of the receiver 200, the receiver 200 changes the position of the displayed AR image according to the movement of the recognition region in the image sensor. Then, for example, the recognition region of the image sensor moves toward the upper left in FIG. 85, and at time t2 part of the edge of the recognition region coincides with part of the edge of the display region. That is, the image of the target region (for example, an image such as a poster) is positioned at a corner of the captured display image shown on the display 201. As a result, the receiver 200 superimposes the AR image on the target region at the corner of the captured display image and displays it on the display 201.
Then, when the recognition region moves further and extends beyond the display region, the receiver 200 fixes the AR image as it was displayed at time t2, without changing its size or position. In other words, the receiver 200 switches the screen display of the AR image.
Therefore, even when the recognition region moves further and comes to extend beyond the effective pixel region at time t3, the receiver 200 continues to display the AR image just as at time t2. That is, as long as the recognition region extends beyond the display region, the receiver 200 continues to display an AR image of the same size as at time t2, superimposed at the same position in the captured display image as at time t2.
Thus, in the example shown in FIG. 85, the receiver 200 switches the screen display of the AR image depending on whether the recognition region extends beyond the display region. The receiver 200 may also use, instead of the display region, a determination region that contains the display region and is larger than the display region but smaller than the effective pixel region. In this case, the receiver 200 switches the screen display of the AR image depending on whether the recognition region extends beyond the determination region.
The screen display of the AR image has been described above with reference to FIG. 79 to FIG. 85; in addition, when the receiver 200 becomes unable to recognize the target region in the whole captured image, it may superimpose on the captured display image an AR image of the same size as the target region that was recognized until immediately before recognition was lost.
FIG. 86 is a diagram illustrating another example in which the receiver 200 according to Modification 2 of Embodiment 4 displays an AR image.
In the example shown in FIG. 49, the receiver 200 acquires the captured display image Pe and the decoding image, in the same manner as described above, by imaging the guide board 107 illuminated by the transmitter 100. The receiver 200 acquires the light ID by decoding the decoding image. That is, the receiver 200 receives the light ID from the guide board 107. However, if the entire surface of the guide board 107 is of a color that absorbs light (for example, a dark color), the surface remains dark even when illuminated by the transmitter 100, so the receiver 200 may be unable to receive the light ID correctly. Likewise, even if the entire surface of the guide board 107 has a striped pattern like that of a decoding image (that is, a bright-line image), the receiver 200 may be unable to receive the light ID correctly.
Therefore, as shown in FIG. 86, a reflecting plate 109 may be placed near the guide board 107. This allows the receiver 200 to receive the light from the transmitter 100 reflected by the reflecting plate 109, that is, the visible light (specifically, the light ID) transmitted from the transmitter 100. As a result, the receiver 200 can appropriately receive the light ID and display the AR image P5.
[Summary of Modifications 1 and 2 of Embodiment 4]
FIG. 87A is a flowchart illustrating a display method according to one aspect of the present invention.
The display method according to one aspect of the present invention includes steps S41 to S43.
In step S41, a captured image is acquired by imaging, with an imaging sensor, an object lit up by a transmitter that transmits a signal through changes in the luminance of light, with the object as the subject. In step S42, the signal is decoded from the captured image. In step S43, a moving image corresponding to the decoded signal is read from a memory, and the moving image is superimposed on the target region corresponding to the subject in the captured image and displayed on a display. Here, in step S43, display of the moving image starts from one of the following images among the plurality of images included in the moving image: the image containing the object, or one of a predetermined number of images that precede or follow the image containing the object in display time. For example, the predetermined number is 10 frames. Alternatively, the object is a still image, and in step S43 display of the moving image starts from the image identical to that still image. Note that the image from which display of the moving image starts is not limited to the image identical to the still image; it may be an image a predetermined number of frames before or after, in display order, the image identical to the still image, that is, the image containing the object. Furthermore, the object is not limited to a still image and may be, for example, a doll.
Note that the imaging sensor and the captured image are, for example, the image sensor and the whole captured image in Embodiment 4. The still image that is lit up may be a still image shown on the display panel of an image display device, or may be a poster, guide board, or signboard illuminated by light from the transmitter.
Such a display method may further include a transmission step of transmitting the signal to a server and a reception step of receiving the moving image corresponding to the signal from the server.
Thus, for example as shown in FIG. 71, a moving image can be displayed in a virtually realistic manner so that the still image appears to start moving, and an image useful to the user can be displayed.
Furthermore, the still image has an outer frame of a predetermined color, and the display method according to one aspect of the present invention may further include a recognition step of recognizing the target region in the captured image by means of that predetermined color. In this case, in step S43, the moving image may be resized to match the size of the recognized target region, and the resized moving image may be superimposed on the target region in the captured image and displayed on the display. For example, the outer frame of the predetermined color is a white or black rectangular frame surrounding the still image and is indicated by the recognition information in Embodiment 4. The AR image in Embodiment 4 is then resized and superimposed as the moving image.
This makes it possible to display the moving image more realistically, as if it actually existed as the subject.
Furthermore, of the imaging region of the imaging sensor, only the image projected onto the display region, which is a region smaller than the imaging region, is shown on the display. In this case, in step S43, when the projection region onto which the subject is projected in the imaging region is larger than the display region, the image obtained from the portion of the projection region that exceeds the display region need not be shown on the display. Here, for example as shown in FIG. 79, the imaging region and the projection region are the effective pixel region and the recognition region of the image sensor.
Thus, for example as shown in FIG. 79, when the imaging sensor moves closer to the still image that is the subject, the entire still image may remain projected onto the imaging region even though part of the image obtained from the projection region (the recognition region in FIG. 79) is not shown on the display. In that case, therefore, the still image that is the subject can be recognized appropriately, and the moving image can be superimposed appropriately on the target region corresponding to the subject in the captured image.
Furthermore, for example, the horizontal and vertical widths of the display region are w1 and h1, and the horizontal and vertical widths of the projection region are w2 and h2. In this case, in step S43, when the larger of h2/h1 and w2/w1 is equal to or greater than a predetermined value, the moving image may be displayed across the entire screen of the display, and when the larger of h2/h1 and w2/w1 is smaller than the predetermined value, the moving image may be superimposed on the target region in the captured image and displayed on the display.
Thus, for example as shown in FIG. 81, when the imaging sensor comes close to the still image that is the subject, the moving image is displayed full screen, so the user does not need to bring the imaging sensor even closer to the still image to display the moving image at a larger size. This prevents a situation in which the imaging sensor is brought too close to the still image, the projection region (the recognition region in FIG. 81) extends beyond the imaging region (the effective pixel region), and the signal can no longer be decoded.
The display method according to one aspect of the present invention may further include a control step of turning off the operation of the imaging sensor when the moving image is displayed across the entire screen of the display.
Thus, for example as shown in step S314 of FIG. 82, the power consumption of the imaging sensor can be reduced by turning off its operation.
Furthermore, in step S43, when the target region can no longer be recognized in the captured image because the imaging sensor has moved, the moving image may be displayed at the same size as the target region that was recognized immediately before recognition was lost. Note that the target region being unrecognizable in the captured image refers, for example, to a situation in which at least part of the target region corresponding to the still image that is the subject is not included in the captured image. When the target region cannot be recognized in this way, a moving image of the same size as the target region recognized immediately beforehand is displayed, for example as at time t3 in FIG. 85. This prevents at least part of the moving image from disappearing from the display merely because the imaging sensor has been moved.
Furthermore, in step S43, when, because the imaging sensor has moved, only part of the target region is included in the region of the captured image shown on the display, the part of the spatial region of the moving image corresponding to that part of the target region may be superimposed on that part of the target region and displayed on the display. Note that a part of the spatial region of the moving image is a part of each picture constituting the moving image.
Thus, only part of the spatial region of the moving image (the AR image in FIG. 83) is shown on the display, for example as at time t2 in FIG. 83. As a result, the user can be informed that the imaging sensor is not properly aimed at the still image that is the subject.
Furthermore, in step S43, when the target region can no longer be recognized in the captured image because the imaging sensor has moved, the part of the spatial region of the moving image corresponding to the part of the target region that was displayed immediately before recognition was lost may continue to be displayed.
Thus, even when the user points the imaging sensor in a direction away from the still image that is the subject, for example as at time t3 in FIG. 83, part of the spatial region of the moving image (the AR image in FIG. 83) continues to be displayed. As a result, it becomes easier for the user to grasp how to aim the imaging sensor so that the entire moving image is displayed.
Furthermore, in step S43, when the horizontal and vertical widths of the imaging region of the imaging sensor are w0 and h0, and the horizontal and vertical distances between the projection region onto which the subject is projected in the imaging region and the edge of that imaging region are dw and dh, respectively, it may be determined that the target region cannot be recognized when the smaller of dw/w0 and dh/h0 is equal to or less than a predetermined value. Note that the projection region is, for example, the recognition region shown in FIG. 83. Alternatively, in step S43, it may be determined that the target region cannot be recognized when the angle of view corresponding to the shorter of the horizontal and vertical distances between the projection region onto which the subject is projected in the imaging region of the imaging sensor and the edge of that imaging region is equal to or less than a predetermined value.
This makes it possible to appropriately determine whether the target region can be recognized.
FIG. 87B is a block diagram illustrating the configuration of a display device according to one aspect of the present invention.
A display device A10 according to one aspect of the present invention includes an imaging sensor A11, a decoding unit A12, and a display control unit A13.
The imaging sensor A11 acquires a captured image by imaging, as a subject, a still image that is lit up by a transmitter that transmits a signal through changes in the luminance of light.
The decoding unit A12 decodes the signal from the captured image.
The display control unit A13 reads the moving image corresponding to the decoded signal from a memory, superimposes the moving image on the target region corresponding to the subject in the captured image, and displays it on a display. Here, the display control unit A13 displays the plurality of images included in the moving image in order, starting from the head image, which is the image identical to the still image.
This provides the same effects as the display method described above.
Furthermore, the imaging sensor A11 may include a plurality of micromirrors and a photosensor, and the display device A10 may further include an imaging control unit that controls the imaging sensor. In this case, the imaging control unit identifies, within the captured image, the region containing the signal as the signal region, and controls the angles of the micromirrors corresponding to the identified signal region among the plurality of micromirrors. The imaging control unit then causes the photosensor to receive only the light reflected by those micromirrors whose angles have been controlled.
Thus, even if the visible light signal, which is a signal represented by changes in the luminance of light, contains high-frequency components, those high-frequency components can be decoded correctly.
In each of the above embodiments and modifications, each component may be configured as dedicated hardware or may be realized by executing a software program suited to that component. Each component may be realized by a program execution unit, such as a CPU or processor, reading and executing a software program recorded on a recording medium such as a hard disk or semiconductor memory. For example, the program causes a computer to execute the display methods shown by the flowcharts of FIG. 77, FIG. 80, FIG. 82, and FIG. 87A.
The display method according to one or more aspects has been described above based on the embodiments and modifications; however, the present invention is not limited to these embodiments. Forms obtained by applying various modifications conceivable by a person skilled in the art to the present embodiment, and forms constructed by combining components from different embodiments and modifications, may also be included within the scope of the present invention without departing from the spirit of the present invention.
[Modification 3 of Embodiment 4]
Hereinafter, Modification 3 of Embodiment 4, that is, Modification 3 of the display method that realizes AR using a light ID, will be described.
FIG. 88 is a diagram illustrating an example of enlarging and moving an AR image.
As shown in (a) of FIG. 88, the receiver 200 superimposes an AR image P21 on the target region of the captured display image Ppre, as in Embodiment 4 or Modification 1 or 2 thereof. The receiver 200 then displays the captured display image Ppre on which the AR image P21 is superimposed on the display 201. For example, the AR image P21 is a moving image.
Here, as shown in (b) of FIG. 88, when the receiver 200 accepts a resize instruction, it changes the size of the AR image P21 according to that instruction. For example, when the receiver 200 accepts an enlargement instruction, it enlarges the AR image P21 according to that instruction. A resize instruction is given by the user through, for example, a pinch operation, a double tap, or a long press on the AR image P21. Specifically, when the receiver 200 accepts an enlargement instruction given by a pinch-out, it enlarges the AR image P21 according to that instruction. Conversely, when the receiver 200 accepts a reduction instruction given by a pinch-in, it reduces the AR image P21 according to that instruction.
Also, as shown in (c) of FIG. 88, when the receiver 200 accepts a position-change instruction, it changes the position of the AR image P21 according to that instruction. A position-change instruction is given by the user through, for example, a swipe on the AR image. Specifically, when the receiver 200 accepts a position-change instruction given by a swipe, it changes the position of the AR image P21 according to that instruction. That is, the AR image P21 moves.
Thus, enlarging the AR image, which is a moving image, makes it easier to view, while reducing or moving the AR image allows the region of the captured display image Ppre hidden behind the AR image to be shown to the user.
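A minimal sketch of this gesture handling follows, in Python; the gesture-event interface and the fields of the overlay state are assumptions for illustration, not an implementation defined by this disclosure.

```python
def on_gesture(ar, event):
    """Resize or move the superimposed AR image P21 in response to user
    gestures ((b) and (c) of FIG. 88). `ar` holds the current overlay
    state; `event` is a hypothetical gesture-event object."""
    if event.kind == "pinch":
        # Pinch-out yields a scale factor > 1 (enlarge);
        # pinch-in yields a factor < 1 (reduce).
        ar.scale *= event.scale
    elif event.kind == "swipe":
        # Move the AR image by the swipe displacement.
        ar.x += event.dx
        ar.y += event.dy
```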
FIG. 89 is a diagram illustrating an example of enlarging an AR image.
As shown in (a) of FIG. 89, the receiver 200 superimposes an AR image P22 on the target region of the captured display image Ppre, as in Embodiment 4 or Modification 1 or 2 thereof. The receiver 200 then displays the captured display image Ppre on which the AR image P22 is superimposed on the display 201. For example, the AR image P22 is a still image in which a character string is written.
Here, as shown in (b) of FIG. 89, when the receiver 200 accepts a resize instruction, it changes the size of the AR image P22 according to that instruction. For example, when the receiver 200 accepts an enlargement instruction, it enlarges the AR image P22 according to that instruction. As described above, a resize instruction is given by the user through, for example, a pinch operation, a double tap, or a long press on the AR image P22. Specifically, when the receiver 200 accepts an enlargement instruction given by a pinch-out, it enlarges the AR image P22 according to that instruction. Enlarging the AR image P22 makes the character string written in the AR image P22 easier for the user to read.
Also, as shown in (c) of FIG. 89, when the receiver 200 accepts a further resize instruction, it changes the size of the AR image P22 according to that instruction. For example, when the receiver 200 accepts a further enlargement instruction, it enlarges the AR image P22 further according to that instruction. This further enlargement of the AR image P22 makes the character string written in the AR image P22 even easier for the user to read.
 Note that, when the receiver 200 receives an enlargement instruction and the enlargement ratio of the AR image indicated by the instruction is equal to or greater than a threshold, the receiver 200 may obtain a high-resolution AR image. In this case, the receiver 200 may display the high-resolution AR image, enlarged up to the above enlargement ratio, in place of the original AR image already being displayed. For example, the receiver 200 displays a 1920 × 1080 pixel AR image instead of a 640 × 480 pixel AR image. This allows the AR image to be enlarged as if it were actually captured as a subject, and allows a high-resolution image that cannot be obtained by optical zoom to be displayed.
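 As a minimal sketch of this re-acquisition step, the following Python fragment selects between the currently displayed AR image and a higher-resolution variant once the requested enlargement ratio crosses a threshold. The function name fetch_ar_image, the resolution parameter, and the 2.0 threshold are illustrative assumptions, not taken from this description.

```python
HIGH_RES_THRESHOLD = 2.0  # assumed enlargement ratio that triggers re-acquisition

def select_ar_image(current_image, light_id, scale, fetch_ar_image):
    """Return the AR image to display at the given enlargement ratio.

    current_image: the AR image already superimposed (e.g., 640x480).
    fetch_ar_image: callable that requests a variant (e.g., from a server).
    """
    if scale >= HIGH_RES_THRESHOLD:
        # Re-acquire a high-resolution variant (e.g., 1920x1080) so the
        # enlarged image stays sharp beyond what digital zoom can offer.
        return fetch_ar_image(light_id, resolution="high")
    return current_image
```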
 FIG. 90 is a flowchart illustrating an example of processing operations of the receiver 200 for enlarging and moving an AR image.
 First, as in step S101 of the flowchart shown in FIG. 45, the receiver 200 starts imaging with the normal exposure time and the communication exposure time (step S401). Once this imaging starts, a captured display image Ppre based on the normal exposure time and a decoding image (that is, a bright line image) Pdec based on the communication exposure time are each obtained periodically. The receiver 200 then obtains a light ID by decoding the decoding image Pdec.
 Next, the receiver 200 performs AR image superimposition processing, which includes the processing of steps S102 to S106 of the flowchart shown in FIG. 45 (step S402). When this AR image superimposition processing is performed, the AR image is superimposed on the captured display image Ppre and displayed. At this time, the receiver 200 lowers the light ID acquisition rate (step S403). The light ID acquisition rate is the proportion of decoding images (that is, bright line images) Pdec among the number of captured images obtained per unit time by the imaging started in step S401. For example, when the light ID acquisition rate is lowered, the number of decoding images Pdec obtained per unit time becomes smaller than the number of captured display images Ppre obtained per unit time.
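 One way to picture this rate is as a per-frame schedule deciding which frames are captured with the communication exposure time. The sketch below is an illustrative assumption about how such a schedule could be generated; the description above defines only the ratio itself, not this mechanism.

```python
def frame_schedule(num_frames, decode_rate):
    """Yield 'Pdec' or 'Ppre' for each frame index.

    decode_rate: fraction of frames (0..1) devoted to decoding images Pdec.
    """
    accumulated = 0.0
    for _ in range(num_frames):
        accumulated += decode_rate
        if accumulated >= 1.0:
            accumulated -= 1.0
            yield "Pdec"  # short (communication) exposure: bright line image
        else:
            yield "Ppre"  # normal exposure: frame shown on the display

# Example: at a rate of 0.25, one frame in four is a decoding image.
print(list(frame_schedule(8, 0.25)))
# ['Ppre', 'Ppre', 'Ppre', 'Pdec', 'Ppre', 'Ppre', 'Ppre', 'Pdec']
```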
 Next, the receiver 200 determines whether a size change instruction has been received (step S404). If it determines that a size change instruction has been received (Yes in step S404), the receiver 200 further determines whether the size change instruction is an enlargement instruction (step S405). If it determines that the size change instruction is an enlargement instruction (Yes in step S405), the receiver 200 further determines whether the AR image needs to be re-acquired (step S406). For example, when the receiver 200 judges that the enlargement ratio of the AR image indicated by the enlargement instruction is equal to or greater than a threshold, it determines that the AR image needs to be re-acquired. If the receiver 200 determines that re-acquisition is necessary (Yes in step S406), it obtains a high-resolution AR image from, for example, a server, and replaces the superimposed AR image being displayed with that high-resolution AR image (step S407).
 The receiver 200 then changes the size of the AR image in accordance with the received size change instruction (step S408). That is, when a high-resolution AR image has been obtained in step S407, the receiver 200 enlarges that high-resolution AR image. When it is determined in step S406 that re-acquisition of the AR image is unnecessary (No in step S406), the receiver 200 enlarges the AR image already superimposed. If the receiver 200 determines in step S405 that the size change instruction is a reduction instruction (No in step S405), it reduces the superimposed AR image being displayed in accordance with the received size change instruction, that is, the reduction instruction.
 On the other hand, if the receiver 200 determines in step S404 that no size change instruction has been received (No in step S404), it determines whether a position change instruction has been received (step S409). If it determines that a position change instruction has been received (Yes in step S409), the receiver 200 changes the position of the superimposed AR image in accordance with the position change instruction (step S410). That is, the receiver 200 moves the AR image. If it determines that no position change instruction has been received (No in step S409), the receiver 200 repeats the processing from step S404.
 When the size of the AR image has been changed in step S408, or the position of the AR image has been changed in step S410, the receiver 200 determines whether the light ID that has been obtained periodically since step S401 is no longer being obtained (step S411). If it determines that the light ID is no longer being obtained (Yes in step S411), the receiver 200 ends the processing operations for enlarging and moving the AR image. On the other hand, if it determines that the light ID is still being obtained (No in step S411), the receiver 200 repeats the processing from step S404.
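 A compact sketch of this loop (steps S404 to S411) follows. The helper objects ui, ar, and server and their methods are assumed placeholders for illustration; only the branch structure mirrors the flowchart.

```python
HIGH_RES_THRESHOLD = 2.0  # assumed, as in the earlier sketch

def ar_interaction_loop(ui, ar, server, light_id):
    """Handle resize/move instructions while the light ID keeps arriving."""
    while True:
        event = ui.next_event()          # S404/S409: pinch, swipe, or None
        if event and event.kind == "resize":
            if event.scale > 1.0:        # S405: enlargement?
                if event.scale >= HIGH_RES_THRESHOLD:         # S406
                    ar.image = server.fetch_high_res(light_id)  # S407
            ar.resize(event.scale)       # S408: enlarge or reduce
        elif event and event.kind == "move":
            ar.move(event.dx, event.dy)  # S410: move the AR image
        if not ui.light_id_still_received(light_id):          # S411
            break                        # light ID lost: end processing
```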
 FIG. 91 is a diagram illustrating an example of superimposition of an AR image by the receiver 200.
 As described above, the receiver 200 superimposes the AR image P23 on the target region in the captured display image Ppre. Here, as shown in FIG. 91, the AR image P23 is configured so that the transmittance at each part of the AR image P23 increases as that part approaches an edge of the AR image P23. The transmittance is the degree to which whatever lies under the superimposed image shows through. For example, an overall transmittance of 100% for the AR image means that, even if the AR image is superimposed on the target region of the captured display image, the display 201 shows only the target region and not the AR image. Conversely, an overall transmittance of 0% for the AR image means that the display 201 does not show the target region of the captured display image and shows only the AR image superimposed on that target region.
 For example, when the AR image P23 is rectangular, the transmittance at each part of the AR image P23 is higher the closer that part is to the top, bottom, left, or right edge of the rectangle. More specifically, the transmittance at those edges is 100%. In addition, the central portion of the AR image P23 contains a rectangular region, smaller than the AR image P23, whose transmittance is 0%; in that rectangular region, for example, "Kyoto Station" is written in English. That is, in the peripheral portion of the AR image P23, the transmittance changes gradually from 0% to 100%, like a gradation.
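 This edge gradation amounts to a per-pixel alpha mask that is fully opaque in a central rectangle and fades to fully transparent at the image edges. A minimal sketch follows; the border fraction is an assumed parameter, and the linear fade is one possible gradation profile.

```python
import numpy as np

def edge_fade_alpha(height, width, border=0.15):
    """Alpha mask in [0, 1]; 1 = fully opaque AR pixel (transmittance 0%),
    0 = fully transparent (transmittance 100%)."""
    ys = np.minimum(np.arange(height), np.arange(height)[::-1])  # distance to top/bottom edge
    xs = np.minimum(np.arange(width), np.arange(width)[::-1])    # distance to left/right edge
    dist = np.minimum(ys[:, None] / (border * height),
                      xs[None, :] / (border * width))
    return np.clip(dist, 0.0, 1.0)  # 0 at the edges, 1 inside the border band

def superimpose(frame, ar_image, alpha):
    """Blend the AR image over the target region of the captured frame."""
    return alpha[..., None] * ar_image + (1.0 - alpha[..., None]) * frame
```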
 As shown in FIG. 91, the receiver 200 superimposes such an AR image P23 on the target region in the captured display image Ppre. At this time, the receiver 200 matches the size of the AR image P23 to the size of the target region and superimposes the resized AR image P23 on the target region. In this example, the target region shows the image of a station name sign whose background color is the same as that of the rectangular region at the center of the AR image P23. The station name sign reads "Kyoto" in Japanese.
 Here, as described above, the transmittance at each part of the AR image P23 is higher the closer that part is to an edge of the AR image P23. Therefore, when the AR image P23 is superimposed on the target region, the rectangular region at the center of the AR image P23 is displayed, but the edges of the AR image P23 are not; instead, the edges of the target region, that is, the edges of the station name sign image, are displayed.
 This makes any misalignment between the AR image P23 and the target region less noticeable. That is, even when the AR image P23 is superimposed on the target region, movement of the receiver 200 or the like may cause a shift between the AR image P23 and the target region. In this case, if the overall transmittance of the AR image P23 were 0%, both the edges of the AR image P23 and the edges of the target region would be displayed, and the shift would stand out. In the AR image P23 of this variation, however, the closer a part is to an edge, the higher its transmittance, so the edges of the AR image P23 are hard to see; as a result, a shift between the AR image P23 and the target region is made less noticeable. Furthermore, since the transmittance in the peripheral portion of the AR image P23 changes like a gradation, the fact that the AR image P23 is superimposed on the target region is itself made less noticeable.
 FIG. 92 is a diagram illustrating an example of superimposition of an AR image by the receiver 200.
 As described above, the receiver 200 superimposes the AR image P24 on the target region in the captured display image Ppre. Here, as shown in FIG. 92, the captured subject is, for example, a restaurant menu. The menu is surrounded by a white frame, and the white frame is in turn surrounded by a black frame. That is, the subject includes the menu, a white frame surrounding the menu, and a black frame surrounding the white frame.
 The receiver 200 recognizes, as the target region, a region of the captured display image Ppre that is larger than the image of the white frame and smaller than the image of the black frame. The receiver 200 then matches the size of the AR image P24 to the size of that target region and superimposes the resized AR image P24 on the target region.
 As a result, even when the superimposed AR image P24 shifts away from the target region due to movement of the receiver 200 or the like, the AR image P24 continues to be displayed surrounded by the black frame. Therefore, a shift between the AR image P24 and the target region is made less noticeable.
 In the example shown in FIG. 92, the frame colors are black and white, but the frames are not limited to these colors and may be of any color.
 FIG. 93 is a diagram illustrating an example of superimposition of an AR image by the receiver 200.
 For example, the receiver 200 captures, as a subject, a poster depicting a castle lit up against the night sky. The poster is illuminated by, for example, the above-described transmitter 100 configured as a backlight, and that backlight transmits a visible light signal (that is, a light ID). By this imaging, the receiver 200 obtains a captured display image Ppre containing the image of the subject, namely the poster, and an AR image P25 corresponding to the light ID. Here, the AR image P25 has the same shape as the poster image with the region depicting the castle cut out. That is, the region of the AR image P25 corresponding to the castle in the poster image is masked. Furthermore, like the AR image P23 described above, the AR image P25 is configured so that the transmittance at each part of the AR image P25 increases as that part approaches an edge of the AR image P25. In the central portion of the AR image P25, where the transmittance is 0%, fireworks launched into the night sky are displayed as a moving image.
 The receiver 200 matches the size of such an AR image P25 to the size of the target region, which is the image of the subject, and superimposes the resized AR image P25 on the target region. As a result, the castle depicted on the poster is displayed as the image of the subject rather than as an AR image, while the moving image of the fireworks is displayed as an AR image.
 This allows the captured display image Ppre to be displayed as if fireworks were actually being launched within the poster. Moreover, the transmittance at each part of the AR image P25 is higher the closer that part is to an edge of the AR image P25. Therefore, when the AR image P25 is superimposed on the target region, the central portion of the AR image P25 is displayed while its edges are not, and the edges of the target region are displayed instead. As a result, a shift between the AR image P25 and the target region is made less noticeable. Furthermore, since the transmittance in the peripheral portion of the AR image P25 changes like a gradation, the fact that the AR image P25 is superimposed on the target region is itself made less noticeable.
 FIG. 94 is a diagram illustrating an example of superimposition of an AR image by the receiver 200.
 For example, the receiver 200 captures, as a subject, the transmitter 100 configured as a television. Specifically, the transmitter 100 displays a castle lit up against the night sky on its display while transmitting a visible light signal (that is, a light ID). By this imaging, the receiver 200 obtains a captured display image Ppre showing the transmitter 100 and an AR image P26 corresponding to the light ID. Here, the receiver 200 first displays the captured display image Ppre on the display 201. At this time, the receiver 200 also displays on the display 201 a message m prompting the user to turn off the lights. Specifically, the message m is, for example, "Turn off the room lighting to darken the room."
 When, in response to the display of this message m, the user turns off the lights and the room in which the transmitter 100 is installed becomes dark, the receiver 200 superimposes the AR image P26 on the captured display image Ppre and displays the result. Here, the AR image P26 has the same size as the captured display image Ppre, and the region of the AR image P26 corresponding to the castle in the captured display image Ppre is cut out. That is, the region of the AR image P26 corresponding to the castle in the captured display image Ppre is masked. Therefore, the castle in the captured display image Ppre can be shown to the user through that region. In the peripheral portion of that region of the AR image P26, as described above, the transmittance may change gradually from 0% to 100% like a gradation. In this case, a shift between the captured display image Ppre and the AR image P26 can be made less noticeable.
 In the examples above, an AR image whose peripheral portion has a high transmittance is superimposed on the target region of the captured display image Ppre, thereby making a shift between the AR image and the target region less noticeable. Instead of such an AR image, however, an AR image that has the same size as the captured display image Ppre and is semi-transparent throughout (that is, with a transmittance of 50%) may be superimposed on the captured display image Ppre. Even in this case, a shift between the AR image and the target region can be made less noticeable. Also, when the captured display image Ppre is bright overall, an AR image with uniformly low transparency may be superimposed on the captured display image Ppre; conversely, when the captured display image Ppre is dark overall, an AR image with uniformly high transparency may be superimposed on it.
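 A minimal sketch of this brightness-dependent uniform transparency follows: a brighter captured frame gets a more opaque overlay, a darker frame a more transparent one. The mapping endpoints are assumed values for illustration only.

```python
def uniform_alpha_for_frame(frame):
    """Pick one opacity for the whole AR overlay from average frame brightness.

    frame: array of 8-bit pixel values of the captured display image Ppre.
    """
    brightness = frame.mean() / 255.0  # 0.0 = black frame, 1.0 = white frame
    # Map brightness 0..1 to opacity 0.25..0.75 (transmittance 75%..25%):
    # bright frame -> low transparency, dark frame -> high transparency.
    return 0.25 + 0.5 * brightness
```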
 Note that objects such as the fireworks in the AR image P25 and the AR image P26 may be rendered with CG (computer graphics). In that case, masking becomes unnecessary. Also, in the example shown in FIG. 94, the receiver 200 displays the message m prompting the user to turn off the lights, but the lights may be turned off automatically without such a display. For example, the receiver 200 outputs a turn-off signal via Bluetooth (registered trademark), ZigBee, a specified low-power radio station, or the like to the lighting device associated with the transmitter 100, which is a television. The lighting device is thereby turned off automatically.
 FIG. 95A is a diagram illustrating an example of a captured display image Ppre obtained by imaging by the receiver 200.
 For example, the transmitter 100 is configured as a large display installed in a stadium. The transmitter 100 displays a message indicating that, for example, fast food and drinks can be ordered via the light ID, while transmitting a visible light signal (that is, a light ID). When such a message is displayed, the user points the receiver 200 at the transmitter 100 and captures an image. That is, the receiver 200 captures, as a subject, the transmitter 100 configured as a large display installed in the stadium.
 By this imaging, the receiver 200 obtains a captured display image Ppre and a decoding image Pdec. The receiver 200 then obtains the light ID by decoding the decoding image Pdec and transmits that light ID and the captured display image Ppre to a server.
 From the installation information associated with each light ID, the server identifies the installation information of the captured large display that is associated with the light ID transmitted from the receiver 200. For example, the installation information indicates the position and orientation at which the large display is installed, the size of the large display, and so on. Furthermore, based on the size and orientation of the large display shown in the captured display image Ppre and on the installation information, the server identifies the number of the seat in the stadium from which the captured display image Ppre was taken. The server then causes the receiver 200 to display a menu screen containing that seat number.
 FIG. 95B is a diagram showing an example of a menu screen displayed on the display 201 of the receiver 200.
 The menu screen m1 includes, for example, for each product, an input field ma1 for entering the order quantity of that product, a seat field mb1 showing the stadium seat number identified by the server, and an order button mc1. By operating the receiver 200, the user enters the order quantity in the input field ma1 corresponding to the desired product and selects the order button mc1. The order is thereby confirmed, and the receiver 200 transmits the order details corresponding to the entered input to the server.
 On receiving the order details, the server instructs the stadium staff to deliver the ordered quantity of products in accordance with those details to the seat with the number identified as described above.
 FIG. 96 is a flowchart showing an example of the processing operations of the receiver 200 and the server.
 The receiver 200 first captures an image of the transmitter 100 configured as a large stadium display (step S421). The receiver 200 obtains the light ID transmitted from the transmitter 100 by decoding the decoding image Pdec obtained by the imaging (step S422). The receiver 200 transmits the light ID obtained in step S422 and the captured display image Ppre obtained by the imaging in step S421 to the server (step S423).
 On receiving the light ID and the captured display image Ppre (step S424), the server identifies, based on the light ID, the installation information of the large display installed in the stadium (step S425). For example, the server holds a table that indicates, for each light ID, the installation information of the large display associated with that light ID, and identifies the installation information by looking up in that table the entry associated with the light ID transmitted from the receiver 200.
 Next, based on the identified installation information and the size and orientation of the large display shown in the captured display image Ppre, the server identifies the number of the seat in the stadium from which the captured display image Ppre was obtained (that is, captured) (step S426). The server then transmits to the receiver 200 the URL (Uniform Resource Locator) of the menu screen m1 containing the identified seat number (step S427).
 On receiving the URL of the menu screen m1 transmitted from the server (step S428), the receiver 200 accesses the URL and displays the menu screen m1 (step S429). Here, the user operates the receiver 200 to enter the order details into the menu screen m1 and confirms the order by selecting the order button mc1. The receiver 200 thereby transmits the order details to the server (step S430).
 On receiving the order details transmitted from the receiver 200, the server performs order processing in accordance with those details (step S431). At this time, the server instructs the stadium staff, for example, to deliver the ordered quantity of products corresponding to the order details to the seat with the number identified in step S426.
 In this way, since the seat number is identified based on the captured display image Ppre obtained by imaging by the receiver 200, the user of the receiver 200 does not need to go to the trouble of entering the seat number when ordering products. The user can therefore order products easily without entering the seat number.
 In the example above, the server identifies the seat number, but the receiver 200 may identify the seat number instead. In that case, the receiver 200 obtains the installation information from the server and identifies the seat number based on that installation information and the size and orientation of the large display shown in the captured display image Ppre.
 FIG. 97 is a diagram for explaining the volume of the sound reproduced by the receiver 1800a.
 As in the example shown in FIG. 23, the receiver 1800a receives a light ID (visible light signal) transmitted from a transmitter 1800b configured as, for example, street digital signage. The receiver 1800a then reproduces sound at the same timing as the image reproduction by the transmitter 1800b. That is, the receiver 1800a reproduces sound in synchronization with the image reproduced by the transmitter 1800b. Note that the receiver 1800a may reproduce, together with the sound, the same image as the image reproduced by the transmitter 1800b (the reproduced image) or an AR image (an AR moving image) related to that reproduced image.
 Here, when reproducing sound as described above, the receiver 1800a adjusts the volume of the sound according to the distance to the transmitter 1800b. Specifically, the receiver 1800a adjusts the volume lower the longer the distance to the transmitter 1800b, and conversely adjusts the volume higher the shorter the distance to the transmitter 1800b.
 The receiver 1800a may determine the distance to the transmitter 1800b using GPS (Global Positioning System) or the like. Specifically, the receiver 1800a obtains the position information of the transmitter 1800b associated with the light ID from a server or the like, and further determines the position of the receiver 1800a by GPS. The receiver 1800a then determines the distance between the position of the transmitter 1800b indicated by the position information obtained from the server and the determined position of the receiver 1800a as the distance to the transmitter 1800b. Note that the receiver 1800a may determine the distance to the transmitter 1800b using Bluetooth (registered trademark) or the like instead of GPS.
 The receiver 1800a may also determine the distance to the transmitter 1800b based on the size of the bright line pattern region of the above-described decoding image Pdec obtained by imaging. As in the examples shown in FIGS. 51 and 52, the bright line pattern region is a region consisting of a pattern of bright lines that appear through exposure, during the communication exposure time, of the exposure lines of the image sensor of the receiver 1800a. This bright line pattern region corresponds to the display region of the transmitter 1800b shown in the captured display image Ppre. Specifically, the receiver 1800a determines a shorter distance to the transmitter 1800b the larger the bright line pattern region, and conversely a longer distance the smaller the bright line pattern region. The receiver 1800a may also use distance data indicating the relationship between the size of the bright line pattern region and distance, and determine, as the distance to the transmitter 1800b, the distance associated in that data with the size of the bright line pattern region in the captured display image Ppre. Note that the receiver 1800a may transmit the received light ID to the server and obtain the distance data associated with that light ID from the server.
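 As a minimal sketch of this size-to-distance relationship, the fragment below uses a simple pinhole-camera model: the apparent width of the display region shrinks inversely with distance. The focal length and physical display width are assumed calibration inputs; the text above leaves the exact distance data unspecified.

```python
def distance_from_pattern_width(pattern_width_px, display_width_m, focal_length_px):
    """Estimate distance [m] from the bright line pattern region's width.

    Pinhole model: apparent width (pixels) = focal_length * real_width / distance,
    so a larger pattern region implies a shorter distance, and vice versa.
    """
    return focal_length_px * display_width_m / pattern_width_px

# Example (assumed values): a 1.0 m wide signage display imaged 200 px wide
# with a 1400 px focal length is roughly 7 m away.
print(distance_from_pattern_width(200, 1.0, 1400))  # 7.0
```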
 Since the volume is adjusted according to the distance to the transmitter 1800b in this way, the user of the receiver 1800a can hear the sound reproduced by the receiver 1800a as if it were actually being reproduced by the transmitter 1800b.
 FIG. 98 is a diagram showing the relationship between the distance from the receiver 1800a to the transmitter 1800b and the volume.
 For example, while the distance to the transmitter 1800b is between L1 and L2 [m], the volume increases or decreases in proportion to the distance within the range from Vmin to Vmax [dB]. Specifically, the receiver 1800a decreases the volume linearly from Vmax [dB] to Vmin [dB] as the distance to the transmitter 1800b increases from L1 [m] to L2 [m]. Even if the distance to the transmitter 1800b becomes shorter than L1 [m], the receiver 1800a keeps the volume at Vmax [dB], and even if the distance becomes longer than L2 [m], it keeps the volume at Vmin [dB].
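 A minimal sketch of this clamped linear volume curve follows; the numeric example values for L1, L2, Vmax, and Vmin are assumptions for illustration.

```python
def volume_for_distance(d, l1, l2, vmax, vmin):
    """Volume [dB] as a function of distance d [m] to the transmitter."""
    if d <= l1:
        return vmax                      # closer than L1: cap at maximum volume
    if d >= l2:
        return vmin                      # farther than L2: floor at minimum volume
    t = (d - l1) / (l2 - l1)             # 0 at L1, 1 at L2
    return vmax + t * (vmin - vmax)      # linear decrease with distance

# Example (assumed values): L1 = 2 m, L2 = 20 m, Vmax = 80 dB, Vmin = 40 dB.
print(volume_for_distance(11.0, 2, 20, 80, 40))  # 60.0
```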
 In this way, the receiver 1800a stores the maximum volume Vmax, the longest distance L1 at which sound at the maximum volume Vmax is output, the minimum volume Vmin, and the shortest distance L2 at which sound at the minimum volume Vmin is output. The receiver 1800a may also change the maximum volume Vmax, the minimum volume Vmin, the longest distance L1, and the shortest distance L2 according to the attributes set for it. For example, when the attribute is the age of the user and that age indicates an elderly user, the receiver 1800a may set the maximum volume Vmax higher than a reference maximum volume and the minimum volume Vmin higher than a reference minimum volume. The attribute may also be information indicating whether the sound is output from a speaker or from earphones.
 Since the minimum volume Vmin is set in the receiver 1800a in this way, it is possible to prevent the sound from being inaudible because the receiver 1800a is too far from the transmitter 1800b. Furthermore, since the maximum volume Vmax is set in the receiver 1800a, it is possible to prevent unnecessarily loud sound from being output because the receiver 1800a is too close to the transmitter 1800b.
 FIG. 99 is a diagram illustrating an example of superimposition of AR images by the receiver 200.
 The receiver 200 captures an image of an illuminated signboard. Here, the signboard is lit up by a lighting device, which is the above-described transmitter 100 that transmits a light ID. By this imaging, the receiver 200 therefore obtains a captured display image Ppre and a decoding image Pdec. The receiver 200 then obtains the light ID by decoding the decoding image Pdec and obtains from a server a plurality of AR images P27a to P27c associated with that light ID, together with recognition information. Based on the recognition information, the receiver 200 recognizes, as target regions, the surroundings of the region m2 of the captured display image Ppre in which the signboard is shown.
 Specifically, as shown in (a) of FIG. 99, the receiver 200 recognizes the region adjoining the left side of the region m2 as a first target region and superimposes the AR image P27a on that first target region.
 Next, as shown in (b) of FIG. 99, the receiver 200 recognizes a region including the lower side of the region m2 as a second target region and superimposes the AR image P27b on that second target region.
 Next, as shown in (c) of FIG. 99, the receiver 200 recognizes the region adjoining the upper side of the region m2 as a third target region and superimposes the AR image P27c on that third target region.
 Here, each of the AR images P27a to P27c is, for example, an image of a yeti character and may be a moving image.
 While continuously and repeatedly obtaining the light ID, the receiver 200 may switch the recognized target region to one of the first to third target regions in a predetermined order and at predetermined timings. That is, the receiver 200 may switch the recognized target region in the order of the first target region, the second target region, and the third target region. Alternatively, the receiver 200 may switch the recognized target region to one of the first to third target regions in a predetermined order each time it obtains the light ID. That is, the receiver 200 first obtains the light ID, and while it continues to obtain that light ID repeatedly, it recognizes the first target region and superimposes the AR image P27a on that first target region, as shown in (a) of FIG. 99. When the receiver 200 can no longer obtain the light ID, it hides the AR image P27a. Next, when the receiver 200 obtains the light ID again, it recognizes the second target region and superimposes the AR image P27b on that second target region, as shown in (b) of FIG. 99, while it continues to obtain the light ID repeatedly. When the receiver 200 can again no longer obtain the light ID, it hides the AR image P27b. Next, when the receiver 200 obtains the light ID again, it recognizes the third target region and superimposes the AR image P27c on that third target region, as shown in (c) of FIG. 99, while it continues to obtain the light ID repeatedly.
 When the recognized target region is switched each time the light ID is obtained in this way, the receiver 200 may change the color of the displayed AR image at a frequency of once every N times (N is an integer of 2 or more). N is the number of times an AR image is displayed and may be, for example, 200. That is, the AR images P27a to P27c are all images of the same white character, but once every 200 times, for example, an AR image of a pink character is displayed. When the AR image of the pink character is displayed and the receiver 200 accepts an operation on that AR image by the user, it may award points to that user.
 By switching the target region on which the AR image is superimposed and changing the color of the AR image at a predetermined frequency in this way, the user's interest can be drawn to imaging the signboard lit up by the transmitter 100, and the user can be induced to obtain the light ID repeatedly.
 FIG. 100 is a diagram illustrating an example of superimposition of an AR image by the receiver 200.
 The receiver 200 has a so-called way finder function that presents the route the user should take, for example by capturing an image of a mark M4 drawn on the floor at a position where a plurality of passages in a building intersect. The building is, for example, a hotel, and the presented route is the route along which a user who has checked in heads to his or her room.
 The mark M4 is lit up by a lighting device, which is the above-described transmitter 100 that transmits a light ID by changing in luminance. Therefore, by capturing an image of the mark M4, the receiver 200 obtains a captured display image Ppre and a decoding image Pdec. The receiver 200 then obtains the light ID by decoding the decoding image Pdec and transmits that light ID and the terminal information of the receiver 200 to a server. The receiver 200 obtains from the server a plurality of AR images P28 and recognition information associated with that light ID and terminal information. Note that the light ID and the terminal information are stored in the server in association with the plurality of AR images P28 and the recognition information at the time of the user's check-in.
 Based on the recognition information, the receiver 200 recognizes a plurality of target regions around the region m4 of the captured display image Ppre in which the mark M4 is shown. Then, as shown in FIG. 100, the receiver 200 superimposes and displays an AR image P28, resembling, for example, an animal's footprint, on each of the plurality of target regions.
 Specifically, the recognition information indicates a route that turns right at the position of the mark M4. Based on such recognition information, the receiver 200 identifies the route in the captured display image Ppre and recognizes the plurality of target regions arranged along that route. This route runs from the lower side of the display 201 toward the region m4 and turns right at the region m4. The receiver 200 places an AR image P28 in each of the recognized target regions, as if an animal had walked along the route.
 Here, when identifying the route in the captured display image Ppre, the receiver 200 may use the geomagnetism detected by its built-in 9-axis sensor. In this case, the recognition information indicates the direction to proceed at the position of the mark M4 with reference to the direction of the geomagnetic field. For example, the recognition information indicates west as the direction to proceed at the position of the mark M4. Based on such recognition information, the receiver 200 identifies, in the captured display image Ppre, a route that runs from the lower side of the display 201 toward the region m4 and heads west at the region m4. The receiver 200 then recognizes the plurality of target regions arranged along that route. Note that the receiver 200 identifies the lower side of the display 201 by detecting gravitational acceleration with the 9-axis sensor.
 Since the receiver 200 presents the user's route in this way, the user can easily reach the destination by following that route. Moreover, since the route is displayed as an AR image in the captured display image Ppre, it can be presented to the user in an easy-to-understand manner.
 Note that the lighting device, which is the transmitter 100, can transmit the light ID appropriately while keeping the brightness low by illuminating the mark M4 with short pulses of light. Also, although the receiver 200 captures the mark M4 here, it may instead capture the lighting device using the camera arranged on the display 201 side (the so-called selfie camera). The receiver 200 may also capture both the mark M4 and the lighting device.
 FIG. 101 is a diagram for explaining an example of how the receiver 200 obtains the line scan time.
 When decoding the decoding image Pdec, the receiver 200 performs the decoding using the line scan time. The line scan time is the time from when the exposure of one exposure line included in the image sensor starts until the exposure of the next exposure line starts. If this line scan time is known, the receiver 200 decodes the decoding image Pdec using the known line scan time. If the line scan time is not known, however, the receiver 200 derives the line scan time from the decoding image Pdec.
 For example, as shown in FIG. 101, the receiver 200 finds the line of minimum width among the plurality of bright lines and dark lines that make up the bright line pattern in the decoding image Pdec. A bright line is a line in the decoding image Pdec produced when each of one or more consecutive exposure lines is exposed while the luminance of the transmitter 100 is high. A dark line is a line in the decoding image Pdec produced when each of one or more consecutive exposure lines is exposed while the luminance of the transmitter 100 is low.
 When the receiver 200 finds the line of minimum width, it determines the number of exposure lines, that is, the number of pixels, corresponding to that minimum-width line. When the carrier frequency of the luminance change by which the transmitter 100 transmits the light ID is 9.6 kHz, the shortest period during which the luminance of the transmitter 100 is high or low is 104 μs. Therefore, the receiver 200 calculates the line scan time by dividing 104 μs by the number of pixels of the identified minimum width.
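 A minimal worked sketch of this minimum-width method: the narrowest bright or dark stripe corresponds to the shortest high/low period of the transmitter (104 μs at a 9.6 kHz carrier), so dividing that period by the stripe's height in exposure lines yields the per-line scan time.

```python
SHORTEST_PULSE_US = 104.0  # shortest high/low duration at a 9.6 kHz carrier

def line_scan_time_us(min_stripe_width_px):
    """Line scan time [us] from the minimum stripe width in exposure lines."""
    return SHORTEST_PULSE_US / min_stripe_width_px

# Example: if the narrowest stripe spans 8 exposure lines, each exposure
# line starts 13 us after the previous one.
print(line_scan_time_us(8))  # 13.0
```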
 FIG. 102 is a diagram for explaining another example of how the receiver 200 obtains the line scan time.
 The receiver 200 may perform a Fourier transform on the bright line pattern of the decoding image Pdec and derive the line scan time based on the spatial frequencies obtained by that Fourier transform.
 For example, as shown in FIG. 102, the receiver 200 derives, by the above Fourier transform, a spectrum indicating the relationship between spatial frequency and the intensity of each spatial frequency component in the decoding image Pdec. Next, the receiver 200 selects each of the plurality of peaks shown in that spectrum in turn. Each time it selects a peak, the receiver 200 calculates, as a line scan time candidate, the line scan time at which the spatial frequency of the selected peak (for example, the spatial frequency f2 in FIG. 102) would result from a temporal frequency of 9.6 kHz. As described above, 9.6 kHz is the carrier frequency of the luminance change of the transmitter 100. A plurality of line scan time candidates are thereby calculated. The receiver 200 selects the most likely of these line scan time candidates as the line scan time.
 To select the most likely candidate, the receiver 200 calculates an allowable range for the line scan time based on the frame rate of the imaging and the number of exposure lines included in the image sensor. That is, the receiver 200 calculates the maximum value of the line scan time as 1 × 10^6 [μs] / {(frame rate) × (number of exposure lines)}. The receiver 200 then determines the range from that maximum value × a constant K (K < 1) up to the maximum value as the allowable range of the line scan time. The constant K is, for example, 0.9 or 0.8.
 The receiver 200 selects, from among the plurality of line scan time candidates, the candidate lying within this allowable range as the most likely candidate, that is, as the line scan time.
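 A minimal sketch of this spectral method follows. A spectral peak at spatial frequency f [cycles per exposure line] driven by the 9.6 kHz carrier implies a line scan time of f / 9600 seconds per line; candidates are then filtered by the allowable range computed from the frame rate and the number of exposure lines. The helper names and example values are assumptions.

```python
CARRIER_HZ = 9600.0  # carrier frequency of the transmitter's luminance change

def line_scan_time_candidates_us(peak_spatial_freqs):
    """Candidates [us] from peak spatial frequencies [cycles/exposure line]."""
    return [f / CARRIER_HZ * 1e6 for f in peak_spatial_freqs]

def allowable_range_us(frame_rate_hz, num_exposure_lines, k=0.9):
    """Allowable line scan time range [us]: (K * max, max]."""
    t_max = 1e6 / (frame_rate_hz * num_exposure_lines)
    return k * t_max, t_max

def pick_line_scan_time_us(peak_spatial_freqs, frame_rate_hz, num_lines):
    lo, hi = allowable_range_us(frame_rate_hz, num_lines)
    for t in line_scan_time_candidates_us(peak_spatial_freqs):
        if lo <= t <= hi:
            return t  # most likely candidate: the one inside the range
    return None

# Example (assumed values): 30 fps and 1080 exposure lines give an
# allowable range of roughly 27.8 to 30.9 us per line.
print(allowable_range_us(30, 1080))
```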
 Note that the receiver 200 may evaluate the reliability of the line scan time calculated by the example shown in FIG. 101 according to whether that calculated line scan time lies within the above allowable range.
 図103は、受信機200によるラインスキャン時間の求め方の一例を示すフローチャートである。 FIG. 103 is a flowchart showing an example of how to obtain the line scan time by the receiver 200.
 受信機200は、復号用画像Pdecの復号を試みることによって、ラインスキャン時間を求めてもよい。具体的には、まず、受信機200は、撮像を開始する(ステップS441)。次に、受信機200は、ラインスキャン時間が判明しているか否かを判定する(ステップS442)。例えば、受信機200は、自らの種類および型式をサーバに通知し、その種類および型式に応じたラインスキャン時間を問い合わせることによって、そのラインスキャン時間が判明しているか否かを判定してもよい。ここで、判明していると判定すると(ステップS442のYes)、受信機200は、光IDの基準取得回数をn(nは2以上の整数であって、例えば4)に設定する(ステップS443)。次に、受信機200は、その判明しているラインスキャン時間を用いて復号用画像Pdecを復号することによって、光IDを取得する(ステップS444)。このとき、受信機200は、ステップS441で開始された撮像によって順次得られる複数の復号用画像Pdecのそれぞれに対して復号を行うことによって、複数の光IDを取得する。ここで、受信機200は、同じ光IDを基準取得回数(すなわちn回)だけ取得したか否かを判定する(ステップS445)。n回取得したと判定すると(ステップS445のYes)、受信機200は、その光IDを信用し、その光IDを用いた処理(例えばAR画像の重畳)を開始する(ステップS446)。一方、n回取得していないと判定すると(ステップS445のNo)、受信機200は、その光IDを信用せず、処理を終了する。 The receiver 200 may obtain the line scan time by trying to decode the decoding image Pdec. Specifically, first, the receiver 200 starts imaging (step S441). Next, the receiver 200 determines whether or not the line scan time is known (step S442). For example, the receiver 200 may determine whether or not the line scan time is known by notifying the server of the type and model of the receiver 200 and inquiring the line scan time according to the type and model. . If it is determined that it is known (Yes in step S442), the receiver 200 sets the reference acquisition count of the optical ID to n (n is an integer equal to or larger than 2, for example, 4) (step S443). ). Next, the receiver 200 acquires the optical ID by decoding the decoding image Pdec using the known line scan time (step S444). At this time, the receiver 200 acquires a plurality of optical IDs by performing decoding on each of the plurality of decoding images Pdec obtained sequentially by the imaging started in step S441. Here, the receiver 200 determines whether or not the same optical ID has been acquired the reference acquisition times (that is, n times) (step S445). If it is determined that it has been acquired n times (Yes in step S445), the receiver 200 trusts the optical ID and starts processing using the optical ID (for example, superimposition of an AR image) (step S446). On the other hand, if it is determined that it has not been acquired n times (No in step S445), the receiver 200 does not trust the optical ID and ends the process.
 ステップS442において、ラインスキャン時間が判明していないと判定すると(ステップS442のNo)、受信機200は、光IDの基準取得回数をn+k(kは1以上の整数)に設定する(ステップS447)。つまり、受信機200は、ラインスキャン時間が判明していないときには、ラインスキャン時間が判明しているときよりも多い基準取得回数を設定する。次に、受信機200は、仮のラインスキャン時間を決定する(ステップS448)。そして、受信機200は、仮決めのラインスキャン時間を用いて復号用画像Pdecを復号することによって、光IDを取得する(ステップS449)。このとき、受信機200は、上述と同様、ステップS441で開始された撮像によって順次得られる複数の復号用画像Pdecのそれぞれに対して復号を行うことによって、複数の光IDを取得する。ここで、受信機200は、同じ光IDを基準取得回数(すなわち(n+k)回)だけ取得したか否かを判定する(ステップS450)。 If it is determined in step S442 that the line scan time is not known (No in step S442), the receiver 200 sets the optical ID reference acquisition count to n + k (k is an integer equal to or greater than 1) (step S447). . That is, when the line scan time is not known, the receiver 200 sets a larger reference acquisition count than when the line scan time is known. Next, the receiver 200 determines a temporary line scan time (step S448). Then, the receiver 200 acquires the optical ID by decoding the decoding image Pdec using the provisional line scan time (step S449). At this time, the receiver 200 acquires a plurality of optical IDs by performing decoding on each of the plurality of decoding images Pdec obtained sequentially by the imaging started in step S441, as described above. Here, the receiver 200 determines whether or not the same optical ID has been acquired the reference acquisition times (that is, (n + k) times) (step S450).
If the receiver 200 determines that the same optical ID has been acquired (n + k) times (Yes in step S450), it concludes that the provisional line scan time is the correct line scan time. The receiver 200 then notifies the server of its own type and model together with that line scan time (step S451). As a result, the server stores the type and model of the receiver in association with the line scan time suited to that receiver. Therefore, when another receiver of the same type and model starts imaging, it can identify its own line scan time by querying the server. That is, the other receiver can determine in step S442 that its line scan time is known.
The receiver 200 then trusts the optical ID acquired (n + k) times and starts processing that uses it (for example, superimposing an AR image) (step S446).
If the receiver 200 determines in step S450 that the same optical ID has not been acquired (n + k) times (No in step S450), it further determines whether a termination condition is satisfied (step S452). The termination condition is, for example, that a predetermined time has elapsed since imaging started, or that the optical ID has been acquired a maximum number of times or more. If the receiver 200 determines that such a termination condition is satisfied (Yes in step S452), it ends the process. On the other hand, if it determines that the termination condition is not satisfied (No in step S452), the receiver 200 changes the provisional line scan time (step S453) and repeats the processing from step S449 using the changed provisional line scan time.
In this way, even when the line scan time is not known, the receiver 200 can determine it, as in the examples shown in FIGS. 101 to 103. Accordingly, whatever the type and model of the receiver 200, it can appropriately decode the decoding image Pdec and acquire the optical ID.
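The flow of steps S441 to S453 amounts to a small trial-and-error search. The following is a minimal sketch of that search; all helper names (query_server_for_scan_time, capture_decoding_image, decode_optical_id, report_scan_time, device_type, device_model) are hypothetical placeholders standing in for the receiver's camera and network stack, not part of this disclosure.

```python
# Illustrative sketch of the line-scan-time search (steps S441 to S453).
# All helper functions are hypothetical placeholders.

def acquire_trusted_optical_id(n=4, k=2, candidate_scan_times=(), max_attempts=50):
    """Return a trusted optical ID, or None once the termination condition is met."""
    known = query_server_for_scan_time(device_type(), device_model())  # step S442
    if known is not None:
        required, candidates = n, [known]                          # step S443
    else:
        required, candidates = n + k, list(candidate_scan_times)   # steps S447, S448

    for scan_time in candidates:                     # step S453: try the next value
        counts = {}
        for _ in range(max_attempts):                # step S452: bounded attempts
            pdec = capture_decoding_image()          # bright line image Pdec
            oid = decode_optical_id(pdec, scan_time) # steps S444 / S449
            if oid is None:
                continue
            counts[oid] = counts.get(oid, 0) + 1
            if counts[oid] >= required:              # steps S445 / S450
                if known is None:
                    # step S451: report the scan time that worked, keyed by model
                    report_scan_time(device_type(), device_model(), scan_time)
                return oid                           # step S446: trust this ID
    return None
```

The higher threshold n + k used when the scan time is only provisional reflects the point made above: a wrong scan time can occasionally yield a consistent but incorrect decode, so more agreement is demanded before the ID is trusted.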
FIG. 104 is a diagram illustrating an example of AR image superimposition performed by the receiver 200.
The receiver 200 images the transmitter 100, which is configured as a television set. The transmitter 100 periodically transmits an optical ID and a time code by changing its luminance while displaying, for example, a television program. The time code is information indicating the time at which it is transmitted, and may be, for example, the time packet shown in FIG. 26.
Through the above-described imaging, the receiver 200 periodically acquires captured display images Ppre and decoding images Pdec. While displaying the periodically acquired captured display images Ppre on the display 201, the receiver 200 decodes the decoding images Pdec to acquire the above-described optical ID and time code. Next, the receiver 200 transmits the optical ID to the server 300. Upon receiving the optical ID, the server 300 transmits to the receiver 200 the audio data, AR start time information, AR image P29, and recognition information associated with that optical ID.
Upon acquiring the audio data, the receiver 200 plays it back in synchronization with the video of the television program shown on the transmitter 100. Specifically, the audio data consists of a plurality of audio unit data, each of which contains a time code. The receiver 200 starts playing back the audio unit data in sequence, beginning from the audio unit data whose time code indicates the same time as the time code acquired from the transmitter 100 together with the optical ID. In this way, playback of the audio data is synchronized with the video of the television program. Note that such synchronization of audio and video may be performed by the same method as the synchronized audio playback shown in FIG. 23 and subsequent figures.
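As a rough illustration of this seek step, the sketch below finds the audio unit whose time code matches the one received together with the optical ID; the AudioUnit type and the exact-match rule are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class AudioUnit:
    time_code: int   # time encoded in this unit of the audio data
    samples: bytes   # audio payload for this unit

def playback_start_index(units: list[AudioUnit], received_time_code: int) -> int:
    """Index of the first audio unit whose time code matches the time code
    received from the transmitter together with the optical ID."""
    for i, unit in enumerate(units):
        if unit.time_code == received_time_code:
            return i
    return 0  # no match found: fall back to the beginning
```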
Upon acquiring the AR image P29 and the recognition information, the receiver 200 recognizes, within the captured display image Ppre, the region corresponding to that recognition information as the target region, and superimposes the AR image P29 on the target region. For example, the AR image P29 is an image depicting a crack in the display 201 of the receiver 200, and the target region is a region that crosses the image of the transmitter 100 within the captured display image Ppre.
Here, the receiver 200 displays the captured display image Ppre with the AR image P29 superimposed as described above at a timing determined by the AR start time information. The AR start time information indicates the time at which the AR image P29 is to be displayed. That is, among the time codes transmitted from the transmitter 100 from moment to moment, the receiver 200 displays the captured display image Ppre with the AR image P29 superimposed at the moment it receives the time code indicating the same time as the AR start time information. For example, the time indicated by the AR start time information is the time at which a scene in which a magician girl casts ice magic appears in the television program. At that time, the receiver 200 may also output, from its speaker, the cracking sound corresponding to the AR image P29 by playing back the audio data.
This allows the user to view the television program scene with a greater sense of immersion.
At the time indicated by the AR start time information, the receiver 200 may also vibrate a vibrator built into the receiver 200, cause a light source to flash, or momentarily brighten or blink the display 201. Furthermore, the AR image P29 may include not only an image showing a crack but also an image showing condensation frozen on the display 201.
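The trigger itself reduces to comparing each received time code against the AR start time. The sketch below, with hypothetical show, superimpose, and trigger_effects helpers, illustrates that single comparison under those assumptions.

```python
def on_time_code_received(time_code, ar_start_time, frame_ppre, ar_image_p29):
    """Display the AR overlay when the received time code reaches the AR start time."""
    if time_code == ar_start_time:                   # AR start time information matched
        show(superimpose(frame_ppre, ar_image_p29))  # Ppre with the crack image P29
        trigger_effects()                            # e.g. crack sound, vibration, flash
```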
FIG. 105 is a diagram illustrating an example of AR image superimposition performed by the receiver 200.
The receiver 200 images the transmitter 100, which is configured, for example, as a toy wand. The transmitter 100 includes a light source and transmits an optical ID by changing the luminance of that light source.
Through the above-described imaging, the receiver 200 periodically acquires captured display images Ppre and decoding images Pdec. While displaying the periodically acquired captured display images Ppre on the display 201, the receiver 200 decodes the decoding images Pdec to acquire the above-described optical ID. Next, the receiver 200 transmits the optical ID to the server 300. Upon receiving the optical ID, the server 300 transmits the AR image P30 and the recognition information associated with that optical ID to the receiver 200.
Here, the recognition information further includes gesture information indicating a gesture (that is, a motion) performed by the person holding the transmitter 100. The gesture information indicates, for example, a gesture in which the person moves the transmitter 100 from right to left. The receiver 200 compares the gesture of the person holding the transmitter 100, as shown in the successive captured display images Ppre, with the gesture indicated by the gesture information. When the gestures match, the receiver 200 superimposes AR images P30 on the captured display image Ppre so that, for example, many star-shaped AR images P30 are arranged along the trajectory of the transmitter 100 as it is moved by the gesture.
FIG. 106 is a diagram illustrating an example of AR image superimposition performed by the receiver 200.
As described above, the receiver 200 images the transmitter 100, which is configured, for example, as a toy wand.
Through that imaging, the receiver 200 periodically acquires captured display images Ppre and decoding images Pdec. While displaying the periodically acquired captured display images Ppre on the display 201, the receiver 200 decodes the decoding images Pdec to acquire the above-described optical ID. Next, the receiver 200 transmits the optical ID to the server 300. Upon receiving the optical ID, the server 300 transmits the AR image P31 and the recognition information associated with that optical ID to the receiver 200.
Here, as described above, the recognition information includes gesture information indicating a gesture performed by the person holding the transmitter 100. The gesture information indicates, for example, a gesture in which the person moves the transmitter 100 from right to left. The receiver 200 compares the gesture of the person holding the transmitter 100, as shown in the successive captured display images Ppre, with the gesture indicated by the gesture information. When the gestures match, the receiver 200 superimposes, for example, an AR image P31 depicting a dress costume on the target region of the captured display image Ppre, that is, the region in which the person holding the transmitter 100 appears.
As described above, in the display method of this variation, gesture information corresponding to the optical ID is acquired from the server. Next, it is determined whether the motion of the subject shown in the periodically acquired captured display images matches the motion indicated by the gesture information acquired from the server. When they are determined to match, the captured display image Ppre with the AR image superimposed is displayed.
This makes it possible to display an AR image in response to the motion of a subject such as a person, that is, at an appropriate timing.
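A minimal version of the gesture comparison might track the position of the transmitter's light source across the captured display images and test the trajectory against the motion described by the gesture information. The right-to-left test below is a sketch under that assumption; the pixel threshold is chosen arbitrarily.

```python
def matches_right_to_left(track, min_travel_px=200):
    """track: list of (x, y) positions of the transmitter across frames Ppre.
    True if the motion is monotonic right-to-left and travels far enough."""
    xs = [x for x, _ in track]
    if len(xs) < 2:
        return False
    monotonic = all(a >= b for a, b in zip(xs, xs[1:]))  # x never increases
    return monotonic and (xs[0] - xs[-1]) >= min_travel_px
```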
FIG. 107 is a diagram illustrating an example of the decoding image Pdec acquired depending on the orientation of the receiver 200.
For example, as shown in (a) of FIG. 107, the receiver 200, held in landscape orientation, images the transmitter 100, which transmits an optical ID by changing its luminance. In landscape orientation, the longitudinal direction of the display 201 of the receiver 200 lies along the horizontal direction, and each exposure line of the image sensor provided in the receiver 200 is orthogonal to the longitudinal direction of the display 201. Imaging in this orientation yields a decoding image Pdec containing a bright line pattern region X with only a few bright lines. Because the bright line pattern region X contains few bright lines, there are few portions in which the luminance changes between High and Low, and the receiver 200 may therefore be unable to acquire the optical ID properly by decoding this decoding image Pdec.
Therefore, as shown in (b) of FIG. 107, for example, the user changes the orientation of the receiver 200 from landscape to portrait. In portrait orientation, the longitudinal direction of the display 201 of the receiver 200 lies along the vertical direction. When the receiver 200 in this orientation images the transmitter 100 transmitting the optical ID, it can acquire a decoding image Pdec containing a bright line pattern region Y with many bright lines.
Because the optical ID may thus not be acquired properly depending on the orientation of the receiver 200, the orientation of the imaging receiver 200 should be changed as appropriate when having it acquire an optical ID. While the orientation is being changed, the receiver 200 can properly acquire the optical ID at the moment its orientation becomes favorable for acquisition.
FIG. 108 is a diagram illustrating another example of the decoding image Pdec acquired depending on the orientation of the receiver 200.
For example, the transmitter 100 is configured as digital signage for a coffee shop; it displays video advertising the coffee shop during video display periods and transmits an optical ID by changing its luminance during optical ID transmission periods. That is, the transmitter 100 alternates repeatedly between displaying video during the video display period and transmitting the optical ID during the optical ID transmission period.
The receiver 200 periodically acquires captured display images Ppre and decoding images Pdec by imaging the transmitter 100. At this time, if the repetition cycle of the transmitter 100's video display and optical ID transmission periods happens to be synchronized with the repetition cycle at which the receiver 200 acquires captured display images Ppre and decoding images Pdec, the receiver 200 may be unable to acquire a decoding image Pdec containing a bright line pattern region. Furthermore, depending on the orientation of the receiver 200, a decoding image Pdec containing a bright line pattern region may not be obtainable.
For example, the receiver 200 images the transmitter 100 in the position shown in (a) of FIG. 108. That is, the receiver 200 is close to the transmitter 100 and images it so that the image of the transmitter 100 is projected onto the entire image sensor of the receiver 200.
Here, if the timing at which the receiver 200 acquires a captured display image Ppre falls within a video display period of the transmitter 100, the receiver 200 properly acquires a captured display image Ppre showing the transmitter 100.
Moreover, even if the timing at which the receiver 200 acquires a decoding image Pdec straddles a video display period and an optical ID transmission period of the transmitter 100, the receiver 200 can acquire a decoding image Pdec containing a bright line pattern region Z1.
That is, the exposure of the exposure lines included in the image sensor starts sequentially, proceeding downward from the exposure line at the top in the vertical direction. Therefore, even if the receiver 200 starts exposing the image sensor to acquire a decoding image Pdec during a video display period, no bright line pattern region can be obtained for that period. However, once the video display period switches to an optical ID transmission period, a bright line pattern region corresponding to the exposure lines exposed during that optical ID transmission period can be obtained.
Now suppose the receiver 200 images the transmitter 100 in the position shown in (b) of FIG. 108. That is, the receiver 200 is far from the transmitter 100 and images it so that the image of the transmitter 100 is projected only onto the upper region of the receiver 200's image sensor. In this case, as above, if the timing at which the receiver 200 acquires a captured display image Ppre falls within a video display period of the transmitter 100, the receiver 200 properly acquires a captured display image Ppre showing the transmitter 100. However, if the timing at which the receiver 200 acquires a decoding image Pdec straddles a video display period and an optical ID transmission period of the transmitter 100, the receiver 200 may be unable to acquire a decoding image Pdec containing a bright line pattern region. That is, even when the transmitter 100's video display period switches to an optical ID transmission period, the image of the luminance-changing transmitter 100 may not be projected onto the exposure lines in the lower part of the image sensor, which are the ones exposed during that optical ID transmission period. A decoding image Pdec having a bright line pattern region therefore cannot be acquired.
On the other hand, as shown in (c) of FIG. 108, suppose the receiver 200, still far from the transmitter 100, images it so that the image of the transmitter 100 is projected only onto the lower region of the receiver 200's image sensor. In this case, as above, if the timing at which the receiver 200 acquires a captured display image Ppre falls within a video display period of the transmitter 100, the receiver 200 properly acquires a captured display image Ppre showing the transmitter 100. Furthermore, even if the timing at which the receiver 200 acquires a decoding image Pdec straddles a video display period and an optical ID transmission period of the transmitter 100, the receiver 200 may still be able to acquire a decoding image Pdec containing a bright line pattern region. That is, when the transmitter 100's video display period switches to an optical ID transmission period, the image of the luminance-changing transmitter 100 is projected onto the exposure lines in the lower part of the image sensor, which are exposed during that optical ID transmission period. A decoding image Pdec having a bright line pattern region Z2 can therefore be acquired.
Because the optical ID may thus not be acquired properly depending on the orientation of the receiver 200, the receiver 200 may prompt the user to change its orientation when acquiring an optical ID. That is, when imaging starts, the receiver 200 displays or outputs by voice a message such as "Please move the device" or "Please shake the device" so that the orientation of the receiver 200 changes. Since the receiver 200 then performs imaging while its orientation changes, it can acquire the optical ID properly.
FIG. 109 is a flowchart illustrating an example of the processing operation of the receiver 200.
For example, while imaging, the receiver 200 determines whether it is being shaken (step S461). Specifically, the receiver 200 makes this determination based on the output of its built-in 9-axis sensor. If the receiver 200 determines that it is being shaken during imaging (Yes in step S461), it raises the above-described optical ID acquisition rate (step S462). Specifically, the receiver 200 acquires all captured images obtained per unit time during imaging as decoding images (that is, bright line images) Pdec, and decodes each of the acquired decoding images. Alternatively, if all captured images are being acquired as captured display images Ppre, that is, if acquisition and decoding of decoding images Pdec has been suspended, the receiver 200 starts that acquisition and decoding.
On the other hand, if the receiver 200 determines that it is not being shaken during imaging (No in step S461), it acquires decoding images Pdec at a low optical ID acquisition rate (step S463). Specifically, if the optical ID acquisition rate was raised in step S462 and is still high, the receiver 200 lowers it. This reduces how often the receiver 200 performs the decoding process on decoding images Pdec, so power consumption can be suppressed.
The receiver 200 then determines whether a termination condition for ending the optical ID acquisition rate adjustment process is satisfied (step S464). If it determines that the condition is not satisfied (No in step S464), it repeats the processing from step S461. On the other hand, if the receiver 200 determines that the termination condition is satisfied (Yes in step S464), it ends the optical ID acquisition rate adjustment process.
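The adjustment loop of steps S461 to S464 can be sketched as follows, assuming the 9-axis sensor is exposed through a hypothetical is_shaking() reading and that a hypothetical set_decoding_rate() controls how many decoding images Pdec are processed per second; the rate values are examples only.

```python
HIGH_RATE = 30   # decode every captured frame per second (example value)
LOW_RATE = 5     # decode only occasionally to save power (example value)

def adjust_optical_id_rate(sensor, should_stop):
    while not should_stop():             # step S464: termination condition
        if sensor.is_shaking():          # step S461: 9-axis sensor output
            set_decoding_rate(HIGH_RATE) # step S462: raise the acquisition rate
        else:
            set_decoding_rate(LOW_RATE)  # step S463: lower the rate, cut power use
```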
FIG. 110 is a diagram illustrating an example of camera lens switching performed by the receiver 200.
The receiver 200 may include both a wide-angle lens 211 and a telephoto lens 212 as camera lenses. A captured image obtained through the wide-angle lens 211 has a wide angle of view, and the subject appears small in it. Conversely, a captured image obtained through the telephoto lens 212 has a narrow angle of view, and the subject appears large in it.
When performing imaging, the receiver 200 described above may switch the camera lens used for imaging by any of methods A to E shown in FIG. 110.
In method A, the receiver 200 always uses the telephoto lens 212 when imaging, whether for normal imaging or for receiving an optical ID. Here, normal imaging means acquiring all captured images as captured display images Ppre, while receiving an optical ID means periodically acquiring captured display images Ppre and decoding images Pdec through imaging.
In method B, the receiver 200 uses the wide-angle lens 211 for normal imaging. When receiving an optical ID, the receiver 200 first uses the wide-angle lens 211. If a decoding image Pdec acquired while the wide-angle lens 211 is in use contains a bright line pattern region, the receiver 200 switches the camera lens from the wide-angle lens 211 to the telephoto lens 212. After this switch, the receiver 200 can acquire decoding images Pdec with a narrower angle of view, that is, with the bright line pattern region appearing larger.
In method C, the receiver 200 uses the wide-angle lens 211 for normal imaging. When receiving an optical ID, the receiver 200 alternates the camera lens between the wide-angle lens 211 and the telephoto lens 212. That is, the receiver 200 acquires captured display images Ppre using the wide-angle lens 211 and decoding images Pdec using the telephoto lens 212.
In method D, the receiver 200 switches the camera lens between the wide-angle lens 211 and the telephoto lens 212 according to user operation, whether for normal imaging or for receiving an optical ID.
In method E, when receiving an optical ID, the receiver 200 decodes a decoding image Pdec acquired using the wide-angle lens 211 and, if it cannot decode it correctly, switches the camera lens from the wide-angle lens 211 to the telephoto lens 212. Alternatively, the receiver 200 decodes a decoding image Pdec acquired using the telephoto lens 212 and, if it cannot decode it correctly, switches the camera lens from the telephoto lens 212 to the wide-angle lens 211. When determining whether a decoding image Pdec was decoded correctly, the receiver 200 first transmits the optical ID obtained by decoding it to the server. If that optical ID matches an optical ID registered with the server, the server notifies the receiver 200 with match information indicating the match; if not, it notifies the receiver 200 with mismatch information indicating the mismatch. The receiver 200 determines that the decoding image Pdec was decoded correctly if the information notified by the server is match information, and that it was not decoded correctly if the information is mismatch information. Alternatively, the receiver 200 determines that the decoding image Pdec was decoded correctly if the optical ID obtained by decoding it satisfies a predetermined condition, and that it was not decoded correctly if the condition is not satisfied.
By switching the camera lens in this way, an appropriate decoding image Pdec can be acquired.
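Method E, for instance, reduces to a decode-and-fall-back loop over the two lenses. The sketch below assumes hypothetical capture_with(), decode_optical_id(), and server_confirms() helpers, with server_confirms() standing in for the match/mismatch notification described above.

```python
def receive_optical_id_method_e():
    for lens in ("wide_angle_211", "telephoto_212"):  # try one lens, then the other
        pdec = capture_with(lens)                     # decoding image Pdec
        oid = decode_optical_id(pdec)
        if oid is not None and server_confirms(oid):  # match information returned
            return oid
        # decode failed or mismatch information returned: switch lenses
    return None
```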
FIG. 111 is a diagram illustrating an example of camera switching performed by the receiver 200.
For example, the receiver 200 includes an in-camera 213 and an out-camera (not shown in FIG. 111). The in-camera 213, also called a face camera or selfie camera, is the camera arranged on the same face of the receiver 200 as the display 201. The out-camera is the camera arranged on the face of the receiver 200 opposite the display 201.
With the in-camera 213 facing upward, such a receiver 200 images the transmitter 100, configured as a lighting device, using the in-camera 213. Through this imaging, the receiver 200 acquires a decoding image Pdec and, by decoding it, acquires the optical ID transmitted from the transmitter 100.
Next, by transmitting the acquired optical ID to the server, the receiver 200 acquires the AR image and recognition information associated with that optical ID from the server. The receiver 200 starts the process of recognizing the target region corresponding to the recognition information in the captured display images Ppre obtained by each of the out-camera and the in-camera 213. Here, if the receiver 200 cannot recognize the target region in any of the captured display images Ppre obtained by the out-camera and the in-camera 213, it prompts the user to move the receiver 200. Prompted by the receiver 200, the user moves it; specifically, the user turns the receiver 200 so that the in-camera 213 and the out-camera face the user's front and rear. As a result, the receiver 200 recognizes the target region in a captured display image Ppre acquired by the out-camera. That is, the receiver 200 recognizes the region in which a person appears as the target region, superimposes the AR image on that target region of the captured display image Ppre, and displays the captured display image Ppre with the AR image superimposed.
FIG. 112 is a flowchart illustrating an example of the processing operations of the receiver 200 and the server.
The receiver 200 acquires the optical ID transmitted from the transmitter 100, a lighting device, by imaging it with the in-camera 213, and transmits that optical ID to the server (step S471). The server receives the optical ID from the receiver 200 (step S472) and estimates the position of the receiver 200 based on it (step S473). For example, the server stores, for each optical ID, a table indicating the room, building, space, or the like in which the transmitter 100 that transmits that optical ID is installed. The server then estimates the room or other location associated in that table with the optical ID transmitted from the receiver 200 as the position of the receiver 200. Furthermore, the server transmits the AR image and recognition information associated with the estimated position to the receiver 200 (step S474).
The receiver 200 acquires the AR image and recognition information transmitted from the server (step S475). Here, the receiver 200 starts the process of recognizing the target region corresponding to the recognition information in the captured display images Ppre obtained by each of the out-camera and the in-camera 213. The receiver 200 then recognizes the target region in, for example, a captured display image Ppre acquired by the out-camera (step S476). The receiver 200 superimposes the AR image on the target region of the captured display image Ppre and displays the captured display image Ppre with the AR image superimposed (step S477).
In the example above, upon acquiring the AR image and recognition information transmitted from the server, the receiver 200 started, in step S476, the process of recognizing the target region in the captured display images Ppre obtained by each of the out-camera and the in-camera 213. However, in step S476 the receiver 200 may instead start the process of recognizing the target region only in captured display images Ppre obtained by the out-camera. That is, the camera used to acquire the optical ID (the in-camera 213 in the example above) and the camera used to acquire the captured display image Ppre on which the AR image is superimposed (the out-camera in the example above) may always be different.
Also, in the example above, the receiver 200 imaged the transmitter 100, a lighting device, with the in-camera 213; however, the floor illuminated by the transmitter 100 may instead be imaged with the out-camera. Even with such out-camera imaging, the receiver 200 can acquire the optical ID transmitted from the transmitter 100.
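On the server side, the position estimation of step S473 amounts to a lookup from optical ID to installation location. A minimal sketch follows; the table entries are purely illustrative and not from this disclosure.

```python
LOCATION_TABLE = {   # optical ID -> place where that transmitter is installed
    0x1001: "conference room A, 2nd floor",
    0x1002: "entrance lobby, 1st floor",
}

def estimate_receiver_position(optical_id):
    """Step S473: the receiver is taken to be where the transmitter is installed."""
    return LOCATION_TABLE.get(optical_id)
```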
FIG. 113 is a diagram illustrating an example of AR image superimposition performed by the receiver 200.
The receiver 200 images the transmitter 100, which is configured as a microwave oven installed in a store such as a convenience store. The transmitter 100 includes a camera for imaging the interior of the microwave oven and a lighting device that illuminates that interior. The transmitter 100 recognizes the food or drink placed inside (that is, the item to be heated) by imaging it with the camera. When heating the food or drink, the transmitter 100 causes the lighting device to emit light and, by varying its luminance, transmits an optical ID indicating the recognized food or drink. Although this lighting device illuminates the interior of the microwave oven, its light escapes to the outside through the oven's transparent window. The optical ID is therefore transmitted from the lighting device, through the window of the microwave oven, to the outside of the oven.
Here, the user purchases food or drink at the convenience store and places it in the transmitter 100, the microwave oven, to heat it. At this time, the transmitter 100 recognizes the food or drink with its camera and starts heating it while transmitting an optical ID indicating the recognized item.
By imaging the transmitter 100 that has started heating, the receiver 200 acquires the optical ID transmitted from the transmitter 100 and transmits it to the server. Next, the receiver 200 acquires the AR image, audio data, and recognition information associated with that optical ID from the server.
The above-mentioned AR image includes an AR image P32a, a moving image showing a virtual view of the interior of the transmitter 100; an AR image P32b showing details of the food or drink placed inside; an AR image P32c, a moving image showing steam rising from the transmitter 100; and an AR image P32d, a moving image showing the time remaining until heating of the food or drink is complete.
For example, if the food placed in the microwave oven is a pizza, the AR image P32a is a moving image in which a turntable carrying the pizza rotates while several dwarfs dance around it. The AR image P32b is, for example, an image showing the product name "pizza" and the pizza's ingredients.
Based on the recognition information, the receiver 200 recognizes the region of the captured display image Ppre in which the window of the transmitter 100 appears as the target region for the AR image P32a, and superimposes the AR image P32a on that target region. Further, based on the recognition information, the receiver 200 recognizes the region of the captured display image Ppre above the region in which the transmitter 100 appears as the target region for the AR image P32b, and superimposes the AR image P32b on that target region. Further, based on the recognition information, the receiver 200 recognizes the region of the captured display image Ppre between the target regions of the AR images P32a and P32b as the target region for the AR image P32c, and superimposes the AR image P32c on that target region. Finally, based on the recognition information, the receiver 200 recognizes the region of the captured display image Ppre below the region in which the transmitter 100 appears as the target region for the AR image P32d, and superimposes the AR image P32d on that target region.
Furthermore, by playing back the audio data, the receiver 200 outputs the sound produced when the food or drink is heated.
With the receiver 200 displaying the AR images P32a to P32d described above and also outputting sound, the user's interest can be held by the receiver 200 until the heating of the food or drink is complete. As a result, the burden on the user waiting for heating to finish can be reduced. Displaying the AR image P32c showing steam and the like, and outputting the sound produced when the food or drink is heated, can also give the user a sense of sizzle. In addition, the display of the AR image P32d lets the user easily know the time remaining until heating is complete. The user can therefore, for example, step away from the transmitter 100, the microwave oven, and browse books displayed in the store until heating finishes. The receiver 200 may also notify the user that heating is complete when the remaining time reaches zero.
In the example above, the AR image P32a was a moving image in which a turntable carrying a pizza rotates while several dwarfs dance around it; it may instead be, for example, an image virtually showing the temperature distribution inside the oven. Likewise, the AR image P32b was an image showing the product name and ingredients of the food or drink inside, but it may be an image showing nutritional content or calories, or an image showing a discount coupon.
As described above, in the display method of this variation, the subject is a microwave oven equipped with a lighting device, and the lighting device illuminates the interior of the microwave oven while transmitting an optical ID to the outside of the oven by changing its luminance. In acquiring the captured display image Ppre and the decoding image Pdec, they are acquired by imaging the microwave oven transmitting the optical ID. In recognizing the target region, the window portion of the microwave oven shown in the captured display image Ppre is recognized as the target region. In displaying the captured display image Ppre, the captured display image Ppre with an AR image showing the changing state of the oven interior superimposed is displayed.
Since changes in the state of the microwave oven's interior are thus displayed as AR images, the state of the interior can be conveyed to the user of the microwave oven in an easy-to-understand way.
FIG. 114 is a sequence diagram showing the processing operations of a system including the receiver 200, a microwave oven, a relay server, and an electronic payment server. As described above, the microwave oven includes a camera and a lighting device and transmits an optical ID by changing the luminance of the lighting device. That is, the microwave oven functions as the transmitter 100.
First, the microwave oven recognizes the food or drink placed inside it with the camera (step S481). Next, the microwave oven transmits an optical ID indicating the recognized food or drink to the receiver 200 through the luminance change of the lighting device.
The receiver 200 receives the optical ID transmitted from the microwave oven by imaging it (step S483), and transmits the optical ID and card information to the relay server. The card information is information, such as credit card details, stored in advance in the receiver 200 and required for electronic payment.
The relay server holds, for each optical ID, a table indicating the AR image, recognition information, and product information corresponding to that optical ID. The product information indicates, among other things, the price of the food or drink indicated by the optical ID. Upon receiving the optical ID and card information transmitted from the receiver 200 (step S485), such a relay server finds the product information associated with that optical ID in the table. The relay server then transmits the product information and card information to the electronic payment server (step S486). Upon receiving the product information and card information transmitted from the relay server (step S487), the electronic payment server performs electronic payment processing based on them (step S488). When the electronic payment processing is complete, the electronic payment server notifies the relay server of the completion (step S489).
Upon confirming the payment completion notification from the electronic payment server (step S490), the relay server instructs the microwave oven to start heating the food or drink (step S491). The relay server also transmits to the receiver 200 the AR image and recognition information associated in the above-mentioned table with the optical ID received in step S485 (step S493).
Upon receiving the heating start instruction from the relay server, the microwave oven starts heating the food or drink placed inside it (step S492). Upon receiving the AR image and recognition information transmitted from the relay server, the receiver 200 recognizes the target region corresponding to the recognition information in the captured display images Ppre periodically acquired through the imaging started in step S483, and superimposes the AR image on that target region (step S494).
This allows the user of the receiver 200 to complete payment simply by placing the food or drink in the microwave oven and imaging it, and to have heating of the food or drink begin. If payment cannot be completed, heating of the food or drink by the user can be prohibited. Furthermore, when heating starts, AR images such as the AR image P32a shown in FIG. 113 can be displayed to inform the user of the state of the oven interior.
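The relay server's role in this sequence can be pictured as a single handler: look up the product, request payment, and only on success trigger heating and return the AR content. The sketch below makes that flow concrete; all names and table entries are hypothetical placeholders, not part of this disclosure.

```python
PRODUCT_TABLE = {   # optical ID -> (AR content, recognition info, price in yen)
    0x2001: ("pizza_ar_bundle", "oven window region", 450),
}

def handle_payment_request(optical_id, card_info, payment_server, oven, receiver):
    ar_content, recognition_info, price = PRODUCT_TABLE[optical_id]  # table lookup
    if payment_server.charge(card_info, price):      # steps S486 to S489
        oven.start_heating()                         # step S491: heating allowed
        receiver.send(ar_content, recognition_info)  # step S493: AR content delivered
    # if the charge fails, heating is simply not started (heating prohibited)
```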
FIG. 115 is a sequence diagram showing the processing operations of a system including a POS terminal, a server, the receiver 200, and a microwave oven. As described above, the microwave oven includes a camera and a lighting device and transmits an optical ID by changing the luminance of the lighting device; that is, the microwave oven functions as the transmitter 100. The POS (point-of-sale) terminal is installed in the same store, such as a convenience store, as the microwave oven.
First, the user of the receiver 200 selects food or drink for purchase at the store and goes to where the POS terminal is installed. A store clerk operates the POS terminal and receives payment for the food or drink from the user. Through this operation of the POS terminal by the clerk, the POS terminal acquires operation input data and sales information (step S501). The sales information indicates, for example, the name, quantity, and price of the product, the place of sale, and the date and time of sale. The operation input data indicates, for example, the user's sex and age group as entered by the clerk. The POS terminal transmits the operation input data and sales information to the server (step S502). The server receives the operation input data and sales information transmitted from the POS terminal (step S503).
Meanwhile, having paid the clerk for the food or drink, the user of the receiver 200 places it in the microwave oven to heat it. The microwave oven recognizes the food or drink placed inside it with the camera (step S504). Next, the microwave oven transmits an optical ID indicating the recognized food or drink to the receiver 200 through the luminance change of the lighting device (step S505). The microwave oven then starts heating the food or drink (step S507).
The receiver 200 receives the optical ID transmitted from the microwave oven by imaging it (step S508), and transmits the optical ID and terminal information to the server (step S509). The terminal information is information stored in advance in the receiver 200 and indicates, for example, the display language of the receiver 200's display 201 (for example, English or Japanese).
When the server is accessed by the receiver 200 and receives the optical ID and terminal information transmitted from it, the server determines whether this access from the receiver 200 is the first access (step S510). The first access is the first one made within a predetermined time after the processing of step S503 was performed. If the server determines that the access from the receiver 200 is the first access (Yes in step S510), it stores the operation input data and the terminal information in association with each other (step S511).
Although the server here determined whether the access from the receiver 200 was the first access, it may instead determine whether the product indicated by the sales information matches the food or drink indicated by the optical ID. Also, in step S511 the server may store the sales information in association with the operation input data and terminal information, rather than associating only the latter two.
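The first-access association of steps S510 and S511 could be realized as a time-windowed match between the POS sale and the receiver's access; the window length and all data structures below are assumptions made for illustration only.

```python
import time

FIRST_ACCESS_WINDOW = 120.0   # seconds after the POS sale (step S503); example value
pending_sales = {}            # optical ID -> (sale timestamp, operation input data)
stored_pairs = []             # (operation input data, terminal information) pairs

def on_receiver_access(optical_id, terminal_info):
    sale = pending_sales.get(optical_id)
    if sale is not None and time.time() - sale[0] <= FIRST_ACCESS_WINDOW:  # step S510
        stored_pairs.append((sale[1], terminal_info))  # step S511: associate and store
        del pending_sales[optical_id]                  # only the first access counts
```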
(Indoor use)
FIG. 116 is a diagram illustrating use indoors, such as in an underground shopping mall.
 受信機200は、照明装置として構成された送信機100の送信する光IDを受信し、自身の現在位置を推定する。また、受信機200は、地図上に現在位置を表示して道案内を行ったり、付近の店舗の情報を表示したりする。 The receiver 200 receives the light ID transmitted from the transmitter 100 configured as a lighting device, and estimates its current position. Further, the receiver 200 displays the current position on a map to provide route guidance, or displays information on nearby stores.
 緊急時には送信機100から災害情報や避難情報を送信することで、通信が混雑している場合や、通信基地局が故障した場合や、通信基地局からの電波が届かない場所にいる場合であっても、これらの情報を得ることができる。これは、緊急放送を聞き逃した場合や、緊急放送を聞くことができない聴覚障害者に有効である。 In an emergency, transmission of disaster information and evacuation information from the transmitter 100 can be used when communication is congested, when a communication base station fails, or when radio waves from the communication base station do not reach. Even this information can be obtained. This is effective for a hearing impaired person who has missed an emergency broadcast or cannot hear an emergency broadcast.
 つまり、受信機200は、撮像することによって、送信機100から送信された光IDを取得し、さらに、その光IDに対応付けられたAR画像P33と認識情報とをサーバから取得する。そして、受信機200は、上述の撮像によって得られた撮像表示画像Ppreから、認識情報に応じた対象領域を認識し、その対象領域に、矢印の形状をしたAR画像P33を重畳する。これにより、受信機200を上述のウェイファインダー(図100参照)として利用することができる。 That is, the receiver 200 acquires the optical ID transmitted from the transmitter 100 by taking an image, and further acquires the AR image P33 and the recognition information associated with the optical ID from the server. Then, the receiver 200 recognizes the target area corresponding to the recognition information from the captured display image Ppre obtained by the above-described imaging, and superimposes the AR image P33 in the shape of an arrow on the target area. Thereby, receiver 200 can be used as the above-mentioned way finder (refer to Drawing 100).
(Display of an augmented reality object)
FIG. 117 is a diagram illustrating display of an augmented reality object.
A stage 2718e on which augmented reality is displayed is configured as the transmitter 100 described above, and transmits information on an augmented reality object and a reference position for displaying the augmented reality object through the light emission patterns and position patterns of light emitting units 2718a, 2718b, 2718c, and 2718d.
Based on the received information, the receiver 200 displays an augmented reality object 2718f, which is an AR image, superimposed on the captured image.
Note that these general or specific aspects may be realized as an apparatus, a system, a method, an integrated circuit, a computer program, or a recording medium such as a computer-readable CD-ROM, or as any combination of an apparatus, a system, a method, an integrated circuit, a computer program, and a recording medium. A computer program for executing the method according to an embodiment may also be stored on a recording medium of a server and delivered from the server to a terminal in response to a request from the terminal.
[Modification 4 of Embodiment 4]
FIG. 118 is a diagram illustrating the configuration of a display system according to Modification 4 of Embodiment 4.
This display system 500 performs object recognition and augmented reality (Augmented Reality/Mixed Reality) display using visible light signals.
The receiver 200 performs imaging, receives visible light signals, and extracts feature amounts for object recognition or spatial recognition. The feature amount extraction is the extraction of image feature amounts from a captured image obtained by imaging. Note that the visible light signal may be a carrier signal adjacent to visible light, such as infrared or ultraviolet light. In this modification, the receiver 200 is configured as a recognition device that recognizes an object on which an augmented reality image (that is, an AR image) is displayed. In the example shown in FIG. 118, the object is, for example, the AR object 501.
The transmitter 100 transmits information such as an ID for identifying itself or the AR object 501 as a visible light signal or a radio signal. The ID is identification information such as the light ID described above, and the AR object 501 corresponds to the target region described above. The visible light signal is a signal transmitted through luminance changes of a light source included in the transmitter 100.
The receiver 200 or the server 300 holds the identification information transmitted by the transmitter 100 in association with AR recognition information and AR display information. The association may be one-to-one or one-to-many. The AR recognition information is the recognition information described above, that is, information for recognizing the AR object 501 on which AR display is to be performed. Specifically, the AR recognition information is, for example, an image feature amount (such as a SIFT, SURF, or ORB feature amount), color, shape, size, reflectance, transmittance, or three-dimensional model of the AR object 501. The AR recognition information may also include identification information or a recognition algorithm indicating which recognition method is to be used. The AR display information is information for performing AR display, such as an image (that is, the AR image described above), video, audio, a three-dimensional model, motion data, display coordinates, a display size, or a transmittance. The AR display information may also be the absolute value or change ratio of each of hue, saturation, and brightness.
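As an illustration of this association, the following is a minimal Python sketch of the data held by the receiver 200 or the server 300. The field names and the example values are assumptions for illustration; this specification does not define a concrete schema.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class ARRecognitionInfo:
    feature_type: str                 # e.g., "SIFT", "SURF", or "ORB"
    feature_vector: List[float]       # image feature amount of the AR object
    color: Optional[str] = None
    shape: Optional[str] = None
    recognition_algorithm: Optional[str] = None  # which recognition method to use

@dataclass
class ARDisplayInfo:
    image: str                        # AR image (here, a file name) to superimpose
    display_coordinates: Tuple[float, float]
    display_size: float
    transmittance: float = 0.0

# Identification information (e.g., a light ID) maps to one entry or to several
# entries (one-to-one or one-to-many association).
ar_table: Dict[int, List[Tuple[ARRecognitionInfo, ARDisplayInfo]]] = {
    0x1234: [
        (ARRecognitionInfo("ORB", [0.12, 0.80, 0.45]),
         ARDisplayInfo("arrow_p33.png", (0.0, 0.0), 1.0)),
    ],
}
```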
The transmitter 100 may also serve as the server 300. That is, the transmitter 100 may hold the AR recognition information and the AR display information and transmit them by wired or wireless communication.
The receiver 200 captures an image with a camera (specifically, an image sensor). The receiver 200 also receives visible light signals, or radio signals such as WiFi or Bluetooth (registered trademark). Furthermore, the receiver 200 may acquire position information obtained by GPS or the like, information obtained by a gyro sensor or an acceleration sensor, and information such as audio from a microphone, and may recognize a nearby AR object by integrating all or some of these pieces of information. The receiver 200 may also recognize the AR object using only one of these pieces of information, without integrating them.
FIG. 119 is a flowchart showing the processing operation of the display system according to Modification 4 of Embodiment 4.
The receiver 200 first determines whether it has already received a visible light signal (step S521). That is, the receiver 200 determines whether it has acquired a visible light signal indicating identification information, for example by photographing the transmitter 100, which transmits the visible light signal through luminance changes of its light source. At this time, a captured image of the transmitter 100 is obtained by the photographing.
Here, when the receiver 200 determines that a visible light signal has already been received (Y in step S521), it identifies the AR object (an object, a reference point, spatial coordinates, or the position and orientation of the receiver 200 in space) from the received information. Furthermore, the receiver 200 recognizes the relative position of the AR object. This relative position is represented by the distance and direction from the receiver 200 to the AR object. For example, the receiver 200 identifies the AR object (that is, a target region that is a bright line pattern region) based on the size and position of the bright line pattern region shown in FIG. 50, and recognizes the relative position of that AR object.
The receiver 200 then transmits the information, such as the ID contained in the visible light signal, and the relative position to the server 300, and by using that information and the relative position as keys, acquires the AR recognition information and AR display information registered in the server 300 (step S522). At this time, the receiver 200 may simultaneously acquire not only the information on the recognized AR object but also information on other AR objects present in its vicinity (that is, their AR recognition information and AR display information). In this way, when another AR object present nearby is imaged by the receiver 200, the receiver 200 can recognize it quickly and without error. For example, another AR object present nearby differs from the AR object recognized first.
Note that the receiver 200 may acquire these pieces of information from a database within the receiver 200 instead of accessing the server 300. The receiver 200 may discard these pieces of information after a certain time has elapsed since their acquisition, or after specific processing (for example, turning off the screen, pressing a button, ending or suspending an application, displaying an AR image, or recognizing another AR object). Alternatively, for each acquired piece of information, the receiver 200 may lower the reliability of that information each time a certain period elapses after its acquisition, and use the information with high reliability from among the pieces of information.
Here, based on the relative position to each AR object, the receiver 200 may preferentially acquire the AR recognition information of the AR object that is valid in that relative positional relationship. For example, in step S521, the receiver 200 acquires a plurality of visible light signals (that is, pieces of identification information) by photographing a plurality of transmitters 100, and in step S522, acquires a plurality of pieces of AR recognition information (that is, image feature amounts) corresponding to those visible light signals. In this case, in step S522, the receiver 200 selects, from among the plurality of AR objects, the image feature amount of the AR object closest to the receiver 200 photographing the transmitters 100. That is, the selected image feature amount is used to identify the single AR object (that is, the first object) identified using the visible light signal. As a result, even when a plurality of image feature amounts are acquired, an appropriate image feature amount can be used to identify the first object.
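A minimal sketch of this prioritization follows, assuming each acquired entry carries the relative distance computed from, for example, the bright line pattern region; the function name and data layout are hypothetical.

```python
from typing import List, Tuple

def feature_for_first_object(candidates: List[Tuple[float, str]]) -> str:
    """From (distance_to_receiver, image_feature) pairs for the AR objects whose
    visible light signals were received in step S521, return the image feature
    amount of the object closest to the receiver 200 (step S522)."""
    _, feature = min(candidates, key=lambda c: c[0])
    return feature

# Example: three transmitters imaged at 1.2 m, 0.4 m, and 3.0 m.
assert feature_for_first_object([(1.2, "fA"), (0.4, "fB"), (3.0, "fC")]) == "fB"
```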
On the other hand, if the receiver 200 determines that no visible light signal has been received (N in step S521), it further determines whether it has already acquired AR recognition information (step S523). If it determines that AR recognition information has not been acquired (N in step S523), the receiver 200 recognizes candidates for the AR object by image processing, or by using other information such as position information or radio wave information, without using identification information such as an ID indicated by a visible light signal (step S524). This processing may be performed by the receiver 200 alone. Alternatively, the receiver 200 may transmit information such as the captured image or its image feature amounts to the server 300, and the server 300 may recognize the candidates for the AR object. As a result, the receiver 200 acquires the AR recognition information and AR display information corresponding to the recognized candidates from the server 300 or from its own database.
After step S522, the receiver 200 determines whether it has detected the AR object by another method that does not use identification information such as an ID indicated by a visible light signal, for example by image recognition (step S525). That is, the receiver 200 determines whether it has recognized the AR object by multiple methods. Specifically, the receiver 200 identifies an AR object (that is, the first object) from the captured image using the image feature amount acquired based on the identification information indicated by the visible light signal. The receiver 200 then determines whether it has also identified an AR object (that is, a second object) from the captured image by image processing, without using such identification information.
Here, if the receiver 200 determines that it has recognized the AR object by multiple methods (Y in step S525), it gives priority to the recognition result based on the visible light signal. That is, the receiver 200 checks whether the AR objects recognized by the respective methods match. If they do not match, the receiver 200 determines, from among those AR objects, the single AR object on which the AR image is to be superimposed in the captured image to be the AR object recognized by the visible light signal (step S526). That is, when the first object differs from the second object, the receiver 200 gives priority to the first object and recognizes it as the object on which the AR image is displayed. Note that the object on which the AR image is displayed is the object on which the AR image is superimposed.
Alternatively, the receiver 200 may give priority to the method assigned the highest priority, based on priorities assigned to the respective methods. That is, from among the AR objects recognized by the respective methods, the receiver 200 determines the single AR object on which the AR image is to be superimposed in the captured image to be, for example, the AR object recognized by the method with the highest priority. The receiver 200 may also determine the single AR object on which the AR image is to be superimposed by majority vote or by weighted majority vote, as in the sketch below. If this processing overturns the previous recognition result, the receiver 200 performs error handling.
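The following Python sketch illustrates one possible form of the decision in steps S525 and S526; the function name and the priority mapping are assumptions for illustration.

```python
def resolve_ar_target(by_visible_light, by_image_processing, priorities=None):
    """Choose the single AR object on which the AR image is superimposed when
    two recognition methods are available. Arguments are object IDs (or None
    when a method recognized nothing). By default the visible light result
    takes priority; if a priorities mapping is given, the method with the
    highest priority wins instead. A (weighted) majority vote over more than
    two methods would generalize this."""
    if by_visible_light is None:
        return by_image_processing
    if by_image_processing is None or by_visible_light == by_image_processing:
        return by_visible_light
    if priorities is None:
        return by_visible_light   # the visible light recognition result is prioritized
    results = {"visible_light": by_visible_light, "image_processing": by_image_processing}
    best_method = max(results, key=lambda m: priorities.get(m, 0))
    return results[best_method]
```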
Next, based on the acquired AR recognition information, the receiver 200 recognizes the state of the AR object in the captured image (specifically, its absolute position, relative position from the receiver 200, size, angle, lighting conditions, occlusion, and so on) (step S527). The receiver 200 then displays the AR display information (that is, the AR image) superimposed on the captured image in accordance with the recognition result (step S528). That is, the receiver 200 superimposes the AR display information on the recognized AR object in the captured image. Alternatively, the receiver 200 displays only the AR display information.
These techniques enable recognition and detection that would be difficult with image processing alone, for example: distinguishing visually similar AR objects (such as those differing only in text content), detecting AR objects with few distinctive patterns, detecting AR objects with high reflectance or transmittance, detecting AR objects whose shape or pattern changes (such as animals), and detecting AR objects from a wide range of angles (various directions). That is, this modification makes it possible to recognize such AR objects and perform AR display. Furthermore, in image processing that does not use visible light signals, as the number of AR objects to be recognized increases, the nearest-neighbor search over image feature amounts takes longer, the recognition processing slows down, and the recognition rate deteriorates. In this modification, however, the increase in recognition time and the deterioration of the recognition rate caused by an increase in recognition targets are nonexistent or extremely small, enabling effective recognition of AR objects. Using the relative position of an AR object also enables efficient recognition. For example, by using the approximate distance to the AR object, the processing that makes the image feature amount computation independent of the object's size can be omitted, or size-dependent features can be used. Likewise, by using the angle of the AR object, whereas image feature amounts would normally have to be evaluated for many angles, only the feature amounts corresponding to that angle need to be held and computed, improving computation speed and memory efficiency.
[Summary of Modification 4 of Embodiment 4]
FIG. 120 is a flowchart illustrating a recognition method according to one aspect of the present invention.
The recognition method according to one aspect of the present invention is a method for recognizing an object on which an augmented reality image (AR image) is displayed, and includes steps S531 to S535.
In step S531, the receiver 200 acquires identification information by photographing the transmitter 100, which transmits a visible light signal through luminance changes of its light source. The identification information is, for example, a light ID. In step S532, the receiver 200 transmits the identification information to the server 300 and acquires from the server 300 an image feature amount corresponding to the identification information. The image feature amount corresponds to the AR recognition information, or recognition information, described above.
In step S533, the receiver 200 identifies a first object from the captured image of the transmitter 100 using the image feature amount. In step S534, the receiver 200 identifies a second object from the captured image of the transmitter 100 by image processing, without using the identification information (that is, the light ID).
In step S535, when the first object identified in step S533 differs from the second object identified in step S534, the receiver 200 gives priority to the first object and recognizes it as the object on which the augmented reality image is displayed.
For example, the augmented reality image, the captured image, and the object correspond respectively to the AR image, the captured display image, and the target region in Embodiment 4 and its modifications.
In this way, as shown in FIG. 119, even when the first object identified using the identification information indicated by the visible light signal differs from the second object identified by image processing without using that identification information, the first object is preferentially recognized as the object on which the augmented reality image is displayed. The object on which the augmented reality image is displayed can therefore be appropriately recognized from the captured image.
In addition to the image feature amount of the first object, the image feature amount may include the image feature amount of a third object that is located in the vicinity of the first object and differs from the first object.
In this way, as shown in step S522 of FIG. 119, not only the image feature amount of the first object but also that of the third object is acquired, so that when the third object later appears in a captured image, it can be quickly identified and recognized.
There are also cases where, in step S531, the receiver 200 acquires a plurality of pieces of identification information by photographing a plurality of transmitters, and in step S532, acquires a plurality of image feature amounts corresponding to those pieces of identification information. In such cases, in step S533, the receiver 200 may use, to identify the first object, the image feature amount of the object closest to the receiver 200 photographing the plurality of transmitters, from among the plurality of objects.
In this way, as shown in step S522 of FIG. 119, even when a plurality of image feature amounts are acquired, an appropriate image feature amount can be used to identify the first object.
Note that the recognition device in this modification is, for example, a device provided in the receiver 200 described above, and includes a processor and a recording medium. A program that causes the processor to execute the recognition method shown in FIG. 120 is recorded on this recording medium. The program in this modification is a program that causes a computer to execute the recognition method shown in FIG. 120.
(Embodiment 5)
FIG. 121 is a diagram showing an example of the operation modes of a visible light signal according to this embodiment.
As shown in FIG. 121, the physical (PHY) layer of the visible light signal has two operation modes. The first is a mode in which packet PWM (pulse width modulation) is performed, and the second is a mode in which packet PPM (pulse position modulation) is performed. The transmitter according to each of the above embodiments and their modifications generates and transmits a visible light signal by modulating a transmission target signal according to one of these operation modes.
In the packet PWM operation mode, RLL (run-length limited) coding is not performed, the optical clock rate is 100 kHz, forward error correction (FEC) uses repetition coding, and the typical data rate is 5.5 kbps.
In packet PWM, the pulse width is modulated, and a pulse is represented by two brightness states: a bright state (Bright or High) and a dark state (Dark or Low), typically light on and light off. A chunk of the physical layer signal called a packet (also referred to as a PHY packet) corresponds to a MAC (medium access control) frame. The transmitter can transmit PHY packets repeatedly, and can transmit a set of PHY packets in no particular order.
Note that packet PWM is used to generate visible light signals transmitted from ordinary transmitters.
In the packet PPM operation mode, RLL coding is not performed, the optical clock rate is 100 kHz, forward error correction (FEC) uses repetition coding, and the typical data rate is 8 kbps.
In packet PPM, the position of a pulse of short duration is modulated. That is, of a bright pulse (High) and a dark pulse (Low), the bright pulse is used, and its position is modulated. The position of the pulse is indicated by the interval between it and the next pulse.
Packet PPM realizes deep dimming. Any format, waveform, or characteristic of packet PPM not described in the embodiments and their modifications is the same as in packet PWM. Note that packet PPM is used to generate visible light signals transmitted from transmitters having very bright light sources.
In both packet PWM and packet PPM, dimming in the physical layer of the visible light signal is controlled by the average luminance of an optional field.
FIG. 122A is a flowchart showing another visible light signal generation method according to Embodiment 5. This generation method is a method for generating a visible light signal transmitted through luminance changes of a light source included in a transmitter, and includes steps SE1 to SE3.
In step SE1, a preamble is generated; the preamble is data in which a first luminance value and a second luminance value, which differ from each other, appear alternately along the time axis.
In step SE2, a first payload is generated as data in which the first and second luminance values appear alternately along the time axis, by determining the interval from one appearance of the first luminance value to the next according to a scheme corresponding to the transmission target signal.
In step SE3, a visible light signal is generated by combining the preamble and the first payload.
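The structure of steps SE1 to SE3 can be pictured with the following minimal Python sketch, which represents a signal as a list of pulse-to-pulse intervals; the function names and this representation are placeholders, and the concrete interval rules for each mode are given after FIG. 122B below.

```python
def generate_visible_light_signal(bits, make_preamble, intervals_for_bits):
    """Minimal sketch of FIG. 122A. A signal is represented here as a list of
    intervals (in microseconds) between successive appearances of the first
    luminance value; make_preamble and intervals_for_bits stand in for the
    mode-specific rules described below."""
    preamble = make_preamble()              # SE1: fixed alternating pattern
    payload = intervals_for_bits(bits)      # SE2: intervals encode the signal
    return preamble + payload               # SE3: combine preamble and payload
```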
FIG. 122B is a block diagram showing the configuration of another signal generation device according to Embodiment 5. This signal generation device E10 is a signal generation device that generates a visible light signal transmitted through luminance changes of a light source included in a transmitter, and includes a preamble generation unit E11, a payload generation unit E12, and a combining unit E13. The signal generation device E10 executes the processing of the flowchart shown in FIG. 122A.
That is, the preamble generation unit E11 generates a preamble, which is data in which a first luminance value and a second luminance value, differing from each other, appear alternately along the time axis.
The payload generation unit E12 generates a first payload as data in which the first and second luminance values appear alternately along the time axis, by determining the interval from one appearance of the first luminance value to the next according to a scheme corresponding to the transmission target signal.
The combining unit E13 generates a visible light signal by combining the preamble and the first payload.
For example, the first and second luminance values are Bright (High) and Dark (Low), and the first payload is a PHY payload. Transmitting a visible light signal generated in this way increases the number of received packets and improves reliability. As a result, communication between diverse devices becomes possible.
For example, the duration of the first luminance value in each of the preamble and the first payload is 10 μs or less.
This makes it possible to keep the average luminance of the light source low while performing visible light communication.
The preamble is a header for the first payload, and its duration includes three intervals, each from one appearance of the first luminance value to the next. Each of the three intervals is 160 μs. That is, this defines the pattern of intervals between pulses included in the header (SHR) in mode 1 of packet PPM. Each pulse here is, for example, a pulse having the first luminance value.
Alternatively, the preamble is a header for the first payload whose duration includes three such intervals, where the first interval is 160 μs, the second is 180 μs, and the third is 160 μs. That is, this defines the pattern of intervals between pulses included in the header (SHR) in mode 2 of packet PPM.
Alternatively, the preamble is a header for the first payload whose duration includes three such intervals, where the first interval is 80 μs, the second is 90 μs, and the third is 80 μs. That is, this defines the pattern of intervals between pulses included in the header (SHR) in mode 3 of packet PPM.
Since the header patterns of modes 1, 2, and 3 of packet PPM are defined in this way, the receiver can properly receive the first payload in the visible light signal.
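For reference, the three header patterns just described can be collected into a small table; this Python constant is purely illustrative.

```python
# SHR (header) pulse-to-pulse interval patterns of packet PPM, in microseconds,
# as listed above for modes 1, 2, and 3.
SHR_INTERVALS_US = {
    1: (160, 160, 160),
    2: (160, 180, 160),
    3: (80, 90, 80),
}
```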
The transmission target signal consists of 6 bits, from a first bit x_0 to a sixth bit x_5, and the duration of the first payload includes two intervals, each from one appearance of the first luminance value to the next. Here, when a parameter y_k is expressed as y_k = x_{3k} + x_{3k+1} × 2 + x_{3k+2} × 4 (where k is 0 or 1), in generating the first payload, each of the two intervals in the first payload is determined according to the above scheme as the interval P_k = 180 + 30 × y_k [μs]. That is, in mode 1 of packet PPM, the transmission target signal is modulated as the intervals between the pulses included in the first payload (PHY payload).
Similarly, the transmission target signal may consist of 12 bits, from a first bit x_0 to a twelfth bit x_11, in which case the duration of the first payload includes four such intervals. Here, when the parameter y_k is expressed as y_k = x_{3k} + x_{3k+1} × 2 + x_{3k+2} × 4 (where k is 0, 1, 2, or 3), in generating the first payload, each of the four intervals in the first payload is determined according to the above scheme as the interval P_k = 180 + 30 × y_k [μs]. That is, in mode 2 of packet PPM, the transmission target signal is modulated as the intervals between the pulses included in the first payload (PHY payload).
Likewise, the transmission target signal may consist of 3n bits, from a first bit x_0 to a 3n-th bit x_{3n-1} (where n is an integer of 2 or more), in which case the duration of the first payload includes n such intervals. Here, when the parameter y_k is expressed as y_k = x_{3k} + x_{3k+1} × 2 + x_{3k+2} × 4 (where k is an integer from 0 to n-1), in generating the first payload, each of the n intervals in the first payload is determined according to the above scheme as the interval P_k = 100 + 20 × y_k [μs]. That is, in mode 3 of packet PPM, the transmission target signal is modulated as the intervals between the pulses included in the first payload (PHY payload).
In modes 1, 2, and 3 of packet PPM, the transmission target signal is thus modulated as the intervals between pulses, so the receiver can properly demodulate the visible light signal into the transmission target signal based on those intervals.
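A minimal Python sketch of this payload modulation follows; it computes the intervals P_k directly from the formulas above (the function name is an assumption for illustration).

```python
def ppm_payload_intervals(bits, mode):
    """Compute the pulse-to-pulse intervals (in microseconds) of the PHY payload
    for packet PPM. Each group of 3 bits (x_{3k}, x_{3k+1}, x_{3k+2}) yields
    y_k = x_{3k} + 2 * x_{3k+1} + 4 * x_{3k+2}; the interval is
    P_k = 180 + 30 * y_k for modes 1 and 2, and P_k = 100 + 20 * y_k for mode 3."""
    assert len(bits) % 3 == 0
    base, step = (180, 30) if mode in (1, 2) else (100, 20)
    intervals = []
    for k in range(len(bits) // 3):
        y = bits[3 * k] + 2 * bits[3 * k + 1] + 4 * bits[3 * k + 2]
        intervals.append(base + step * y)
    return intervals

# Mode 1 example: 6 bits (x_0 to x_5) yield 2 intervals.
print(ppm_payload_intervals([1, 0, 1, 0, 1, 1], mode=1))  # [330, 360]
```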
The visible light signal generation method may further generate a footer for the first payload, and in generating the visible light signal, the footer may be combined after the first payload. That is, in mode 3 of packet PWM and packet PPM, a footer (SFT) is transmitted following the first payload (PHY payload). Since the end of the first payload can then be clearly identified by the footer, visible light communication can be performed efficiently.
In generating the visible light signal, when no footer is transmitted, a header for the signal following the transmission target signal may be combined in place of the footer. That is, in mode 3 of packet PWM and packet PPM, instead of a footer (SFT), the header (SHR) for the next first payload is transmitted following the first payload (PHY payload). In this way, the end of the first payload can be clearly identified by the header for the next first payload, and since no footer is transmitted, visible light communication can be performed even more efficiently.
In each of the above embodiments and modifications, each component may be configured by dedicated hardware, or may be realized by executing a software program suitable for that component. Each component may be realized by a program execution unit, such as a CPU or a processor, reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory. For example, the program causes a computer to execute the visible light signal generation method shown in the flowchart of FIG. 122A.
The visible light signal generation method according to one or more aspects has been described above based on the embodiments and their modifications, but the present invention is not limited to these embodiments. Forms obtained by applying various modifications conceivable by those skilled in the art to the present embodiment, and forms constructed by combining components of different embodiments and modifications, may also be included within the scope of the present invention without departing from the gist of the present invention.
(Embodiment 6)
This embodiment describes, among other things, a method for decoding and a method for encoding visible light signals.
FIG. 123 is a diagram showing the format of a MAC frame in MPM.
The format of a MAC (medium access control) frame in MPM (Mirror Pulse Modulation) consists of an MHR (medium access control header) and an MSDU (medium access control service data unit). The MHR field includes a sequence number subfield. The MSDU contains the frame payload and is of variable length. The bit length of the MPDU (medium access control protocol data unit), which comprises the MHR and the MSDU, is set as macMpmMpduLength.
Note that MPM is the modulation scheme of Embodiment 5, that is, a scheme for modulating information or signals to be transmitted, as shown for example in FIG. 121.
FIG. 124 is a flowchart showing the processing operation of an encoding device that generates a MAC frame in MPM. Specifically, FIG. 124 shows how the bit length of the sequence number subfield is determined. The encoding device is provided, for example, in the above-described transmitter or transmission device that transmits visible light signals.
The sequence number subfield contains a frame sequence number (also simply called a sequence number). The bit length of the sequence number subfield is set as macMpmSnLength. When the bit length of the sequence number subfield is set to be variable, the first bit of the sequence number subfield is used as a last frame flag. That is, in this case, the sequence number subfield includes the last frame flag and a bit string indicating the sequence number. The last frame flag is set to 1 in the last frame and to 0 in all other frames; in other words, it indicates whether the frame being processed is the last frame. Note that this last frame flag corresponds to the stop bit described above, and the sequence number corresponds to the address described above.
First, the encoding device determines whether SN is set to be variable-length (step S101a), where SN is the bit length of the sequence number subfield. That is, the encoding device determines whether macMpmSnLength indicates 0xf. When macMpmSnLength indicates 0xf, SN is variable-length; when macMpmSnLength indicates a value other than 0xf, SN is fixed-length. If the encoding device determines that SN is not set to be variable-length, that is, SN is set to a fixed length (N in step S101a), it sets SN to the value indicated by macMpmSnLength (step S102a). In this case, the encoding device does not use the last frame flag (LFF).
On the other hand, if the encoding device determines that SN is set to be variable-length (Y in step S101a), it determines whether the frame being processed is the last frame (step S103a). If the encoding device determines that the frame being processed is the last frame (Y in step S103a), it sets SN to 5 bits (step S104a). In this case, the encoding device sets the last frame flag, the first bit of the sequence number subfield, to 1.
If the encoding device determines that the frame being processed is not the last frame (N in step S103a), it determines which value from 1 to 15 the sequence number of the last frame takes (step S105a). The sequence number is an integer assigned to each frame in ascending order from 0. In the case of N in step S103a, the number of frames is 2 or more, so the sequence number of the last frame can take any value from 1 to 15, excluding 0.
If the encoding device determines in step S105a that the sequence number of the last frame is 1, it sets SN to 1 bit (step S106a). In this case, the encoding device sets the last frame flag, the first bit of the sequence number subfield, to 0.
For example, when the sequence number of the last frame is 1, the sequence number subfield of the last frame is represented as (1, 1), containing the last frame flag (1) and the sequence number value (1). In this case, the encoding device sets the bit length of the sequence number subfield of the frame being processed to 1 bit. That is, the encoding device determines a sequence number subfield containing only the last frame flag (0).
If the encoding device determines in step S105a that the sequence number of the last frame is 2, it sets SN to 2 bits (step S107a). In this case as well, the encoding device sets the last frame flag to 0.
For example, when the sequence number of the last frame is 2, the sequence number subfield of the last frame is represented as (1, 0, 1), containing the last frame flag (1) and the sequence number value (2). The sequence number is indicated by a bit string in which the leftmost bit is the LSB (least significant bit) and the rightmost bit is the MSB (most significant bit); the sequence number value (2) is therefore written as the bit string (0, 1). Thus, when the sequence number of the last frame is 2, the encoding device sets the bit length of the sequence number subfield of the frame being processed to 2 bits. That is, the encoding device determines a sequence number subfield containing the last frame flag (0) and one bit, (0) or (1), indicating the sequence number.
If the encoding device determines in step S105a that the sequence number of the last frame is 3 or 4, it sets SN to 3 bits (step S108a). In this case as well, the last frame flag is set to 0.
If the encoding device determines in step S105a that the sequence number of the last frame is an integer from 5 to 8, it sets SN to 4 bits (step S109a). In this case as well, the last frame flag is set to 0.
If the encoding device determines in step S105a that the sequence number of the last frame is an integer from 9 to 15, it sets SN to 5 bits (step S110a). In this case as well, the last frame flag is set to 0.
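The encoder-side decision of FIG. 124 can be summarized in a short Python sketch; the function and parameter names are assumptions for illustration.

```python
def encoder_sn_length(mac_mpm_sn_length: int, is_final_frame: bool, final_sn: int) -> int:
    """Bit length of the sequence number subfield per FIG. 124.
    mac_mpm_sn_length is the PIB attribute macMpmSnLength (0x0 to 0xf, where
    0xf means variable length); final_sn is the sequence number of the last
    frame (1 to 15)."""
    if mac_mpm_sn_length != 0xF:   # step S101a, N: fixed length; LFF not used
        return mac_mpm_sn_length   # step S102a
    if is_final_frame:             # step S103a, Y
        return 5                   # step S104a: LFF (1) plus 4-bit sequence number
    if final_sn == 1:              # step S105a onward: shortest subfield that
        return 1                   # step S106a   can still address the last frame
    if final_sn == 2:
        return 2                   # step S107a
    if final_sn in (3, 4):
        return 3                   # step S108a
    if 5 <= final_sn <= 8:
        return 4                   # step S109a
    return 5                       # step S110a: 9 to 15
```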
 図125は、MPMにおけるMACフレームを復号する復号装置の処理動作を示すフローチャートである。具体的には、図125は、シーケンス番号サブフィールドのビット長の決め方を示す図である。なお、復号装置は、例えば、可視光信号を受信する上述の受信機または受信装置などに備えられている。 FIG. 125 is a flowchart showing the processing operation of the decoding device for decoding the MAC frame in MPM. Specifically, FIG. 125 is a diagram showing how to determine the bit length of the sequence number subfield. Note that the decoding device is provided, for example, in the above-described receiver or receiving device that receives a visible light signal.
 ここで、復号装置は、SNが可変長に設定されているか否かを判定する(ステップS201a)。つまり、復号装置は、macMpmSnLengthが0xfを示すか否かを判定する。復号装置は、SNが可変長に設定されていない、すなわち、SNが固定長に設定されていると判定すると(ステップS201aのN)、SNをmacMpmSnLengthによって示される値に決定する(ステップS202a)。このとき、復号装置は、最終フレームフラグ(すなわちLFF)を使用しない。 Here, the decoding apparatus determines whether or not SN is set to a variable length (step S201a). That is, the decoding apparatus determines whether macMpmSnLength indicates 0xf. If the decoding apparatus determines that SN is not set to a variable length, that is, that SN is set to a fixed length (N in step S201a), SN is determined to be a value indicated by macMpmSnLength (step S202a). At this time, the decoding device does not use the final frame flag (ie, LFF).
On the other hand, when the decoding apparatus determines that SN is set to a variable length (Y in step S201a), it determines whether the value of the final frame flag of the decoding target frame is 1 or 0 (step S203a). That is, the decoding apparatus determines whether or not the decoding target frame is the final frame. When the decoding apparatus determines that the value of the final frame flag is 1 (1 in step S203a), it sets SN to 5 bits (step S204a).
When the decoding apparatus determines that the value of the final frame flag is 0 (0 in step S203a), it determines which value from 1 to 15 is indicated by the bit string from the second bit to the fifth bit in the sequence number subfield of the final frame (step S205a). The final frame is a frame that has a final frame flag indicating 1 and that was generated from the same source as the decoding target frame. Each source is identified by its position in the captured image. A source is divided into, for example, a plurality of frames (that is, packets); the final frame is therefore the last of the plurality of frames generated by dividing one source. The value indicated by the bit string from the second bit to the fifth bit in the sequence number subfield is the value of the sequence number.
When the decoding apparatus determines in step S205a that the value indicated by the bit string is 1, it sets SN to 1 bit (step S206a). For example, when the sequence number subfield of the final frame is the 2-bit string (1, 1), the final frame flag is 1 and the sequence number of the final frame, that is, the value indicated by the bit string, is 1. In this case, the decoding apparatus sets the bit length of the sequence number subfield of the decoding target frame to 1 bit; that is, it determines the sequence number subfield of the decoding target frame to be (0).
When the decoding apparatus determines in step S205a that the value indicated by the bit string is 2, it sets SN to 2 bits (step S207a). For example, when the sequence number subfield of the final frame is the 3-bit string (1, 0, 1), the final frame flag is 1 and the sequence number of the final frame, that is, the value indicated by the bit string (0, 1), is 2. In this bit string, the leftmost bit is the LSB (least significant bit) and the rightmost bit is the MSB (most significant bit). In this case, the decoding apparatus sets the bit length of the sequence number subfield of the decoding target frame to 2 bits; that is, it determines the sequence number subfield of the decoding target frame to be (0, 0) or (0, 1).
When the decoding apparatus determines in step S205a that the value indicated by the bit string is 3 or 4, it sets SN to 3 bits (step S208a).
When the decoding apparatus determines in step S205a that the value indicated by the bit string is any integer from 5 to 8, it sets SN to 4 bits (step S209a).
When the decoding apparatus determines in step S205a that the value indicated by the bit string is any integer from 9 to 15, it sets SN to 5 bits (step S210a).
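These branches reduce to a small lookup. The following is a minimal Python sketch (the function name and argument form are illustrative, not taken from the standard) that returns the bit length SN of the sequence number subfield of the decoding target frame from macMpmSnLength, the final frame flag, and the sequence number of the final frame.

```python
def sn_subfield_bit_length(mac_mpm_sn_length: int,
                           final_frame_flag: int,
                           final_frame_sn: int) -> int:
    # Fixed length: a macMpmSnLength value in 0x0-0xe gives the length directly.
    if mac_mpm_sn_length != 0xF:
        return mac_mpm_sn_length
    # Variable length (Y in step S201a):
    if final_frame_flag == 1:   # the decoding target frame is the final frame
        return 5                # step S204a
    # Otherwise decide from the sequence number of the final frame (step S205a).
    if final_frame_sn == 1:
        return 1                # step S206a
    if final_frame_sn == 2:
        return 2                # step S207a
    if final_frame_sn in (3, 4):
        return 3                # step S208a
    if 5 <= final_frame_sn <= 8:
        return 4                # step S209a
    if 9 <= final_frame_sn <= 15:
        return 5                # step S210a
    raise ValueError("the final-frame sequence number must be in 1-15")
```

For example, with mac_mpm_sn_length = 0xf, final_frame_flag = 0, and final_frame_sn = 7, the function returns 4, matching step S209a.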
FIG. 126 is a diagram showing the attributes of the MAC PIB.
The attributes of the MAC PIB (personal-area-network information base) include macMpmSnLength and macMpmMpduLength. macMpmSnLength is an integer value in the range 0x0 to 0xf and indicates the bit length of the sequence number subfield. Specifically, when macMpmSnLength is an integer value in the range 0x0 to 0xe, that value indicates the fixed bit length of the sequence number subfield; when macMpmSnLength is 0xf, it indicates that the bit length of the sequence number subfield is variable.
macMpmMpduLength is an integer value in the range 0x00 to 0xff and indicates the bit length of the MPDU.
FIG. 127 is a diagram for explaining the MPM dimming methods.
The MPM has a dimming function. The MPM dimming methods include, as shown in FIG. 127, (a) an analog dimming method, (b) a PWM dimming method, (c) a VPPM dimming method, and (d) a field insertion dimming method.
In the analog dimming method, a visible light signal is transmitted by changing the luminance, as shown for example in (a2). To darken the visible light signal, the overall luminance of the signal is lowered, as in (a1); conversely, to brighten it, the overall luminance of the signal is raised, as in (a3).
In the PWM dimming method, a visible light signal is likewise transmitted by changing the luminance, as in (b2). To darken the signal, the luminance is lowered for only a brief period within the period in which the high-luminance light of (b2) is output, as in (b1); conversely, to brighten it, the luminance is raised for only a brief period within the period in which the low-luminance light of (b2) is output, as in (b3). This brief period must be less than 1/3 of the original pulse width and less than 50 microseconds.
In the VPPM dimming method, a visible light signal is transmitted by changing the luminance, as in (c2). To darken the signal, the falling edge of the luminance is moved earlier, as in (c1); conversely, to brighten it, the falling edge of the luminance is delayed, as in (c3). The VPPM method can be used only with the PPM mode of the PHY in the MPM.
In the field insertion dimming method, a visible light signal including a plurality of PPDUs (PHY protocol data units) is transmitted, as in (d2). To darken the signal, a dimming field whose luminance is lower than that of the PPDUs is inserted between the PPDUs, as in (d1); conversely, to brighten it, a dimming field whose luminance is higher than that of the PPDUs is inserted between the PPDUs, as in (d3).
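The constraint on the PWM dimming method above can be checked with a one-line test. A minimal sketch, with hypothetical names:

```python
def dimming_dip_allowed(dip_us: float, pulse_width_us: float) -> bool:
    # The brief dip (or boost) inserted for PWM dimming must be shorter
    # than 1/3 of the original pulse width and shorter than 50 microseconds.
    return dip_us < pulse_width_us / 3.0 and dip_us < 50.0
```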
FIG. 128 is a diagram showing the attributes of the PHY PIB.
The attributes of the PHY (physical layer) PIB include phyMpmMode, phyMpmPlcpHeaderMode, phyMpmPlcpCenterMode, phyMpmSymbolSize, phyMpmOddSymbolBit, phyMpmEvenSymbolBit, phyMpmSymbolOffset, and phyMpmSymbolUnit.
phyMpmMode is 0 or 1 and indicates the PHY mode of the MPM. Specifically, a phyMpmMode of 0 indicates that the PHY mode is the PWM mode, and a phyMpmMode of 1 indicates that the PHY mode is the PPM mode.
phyMpmPlcpHeaderMode is an integer value in the range 0x0 to 0xf and indicates the PLCP (physical layer convergence protocol) header subfield mode and the PLCP footer subfield mode.
phyMpmPlcpCenterMode is an integer value in the range 0x0 to 0xf and indicates the PLCP center subfield mode.
phyMpmSymbolSize is an integer value in the range 0x0 to 0xf and indicates the number of symbols in the payload subfields. When phyMpmSymbolSize is 0x0, the number of symbols is variable. This number is referred to as N.
phyMpmOddSymbolBit is an integer value in the range 0x0 to 0xf and indicates the bit length contained in each odd-numbered symbol of the payload subfields; it is referred to as M_odd.
phyMpmEvenSymbolBit is an integer value in the range 0x0 to 0xf and indicates the bit length contained in each even-numbered symbol of the payload subfields; it is referred to as M_even.
phyMpmSymbolOffset is an integer value in the range 0x00 to 0xff and indicates the offset value of the symbols of the payload subfields; it is referred to as W_1.
phyMpmSymbolUnit is an integer value in the range 0x00 to 0xff and indicates the unit value of the symbols of the payload subfields; it is referred to as W_2.
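The PHY PIB above is, in effect, a small configuration record. The following Python sketch mirrors FIG. 128 as a dataclass; the class itself and its helper are illustrative, and no range checking is performed.

```python
from dataclasses import dataclass

@dataclass
class MpmPhyPib:
    # Attribute names and value ranges follow FIG. 128.
    phyMpmMode: int            # 0 or 1: PHY mode of the MPM
    phyMpmPlcpHeaderMode: int  # 0x0-0xf: PLCP header/footer subfield mode
    phyMpmPlcpCenterMode: int  # 0x0-0xf: PLCP center subfield mode
    phyMpmSymbolSize: int      # 0x0-0xf: N; 0x0 means the symbol count is variable
    phyMpmOddSymbolBit: int    # 0x0-0xf: M_odd
    phyMpmEvenSymbolBit: int   # 0x0-0xf: M_even
    phyMpmSymbolOffset: int    # 0x00-0xff: W_1
    phyMpmSymbolUnit: int      # 0x00-0xff: W_2

    @property
    def symbol_size_is_variable(self) -> bool:
        return self.phyMpmSymbolSize == 0x0
```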
FIG. 129 is a diagram for explaining the MPM.
The MPM consists only of a PSDU (PHY service data unit) field. The PSDU field contains the MPDU converted by the PLCP of the MPM.
As shown in FIG. 129, the PLCP of the MPM converts the MPDU into five subfields: a PLCP header subfield, a front payload subfield, a PLCP center subfield, a back payload subfield, and a PLCP footer subfield. The PHY mode of the MPM is set as phyMpmMode.
As shown in FIG. 129, the PLCP of the MPM includes a bit rearrangement unit 301a, a duplication unit 302a, a front conversion unit 303a, and a back conversion unit 304a.
Here, (x_0, x_1, x_2, ...) are the bits contained in the MPDU, L_SN is the bit length of the sequence number subfield, and N is the number of symbols in each payload subfield. The bit rearrangement unit 301a rearranges (x_0, x_1, x_2, ...) into (y_0, y_1, y_2, ...) according to Equation 1 below.
(Equation 1: the bit rearrangement formula, rendered as an image in the source document)
Through this rearrangement, the bits contained in the sequence number subfield at the head of the MPDU are moved rearward by L_SN positions. The duplication unit 302a then duplicates the MPDU after the bit rearrangement.
The front payload subfield and the back payload subfield each consist of N symbols. Here, M_odd is the bit length contained in each odd-numbered symbol, M_even is the bit length contained in each even-numbered symbol, W_1 is the symbol value offset (the offset value mentioned above), and W_2 is the symbol value unit (the unit value mentioned above). N, M_odd, M_even, W_1, and W_2 are set by the PHY PIB shown in FIG. 128.
The front conversion unit 303a and the back conversion unit 304a convert the payload bits (y_0, y_1, y_2, ...) of the rearranged MPDU into z_i according to Equations 2 to 5 below.
(Equations 2 to 5: the conversion formulas for z_i, rendered as images in the source document)
Using z_i, the front conversion unit 303a calculates the i-th symbol (that is, the symbol value) of the front payload subfield according to Equation 6 below.
(Equation 6: the front payload symbol formula, rendered as an image in the source document)
Using z_i, the back conversion unit 304a calculates the i-th symbol (that is, the symbol value) of the back payload subfield according to Equation 7 below.
(Equation 7: the back payload symbol formula, rendered as an image in the source document)
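Because Equations 1 to 7 appear only as images in the source, the formulas themselves cannot be reproduced here; what can be sketched is the data flow of FIG. 129. In the following Python sketch the equations are injected as callables, and all names are illustrative.

```python
def mpm_plcp_payload_symbols(mpdu_bits, l_sn, n,
                             rearrange, to_z, front_symbol, back_symbol):
    # rearrange:     Equation 1  (bit rearrangement by unit 301a)
    # to_z:          Equations 2 to 5 (payload bits -> z_i)
    # front_symbol:  Equation 6  (z_i -> i-th front payload symbol)
    # back_symbol:   Equation 7  (z_i -> i-th back payload symbol)
    y = rearrange(mpdu_bits, l_sn)
    y_front, y_back = list(y), list(y)   # duplication by unit 302a
    z_front = [to_z(y_front, i) for i in range(n)]
    z_back = [to_z(y_back, i) for i in range(n)]
    front = [front_symbol(z, i) for i, z in enumerate(z_front)]
    back = [back_symbol(z, i) for i, z in enumerate(z_back)]
    return front, back
```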
FIG. 130 is a diagram showing the PLCP header subfield.
As shown in FIG. 130, the PLCP header subfield consists of four symbols in the PWM mode and of three symbols in the PPM mode.
FIG. 131 is a diagram showing the PLCP center subfield.
As shown in FIG. 131, the PLCP center subfield consists of four symbols in the PWM mode and of three symbols in the PPM mode.
FIG. 132 is a diagram showing the PLCP footer subfield.
As shown in FIG. 132, the PLCP footer subfield consists of four symbols in the PWM mode and of three symbols in the PPM mode.
FIG. 133 is a diagram showing a waveform of the PWM mode of the PHY in the MPM.
In the PWM mode, a symbol must be transmitted as one of two light-intensity states, namely a bright state or a dark state. In the PWM mode of the PHY in the MPM, a symbol value corresponds to a duration in microseconds. For example, as shown in FIG. 133, the first symbol value corresponds to the duration of the first bright state, and the second symbol value corresponds to the duration of the following dark state. In the example shown in FIG. 133, the first state of each subfield is a bright state, but it may instead be a dark state.
FIG. 134 is a diagram showing a waveform of the PPM mode of the PHY in the MPM.
In the PPM mode, as shown in FIG. 134, a symbol value represents the time, in microseconds, from the start of one bright state to the start of the next bright state. The duration of the bright state must be shorter than 90% of the symbol value.
In both modes, the transmitter may transmit only some of the symbols. However, the transmitter must transmit all the symbols of the PLCP center subfield and at least N further symbols, each of which is a symbol contained in either the front payload subfield or the back payload subfield.
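The timing rules of FIGS. 133 and 134 can be illustrated directly. A minimal sketch, assuming symbol values are given in microseconds as the text states; the function names are illustrative.

```python
def pwm_waveform(symbol_values_us, start_bright=True):
    # PWM mode: each symbol value is the duration of one light state,
    # and the state alternates bright/dark (FIG. 133).
    out, bright = [], start_bright
    for v in symbol_values_us:
        out.append(("bright" if bright else "dark", v))
        bright = not bright
    return out

def ppm_bright_time_ok(bright_time_us, symbol_value_us):
    # PPM mode (FIG. 134): the symbol value is the bright-start-to-bright-start
    # time, and the bright time must be shorter than 90% of it.
    return bright_time_us < 0.9 * symbol_value_us
```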
(Summary of Embodiment 6)
FIG. 135 is a flowchart illustrating an example of the decoding method according to Embodiment 6. The flowchart shown in FIG. 135 corresponds to the flowchart shown in FIG. 125.
This decoding method is a method for decoding a visible light signal composed of a plurality of frames and, as shown in FIG. 135, includes steps S310b, S320b, and S330b. Each of the plurality of frames includes a sequence number and a frame payload.
In step S310b, a variable-length determination process is performed: based on macSnLength, which is information for determining the bit length of the subfield in which the sequence number is stored in the decoding target frame, it is determined whether or not the bit length of that subfield is variable.
In step S320b, the bit length of the subfield is determined based on the result of the variable-length determination process. Then, in step S330b, the decoding target frame is decoded based on the determined bit length of the subfield.
Here, the determination of the bit length of the subfield in step S320b includes steps S321b to S324b.
Specifically, when the variable-length determination process of step S310b determines that the bit length of the subfield is not variable, the bit length of the subfield is set to the value indicated by macSnLength (step S321b).
On the other hand, when the variable-length determination process of step S310b determines that the bit length of the subfield is variable, a final determination process is performed to determine whether or not the decoding target frame is the final frame of the plurality of frames (step S322b). If the frame is determined to be the final frame (Y in step S322b), the bit length of the subfield is set to a predetermined value (step S323b). If the frame is determined not to be the final frame (N in step S322b), the bit length of the subfield is determined based on the value of the sequence number of the final frame (step S324b).
In this way, as shown in FIG. 135, the bit length of the subfield in which the sequence number is stored (specifically, the sequence number subfield) can be determined appropriately whether that bit length is fixed or variable.
Here, the final determination process of step S322b may determine whether the decoding target frame is the final frame based on a final frame flag indicating whether the decoding target frame is the final frame. Specifically, the final determination process of step S322b may determine that the decoding target frame is the final frame when the final frame flag indicates 1, and that it is not the final frame when the final frame flag indicates 0. For example, the final frame flag may be contained in the first bit of the subfield.
This makes it possible to appropriately determine whether or not the decoding target frame is the final frame, as shown in step S203a of FIG. 125.
More specifically, in the determination of the bit length of the subfield in step S320b, when the final determination process of step S322b determines that the decoding target frame is the final frame, the bit length of the subfield may be set to 5 bits, the predetermined value mentioned above. That is, as shown in step S204a of FIG. 125, the bit length SN of the subfield is set to 5 bits.
Also, in the determination of the bit length of the subfield in step S320b, when the final determination process of step S322b determines that the decoding target frame is not the final frame, the bit length of the subfield may be set to 1 bit when the value of the sequence number of the final frame is 1; to 2 bits when that value is 2; to 3 bits when that value is 3 or 4; to 4 bits when that value is any integer from 5 to 8; and to 5 bits when that value is any integer from 9 to 15. That is, as shown in steps S206a to S210a of FIG. 125, the bit length SN of the subfield is set to one of 1 to 5 bits.
FIG. 136 is a flowchart illustrating an example of the encoding method according to Embodiment 6. The flowchart shown in FIG. 136 corresponds to the flowchart shown in FIG. 124.
This encoding method is a method for encoding information to be encoded into a visible light signal composed of a plurality of frames and, as shown in FIG. 136, includes steps S410a, S420a, and S430a. Each of the plurality of frames includes a sequence number and a frame payload.
In step S410a, a variable-length determination process is performed: based on macSnLength, which is information for determining the bit length of the subfield in which the sequence number is stored in the processing target frame, it is determined whether or not the bit length of that subfield is variable.
In step S420a, the bit length of the subfield is determined based on the result of the variable-length determination process. Then, in step S430a, part of the information to be encoded is encoded into the processing target frame based on the determined bit length of the subfield.
Here, the determination of the bit length of the subfield in step S420a includes steps S421a to S424a.
Specifically, when the variable-length determination process of step S410a determines that the bit length of the subfield is not variable, the bit length of the subfield is set to the value indicated by macSnLength (step S421a).
On the other hand, when the variable-length determination process of step S410a determines that the bit length of the subfield is variable, a final determination process is performed to determine whether or not the processing target frame is the final frame of the plurality of frames (step S422a). If the frame is determined to be the final frame (Y in step S422a), the bit length of the subfield is set to a predetermined value (step S423a). If the frame is determined not to be the final frame (N in step S422a), the bit length of the subfield is determined based on the value of the sequence number of the final frame (step S424a).
In this way, as shown in FIG. 136, the bit length of the subfield in which the sequence number is stored (specifically, the sequence number subfield) can be determined appropriately whether that bit length is fixed or variable.
The decoding device according to this embodiment includes a processor and a memory, and the memory stores a program that causes the processor to execute the decoding method shown in FIG. 135. The encoding device according to this embodiment includes a processor and a memory, and the memory stores a program that causes the processor to execute the encoding method shown in FIG. 136. The program according to this embodiment is a program that causes a computer to execute the decoding method shown in FIG. 135 or the encoding method shown in FIG. 136.
(Embodiment 7)
This embodiment describes a transmission method for transmitting a light ID by a visible light signal. The transmitter and the receiver in this embodiment may have the same functions and configurations as the transmitters (or transmission devices) and the receivers (or reception devices) in the above embodiments.
FIG. 137 is a diagram illustrating an example in which the receiver according to this embodiment displays an AR image.
The receiver 200 in this embodiment is a receiver including an image sensor and a display 201, and is configured, for example, as a smartphone. By imaging a subject with its image sensor, the receiver 200 acquires a captured display image Pa, which is the normal captured image described above, and a decoding image, which is the visible light communication image or bright line image described above.
Specifically, the image sensor of the receiver 200 images the transmitter 100. The transmitter 100 has the form of, for example, a light bulb and includes a glass bulb 141 and a light emitting unit 142 that flickers like a flame inside the glass bulb 141. The light emitting unit 142 emits light when one or more light emitting elements (for example, LEDs) provided in the transmitter 100 are turned on. The transmitter 100 changes its luminance by blinking the light emitting unit 142 and transmits a light ID (light identification information) by this luminance change. The light ID is the visible light signal described above.
The receiver 200 acquires the captured display image Pa, in which the transmitter 100 appears, by imaging the transmitter 100 with the normal exposure time, and acquires the decoding image by imaging the transmitter 100 with a communication exposure time shorter than the normal exposure time. The normal exposure time is the exposure time in the normal imaging mode described above, and the communication exposure time is the exposure time in the visible light communication mode described above.
The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the transmitter 100. The receiver 200 transmits the light ID to a server and acquires from the server an AR image P42 and recognition information corresponding to the light ID. The receiver 200 recognizes, as a target region, the region of the captured display image Pa that corresponds to the recognition information. The receiver 200 then superimposes the AR image P42 on the target region and displays the captured display image Pa with the AR image P42 superimposed on the display 201.
For example, as in the example shown in FIG. 51, the receiver 200 recognizes, according to the recognition information, the region at the upper left of the region in which the transmitter 100 appears as the target region. As a result, the AR image P42, which shows, for example, a fairy, is displayed as if flying around the transmitter 100.
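The receiver-side flow just described (two exposure times, decoding, a server lookup keyed by the light ID, and superimposition) is summarized in the following Python sketch. Every stage is passed in as a callable so that the sketch stays self-contained; all names and interfaces are illustrative, not from the source.

```python
def receive_and_overlay(capture, decode_light_id, server_lookup,
                        locate_target_region, overlay, show):
    pa = capture("normal")                       # captured display image Pa
    decoding_image = capture("communication")    # shorter exposure time
    light_id = decode_light_id(decoding_image)   # receive the light ID
    ar_image, recognition_info = server_lookup(light_id)
    target_region = locate_target_region(pa, recognition_info)
    show(overlay(pa, ar_image, target_region))   # display Pa with the AR image
```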
FIG. 138 is a diagram illustrating another example of the captured display image Pa on which the AR image P42 is superimposed.
As shown in FIG. 138, the receiver 200 displays on the display 201 the captured display image Pa on which the AR image P42 is superimposed.
Here, the recognition information described above indicates that the range of the captured display image Pa having a luminance equal to or higher than a threshold is the reference region. The recognition information further indicates that the target region lies in a predetermined direction from the reference region and is separated from the center (or centroid) of the reference region by a predetermined distance.
Therefore, when the light emitting unit 142 of the transmitter 100 being imaged by the receiver 200 flickers, the AR image P42 superimposed on the target region of the captured display image Pa also moves in synchronization with the movement of the light emitting unit 142, as shown in FIG. 138. That is, when the light emitting unit 142 flickers, the image 142a of the light emitting unit 142 shown in the captured display image Pa also flickers. The image 142a is the range having a luminance equal to or higher than the threshold, and is thus the reference region. Because the reference region moves, the receiver 200 moves the target region so that the distance between the reference region and the target region is maintained at the predetermined distance, and superimposes the AR image P42 on the moving target region. As a result, when the light emitting unit 142 flickers, the AR image P42 superimposed on the target region of the captured display image Pa also moves in synchronization with the movement of the light emitting unit 142. Note that the center position of the reference region may also move when the light emitting unit 142 deforms; accordingly, even when the light emitting unit 142 deforms, the AR image P42 may move so that the distance from the center position of the moving reference region is maintained at the predetermined distance.
In the above example, the receiver 200 recognizes the target region based on the recognition information and superimposes the AR image P42 on the target region, but it may also vibrate the AR image P42 about the target region. That is, the receiver 200 vibrates the AR image P42, for example in the vertical direction, according to a function expressing a change of amplitude over time, such as a trigonometric function like a sine wave.
The receiver 200 may also change the size of the AR image P42 according to the size of the range having a luminance equal to or higher than the threshold. That is, the receiver 200 enlarges the AR image P42 as the area of the bright region in the captured display image Pa increases, and conversely shrinks the AR image P42 as that area decreases.
Alternatively, the receiver 200 may enlarge the AR image P42 as the average luminance within the range having a luminance equal to or higher than the threshold increases, and shrink the AR image P42 as that average luminance decreases. Instead of the size of the AR image P42, the transparency of the AR image P42 may be changed according to the average luminance.
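The placement and scaling rules above lend themselves to a short sketch: the AR image is anchored at a fixed distance and direction from the centroid of the bright reference region, optionally vibrated sinusoidally, and scaled with the region's area. The linear scaling model and all names are assumptions.

```python
import math

def ar_target_position(ref_centroid, direction_rad, distance_px,
                       t_s=0.0, vib_amp_px=0.0, vib_freq_hz=1.0):
    # Fixed offset from the centroid of the bright reference region, plus an
    # optional vertical sinusoidal vibration about the target region.
    cx, cy = ref_centroid
    x = cx + distance_px * math.cos(direction_rad)
    y = cy + distance_px * math.sin(direction_rad)
    y += vib_amp_px * math.sin(2.0 * math.pi * vib_freq_hz * t_s)
    return (x, y)

def ar_scale_factor(region_area_px, base_area_px):
    # Larger bright region -> larger AR image (a simple linear model).
    return region_area_px / base_area_px
```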
In the example shown in FIG. 138, every pixel within the image 142a of the light emitting unit 142 has a luminance equal to or higher than the threshold, but some pixels may be below the threshold. That is, the range corresponding to the image 142a and having a luminance equal to or higher than the threshold may be annular. In this case as well, the range having a luminance equal to or higher than the threshold is identified as the reference region, and the AR image P42 is superimposed on a target region separated from the center (or centroid) of the reference region by the predetermined distance.
FIG. 139 is a diagram illustrating another example in which the receiver 200 according to this embodiment displays an AR image.
As shown for example in FIG. 139, the transmitter 100 is configured as a lighting device and transmits a light ID by changing its luminance while illuminating a figure 143 consisting of, for example, three circles drawn on a wall. Because the figure 143 is illuminated by the light from the transmitter 100, its luminance changes in the same way as that of the transmitter 100, and it thereby transmits the light ID.
By imaging the figure 143 illuminated by the transmitter 100, the receiver 200 acquires the captured display image Pa and the decoding image in the same manner as described above. The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the figure 143. The receiver 200 transmits the light ID to the server and acquires from the server an AR image P43 and recognition information corresponding to the light ID. The receiver 200 recognizes, as the target region, the region of the captured display image Pa that corresponds to the recognition information; for example, it recognizes the region in which the figure 143 appears as the target region. The receiver 200 then superimposes the AR image P43 on the target region and displays the captured display image Pa with the AR image P43 superimposed on the display 201. The AR image P43 is, for example, the face image of a character.
Here, the figure 143 consists of three circles as described above and has few geometric features. Therefore, from a captured image of the figure 143 alone, it is difficult to appropriately select and acquire the AR image corresponding to the figure 143 from among the many images stored in the server. In this embodiment, however, the receiver 200 acquires the light ID and acquires the AR image P43 corresponding to that light ID from the server. Therefore, even when many images are stored in the server, the AR image P43 corresponding to the light ID can be appropriately selected from among them and acquired as the AR image corresponding to the figure 143.
FIG. 140 is a flowchart showing the operation of the receiver 200 in this embodiment.
The receiver 200 in this embodiment first acquires a plurality of AR image candidates (step S541). For example, the receiver 200 acquires the plurality of AR image candidates from the server through wireless communication other than visible light communication (such as BTLE or Wi-Fi). Next, the receiver 200 images a subject (step S542). Through this imaging, the receiver 200 acquires the captured display image Pa and the decoding image as described above. However, when the subject is a photograph of the transmitter 100, no light ID is transmitted from the subject, so the receiver 200 cannot acquire a light ID even by decoding the decoding image.
The receiver 200 therefore determines whether or not a light ID could be acquired, that is, whether or not a light ID has been received from the subject (step S543).
When determining that no light ID has been received (No in step S543), the receiver 200 determines whether or not the AR display flag set in the receiver 200 itself is 1 (step S544). The AR display flag indicates whether an AR image may be displayed from the captured display image Pa alone, without a light ID having been acquired. When the AR display flag is 1, it indicates that an AR image may be displayed from the captured display image Pa alone; when the AR display flag is 0, it indicates that an AR image must not be displayed from the captured display image Pa alone.
When determining that the AR display flag is 1 (Yes in step S544), the receiver 200 selects, as the AR image, the candidate corresponding to the captured display image Pa from among the plurality of AR image candidates acquired in step S541 (step S545). That is, the receiver 200 extracts a feature amount contained in the captured display image Pa and selects the candidate associated with the extracted feature amount as the AR image.
The receiver 200 then displays the selected candidate AR image superimposed on the captured display image Pa (step S546).
On the other hand, when determining that the AR display flag is 0 (No in step S544), the receiver 200 does not display an AR image.
When determining in step S543 that a light ID has been received (Yes in step S543), the receiver 200 selects, as the AR image, the candidate associated with the light ID from among the plurality of AR image candidates acquired in step S541 (step S547). The receiver 200 then displays the selected candidate AR image superimposed on the captured display image Pa (step S546).
In the above example, the AR display flag is set in the receiver 200, but it may instead be set in the server. In that case, in step S544, the receiver 200 asks the server whether the AR display flag is 1 or 0.
In this way, the AR display flag can control whether or not the receiver 200 displays an AR image when it has performed imaging but has not received a light ID.
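The candidate selection of FIG. 140 can be condensed into a single function. In this sketch the candidates are dictionaries with optional "light_id" and "features" keys; that structure, like the function itself, is illustrative.

```python
def select_ar_image(candidates, light_id, features, ar_display_flag):
    if light_id is not None:   # a light ID was received (Yes in step S543)
        return next((c for c in candidates
                     if c.get("light_id") == light_id), None)   # step S547
    if ar_display_flag == 1:   # Yes in step S544
        return next((c for c in candidates
                     if c.get("features") == features), None)   # step S545
    return None                # flag is 0: no AR image is displayed
```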
FIG. 141 is a diagram for explaining the operation of the transmitter 100 in this embodiment.
For example, the transmitter 100 is configured as a projector. The intensity of the light emitted from the projector and reflected by the screen varies with factors such as the ageing of the projector's light source and the distance from the light source to the screen. When the light intensity is low, the light ID transmitted from the transmitter 100 becomes difficult for the receiver 200 to receive.
The transmitter 100 in this embodiment therefore adjusts parameters for causing the light source to emit light, in order to suppress changes in light intensity due to these factors. These parameters are at least one of the value of the current input to the light source to make it emit light and the emission time (more specifically, the emission time per unit time). For example, the greater the current value and the longer the emission time, the greater the light intensity of the light source.
That is, the transmitter 100 adjusts the parameters so that the more the light source has aged, the stronger its light becomes. Specifically, the transmitter 100 includes a timer and adjusts the parameters so that the longer the usage time of the light source measured by the timer, the stronger the light of the light source; that is, the longer the usage time, the higher the current value of the light source or the longer the emission time. Alternatively, the transmitter 100 detects the intensity of the light emitted from the light source and adjusts the parameters so that the detected light intensity does not decrease; that is, the lower the detected light intensity, the more the transmitter 100 strengthens the light.
The transmitter 100 also adjusts the parameters so that the longer the irradiation distance from the light source to the screen, the stronger the light of the light source. Specifically, the transmitter 100 detects the intensity of the light that is emitted and reflected by the screen, and adjusts the parameters so that the lower the detected light intensity, the stronger the light of the light source; that is, the lower the detected intensity, the higher the current value of the light source or the longer the emission time. The parameters are thus adjusted so that the intensity of the reflected light is constant regardless of the irradiation distance. Alternatively, the transmitter 100 may detect the irradiation distance from the light source to the screen with a range sensor and adjust the parameters so that the longer the detected irradiation distance, the stronger the light of the light source.
The transmitter 100 also adjusts the parameters so that the darker the color of the screen, the stronger the light of the light source. Specifically, the transmitter 100 detects the color of the screen by imaging it and adjusts the parameters so that the darker the detected color, the stronger the light of the light source; that is, the darker the detected color, the higher the current value of the light source or the longer the emission time. The parameters are thus adjusted so that the intensity of the reflected light is constant regardless of the color of the screen.
The transmitter 100 also adjusts the parameters so that the stronger the ambient light, the stronger the light of the light source. Specifically, the transmitter 100 detects the difference between the brightness of the screen when the light source is on and emitting light and the brightness of the screen when the light source is off and not emitting light, and adjusts the parameters so that the smaller this brightness difference, the stronger the light of the light source; that is, the smaller the difference, the higher the current value of the light source or the longer the emission time. The parameters are thus adjusted so that the S/N ratio of the light ID is constant regardless of the ambient light. Alternatively, when the transmitter 100 is configured, for example, as an LED display, it may detect the intensity of sunlight and adjust the parameters so that the stronger the sunlight, the stronger the light of the light source.
The parameter adjustment described above may be performed when a user operation takes place. For example, the transmitter 100 may include a calibration button and perform the parameter adjustment when the calibration button is pressed by the user. Alternatively, the transmitter 100 may perform the parameter adjustment periodically.
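One calibration step of the kind described above can be sketched as simple feedback on the two parameters. The gains, limits, and names are illustrative; the text fixes only the direction of the adjustment.

```python
def adjust_drive(current_ma, on_time_ratio, measured, target, step=0.05):
    # When the measured reflected intensity falls below the target (ageing,
    # distance, a dark screen, or strong ambient light), raise the drive
    # current and the per-unit-time emission time; lower them on overshoot.
    if measured < target:
        current_ma *= 1.0 + step
        on_time_ratio = min(1.0, on_time_ratio * (1.0 + step))
    elif measured > target:
        current_ma *= 1.0 - step
        on_time_ratio *= 1.0 - step
    return current_ma, on_time_ratio
```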
FIG. 142 is a diagram for explaining another operation of the transmitter 100 in this embodiment.
For example, the transmitter 100 is configured as a projector and irradiates a screen with light from its light source through a front member. When the projector is a liquid crystal projector, the front member is a liquid crystal panel; when the projector is a DLP (registered trademark) projector, the front member is a DMD (digital mirror device). That is, the front member adjusts the luminance of the image for each pixel. The light source emits light toward the front member, switching the intensity of that light between High and Low, and adjusts the time-averaged brightness by adjusting the High time per unit time.
Here, when the transmittance of the front member is, for example, 100%, the light source is dimmed so that the image projected from the projector onto the screen does not become too bright; that is, the light source shortens the High time per unit time.
In this case, when transmitting a light ID by a luminance change, the light source widens the pulse width of the light ID.
Conversely, when the transmittance of the front member is, for example, 20%, the light source is brightened so that the image projected from the projector onto the screen does not become too dark; that is, the light source lengthens the High time per unit time.
In this case, when transmitting a light ID by a luminance change, the light source narrows the pulse width of the light ID.
In this way, the pulse width of the light ID is widened when the light source is dim and narrowed when the light source is bright, so transmission of the light ID can be prevented from making the intensity of the light from the light source too weak or too bright.
In the above example, the transmitter 100 is a projector, but it may instead be configured as a large LED display. A large LED display includes pixel switches and a common switch: an image is expressed by turning the pixel switches on and off, and a light ID is transmitted by turning the common switch on and off. Functionally, the pixel switches correspond to the front member and the common switch corresponds to the light source. When the average luminance produced by the pixel switches is high, the pulse width of the light ID produced by the common switch may be shortened.
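The relation between the brightness of the light source and the light-ID pulse width can be written as a one-line model. The inverse proportionality is an assumption; the text fixes only the direction of the change.

```python
def light_id_pulse_width_us(base_width_us, brightness, reference_brightness):
    # Darker source -> wider light-ID pulse; brighter source -> narrower pulse.
    return base_width_us * (reference_brightness / brightness)
```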
FIG. 143 is a diagram for explaining another operation of the transmitter 100 in this embodiment. Specifically, FIG. 143 shows the relationship between the dimming degree of the transmitter 100, configured as a spotlight with a dimming function, and the current (specifically, the peak current value) input to the light source of the transmitter 100.
The transmitter 100 accepts the dimming degree designated for its light source and causes the light source to emit light at the designated dimming degree. The dimming degree is the ratio of the average luminance of the light source to its maximum average luminance, where the average luminance is the time-averaged luminance rather than the instantaneous luminance. The dimming degree is adjusted by adjusting the value of the current input to the light source or by adjusting the time during which the luminance of the light source is Low; the time during which the luminance of the light source is Low may be the time during which the light source is off.
When transmitting a signal to be transmitted as a light ID, the transmitter 100 generates an encoded signal by encoding the signal in a predetermined mode. The transmitter 100 then transmits the encoded signal as a light ID (that is, a visible light signal) by changing the luminance of its light source in accordance with the encoded signal.
For example, when the designated dimming degree is at least 0% and at most x3 (%), the transmitter 100 generates the encoded signal by encoding the signal to be transmitted in a PWM mode with a duty ratio of 35%. x3 (%) is, for example, 50%. In this embodiment, the PWM mode with a duty ratio of 35% is also called the first mode, and x3 is also called the first value.
That is, when the designated dimming degree is at least 0% and at most x3 (%), the transmitter 100 adjusts the dimming degree of the light source by means of the peak current value while maintaining the duty ratio of the visible light signal at 35%.
When the designated dimming degree is greater than x3 (%) and at most 100%, the transmitter 100 generates the encoded signal by encoding the signal to be transmitted in a PWM mode with a duty ratio of 65%. In this embodiment, the PWM mode with a duty ratio of 65% is also called the second mode.
That is, when the designated dimming degree is greater than x3 (%) and at most 100%, the transmitter 100 adjusts the dimming degree of the light source by means of the peak current value while maintaining the duty ratio of the visible light signal at 65%.
In this way, the transmitter 100 in this embodiment accepts the dimming degree designated for the light source as the designated dimming degree. When the designated dimming degree is at most the first value, the transmitter 100 transmits a signal encoded in the first mode by a luminance change while causing the light source to emit light at the designated dimming degree. When the designated dimming degree is greater than the first value, the transmitter 100 transmits a signal encoded in the second mode by a luminance change while causing the light source to emit light at the designated dimming degree. Specifically, the duty ratio of a signal encoded in the second mode is greater than the duty ratio of a signal encoded in the first mode.
Because the duty ratio of the second mode is greater than that of the first mode, the rate of change of the peak current with respect to the dimming degree in the second mode can be made smaller than the rate of change of the peak current with respect to the dimming degree in the first mode.
Also, when the designated dimming degree exceeds x3 (%), the mode is switched from the first mode to the second mode, so the peak current can be reduced instantaneously at that point. That is, when the designated dimming degree is x3 (%), the peak current is y3 (mA), but as soon as the designated dimming degree exceeds x3 (%) even slightly, the peak current can be held down to y2 (mA). For example, y3 (mA) is 143 mA and y2 (mA) is 100 mA. As a result, the peak current can be prevented from exceeding y3 (mA) as the dimming degree is increased, and degradation of the light source caused by a large current can be suppressed.
 また、指定される調光度がx4(%)を超えるときには、モードが第2のモードであっても、ピーク電流がy3(mA)よりも大きくなる。しかし、指定される調光度がx4(%)を超える頻度が少ない場合には、光源の劣化を抑えることができる。なお、本実施の形態では、上述のx4を、第2の値ともいう。また、図143に示す例では、x4(%)は100%未満であるが、100%であってもよい。 Also, when the specified dimming degree exceeds x4 (%), the peak current becomes larger than y3 (mA) even if the mode is the second mode. However, when the specified dimming rate is less than x4 (%), deterioration of the light source can be suppressed. In the present embodiment, x4 described above is also referred to as a second value. In the example shown in FIG. 143, x4 (%) is less than 100%, but may be 100%.
 つまり、本実施の形態における送信機100では、指定調光度が第1の値よりも大きく第2の値以下である場合に、第2のモードで符号化された信号を輝度変化により送信するための光源のピーク電流の値は、指定調光度が第1の値である場合に、第1のモードで符号化された信号を輝度変化により送信するための光源のピーク電流の値よりも小さい。 That is, in the transmitter 100 according to the present embodiment, when the designated dimming degree is larger than the first value and equal to or smaller than the second value, the signal encoded in the second mode is transmitted by the luminance change. When the specified dimming level is the first value, the peak current value of the light source is smaller than the peak current value of the light source for transmitting the signal encoded in the first mode by a change in luminance.
 これにより、信号を符号化するモードの切り替えによって、指定調光度が第1の値よりも大きく第2の値以下である場合における光源のピーク電流の値は、指定調光度が第1の値である場合における光源のピーク電流の値よりも小さくなる。したがって、指定調光度を大きくするほど、大きなピーク電流が光源に流れることを抑えることができる。その結果、光源の劣化を抑制することができる。 As a result, the peak current value of the light source when the designated dimming degree is larger than the first value and equal to or smaller than the second value by switching the mode for encoding the signal is the designated dimming degree being the first value. It becomes smaller than the peak current value of the light source in some cases. Therefore, as the designated dimming degree is increased, a large peak current can be suppressed from flowing to the light source. As a result, deterioration of the light source can be suppressed.
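The relationship between the designated dimming degree, the duty ratio, and the peak current can be pictured with a small numerical model. The sketch below is a simplification under an assumed linear model (average light output proportional to duty ratio times peak current, with an assumed 100 mA full-scale constant); the actual curve of FIG. 143, with its constant-current region below x2 and its drop to y2 = 100 mA just above x3, is more involved. With x3 assumed to be 50%, the model happens to reproduce the stated y3 of about 143 mA at the switching point.

```python
# A minimal sketch of the mode selection around x3 in FIG. 143.
# Assumption: average light output ~ duty * peak current, with 100 mA
# giving full output at duty 1.0. The patent's real curve is more involved.

DUTY_MODE1 = 0.35  # first mode
DUTY_MODE2 = 0.65  # second mode
X3 = 50.0          # assumed mode-switching dimming degree (%)

def peak_current_ma(dimming_pct: float, full_scale_ma: float = 100.0) -> float:
    """Peak current so that duty * peak reproduces the dimming degree."""
    duty = DUTY_MODE1 if dimming_pct <= X3 else DUTY_MODE2
    return dimming_pct / 100.0 * full_scale_ma / duty

print(round(peak_current_ma(50.0), 1))  # 142.9 -- close to y3 = 143 mA
print(round(peak_current_ma(50.1), 1))  # 77.1  -- the current drops at the switch
```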
Furthermore, when the designated dimming degree is greater than or equal to x1 (%) and less than x2 (%), the transmitter 100 according to the present embodiment transmits the signal encoded in the first mode by a change in luminance while causing the light source to emit light at the designated dimming degree, and keeps the value of the peak current constant against changes in the designated dimming degree. Here, x2 (%) is smaller than x3 (%). In the present embodiment, x2 is also referred to as a third value.

That is, when the designated dimming degree is smaller than x2 (%), the transmitter 100 lengthens the time for which the light source is turned off as the designated dimming degree decreases, thereby causing the light source to emit light at that decreasing dimming degree while keeping the value of the peak current constant. Specifically, the transmitter 100 lengthens the period at which each of the plurality of encoded signals is transmitted while maintaining the duty ratio of the encoded signal at 35%. This lengthens the time for which the light source is off, that is, the extinguishing period. As a result, the dimming degree can be reduced while the peak current is kept constant. Moreover, because the peak current remains constant even when the designated dimming degree is small, the visible light signal (that is, the light ID) transmitted by the change in luminance remains easy for the receiver 200 to receive.

Here, the transmitter 100 determines the time for which the light source is turned off so that one period, that is, the sum of the time for transmitting the encoded signal by the change in luminance and the time for which the light source is off, does not exceed 10 milliseconds. If the off time were too long and one period exceeded 10 milliseconds, the luminance change of the light source used to transmit the encoded signal could be perceived by the human eye as flicker. In the present embodiment, since the off time is determined so that one period does not exceed 10 milliseconds, such flicker is prevented from being perceived.
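How the off time might be chosen below x2 can be sketched as follows; the 3 ms packet length and the 25% brightness baseline are illustrative assumptions, while the 10 ms cap on one period comes from the text.

```python
# A minimal sketch of dimming below x2 by lengthening the OFF period.
# PACKET_MS and BRIGHTNESS_AT_X2 are assumptions; the 10 ms cap is stated.

PACKET_MS = 3.0          # assumed time to transmit one encoded signal
MAX_CYCLE_MS = 10.0      # one period must not exceed 10 ms (flicker limit)
BRIGHTNESS_AT_X2 = 25.0  # assumed dimming degree (%) with no OFF period

def off_time_ms(target_pct: float) -> float:
    """OFF time per cycle so the average output matches the target:
    target = BRIGHTNESS_AT_X2 * PACKET_MS / (PACKET_MS + off)."""
    if target_pct >= BRIGHTNESS_AT_X2:
        return 0.0
    off = PACKET_MS * (BRIGHTNESS_AT_X2 / target_pct - 1.0)
    return min(off, MAX_CYCLE_MS - PACKET_MS)  # keep the cycle within 10 ms

print(off_time_ms(12.5))  # 3.0: halving the brightness doubles the cycle
print(off_time_ms(5.0))   # 7.0: capped so the 10 ms limit is respected
```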
Furthermore, even when the designated dimming degree is smaller than x1 (%), the transmitter 100 transmits the signal encoded in the first mode by a change in luminance while causing the light source to emit light at the designated dimming degree. In this case, the transmitter 100 reduces the value of the peak current as the designated dimming degree decreases, thereby causing the light source to emit light at that decreasing dimming degree. Here, x1 (%) is smaller than x2 (%). In the present embodiment, x1 is also referred to as a fourth value.

Thus, even at a very small designated dimming degree, the light source can be made to emit light appropriately at that dimming degree.

In the example shown in FIG. 143, the maximum peak current in the first mode (that is, y3 (mA)) is smaller than the maximum peak current in the second mode (that is, y4 (mA)), but the two may be the same. That is, the transmitter 100 may encode the transmission target signal in the first mode up to a designated dimming degree of x3a (%), which is larger than x3 (%). When the designated dimming degree is x3a (%), the transmitter 100 causes the light source to emit light at the same peak current as the maximum peak current in the second mode (that is, y4 (mA)). In this case, x3a is the first value. Note that the maximum peak current in the second mode is the peak current when the designated dimming degree is at its maximum value, that is, 100%.

In other words, in the present embodiment, the peak current of the light source when the designated dimming degree is the first value may be the same as the peak current when the designated dimming degree is at its maximum value. In this case, the range of dimming degrees over which the light source emits light at a peak current of y3 (mA) or more is widened, making it easier for the receiver 200 to receive the light ID over a wide range of dimming degrees. In other words, a large peak current can flow into the light source even in the first mode, so the signal transmitted by the change in luminance of the light source is easier for the receiver to receive. In this case, however, the period during which a large peak current flows becomes longer, so the light source deteriorates more easily.
FIG. 144 is a diagram showing a comparative example for explaining the ease of receiving the light ID in the present embodiment.

In the present embodiment, as shown in FIG. 143, the first mode is used when the dimming degree is small, and the second mode is used when the dimming degree is large. The first mode is a mode in which the peak current increases steeply even for a small increase in the dimming degree, and the second mode is a mode in which the increase in the peak current is kept small even for a large increase in the dimming degree. The second mode therefore prevents a large peak current from flowing into the light source, suppressing its deterioration, while the first mode lets a large peak current flow into the light source even at a small dimming degree, allowing the receiver 200 to receive the light ID easily.

On the other hand, if the second mode were also used at small dimming degrees, as shown in FIG. 144, the peak current would be small when the dimming degree is small, making it difficult for the receiver 200 to receive the light ID.

Therefore, the transmitter 100 according to the present embodiment achieves both suppression of the deterioration of the light source and ease of reception of the light ID.
Further, when the value of the peak current of the light source exceeds a fifth value, the transmitter 100 may stop transmitting the signal by the change in luminance of the light source. The fifth value may be, for example, y3 (mA).

This further suppresses the deterioration of the light source.

Further, as in the example shown in FIG. 141, the transmitter 100 may measure the usage time of the light source. When the usage time is equal to or longer than a predetermined time, the transmitter 100 may transmit the signal by the change in luminance using a parameter value that causes the light source to emit light at a dimming degree larger than the designated dimming degree. In this case, the parameter may be the value of the peak current or the time for which the light source is turned off. This prevents the light ID from becoming difficult for the receiver 200 to receive as the light source deteriorates over time.

Alternatively, the transmitter 100 may measure the usage time of the light source and, when the usage time is equal to or longer than a predetermined time, use a larger pulse width for the current of the light source than when the usage time is shorter than the predetermined time. As above, this prevents the light ID from becoming difficult to receive due to the deterioration of the light source.
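Such aging compensation could look like the following sketch; the threshold hours and the boost factor are invented placeholders, since the text only says "a predetermined time" and "a dimming degree larger than the designated dimming degree".

```python
# A minimal sketch of the aging compensation described above. The
# threshold and factor are invented placeholders, not values from the text.

AGING_THRESHOLD_H = 20000.0  # assumed service-time threshold (hours)
BOOST_FACTOR = 1.1           # assumed compensation factor

def effective_dimming(designated_pct: float, usage_hours: float) -> float:
    """Drive an aged light source as if a larger dimming degree were set."""
    if usage_hours < AGING_THRESHOLD_H:
        return designated_pct
    return min(100.0, designated_pct * BOOST_FACTOR)

print(effective_dimming(60.0, 25000.0))  # 66.0: aged source driven brighter
```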
In the above embodiment, the transmitter 100 switches between the first mode and the second mode according to the designated dimming degree, but the mode may instead be switched in response to a user operation. That is, when a switch is operated by the user, the transmitter 100 switches from the first mode to the second mode or, conversely, from the second mode to the first mode. The transmitter 100 may also notify the user when the mode is switched, for example by sounding a tone, blinking the light source at a period visible to a person, or lighting a notification LED. Furthermore, the transmitter 100 may notify the user not only of a mode switch but also of a point at which the relationship between the peak current and the dimming degree changes, for example the point at which the dimming degree shown in FIG. 143 changes from x1 (%) or from x2 (%).
FIG. 145A is a flowchart showing the operation of the transmitter 100 in the present embodiment.

First, the transmitter 100 accepts the dimming degree designated for the light source as the designated dimming degree (step S551). Next, the transmitter 100 transmits a signal by a change in luminance of the light source (step S552). Specifically, when the designated dimming degree is equal to or less than the first value, the transmitter 100 transmits the signal encoded in the first mode by the change in luminance while causing the light source to emit light at the designated dimming degree. When the designated dimming degree is greater than the first value, the transmitter 100 transmits the signal encoded in the second mode by the change in luminance while causing the light source to emit light at the designated dimming degree. Here, when the designated dimming degree is greater than the first value and equal to or less than the second value, the peak current of the light source for transmitting the signal encoded in the second mode by the change in luminance is smaller than the peak current of the light source for transmitting the signal encoded in the first mode by the change in luminance when the designated dimming degree is the first value.

FIG. 145B is a block diagram showing the configuration of the transmitter 100 in the present embodiment.

The transmitter 100 includes a reception unit 551 and a transmission unit 552. The reception unit 551 accepts the dimming degree designated for the light source as the designated dimming degree. The transmission unit 552 transmits a signal by a change in luminance of the light source. Specifically, when the designated dimming degree is equal to or less than the first value, the transmission unit 552 transmits the signal encoded in the first mode by the change in luminance while causing the light source to emit light at the designated dimming degree. When the designated dimming degree is greater than the first value, the transmission unit 552 transmits the signal encoded in the second mode by the change in luminance while causing the light source to emit light at the designated dimming degree. Here, when the designated dimming degree is greater than the first value and equal to or less than the second value, the peak current of the light source for transmitting the signal encoded in the second mode by the change in luminance is smaller than the peak current of the light source for transmitting the signal encoded in the first mode by the change in luminance when the designated dimming degree is the first value.

As a result, as shown in FIG. 143, switching the mode used to encode the signal makes the peak current of the light source when the designated dimming degree is greater than the first value and equal to or less than the second value smaller than the peak current when the designated dimming degree is the first value. Therefore, as the designated dimming degree is increased, a large peak current is prevented from flowing into the light source, and deterioration of the light source can be suppressed.
FIG. 146 is a diagram showing another example in which the receiver 200 according to the present embodiment displays an AR image.

By imaging a subject with its image sensor, the receiver 200 acquires the captured display image Pk, which is the above-described normal captured image, and a decoding image, which is the above-described visible light communication image or bright line image.

Specifically, the image sensor of the receiver 200 images the transmitter 100, which is configured as signage, and a person 21 standing next to the transmitter 100. The transmitter 100 is the transmitter of each of the above embodiments and includes one or more light emitting elements (for example, LEDs) and a translucent plate 144 that transmits light, like frosted glass. The one or more light emitting elements emit light inside the transmitter 100, and their light passes through the translucent plate 144 and is emitted to the outside, so that the translucent plate 144 appears to glow brightly. The transmitter 100 changes in luminance by blinking the one or more light emitting elements and transmits a light ID (light identification information) by that change in luminance. This light ID is the above-described visible light signal.

Here, the translucent plate 144 carries the message "Hold your smartphone over here." The user of the receiver 200 therefore has the person 21 stand next to the transmitter 100 and instructs the person 21 to rest an arm on top of the transmitter 100. The user then points the camera (that is, the image sensor) of the receiver 200 at the person 21 and the transmitter 100 and captures an image. The receiver 200 images the transmitter 100 and the person 21 with the normal exposure time, thereby acquiring a captured display image Pk in which they appear. Furthermore, the receiver 200 images the transmitter 100 and the person 21 with a communication exposure time shorter than the normal exposure time, thereby acquiring a decoding image.

The receiver 200 acquires the light ID by decoding the decoding image; that is, the receiver 200 receives the light ID from the transmitter 100. The receiver 200 transmits the light ID to a server and acquires from the server the AR image P44 and the recognition information corresponding to that light ID. The receiver 200 then recognizes the region of the captured display image Pk indicated by the recognition information as a target region. For example, the receiver 200 recognizes the region in which the signage serving as the transmitter 100 appears as the target region.

The receiver 200 then superimposes the AR image P44 on the captured display image Pk so that the target region is covered by the AR image P44, and displays the captured display image Pk on the display 201. For example, the receiver 200 acquires an AR image P44 showing a soccer player. In this case, since the AR image P44 is superimposed so as to cover the target region of the captured display image Pk, the captured display image Pk can be displayed as if the soccer player were actually standing next to the person 21. As a result, the person 21 can appear in a photograph together with the soccer player even though the soccer player is not actually there; more specifically, the person 21 can be photographed with an arm resting on the soccer player's shoulder.
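The receiver-side sequence just described (normal exposure, short exposure, decoding, server lookup, overlay, display) can be summarized in a runnable sketch; every interface here is a stand-in, since the text does not specify camera, decoder, or server APIs, and only the step ordering follows the text.

```python
# A runnable sketch of the receiver-side flow. Every function is a
# stand-in; only the ordering of the steps follows the description above.

def capture(exposure: str) -> str:
    # Stand-in image sensor: normal exposure yields the captured display
    # image Pk, short exposure yields the bright line (decoding) image.
    return "Pk" if exposure == "normal" else "bright_lines"

def decode_light_id(bright_line_image: str) -> int:
    return 0x1234  # stand-in for decoding the visible light signal

def server_lookup(light_id: int):
    # Stand-in server: returns the AR image and the recognition info.
    return "AR_image_P44", {"target_region": "signage"}

def show_ar_frame() -> str:
    pk = capture("normal")
    light_id = decode_light_id(capture("short"))
    ar_image, recognition = server_lookup(light_id)
    # Overlay the AR image so that it covers the recognized target region.
    return f"display({pk} with {ar_image} over {recognition['target_region']})"

print(show_ar_frame())
```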
(Embodiment 8)

In this embodiment, a transmission method for transmitting a light ID as a visible light signal will be described. Note that the transmitter and the receiver in this embodiment may have the same functions and configurations as the transmitter (or transmission device) and the receiver (or reception device) in each of the above embodiments.
FIG. 147 is a diagram for explaining the operation of the transmitter 100 in the present embodiment. Specifically, FIG. 147 shows the relationship between the dimming degree of the transmitter 100, configured as a spotlight with a dimming function, and the current (specifically, the value of the peak current) input to the light source of the transmitter 100.

When the designated dimming degree is 0% or more and x14 (%) or less, the transmitter 100 according to the present embodiment generates an encoded signal by encoding the transmission target signal in the PWM mode with a duty ratio of 35%. That is, when the designated dimming degree changes from 0% to x14 (%), the transmitter 100 increases the value of the peak current while maintaining the duty ratio of the visible light signal at 35%, thereby causing the light source to emit light at the designated dimming degree. As in Embodiment 7, the PWM mode with a duty ratio of 35% is also referred to as the first mode, and x14 is also referred to as a first value. For example, x14 (%) is a value within the range of 50 to 60%.

When the designated dimming degree is x13 (%) or more and 100% or less, the transmitter 100 generates an encoded signal by encoding the transmission target signal in the PWM mode with a duty ratio of 65%. That is, when the designated dimming degree changes from 100% to x13 (%), the transmitter 100 keeps the value of the peak current down while maintaining the duty ratio of the visible light signal at 65%, thereby causing the light source to emit light at the designated dimming degree. As in Embodiment 7, the PWM mode with a duty ratio of 65% is also referred to as the second mode, and x13 is also referred to as a second value. Here, x13 (%) is smaller than x14 (%) and is, for example, a value within the range of 40 to 50%.

Thus, in the present embodiment, when the designated dimming degree is increasing, the PWM mode is switched from the PWM mode with a duty ratio of 35% to the PWM mode with a duty ratio of 65% at the dimming degree x14 (%). On the other hand, when the designated dimming degree is decreasing, the PWM mode is switched from the PWM mode with a duty ratio of 65% to the PWM mode with a duty ratio of 35% at the dimming degree x13 (%), which is smaller than x14 (%). That is, in the present embodiment, the dimming degree at which the PWM mode is switched differs between when the designated dimming degree is increasing and when it is decreasing. Hereinafter, the dimming degree at which the PWM mode is switched is referred to as a switching point.

Therefore, in the present embodiment, frequent switching of the PWM mode can be suppressed. In the example shown in FIG. 143 of Embodiment 7, the switching point of the PWM mode is 50% and is the same whether the designated dimming degree is increasing or decreasing. As a result, in the example of FIG. 143, if the designated dimming degree repeatedly rises and falls around 50%, the PWM mode is switched frequently between the PWM mode with a duty ratio of 35% and the PWM mode with a duty ratio of 65%. In the present embodiment, however, the switching point differs between when the designated dimming degree is increasing and when it is decreasing, so such frequent switching of the PWM mode can be suppressed.
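The behavior just described is a standard two-threshold (hysteretic) scheme. A minimal sketch, assuming x13 = 45% and x14 = 55% (values within the ranges the text gives):

```python
# A minimal sketch of the hysteretic mode switching of FIG. 147.
# X13 and X14 are assumptions within the stated 40-50% and 50-60% ranges.

X13 = 45.0  # switch-down point (%)
X14 = 55.0  # switch-up point (%)

class DutySelector:
    def __init__(self) -> None:
        self.duty = 0.35  # start in the first mode

    def update(self, dimming_pct: float) -> float:
        if self.duty == 0.35 and dimming_pct > X14:
            self.duty = 0.65  # first mode -> second mode at x14
        elif self.duty == 0.65 and dimming_pct < X13:
            self.duty = 0.35  # second mode -> first mode at x13
        return self.duty

sel = DutySelector()
for d in (40, 52, 58, 52, 48, 44):  # oscillating around 50% ...
    print(d, sel.update(d))         # ... the mode changes only at 58 and 44
```

Because the two thresholds do not coincide, an input that dithers around 50% leaves the mode unchanged, which is exactly the chattering suppression described above.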
Also, in the present embodiment, as in the example shown in FIG. 143 of Embodiment 7, the PWM mode with the small duty ratio is used when the designated dimming degree is small, and conversely, the PWM mode with the large duty ratio is used when the designated dimming degree is large.

Therefore, when the designated dimming degree is large, the PWM mode with the large duty ratio is used, so the rate of change of the peak current with respect to the dimming degree can be kept small, and the light source can be made to emit light at a large dimming degree with a small peak current. For example, in a PWM mode with a duty ratio as small as 35%, the light source cannot be made to emit light at a dimming degree of 100% unless the peak current is 250 mA. In the present embodiment, however, a PWM mode with a large duty ratio, such as 65%, is used for large dimming degrees, so the light source can be made to emit light at a dimming degree of 100% with a smaller peak current of, for example, 154 mA. That is, an overcurrent is prevented from flowing into the light source and shortening its life.

Also, when the designated dimming degree is small, the PWM mode with the small duty ratio is used, so the rate of change of the peak current with respect to the dimming degree can be made large. As a result, a visible light signal can be transmitted with a large peak current while the light source emits light at a small dimming degree. The larger the input current, the brighter the light source emits. Therefore, when the visible light signal is transmitted with a large peak current, it is easier for the receiver 200 to receive. In other words, the range of dimming degrees over which a visible light signal receivable by the receiver 200 can be transmitted is extended down to smaller dimming degrees. For example, as shown in FIG. 147, the receiver 200 can receive a visible light signal transmitted with a peak current of Ia (mA) or more. In this case, in a PWM mode with a large duty ratio such as 65%, the range of dimming degrees over which a receivable visible light signal can be transmitted is x12 (%) or more, whereas in a PWM mode with a small duty ratio such as 35%, that range can be extended to x11 (%) or more, where x11 is smaller than x12.
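The widening of the receivable range from x12 down to x11 follows directly once a reception threshold Ia is placed on the peak current. A minimal sketch, under an assumed linear duty-times-current model with invented constants:

```python
# A minimal sketch of the receivable ranges x11 and x12. The linear
# duty * peak model and both constants are assumptions, not stated values.

K_MA = 100.0  # assumed current for 100% output at duty 1.0
IA_MA = 60.0  # assumed minimum receivable peak current Ia

def min_receivable_dimming(duty: float) -> float:
    # peak = dimming/100 * K_MA / duty >= IA_MA  =>  dimming >= 100*IA*duty/K
    return 100.0 * IA_MA * duty / K_MA

print(min_receivable_dimming(0.35))  # 21.0 -- x11 at duty 35%
print(min_receivable_dimming(0.65))  # 39.0 -- x12 at duty 65%, so x11 < x12
```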
Thus, by switching the PWM mode, the life of the light source can be lengthened and a visible light signal can be transmitted over a wide range of dimming degrees.

FIG. 148A is a flowchart showing a transmission method according to the present embodiment.

The transmission method in the present embodiment is a method for transmitting a signal by a change in luminance of a light source, and includes a reception step S561 and a transmission step S562. In the reception step S561, the transmitter 100 accepts the dimming degree designated for the light source as the designated dimming degree. In the transmission step S562, the transmitter 100 transmits the signal encoded in the first mode or the second mode by the change in luminance while causing the light source to emit light at the designated dimming degree. Here, the duty ratio of the signal encoded in the second mode is larger than the duty ratio of the signal encoded in the first mode. In the transmission step S562, when the designated dimming degree is changed from a small value to a large value, the transmitter 100 switches the mode used to encode the signal from the first mode to the second mode when the designated dimming degree reaches a first value. Conversely, when the designated dimming degree is changed from a large value to a small value, the transmitter 100 switches the mode used to encode the signal from the second mode to the first mode when the designated dimming degree reaches a second value. Here, the second value is smaller than the first value.

For example, the first mode and the second mode are the PWM mode with a duty ratio of 35% and the PWM mode with a duty ratio of 65% shown in FIG. 147, respectively, and the first value and the second value are x14 (%) and x13 (%) shown in FIG. 147, respectively.
As a result, the designated dimming degree at which switching between the first mode and the second mode occurs (that is, the switching point) differs between when the designated dimming degree is increasing and when it is decreasing. Frequent switching between these modes, that is, so-called chattering, can therefore be suppressed, and the operation of the transmitter 100 that transmits the signal is stabilized. Moreover, the duty ratio of the signal encoded in the second mode is larger than the duty ratio of the signal encoded in the first mode. Therefore, as in the transmission method shown in FIG. 143, the larger the designated dimming degree, the more a large peak current is prevented from flowing into the light source. As a result, deterioration of the light source can be suppressed, and since the deterioration is suppressed, communication between various devices can be carried out over the long term. In addition, when the designated dimming degree is small, the first mode, with its small duty ratio, is used; the peak current can therefore be made large, and a signal that is easy for the receiver 200 to receive can be transmitted as the visible light signal.

In the transmission step S562, when switching from the first mode to the second mode, the transmitter 100 changes the peak current of the light source for transmitting the encoded signal by the change in luminance from a first current value to a second current value smaller than the first current value. Furthermore, when switching from the second mode to the first mode, the transmitter 100 changes the peak current from a third current value to a fourth current value larger than the third current value. Here, the first current value is larger than the fourth current value, and the second current value is larger than the third current value.

For example, the first current value, the second current value, the third current value, and the fourth current value are the current value Ie, the current value Ic, the current value Ib, and the current value Id shown in FIG. 147, respectively.

This makes it possible to switch appropriately between the first mode and the second mode.

FIG. 148B is a block diagram showing the configuration of the transmitter 100 in the present embodiment.

The transmitter 100 in the present embodiment is a transmitter that transmits a signal by a change in luminance of a light source, and includes a reception unit 561 and a transmission unit 562. The reception unit 561 accepts the dimming degree designated for the light source as the designated dimming degree. The transmission unit 562 transmits the signal encoded in the first mode or the second mode by the change in luminance while causing the light source to emit light at the designated dimming degree. Here, the duty ratio of the signal encoded in the second mode is larger than the duty ratio of the signal encoded in the first mode. When the designated dimming degree is changed from a small value to a large value, the transmission unit 562 switches the mode used to encode the signal from the first mode to the second mode when the designated dimming degree reaches the first value. Conversely, when the designated dimming degree is changed from a large value to a small value, the transmission unit 562 switches the mode used to encode the signal from the second mode to the first mode when the designated dimming degree reaches the second value. Here, the second value is smaller than the first value.

Such a transmitter 100 implements the transmission method of the flowchart shown in FIG. 148A.
FIG. 149 is a diagram showing an example of the detailed configuration of the visible light signal in the present embodiment.

This visible light signal is a PWM-mode signal.

A packet of the visible light signal consists of an L data portion, a preamble, and an R data portion. The L data portion and the R data portion each correspond to a payload.

The preamble alternately takes High and Low luminance values along the time axis: it is High for a time length C0, Low for the next time length C1, High for the next time length C2, and Low for the next time length C3. The time lengths C0 and C3 are, for example, 100 μs, and the time lengths C1 and C2 are, for example, 90 μs, that is, 10 μs shorter than C0 and C3.

The L data portion alternately takes High and Low luminance values along the time axis and is placed immediately before the preamble: it is High for a time length D'0, Low for the next time length D'1, High for the next time length D'2, and Low for the next time length D'3. The time lengths D'0 to D'3 are determined according to formulas that depend on the signal to be transmitted: D'0 = W0 + W1 × (3 − y0), D'1 = W0 + W1 × (7 − y1), D'2 = W0 + W1 × (3 − y2), and D'3 = W0 + W1 × (7 − y3). Here, the constant W0 is, for example, 110 μs, and the constant W1 is, for example, 30 μs. The variables y0 and y2 are integers from 0 to 3 represented by 2 bits, and the variables y1 and y3 are integers from 0 to 7 represented by 3 bits. The variables y0 to y3 constitute the signal to be transmitted. In FIGS. 149 to 152, "*" is used as the multiplication symbol.

The R data portion alternately takes High and Low luminance values along the time axis and is placed immediately after the preamble: it is High for a time length D0, Low for the next time length D1, High for the next time length D2, and Low for the next time length D3. The time lengths D0 to D3 are determined according to formulas that depend on the signal to be transmitted: D0 = W0 + W1 × y0, D1 = W0 + W1 × y1, D2 = W0 + W1 × y2, and D3 = W0 + W1 × y3.

Here, the L data portion and the R data portion are complementary with respect to brightness: if the L data portion is bright, the R data portion is dark, and conversely, if the L data portion is dark, the R data portion is bright. That is, the sum of the time length of the L data portion and the time length of the R data portion is constant regardless of the signal to be transmitted. In other words, the time-averaged brightness of the visible light signal transmitted from the light source can be kept constant regardless of the signal to be transmitted.
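The timing formulas above can be checked numerically. The sketch below uses the stated constants W0 = 110 μs and W1 = 30 μs and shows that the combined duration of the L data portion and the R data portion is the same for every signal:

```python
# Checking the data-part formulas numerically; times are in μs,
# with W0 = 110 and W1 = 30 as stated in the text.

W0, W1 = 110, 30

def r_data(y):  # [D0, D1, D2, D3]: High, Low, High, Low durations
    return [W0 + W1 * y[i] for i in range(4)]

def l_data(y):  # [D'0, D'1, D'2, D'3]: complementary to the R data part
    caps = (3, 7, 3, 7)
    return [W0 + W1 * (caps[i] - y[i]) for i in range(4)]

y = (1, 5, 2, 0)  # example symbols: y0, y2 in 0..3 and y1, y3 in 0..7
print(r_data(y), sum(r_data(y)))  # [140, 260, 170, 110] 680
print(l_data(y), sum(l_data(y)))  # [170, 170, 140, 320] 800
# The two sums always total 8*W0 + 20*W1 = 1480, independent of the signal,
# which is what keeps the time-averaged brightness constant.
```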
The duty ratio of the PWM mode can be changed by changing the ratio of 3 to 7 in D'0 = W0 + W1 × (3 − y0), D'1 = W0 + W1 × (7 − y1), D'2 = W0 + W1 × (3 − y2), and D'3 = W0 + W1 × (7 − y3). This ratio corresponds to the ratio between the maximum value of the variables y0 and y2 and the maximum value of the variables y1 and y3. For example, when the ratio is 3:7, a PWM mode with a small duty ratio is selected; conversely, when the ratio is 7:3, a PWM mode with a large duty ratio is selected. Therefore, by adjusting this ratio, the PWM mode can be switched between the PWM mode with a duty ratio of 35% and the PWM mode with a duty ratio of 65% shown in FIGS. 143 and 147. The preamble may be used to notify the receiver 200 of which PWM mode has been selected: for example, the transmitter 100 notifies the receiver 200 of the selected PWM mode by including in the packet a preamble whose pattern is associated with that mode. The preamble pattern is varied through the time lengths C0, C1, C2, and C3.

However, in the visible light signal shown in FIG. 149, a packet contains two data portions, so transmitting the packet takes time. For example, when the transmitter 100 is a DLP projector, it projects red, green, and blue images in a time-division manner. The transmitter 100 desirably transmits the visible light signal while projecting the red image, because a visible light signal transmitted at that time has a red wavelength and is therefore easy for the receiver 200 to receive. The period during which the red image is continuously projected is, for example, 1.5 ms; this period is hereinafter referred to as the red projection period. It is difficult to transmit a packet consisting of the L data portion, preamble, and R data portion described above within such a short red projection period.

This suggests a packet that has only the R data portion of the two data portions.
FIG. 150 is a diagram showing another example of the detailed configuration of the visible light signal in the present embodiment.

Unlike the example shown in FIG. 149, the packet of the visible light signal shown in FIG. 150 does not include an L data portion. Instead, it includes invalid data and an average luminance adjustment portion.

The invalid data alternately takes High and Low luminance values along the time axis: it is High for a time length A0 and Low for the next time length A1. The time length A0 is, for example, 100 μs, and the time length A1 is given by, for example, A1 = W0 − W1. Such invalid data indicates that the packet does not include an L data portion.

The average luminance adjustment portion alternately takes High and Low luminance values along the time axis: it is High for a time length B0 and Low for the next time length B1. The time length B0 is given by, for example, B0 = 100 + W1 × ((3 − y0) + (3 − y2)), and the time length B1 by, for example, B1 = W1 × ((7 − y1) + (7 − y3)).

Such an average luminance adjustment portion keeps the average luminance of the packet constant regardless of the signals y0 to y3 to be transmitted. That is, the sum of the time lengths at the High luminance value in the packet (that is, the total ON time) can be made A0 + C0 + C2 + D0 + D2 + B0 = 790, and the sum of the time lengths at the Low luminance value (that is, the total OFF time) can be made A1 + C1 + C3 + D1 + D3 + B1 = 910.
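The constant ON/OFF totals stated above (790 μs and 910 μs) can be verified by exhausting all signal values; the sketch below encodes the formulas exactly as given:

```python
# Verifying the constant ON/OFF totals of the FIG. 150 packet over every
# possible signal; all times in μs, W0 = 110, W1 = 30.

W0, W1 = 110, 30
C = [100, 90, 90, 100]  # preamble: High, Low, High, Low

def on_off_totals(y0, y1, y2, y3):
    A0, A1 = 100, W0 - W1                        # invalid data
    D = [W0 + W1 * y for y in (y0, y1, y2, y3)]  # R data portion
    B0 = 100 + W1 * ((3 - y0) + (3 - y2))        # avg-luminance adj., High
    B1 = W1 * ((7 - y1) + (7 - y3))              # avg-luminance adj., Low
    on = A0 + C[0] + C[2] + D[0] + D[2] + B0
    off = A1 + C[1] + C[3] + D[1] + D[3] + B1
    return on, off

print({on_off_totals(y0, y1, y2, y3)
       for y0 in range(4) for y1 in range(8)
       for y2 in range(4) for y3 in range(8)})  # {(790, 910)}
```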
However, even with this configuration of the visible light signal, the effective time length E1, which is part of the total time length E0 of the packet, cannot be shortened. The effective time length E1 is the time from when the first High luminance value appears in the packet until the last High luminance value ends, and it is the time the receiver 200 needs in order to demodulate or decode the packet of the visible light signal. Specifically, E1 = A0 + A1 + C0 + C1 + C2 + C3 + D0 + D1 + D2 + D3 + B0, and the total time length is E0 = E1 + B1.

That is, even with the configuration of the visible light signal shown in FIG. 150, the effective time length E1 is up to 1700 μs, so it is difficult for the transmitter 100 to transmit one packet continuously for the effective time length E1 within the red projection period described above.

This suggests adjusting not only the time lengths of the High and Low luminance values but also the High luminance value itself, in order to shorten the effective time length E1 while keeping the average luminance of the packet constant regardless of the signal to be transmitted.
FIG. 151 is a diagram showing another example of the detailed configuration of the visible light signal in the present embodiment.

In the packet of the visible light signal shown in FIG. 151, unlike the example shown in FIG. 150, the time length B0 of the High luminance value of the average luminance adjustment portion is fixed at the shortest value of 100 μs regardless of the signal to be transmitted, in order to shorten the effective time length E1. Instead, in the packet shown in FIG. 151, the High luminance value is adjusted according to the variables y0 and y2 included in the signal to be transmitted, that is, according to the time lengths D0 and D2. For example, when the time lengths D0 and D2 are short, the transmitter 100 adjusts the High luminance value to a large value, as shown in (a) of FIG. 151; when the time lengths D0 and D2 are long, the transmitter 100 adjusts the High luminance value to a small value, as shown in (b) of FIG. 151. Specifically, when the time lengths D0 and D2 are each the shortest value W0 (for example, 110 μs), the High luminance value is 100% brightness; when they are each the maximum value W0 + 3W1 (for example, 200 μs), the High luminance value is 77.2% brightness.

In such a packet of the visible light signal, the sum of the time lengths at the High luminance value (that is, the total ON time) can be, for example, A0 + C0 + C2 + D0 + D2 + B0 = 610 to 790, while the sum of the time lengths at the Low luminance value (that is, the total OFF time) can be A1 + C1 + C3 + D1 + D3 + B1 = 910.
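The 100% and 77.2% figures are consistent with scaling the High level so that the product of the total ON time and the High level stays constant. A minimal check, assuming the scaling applies to the whole packet and that 100% corresponds to the shortest ON total of 610 μs:

```python
# Checking the 100% / 77.2% figures: scale the High level so that
# (total ON time) * (High level) is constant. The whole-packet scaling
# and the 610 μs reference are assumptions consistent with the text.

W0, W1 = 110, 30

def high_level(y0: int, y2: int) -> float:
    d0, d2 = W0 + W1 * y0, W0 + W1 * y2
    on_total = 100 + 100 + 90 + d0 + d2 + 100  # A0 + C0 + C2 + D0 + D2 + B0
    return 610.0 / on_total

print(round(high_level(0, 0), 3))  # 1.0   (D0 = D2 = 110 μs)
print(round(high_level(3, 3), 3))  # 0.772 (D0 = D2 = 200 μs), as stated
```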
However, with the visible light signal shown in FIG. 151, the shortest values of the total time length E0 and the effective time length E1 of the packet can be made shorter than in the example shown in FIG. 150, but their maximum values cannot.

This suggests selectively using the L data portion or the R data portion as the data portion included in the packet, depending on the signal to be transmitted, in order to shorten the effective time length E1 while keeping the average luminance of the packet constant regardless of the signal to be transmitted.
FIG. 152 is a diagram showing another example of the detailed configuration of the visible light signal in the present embodiment.

In the visible light signal shown in FIG. 152, unlike the examples shown in FIGS. 149 to 151, a packet containing the L data portion and a packet containing the R data portion are used selectively according to the sum of the variables y0 to y3, which constitute the signal to be transmitted, in order to shorten the effective time length.

That is, when the sum of the variables y0 to y3 is 7 or more, the transmitter 100 generates a packet containing only the L data portion of the two data portions, as shown in (a) of FIG. 152; this packet is hereinafter referred to as an L packet. When the sum of the variables y0 to y3 is 6 or less, the transmitter 100 generates a packet containing only the R data portion of the two data portions, as shown in (b) of FIG. 152; this packet is hereinafter referred to as an R packet.
As shown in (a) of FIG. 152, the L packet includes an average luminance adjustment portion, an L data portion, a preamble, and invalid data.

The average luminance adjustment portion of the L packet takes the Low luminance value for a time length B'0 without taking a High luminance value. The time length B'0 is given by, for example, B'0 = 100 + W1 × (y0 + y1 + y2 + y3 − 7).

The invalid data of the L packet alternately takes High and Low luminance values along the time axis: it is High for a time length A'0 and Low for the next time length A'1. The time length A'0 is given by A'0 = W0 − W1, for example 80 μs, and the time length A'1 is, for example, 150 μs. Such invalid data indicates that the packet containing it does not include an R data portion.

In such an L packet, the total time length E'0 is E'0 = 5W0 + 12W1 + 4b + 230 = 1540 μs regardless of the signal to be transmitted. The effective time length E'1 depends on the signal to be transmitted and lies in the range of 900 to 1290 μs. Also, while the total time length E'0 is a constant 1540 μs, the sum of the time lengths at the High luminance value (that is, the total ON time) varies with the signal to be transmitted in the range of 490 to 670 μs. Therefore, in this L packet as well, as in the example shown in FIG. 151, the transmitter 100 varies the High luminance value in the range of 100% to 73.1% according to the total ON time, that is, according to the time lengths D'0 and D'2.
As in the example shown in FIG. 150, the R packet includes invalid data, a preamble, an R data portion, and an average luminance adjustment portion, as shown in (b) of FIG. 152.
Here, in the R packet shown in (b) of FIG. 152, in order to shorten the effective time length E1, the time length B0 of the High luminance value in the average luminance adjustment portion is fixed at the minimum of 100 μs regardless of the signal to be transmitted. The time length B1 of the Low luminance value in the average luminance adjustment portion is given by, for example, B1 = W1 × (6 − (y0 + y1 + y2 + y3)) so that the total time length E0 is kept constant. Furthermore, in the R packet shown in (b) of FIG. 152 as well, the High luminance value is adjusted according to the variables y0 and y2 included in the signal to be transmitted, that is, according to the time lengths D0 and D2.
In such an R packet, the total time length E0 is E0 = 4W0 + 6W1 + 4b + 260 = 1280 μs regardless of the signal to be transmitted. The effective time length E1 depends on the signal to be transmitted and is in the range of 1100 to 1280 μs. While the total time length E0 is fixed at 1280 μs, the sum of the time lengths during which the luminance value is High (that is, the total ON time) varies from 610 to 790 μs depending on the signal to be transmitted. Therefore, in the R packet as well, the transmitter 100 varies the High luminance value in the range of 80.3% to 62.1% according to the total ON time, that is, according to the time lengths D0 and D2, as in the example shown in FIG. 151.
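The following is a minimal sketch, in Python, of the signal-dependent adjustment durations and the fixed total time lengths defined above. The constants W0, W1, and b are taken from the FIG. 153 example purely as placeholders; the 1540 μs and 1280 μs figures in the text may rely on different constants, so the numerical outputs here are illustrative only.

    W0, W1, b = 110, 15, 100  # example constants from FIG. 153 (microseconds)

    def l_packet_low_adjust(y):
        # B'0 = 100 + W1 * (y0 + y1 + y2 + y3 - 7); valid for L packets (sum >= 7)
        return 100 + W1 * (sum(y) - 7)

    def r_packet_low_adjust(y):
        # B1 = W1 * (6 - (y0 + y1 + y2 + y3)); valid for R packets (sum <= 6)
        return W1 * (6 - sum(y))

    def total_time_length(packet_type):
        # E'0 and E0 are constant per packet type, independent of the signal
        if packet_type == "L":
            return 5 * W0 + 12 * W1 + 4 * b + 230
        return 4 * W0 + 6 * W1 + 4 * b + 260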
Thus, with the visible light signal shown in FIG. 152, the maximum effective time length of a packet can be shortened. Therefore, the transmitter 100 can transmit one packet continuously for its effective time length E1 or E′1 within the above-described red projection period.
Here, in the example shown in FIG. 152, the transmitter 100 generates an L packet when the sum of the variables y0 to y3 is 7 or greater, and generates an R packet when the sum is 6 or less. In other words, since the sum of the variables y0 to y3 is an integer, the transmitter 100 generates an L packet when the sum is greater than 6 and an R packet when the sum is 6 or less. That is, in this example, the threshold for switching the packet type is 6. However, the threshold for switching the packet type is not limited to 6 and may be any value from 3 to 10.
FIG. 153 is a diagram showing the relationship between the sum of the variables y0 to y3 and the total time length and the effective time length. The total time length shown in FIG. 153 is the larger of the total time length E0 of the R packet and the total time length E′0 of the L packet. The effective time length shown in FIG. 153 is the larger of the maximum value of the effective time length E1 of the R packet and the maximum value of the effective time length E′1 of the L packet. In the example shown in FIG. 153, the constants W0, W1, and b are W0 = 110 μs, W1 = 15 μs, and b = 100 μs, respectively.
As shown in FIG. 153, the total time length changes according to the sum of the variables y0 to y3 and is minimized when the sum is about 10. The effective time length also changes according to the sum of the variables y0 to y3, as shown in FIG. 153, and is minimized when the sum is about 3.
Therefore, the threshold for switching the packet type may be set anywhere in the range of 3 to 10, depending on whether the total time length or the effective time length is to be shortened.
FIG. 154A is a flowchart showing the transmission method in the present embodiment.
The transmission method in the present embodiment is a transmission method for transmitting a visible light signal by changing the luminance of a light emitter, and includes a determination step S571 and a transmission step S572. In the determination step S571, the transmitter 100 determines a luminance change pattern by modulating a signal. In the transmission step S572, the transmitter 100 transmits the visible light signal by changing the luminance of red expressed by a light source included in the light emitter, according to the determined pattern. Here, the visible light signal includes data, a preamble, and a payload. In the data, a first luminance value and a second luminance value smaller than the first luminance value appear along the time axis, and the time length for which at least one of the first and second luminance values continues is less than or equal to a first predetermined value. In the preamble, the first and second luminance values appear alternately along the time axis. In the payload, the first and second luminance values appear alternately along the time axis, and the time length for which each of the first and second luminance values continues is greater than the first predetermined value and is determined according to the above signal and a predetermined scheme.
For example, the data, the preamble, and the payload are the invalid data, the preamble, and the L data portion or R data portion shown in (a) and (b) of FIG. 152, respectively. Also, for example, the first predetermined value is 100 μs.
Thus, as shown in (a) and (b) of FIG. 152, the visible light signal includes one payload (that is, the L data portion or the R data portion) whose waveform is determined according to the signal to be modulated, and does not include two payloads. The visible light signal, that is, a packet of the visible light signal, can therefore be shortened. As a result, even if the emission period of the red light expressed by the light source included in the light emitter is short, a packet of the visible light signal can be transmitted within that emission period.
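The following is a minimal sketch, in Python, of how the first predetermined value separates the parts of a packet by run length. Treating every run individually is a simplification assumed here for illustration, since the text only requires that at least one run in the data be at or below the value and that every run in the payload exceed it.

    FIRST_PREDETERMINED_US = 100  # example value given in the text

    def classify_run(run_length_us):
        # Runs no longer than the first predetermined value are characteristic
        # of the data (invalid data); longer runs are characteristic of the payload.
        return "data" if run_length_us <= FIRST_PREDETERMINED_US else "payload"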
In the payload, the luminance values may appear in the order of the first luminance value for a first time length, the second luminance value for a second time length, the first luminance value for a third time length, and the second luminance value for a fourth time length. In this case, in the transmission step S572, when the sum of the first time length and the third time length is smaller than a second predetermined value, the transmitter 100 makes the current flowing through the light source larger than when that sum is larger than the second predetermined value. Here, the second predetermined value is larger than the first predetermined value; it is, for example, a value larger than 220 μs.
Thus, as shown in FIGS. 151 and 152, when the sum of the first time length and the third time length is small, the current value of the light source is increased, and when the sum is large, the current value of the light source is decreased. Therefore, the average luminance of a packet consisting of the data, the preamble, and the payload can be kept constant regardless of the signal.
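The following is a minimal sketch, in Python, of the current-adjustment rule described above. The text states only that the second predetermined value exceeds 220 μs and that a shorter total ON time calls for a larger current; the concrete threshold and the current parameters are placeholder assumptions.

    SECOND_PREDETERMINED_US = 230  # placeholder; the text requires only a value above 220 us

    def drive_current(d0_us, d2_us, high_current_ma, low_current_ma):
        # A short total High (ON) time is compensated with a larger drive
        # current so that the packet's average luminance stays constant.
        if d0_us + d2_us < SECOND_PREDETERMINED_US:
            return high_current_ma
        return low_current_ma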
In the payload, the luminance values may appear in the order of the first luminance value for a first time length D0, the second luminance value for a second time length D1, the first luminance value for a third time length D2, and the second luminance value for a fourth time length D3. In this case, when the sum of the four parameters yk (k = 0, 1, 2, 3) obtained from the signal is less than or equal to a third predetermined value, each of the first to fourth time lengths D0 to D3 is determined according to Dk = W0 + W1 × yk (where W0 and W1 are each integers equal to or greater than 0). For example, as shown in (b) of FIG. 152, the third predetermined value is 3.
Thus, as shown in (b) of FIG. 152, a payload with a short waveform can be generated according to the signal while keeping each of the first to fourth time lengths D0 to D3 at W0 or more.
When the sum of the four parameters yk (k = 0, 1, 2, 3) is less than or equal to the third predetermined value, the data, the preamble, and the payload may be transmitted in the order of data, preamble, payload in the transmission step S572. In the example shown in (b) of FIG. 152, that payload is the R data portion.
Thus, as shown in (b) of FIG. 152, the data (that is, the invalid data) can inform the receiver 200 that receives the packet that the packet of the visible light signal containing that data does not include an L data portion.
When the sum of the four parameters yk (k = 0, 1, 2, 3) is larger than the third predetermined value, each of the first to fourth time lengths D0 to D3 may be determined according to D0 = W0 + W1 × (A − y0), D1 = W0 + W1 × (B − y1), D2 = W0 + W1 × (A − y2), and D3 = W0 + W1 × (B − y3) (where A and B are each integers equal to or greater than 0).
Thus, as shown in (a) of FIG. 152, a payload with a short waveform can be generated according to the signal even when the above sum is large, while keeping each of the first to fourth time lengths D0 to D3 (that is, the first to fourth time lengths D′0 to D′3) at W0 or more.
When the sum of the four parameters yk (k = 0, 1, 2, 3) is larger than the third predetermined value, the data, the preamble, and the payload may be transmitted in the order of payload, preamble, data in the transmission step S572. In the example shown in (a) of FIG. 152, that payload is the L data portion.
Thus, as shown in (a) of FIG. 152, the data (that is, the invalid data) can inform the receiving device that receives the packet that the packet of the visible light signal containing that data does not include an R data portion.
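The following is a minimal sketch, in Python, combining the two determination rules above for the payload time lengths. THIRD_PREDETERMINED = 3 follows the example in the text; A and B are design constants whose concrete values are placeholder assumptions here.

    THIRD_PREDETERMINED = 3
    A, B = 7, 7  # placeholder constants (each an integer of 0 or more)

    def payload_time_lengths(y, W0, W1):
        # y = [y0, y1, y2, y3]; returns (D0, D1, D2, D3) in the units of W0 and W1
        if sum(y) <= THIRD_PREDETERMINED:
            return tuple(W0 + W1 * yk for yk in y)
        offsets = (A, B, A, B)
        return tuple(W0 + W1 * (off - yk) for off, yk in zip(offsets, y))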
The light emitter may have a plurality of light sources including a red light source, a blue light source, and a green light source, and in the transmission step S572, the visible light signal may be transmitted using only the red light source among the plurality of light sources.
Thus, the light emitter can display an image using the red, blue, and green light sources, and can also transmit a visible light signal at a wavelength that the receiver 200 can receive easily.
The light emitter may be, for example, a DLP projector. As described above, the DLP projector may have a plurality of light sources including a red light source, a blue light source, and a green light source, or may have only one light source. That is, the DLP projector may include one light source, a DMD (Digital Micromirror Device), and a color wheel disposed between the light source and the DMD. In this case, among the red, blue, and green light output in time division from the light source to the DMD via the color wheel, the DLP projector transmits a packet of the visible light signal during the period in which the red light is output.
FIG. 154B is a block diagram showing a configuration of the transmitter 100 in the present embodiment.
The transmitter 100 in the present embodiment is a transmitter that transmits a visible light signal by changing the luminance of a light emitter, and includes a determination unit 571 and a transmission unit 572. The determination unit 571 determines a luminance change pattern by modulating a signal. The transmission unit 572 transmits the visible light signal by changing the luminance of red expressed by a light source included in the light emitter, according to the determined pattern. Here, the visible light signal includes data, a preamble, and a payload. In the data, a first luminance value and a second luminance value smaller than the first luminance value appear along the time axis, and the time length for which at least one of the first and second luminance values continues is less than or equal to a first predetermined value. In the preamble, the first and second luminance values appear alternately along the time axis. In the payload, the first and second luminance values appear alternately along the time axis, and the time length for which each of the first and second luminance values continues is greater than the first predetermined value and is determined according to the above signal and a predetermined scheme.
Such a transmitter 100 realizes the transmission method shown in the flowchart of FIG. 154A.
(Embodiment 9)
In this embodiment, a display method, a display device, and the like that realize AR (Augmented Reality) using a light ID will be described, as in Embodiment 4 and other embodiments. Note that the transmitter and the receiver in this embodiment may have the same functions and configurations as the transmitter (or transmission device) and the receiver (or reception device) in each of the above embodiments. The receiver in this embodiment is configured as a display device.
FIG. 155 is a diagram illustrating a configuration of the display system according to Embodiment 9.
This display system 500 performs object recognition and augmented reality (Augmented Reality / Mixed Reality) display using visible light signals.
For example, as shown in FIG. 155, the transmitter 100 is configured as a lighting device and transmits a light ID by changing its luminance while illuminating an AR object 501. Since the AR object 501 is illuminated by the light from the transmitter 100, its luminance changes in the same manner as that of the transmitter 100, and it transmits the light ID.
The receiver 200 images the AR object 501. That is, the receiver 200 images the AR object 501 at each of the normal exposure time and the communication exposure time described above. Thereby, as described above, the receiver 200 acquires a captured display image and a decoding image, which is a visible light communication image or a bright line image.
The receiver 200 acquires the light ID by decoding the decoding image. That is, the receiver 200 receives the light ID from the AR object 501. The receiver 200 transmits the light ID to the server 300 and acquires from the server 300 an AR image P11 and recognition information associated with the light ID. The receiver 200 recognizes, as a target region, a region of the captured display image corresponding to the recognition information. For example, the receiver 200 recognizes the region in which the AR object 501 appears as the target region. The receiver 200 then superimposes the AR image P11 on the target region and displays the captured display image with the AR image P11 superimposed on its display. For example, the AR image P11 is a moving image.
When the display, that is, the playback, of the entire moving image of the AR image P11 is complete, the receiver 200 notifies the server 300 of the completion of playback of the moving image. The server 300 that receives this playback completion notification grants a reward such as points to the receiver 200. When notifying the server 300 of the completion of playback of the moving image, the receiver 200 may notify not only the completion of playback but also the personal information of the user of the receiver 200, and may also notify a wallet ID for storing the reward. Upon receiving these notifications, the server 300 grants points to the receiver 200.
FIG. 156 is a sequence diagram showing the processing operations of the receiver 200 and the server 300.
The receiver 200 acquires a light ID as a visible light signal by imaging the AR object 501 (step S51). The receiver 200 then transmits the light ID to the server 300 (step S52).
When the server 300 acquires the light ID (step S53), it transmits the recognition information and the AR image P11 associated with the light ID to the receiver 200 (step S54).
In accordance with the recognition information, the receiver 200 recognizes, as the target region, a region of the captured display image in which, for example, the AR object 501 appears, and displays the captured display image with the AR image P11 superimposed on the target region. The receiver 200 then starts playback of the moving image that is the AR image P11 (step S56).
Next, the receiver 200 determines whether playback of the moving image has been completed in full (step S57). When it determines that playback has been completed in full (Yes in step S57), the receiver 200 notifies the server 300 of the completion of playback of the moving image (step S58).
When the server 300 receives the playback completion notification from the receiver 200, it grants points to the receiver 200 (step S59).
Here, as shown in FIG. 157, the server 300 may impose stricter conditions for granting points to the receiver 200.
FIG. 157 is a flowchart showing the processing operation of the server 300.
The server 300 first acquires a light ID from the receiver 200 (step S60). Next, the server 300 transmits the recognition information and the AR image P11 associated with the light ID to the receiver 200 (step S61).
The server 300 then determines whether it has received, from the receiver 200, a notification of the completion of playback of the moving image that is the AR image P11 (step S62). When the server 300 determines that it has received the playback completion notification (Yes in step S62), it further determines whether the same AR image P11 has been played back on the receiver 200 in the past (step S63). When it determines that the same AR image P11 has not been played back on the receiver 200 in the past (No in step S63), the server 300 grants points to the receiver 200 (step S66). On the other hand, when it determines that the same AR image P11 has been played back on the receiver 200 in the past (Yes in step S63), the server 300 further determines whether a predetermined period has elapsed since that past playback (step S64). For example, the predetermined period may be one month, three months, one year, or any other period.
Here, when the server 300 determines that the predetermined period has not elapsed (No in step S64), it does not grant points to the receiver 200. On the other hand, when the server 300 determines that the predetermined period has elapsed (Yes in step S64), it further determines whether the current location of the receiver 200 differs from the location where the same AR image P11 was played back in the past (hereinafter referred to as the past playback location) (step S65). When the server 300 determines that the current location of the receiver 200 differs from the past playback location (Yes in step S65), it grants points to the receiver 200 (step S66). On the other hand, when the server 300 determines that the current location of the receiver 200 is the same as the past playback location (No in step S65), it does not grant points to the receiver 200.
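The following is a minimal sketch, in Python, of the point-granting decision in FIG. 157. The record structure and the concrete period are assumptions for illustration; only the three checks (playback completed, period elapsed, different location) come from the text.

    from datetime import datetime, timedelta

    PREDETERMINED_PERIOD = timedelta(days=30)  # placeholder; the text allows any period

    def should_grant_points(history, ar_image_id, now, location):
        # history: list of (ar_image_id, played_at, location) records for this
        # receiver, oldest first; called after a playback-completion notification.
        past = [h for h in history if h[0] == ar_image_id]
        if not past:
            return True  # step S63: never played back before
        _, last_played_at, last_location = past[-1]
        if now - last_played_at <= PREDETERMINED_PERIOD:
            return False  # step S64: predetermined period has not elapsed
        return location != last_location  # step S65: must be a different place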
In this way, since points are granted to the receiver 200 for playing back the entire AR image P11, the willingness of the user of the receiver 200 to play back the entire AR image P11 can be increased. For example, acquiring the AR image P11, which has a large amount of data, from the server 300 requires a high packet fee, and the user may therefore interrupt playback of the AR image P11 partway through. Granting points, however, encourages the entire AR image P11 to be played back. The points may be a discount on the packet fee. Furthermore, points corresponding to the amount of data of the AR image P11 may be granted to the receiver 200.
FIG. 158 is a diagram illustrating an example of communication when the transmitter 100 and the receiver 200 are each mounted on a vehicle.
A vehicle 200n includes the receiver 200 described above, and a plurality of vehicles 100n each include the transmitter 100 described above. The plurality of vehicles 100n are traveling, for example, in front of the vehicle 200n. Furthermore, the vehicle 200n is communicating wirelessly with one of the plurality of vehicles 100n.
Here, in order to determine with which of the plurality of vehicles 100n ahead it is communicating wirelessly, the vehicle 200n requests the communication partner vehicle 100n, by wireless communication, to transmit a visible light signal.
When the communication partner vehicle 100n receives the request from the vehicle 200n, it transmits a visible light signal toward the rear. For example, the communication partner vehicle 100n transmits the visible light signal by blinking its backlight.
The vehicle 200n images the area ahead with an image sensor. Thereby, the vehicle 200n acquires a captured display image and a decoding image, as described above. The captured display image shows the plurality of vehicles 100n traveling in front of the vehicle 200n.
The vehicle 200n identifies the position of the bright line pattern region in the decoding image and, for example, superimposes a marker at the position in the captured display image that is the same as that of the bright line pattern region. The vehicle 200n then displays the captured display image with the superimposed marker on an in-vehicle display. For example, a captured display image is displayed in which the marker is superimposed on the backlight of one of the plurality of vehicles 100n. Thereby, an occupant such as the driver of the vehicle 200n can easily determine which vehicle 100n is the communication partner by looking at the captured display image.
FIG. 159 is a flowchart showing the processing operation of the vehicle 200n.
The vehicle 200n starts wireless communication with a vehicle 100n in its vicinity (step S71). At this time, when a plurality of vehicles appear in the image obtained by imaging the surroundings with the image sensor of the vehicle 200n, an occupant of the vehicle 200n cannot tell which of those vehicles is the wireless communication partner. The vehicle 200n therefore requests the communication partner vehicle 100n, by wireless communication, to transmit a visible light signal (step S72). The communication partner vehicle 100n that accepts this request transmits a visible light signal. The vehicle 200n images its surroundings with the image sensor and, through that imaging, receives the visible light signal transmitted from the communication partner vehicle 100n (step S73). That is, the vehicle 200n acquires the captured display image and the decoding image as described above. The vehicle 200n then identifies the position of the bright line pattern region in the decoding image and superimposes a marker at the same position in the captured display image. Thereby, even when a plurality of vehicles appear in the captured display image, the vehicle 200n can identify the vehicle on which the marker is superimposed as the communication partner vehicle 100n (step S74).
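The following is a minimal sketch, in Python, of the marker-placement step (S74): the position of the strongest bright line pattern in the decoding image is reused as the marker position in the captured display image. Modeling the decoding image as a grid of per-block stripe-energy values is an assumption made purely for illustration.

    def locate_marker_position(stripe_energy):
        # stripe_energy: rows of floats; higher values mean stronger bright-line
        # stripes produced by the blinking backlight of the partner vehicle.
        return max(
            ((r, c) for r, row in enumerate(stripe_energy) for c in range(len(row))),
            key=lambda rc: stripe_energy[rc[0]][rc[1]],
        )

    # Example: the partner's backlight yields the strongest stripes at block (1, 2).
    grid = [[0.1, 0.2, 0.1, 0.0],
            [0.1, 0.3, 0.9, 0.2],
            [0.0, 0.1, 0.2, 0.1]]
    assert locate_marker_position(grid) == (1, 2)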
FIG. 160 is a diagram illustrating an example in which the receiver 200 according to the present embodiment displays an AR image.
By imaging a subject with its image sensor, the receiver 200 repeatedly acquires a captured display image Pk and a decoding image, for example as shown in FIG. 54.
Specifically, the image sensor of the receiver 200 images the transmitter 100, which is configured as signage, and a person 21 standing next to the transmitter 100. The transmitter 100 is the transmitter in each of the above embodiments and includes one or more light emitting elements (for example, LEDs) and a translucent plate 144 that transmits light, like frosted glass. The one or more light emitting elements emit light inside the transmitter 100, and the light from the one or more light emitting elements passes through the translucent plate 144 and is emitted to the outside. As a result, the translucent plate 144 of the transmitter 100 glows brightly. Such a transmitter 100 changes its luminance by blinking the one or more light emitting elements, and transmits a light ID (that is, light identification information) by the luminance change. This light ID is the visible light signal described above.
Here, the message "Hold your smartphone over here" is written on the translucent plate 144. The user of the receiver 200 therefore has the person 21 stand next to the transmitter 100 and instructs the person 21 to extend an arm toward the transmitter 100. The user then points the camera (that is, the image sensor) of the receiver 200 at the person 21 and the transmitter 100 and performs imaging. The receiver 200 images the transmitter 100 and the person 21 with the normal exposure time, thereby acquiring a captured display image Pk in which they appear. Furthermore, the receiver 200 acquires a decoding image by imaging the transmitter 100 and the person 21 with a communication exposure time shorter than the normal exposure time.
The receiver 200 acquires the light ID by decoding the decoding image. That is, the receiver 200 receives the light ID from the transmitter 100. The receiver 200 transmits the light ID to a server and acquires from the server an AR image P45 and recognition information associated with the light ID. The receiver 200 recognizes, as a target region, a region of the captured display image Pk corresponding to the recognition information. For example, the receiver 200 recognizes the region in which the signage that is the transmitter 100 appears as the target region.
The receiver 200 then superimposes the AR image P45 on the captured display image Pk so that the target region is covered by the AR image P45, and displays the captured display image Pk on the display 201. For example, the receiver 200 acquires an AR image P45 showing a soccer player. In this case, since the AR image P45 is superimposed so as to cover the target region of the captured display image Pk, the captured display image Pk can be displayed as if the soccer player actually existed next to the person 21. As a result, the person 21 can appear in a photograph together with the soccer player even though the soccer player is not next to the person 21.
Here, the AR image P45 shows a soccer player in the act of shaking hands. The person 21 therefore extends a hand toward the transmitter 100 so as to obtain a captured display image Pk of shaking hands with the AR image P45. However, the person 21 cannot see the AR image P45 superimposed on the captured display image Pk and cannot tell whether the handshake with the soccer player of the AR image P45 is succeeding.
The receiver 200 in the present embodiment therefore transmits the captured display image Pk as a live view to a display device D5, causing the display of the display device D5 to show the captured display image Pk. The display of the display device D5 faces the person 21. The person 21 can therefore confirm, by looking at the captured display image Pk displayed on the display device D5, that the handshake with the soccer player of the AR image P45 is succeeding.
FIG. 161 is a diagram illustrating another example in which the receiver 200 according to the present embodiment displays an AR image.
For example, as shown in FIG. 161, the transmitter 100 is configured as digital signage for, for example, an album of music content, and transmits a light ID by changing its luminance.
By imaging the transmitter 100, the receiver 200 repeatedly acquires a captured display image Pr and a decoding image in the same manner as described above. The receiver 200 acquires the light ID by decoding the decoding image. That is, the receiver 200 receives the light ID from the transmitter 100 and transmits the light ID to a server. The receiver 200 then acquires from the server a first AR image P46, recognition information, first music content, and a sub-image Ps46 associated with the album specified by the light ID.
The receiver 200 starts playback of the first music content acquired from the server. As a result, the first song, which is the first music content, is output from the speaker of the receiver 200.
Furthermore, the receiver 200 recognizes, as a target region, a region of the captured display image Pr corresponding to the recognition information. For example, the receiver 200 recognizes the region in which the transmitter 100 appears as the target region. The receiver 200 then superimposes the first AR image P46 on the target region and further superimposes the sub-image Ps46 outside the target region. The receiver 200 displays the captured display image Pr with the first AR image P46 and the sub-image Ps46 superimposed on the display 201. For example, the first AR image P46 is a moving image related to the first song, which is the first music content, and the sub-image Ps46 is a still image related to the above-described album. The receiver 200 plays back the moving image of the first AR image P46 in synchronization with the first music content.
FIG. 162 is a diagram illustrating processing operations of the receiver 200.
For example, as in the state shown in FIG. 161, the receiver 200 plays back the first AR image P46 and the first music content in synchronization, as shown in (a) of FIG. 162. Here, the user of the receiver 200 performs an operation on the receiver 200. For example, as shown in (b) of FIG. 162, the user performs a swipe. Specifically, the user moves a fingertip in the horizontal direction while touching the first AR image P46 displayed on the display 201 of the receiver 200. In other words, the user slides the first AR image P46 in the horizontal direction. In this case, the receiver 200 acquires from the server second music content, which is associated with the above light ID next after the first music content, and a second AR image P46c, which is associated with the above light ID next after the first AR image P46. For example, the second music content is the second song, and the second AR image P46c is a moving image related to the second song.
The receiver 200 then switches the music content to be played back from the first music content to the second music content. That is, the receiver 200 stops playback of the first music content and starts playback of the second song, which is the second music content.
At this time, the receiver 200 switches the image superimposed on the target region of the captured display image Pr from the first AR image P46 to the second AR image P46c. That is, the receiver 200 stops playback of the first AR image P46 and starts playback of the second AR image P46c.
Here, the picture displayed first in the second AR image P46c is identical to the picture displayed first in the first AR image P46.
Therefore, as shown in (a) of FIG. 162, when starting playback of the second song, the receiver 200 first displays the same picture as the first picture of the first AR image P46. Thereafter, as shown in (b) of FIG. 162, the receiver 200 sequentially displays the second and subsequent pictures included in the second AR image P46c.
Here again, the user swipes on the receiver 200 as shown in (b) of FIG. 162. In response to that operation, the receiver 200, as described above, acquires from the server third music content, which is associated with the above light ID next after the second music content, and a third AR image P46d, which is associated with the light ID next after the second AR image P46c. For example, the third music content is the third song, and the third AR image P46d is a moving image related to the third song.
The receiver 200 then switches the music content to be played back from the second music content to the third music content. That is, the receiver 200 stops playback of the second music content and starts playback of the third song, which is the third music content.
At this time, the receiver 200 switches the image superimposed on the target region of the captured display image Pr from the second AR image P46c to the third AR image P46d. That is, the receiver 200 stops playback of the second AR image P46c and starts playback of the third AR image P46d.
Here, the picture displayed first in the third AR image P46d is identical to the picture displayed first in the first AR image P46.
Therefore, as shown in (a) of FIG. 162, when starting playback of the third song, the receiver 200 first displays the same picture as the first picture of the first AR image P46. Thereafter, as shown in (d) of FIG. 162, the receiver 200 sequentially displays the second and subsequent pictures included in the third AR image P46d.
In the above example, as shown in (b) of FIG. 162, the receiver 200 displays the next moving image upon accepting an operation (that is, a swipe) that slides the AR image, which is a moving image. However, instead of that operation, the receiver 200 may display the next moving image when the light ID is reacquired. The light ID is reacquired when it is obtained again by imaging with the image sensor. That is, the receiver 200 repeatedly acquires the captured display image and the decoding image by performing imaging, and the light ID is reacquired when the bright line pattern region disappears from the repeatedly acquired decoding images and then appears again. For example, when the image sensor of the receiver 200, which had been pointed at the transmitter 100, is pointed in another direction, the bright line pattern region disappears from the decoding image. When the image sensor is pointed at the transmitter 100 again, the bright line pattern region appears in the decoding image, and at this time the light ID is reacquired.
As described above, in the display method according to the present embodiment, the receiver 200 acquires a visible light signal as a light ID (that is, identification information) by imaging with the image sensor. The receiver 200 then displays the first AR image P46, which is a moving image associated with the light ID. Next, upon accepting an operation that slides the first AR image P46, the receiver 200 displays the second AR image P46c, which is the moving image associated with the light ID next after the first AR image P46. Therefore, an image useful to the user can be displayed easily.
In the display method according to the present embodiment, the object in the picture displayed first may be at the same position in each of the first AR image P46 and the second AR image P46c. For example, in the example shown in FIG. 162, the pictures displayed first in the first AR image P46 and the second AR image P46c are identical, so the objects in those pictures are at the same position. For example, as shown in (a) of FIG. 162, the singer, which is the object, is at the same position in the picture displayed first in each of the first AR image P46 and the second AR image P46c. As a result, the user can easily grasp that the first AR image P46 and the second AR image P46c are related to each other. In the example shown in FIG. 162, the pictures displayed first in the first AR image P46 and the second AR image P46c are identical, but those pictures may differ as long as the objects in them are at the same position.
In the display method according to the present embodiment, when the receiver 200 reacquires the light ID by imaging with the image sensor, it displays the moving image associated with the light ID next after the moving image being displayed. Thereby, a moving image useful to the user can be displayed even more easily.
In the display method according to the present embodiment, as shown in FIG. 161, the receiver 200 displays the sub-image Ps46 outside the region in which at least one of the first AR image P46 and the second AR image P46c is displayed. Thereby, a wide variety of images that are even more useful to the user can be displayed easily.
FIG. 163 is a diagram illustrating an example of an operation on the receiver 200.
For example, while an AR image is displayed on the display 201 of the receiver 200 as shown in FIGS. 161 and 162, the user performs a vertical swipe as shown in FIG. 163. Specifically, the user moves a fingertip in the vertical direction while touching the AR image displayed on the display 201 of the receiver 200. In other words, the user slides an AR image such as the first AR image P46 in the vertical direction. In this case, the receiver 200 acquires from the server another AR image associated with the above light ID.
FIG. 164 is a diagram illustrating an example of an AR image displayed on the receiver 200.
When a swipe operation as shown in FIG. 163 is performed, the receiver 200 displays an AR image P47, acquired from the server as the other AR image described above, superimposed on the captured display image Pr.
For example, the receiver 200 displays the AR image P47, which is a still image showing the singer of the music content, superimposed on the captured display image Pr, as in the examples shown in FIGS. 146 and 160. Here, the AR image P47 is superimposed on the target region of the captured display image Pr, that is, the region in which the transmitter 100, which is digital signage, appears. Therefore, as in the examples shown in FIGS. 146 and 160, when a person stands next to the transmitter 100, the captured display image Pr can be displayed as if the singer actually existed next to that person. As a result, the person can appear in a photograph together with the singer even though the singer is not next to the person.
As described above, in the display method according to the present embodiment, the receiver 200 displays the second AR image P46c upon accepting an operation that slides the first AR image P46 in the horizontal direction, and displays the AR image P47, which is a still image associated with the light ID, upon accepting an operation that slides the first AR image P46 in the vertical direction. Therefore, a wide variety of images useful to the user can be displayed easily.
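The following is a minimal sketch, in Python, of the swipe-direction dispatch described above. The server interface and its method names are hypothetical placeholders for requests keyed by the light ID.

    def on_swipe(direction, light_id, current_index, server):
        if direction == "horizontal":
            # Next moving image (with its music content) associated with the light ID
            return server.fetch_video(light_id, current_index + 1)
        if direction == "vertical":
            # Still image associated with the same light ID
            return server.fetch_still_image(light_id)
        return None  # other gestures are ignored in this sketch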
FIG. 165 is a diagram illustrating an example of an AR image superimposed on a captured display image.
As shown in FIG. 165, when superimposing an AR image P48 on a captured display image Pr1, the receiver 200 may trim part of the AR image P48 and superimpose only the remaining part on the captured display image Pr1. For example, the receiver 200 may cut away the peripheral region of the rectangular AR image P48 and superimpose only the round region at its center on the captured display image Pr1.
FIG. 166 is a diagram illustrating an example of an AR image superimposed on a captured display image.
The receiver 200 images the transmitter 100, which is configured, for example, as digital signage for a coffee shop. By that imaging, the receiver 200 acquires a captured display image Pr2 and a decoding image, as described above. In the captured display image Pr2, the transmitter 100, which is the digital signage, appears as a signage image 100i. The receiver 200 acquires the light ID by decoding the decoding image and acquires from the server an AR image P49 associated with the light ID. The receiver 200 then recognizes the region above the signage image 100i in the captured display image Pr2 as the target region and superimposes the AR image P49 on that target region. The AR image P49 is, for example, a moving image of coffee pouring from a pot. This moving image, the AR image P49, is formed such that the closer a position within the region of the coffee pouring from the pot is to the lower edge of the AR image P49, the higher the transparency at that position. Thereby, the AR image P49 can be displayed as if the coffee were actually pouring down.
Such an AR image P49 may be any moving image with an indistinct outline, for example a moving image of a flame. When the AR image P49 is a moving image of a flame, the transparency in the peripheral portion of the AR image P49 increases gradually toward the outside. The transparency may also vary over time. Thereby, the AR image P49 can be displayed with a strong sense of reality as a flickering flame.
At least one of the first AR image P46, the second AR image P46c, and the third AR image P46d shown in FIG. 162 may also be formed to have transparency as shown in FIG. 166.
That is, in the display method according to the present embodiment, at least one of the first AR image P46 and the second AR image P46c may be formed such that the closer a position within the moving image is to the edge of the moving image, the higher the transparency at that position. Thereby, when that moving image is displayed superimposed on a normal captured image, the captured display image can be displayed as if an object with an indistinct outline actually existed in the environment shown by the normal captured image.
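The following is a minimal sketch, in Python, of an edge-dependent transparency gradient of the kind described above: opacity falls toward the image edges, so pixels near the border blend away into the underlying captured image. The linear falloff is an assumption; the text only requires that transparency increase toward the edges.

    def edge_alpha(x, y, width, height):
        # Normalized distance from pixel (x, y) to the nearest image edge
        d = min(x, width - 1 - x, y, height - 1 - y) / max(width, height)
        return min(1.0, 2.0 * d)  # 0.0 (fully transparent) on the border, rising inward

    # Example: border pixels of a 100x100 frame are fully transparent.
    assert edge_alpha(0, 50, 100, 100) == 0.0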
FIG. 167 is a diagram illustrating an example of the transmitter 100 in the present embodiment.
The transmitter 100 is configured so that it can transmit information as an image ID even to a receiver that cannot capture images in the visible light communication mode, that is, a receiver that does not support optical communication. As described above, the transmitter 100 is configured, for example, as digital signage and transmits a light ID by changing its luminance. In addition, line patterns 151 to 154 are drawn on the transmitter 100. Each of these line patterns is an array of short horizontal straight lines, spaced apart from one another in the vertical direction. In other words, each of the line patterns 151 to 154 is structured like a barcode. The line pattern 151 is placed to the left of the letter A drawn on the transmitter 100, and the line pattern 152 to the right of that letter. The line pattern 153 is placed on the letter B, and the line pattern 154 on the letter C. The letters A, B, and C are merely examples; any characters or images may be drawn on the transmitter 100.
A receiver that does not support optical communication cannot set the exposure time of its image sensor to the communication exposure time described above, so even if it images the transmitter 100, it cannot acquire the light ID from that imaging. However, by imaging the transmitter 100, the receiver acquires a normal captured image (that is, a captured display image) in which the line patterns 151 to 154 appear, and can acquire an image ID from those line patterns. Therefore, even though a receiver that does not support optical communication cannot acquire the light ID from the transmitter 100, it can acquire the image ID and, by using that image ID in place of the light ID, can superimpose an AR image on the captured display image and display it, in the same way as described above.
Note that the same image ID may be acquired from each of the line patterns 151 to 154, or different image IDs may be acquired from them.
FIG. 168 is a diagram illustrating another example of the transmitter in the present embodiment.
The transmitter 100e in the present embodiment includes a transmitter main body 115 and a lenticular lens 116. FIG. 168(a) shows a top view of the transmitter 100e, and FIG. 168(b) shows a front view.
The transmitter main body 115 has the same configuration as the transmitter 100 shown in FIG. 167. That is, the letters A, B, and C and the line patterns accompanying them are drawn on the front surface of the transmitter main body 115.
The lenticular lens 116 is attached to the transmitter main body 115 so as to cover its front surface, that is, the surface on which the letters A, B, and C and the line patterns are drawn.
As a result, the line patterns 151 to 154 seen from the front left of the transmitter 100e, shown in FIG. 168(c), can be made different from the line patterns 151 to 154 seen from the front right, shown in FIG. 168(d).
FIG. 169 is a diagram illustrating another example of the transmitter 100 in the present embodiment. FIG. 169(a) shows an example in which a genuine transmitter 100 is imaged by a receiver 200a, and FIG. 169(b) shows an example in which a transmitter 100f configured as a counterfeit of that transmitter 100 is imaged by the receiver 200a.
As shown in FIG. 169(a), the genuine transmitter 100 is configured so that it can transmit an image ID to a receiver that does not support optical communication, as in the example shown in FIG. 167. That is, the letters A, B, and C, the line pattern 154, and so on are drawn on the front surface of the transmitter 100. In addition, a character string 161 may be drawn on the front surface of the transmitter 100. This character string 161 is formed by applying an infrared-reflecting, infrared-absorbing, or infrared-blocking paint. The character string 161 is therefore invisible to the human eye, but appears in the normal captured image obtained by imaging with the image sensor of the receiver 200a.
The receiver 200a is a receiver that does not support optical communication. Therefore, even when the transmitter 100 transmits the visible light signal described above, the receiver 200a cannot receive it. However, if the receiver 200a images the transmitter 100, it can acquire the image ID from the line pattern appearing in the normal captured image obtained by that imaging. Furthermore, if the character string 161 appears in the normal captured image, for example as "Please hold your smartphone over here", the receiver 200a can determine that the transmitter 100 is genuine. That is, the receiver 200a can determine that the acquired image ID is not fraudulent. In other words, the receiver 200a can authenticate the image ID according to whether the character string 161 appears in the normal captured image. When the receiver 200a determines that the image ID is not fraudulent, it performs processing using that image ID, such as transmitting it to the server.
On the other hand, a transmitter 100 like the one described above may be copied illegitimately. That is, instead of the genuine transmitter 100, a transmitter 100f configured as a counterfeit of it may be installed. On the front surface of such a counterfeit transmitter 100f, the letters A, B, and C and a line pattern 154f are drawn. These letters and the line pattern 154f are drawn by a malicious person so as to resemble the letters A, B, and C and the line pattern 154 on the genuine transmitter 100. That is, the line pattern 154f is similar to, but different from, the line pattern 154.
However, when illegitimately copying the genuine transmitter 100, a malicious person cannot see the character string 161 drawn with the infrared-reflecting, infrared-absorbing, or infrared-blocking paint. The character string 161 is therefore not drawn on the front surface of the counterfeit transmitter 100f.
Consequently, if the receiver 200a images such a counterfeit transmitter 100f, it acquires a fraudulent image ID from the line pattern appearing in the normal captured image obtained by that imaging. However, as shown in FIG. 169(b), because the character string 161 does not appear in the normal captured image, the receiver 200a can determine that the image ID is fraudulent. As a result, the receiver 200a can prohibit processing that uses the fraudulent image ID.
FIG. 170 is a diagram illustrating an example of a system using a receiver 200 that supports optical communication and a receiver 200a that does not.
For example, the receiver 200a, which does not support optical communication, images the transmitter 100. On this transmitter 100, the line pattern 154 is drawn as in the example shown in FIG. 167, but the character string 161 shown in FIG. 169 is not. Therefore, the receiver 200a can acquire an image ID from the line pattern appearing in the normal captured image obtained by imaging, but cannot authenticate that image ID. The receiver 200a thus trusts the image ID even if it is fraudulent, and performs processing using it. For example, the receiver 200a requests the server 300 to carry out the procedure associated with the image ID. That procedure is, for example, a transfer of money to an illegitimate bank account.
On the other hand, by imaging the transmitter 100, the receiver 200, which supports optical communication, acquires the light ID, which is a visible light signal, as described above, and also acquires the image ID. The receiver 200 then determines whether the image ID matches the light ID. If it determines that the image ID differs from the light ID, the receiver 200 requests the server 300 to discard the request for the procedure associated with the image ID.
Therefore, even if the server 300 has been asked by the receiver 200a, which does not support optical communication, to carry out the procedure associated with the image ID, the server 300 discards that request upon receiving the request from the receiver 200, which does support optical communication.
Thus, even if a line pattern 154 that yields a fraudulent image ID has been drawn on the transmitter 100 by a malicious person, the request for the procedure associated with that image ID can be appropriately discarded.
FIG. 171 is a flowchart showing the processing operation of the receiver 200.
The receiver 200 acquires a normal captured image by imaging the transmitter 100 (step S81). The receiver 200 then acquires an image ID from the line pattern appearing in the normal captured image (step S82).
Next, the receiver 200 acquires a light ID from the transmitter 100 by visible light communication (step S83). That is, the receiver 200 acquires a decoding image by imaging the transmitter 100 in the visible light communication mode, and acquires the light ID by decoding that decoding image.
The receiver 200 then determines whether the image ID acquired in step S82 matches the light ID acquired in step S83 (step S84). If it determines that they match (Yes in step S84), the receiver 200 requests the server 300 to carry out the procedure associated with the light ID (step S85). If it determines that they do not match (No in step S84), the receiver 200 requests the server 300 to discard the request for the procedure associated with the light ID (step S86).
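The comparison of steps S84 to S86 can be sketched as follows; the ServerStub class and its method names are placeholders assumed for illustration, not an interface defined in the specification.

```python
class ServerStub:
    """Stand-in for the server 300; the method names are assumptions."""
    def request_procedure(self, light_id: str) -> None:
        print(f"requesting the procedure associated with {light_id}")
    def discard_procedure(self, light_id: str) -> None:
        print(f"asking the server to discard the procedure for {light_id}")

def check_ids(image_id: str, light_id: str, server: ServerStub) -> None:
    # Step S84: compare the image ID decoded from the line pattern
    # with the light ID received by visible light communication.
    if image_id == light_id:
        server.request_procedure(light_id)   # step S85
    else:
        server.discard_procedure(light_id)   # step S86

check_ids("4F2A", "4F2A", ServerStub())  # genuine: IDs match
check_ids("4F2B", "4F2A", ServerStub())  # counterfeit line pattern: discard
```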
FIG. 172 is a diagram illustrating an example of display of an AR image.
For example, the transmitter 100 is shaped like a saber, and the portion other than the saber's hilt transmits a visible light signal as a light ID by varying its luminance.
As shown in FIG. 172(a), the receiver 200 images the transmitter 100 from nearby. While performing this imaging, the receiver 200 repeatedly acquires a captured display image Pr3 and a decoding image, as described above. When the receiver 200 acquires a light ID by decoding the decoding image, it transmits that light ID to the server. As a result, the receiver 200 acquires the AR image P50 and recognition information associated with that light ID from the server. The receiver 200 recognizes the region of the captured display image Pr3 corresponding to the recognition information as the target region. For example, the receiver 200 recognizes as the target region the region above the region of the captured display image Pr3 in which the portion of the saber other than its hilt appears.
Specifically, as shown in the examples of FIGS. 50 to 52, the recognition information includes reference information for identifying a reference region in the captured display image Pr3, and target information indicating the position of the target region relative to that reference region. For example, the reference information indicates that the position of the reference region in the captured display image Pr3 is the same as the position of the bright line pattern region in the decoding image. Furthermore, the target information indicates that the target region is located above the reference region.
Therefore, the receiver 200 identifies the reference region in the captured display image Pr3 based on the reference information. That is, the receiver 200 identifies, as the reference region, the region of the captured display image Pr3 located at the same position as the bright line pattern region in the decoding image. In other words, the receiver 200 identifies, as the reference region, the region of the captured display image Pr3 in which the portion of the saber other than its hilt appears.
Furthermore, the receiver 200 recognizes, as the target region, the region of the captured display image Pr3 located at the relative position indicated by the target information with respect to the position of the reference region. In the example above, the target information indicates that the target region is above the reference region, so the receiver 200 recognizes the region above the reference region in the captured display image Pr3 as the target region. That is, the receiver 200 recognizes, as the target region, the region above the region of the captured display image Pr3 in which the portion of the saber other than its hilt appears.
The receiver 200 then superimposes the AR image P50 on that target region and displays, on the display 201, the captured display image Pr3 with the AR image P50 superimposed. The AR image P50 is, for example, a moving image of a person.
Now suppose the receiver 200 moves away from the transmitter 100, as shown in FIG. 172(b). The saber appearing in the captured display image Pr3 then becomes smaller; that is, the size of the bright line pattern region in the decoding image decreases. As a result, the receiver 200 reduces the size of the AR image P50 to match the size of the bright line pattern region. In other words, the receiver 200 adjusts the size of the AR image P50 so that the size ratio between the bright line pattern region and the AR image P50 remains constant.
This allows the receiver 200 to display the captured display image Pr3 as if a person were actually standing on the saber.
As described above, in the display method according to the present embodiment, the receiver 200 acquires a normal captured image by imaging with the image sensor using the normal exposure time (that is, the first exposure time). The receiver 200 also acquires, by imaging with a communication exposure time (that is, a second exposure time) shorter than the normal exposure time, a decoding image including a bright line pattern region, which is a region consisting of a pattern of bright lines, and acquires a light ID by decoding that decoding image. Next, the receiver 200 identifies, in the normal captured image, a reference region at the same position as the bright line pattern region in the decoding image and, based on that reference region, recognizes the region of the normal captured image on which a moving image is to be superimposed as the target region. The receiver 200 then superimposes the moving image on that target region. The moving image may be at least one of the first AR image P46 and the second AR image P46c shown in FIG. 162 and elsewhere.
The receiver 200 may also recognize the region above, below, to the left of, or to the right of the reference region in the normal captured image as the target region (see the sketch after the next paragraph).
In this way, as shown for example in FIGS. 50 to 52 and 172, the target region is recognized based on the reference region and the moving image is superimposed on that target region, so the freedom in choosing the region on which the moving image is superimposed can easily be increased.
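A minimal sketch of deriving the target region from the reference region and the target information might look like the following; the Region type and the choice of same-sized adjacent regions are illustrative assumptions.

```python
from typing import NamedTuple

class Region(NamedTuple):
    x: int; y: int; w: int; h: int

def target_region(reference: Region, relative_position: str) -> Region:
    """Return the region adjacent to the reference region (same size),
    in the direction given by the target information."""
    offsets = {
        "above": (0, -reference.h),
        "below": (0, reference.h),
        "left":  (-reference.w, 0),
        "right": (reference.w, 0),
    }
    dx, dy = offsets[relative_position]
    return Region(reference.x + dx, reference.y + dy, reference.w, reference.h)

# The saber example: the target region sits directly above the reference
# region identified from the bright line pattern region.
print(target_region(Region(100, 300, 80, 200), "above"))
```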
In the display method according to the present embodiment, the receiver 200 may also change the size of the moving image according to the size of the bright line pattern region. For example, the larger the bright line pattern region, the larger the receiver 200 makes the moving image.
In this way, as shown in FIG. 172, the size of the moving image changes according to the size of the bright line pattern region, so compared with a fixed-size moving image, the moving image can be displayed so that the object it depicts appears to exist more realistically.
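Keeping the size ratio between the bright line pattern region and the moving image constant can be sketched as follows; the ratio parameters are illustrative assumptions.

```python
def scaled_ar_size(pattern_w: int, pattern_h: int,
                   ratio_w: float, ratio_h: float) -> tuple[int, int]:
    """Keep the size ratio between the bright line pattern region and the
    AR image constant: when the receiver moves away and the pattern region
    shrinks, the AR image shrinks by the same factor."""
    return round(pattern_w * ratio_w), round(pattern_h * ratio_h)

# Close to the saber the pattern region is 200x400 px; the AR image is
# drawn at half that width and the same height (ratios 0.5 and 1.0).
print(scaled_ar_size(200, 400, 0.5, 1.0))  # -> (100, 400)
print(scaled_ar_size(100, 200, 0.5, 1.0))  # after moving away -> (50, 200)
```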
(Summary of Embodiment 9)
FIG. 173A is a flowchart illustrating a display method according to one aspect of the present invention.
The display method according to one aspect of the present invention is a display method for displaying images, and includes steps SG1 to SG3. That is, the display device, which is the receiver 200 described above, acquires a visible light signal as identification information (that is, a light ID) by imaging with an image sensor (step SG1). Next, the display device displays a first moving image associated with that light ID (step SG2). Then, upon receiving an operation that slides the first moving image, the display device displays a second moving image associated with the light ID, following the first moving image (step SG3).
FIG. 173B is a block diagram illustrating a configuration of a display device according to one aspect of the present invention.
The display device G10 according to one aspect of the present invention is a device that displays images, and includes an acquisition unit G11 and a display unit G12. The display device G10 is the receiver 200 described above. The acquisition unit G11 acquires a visible light signal as identification information (that is, a light ID) by imaging with an image sensor. The display unit G12 displays a first moving image associated with that light ID. Then, upon receiving an operation that slides the first moving image, the display unit G12 displays a second moving image associated with the light ID, following the first moving image.
For example, the first moving image and the second moving image are the first AR image P46 and the second AR image P46c shown in FIG. 162, respectively. In the display method and the display device G10 shown in FIGS. 173A and 173B, when an operation that slides the first moving image, that is, a swipe, is received, the second moving image associated with the identification information is displayed following the first moving image. Therefore, images useful to the user can be displayed easily.
In the above embodiment, each component may be configured by dedicated hardware or may be realized by executing a software program suitable for that component. Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory. For example, the program causes a computer to execute the display methods shown in the flowcharts of FIGS. 156, 157, 159, 171, and 173A.
(Embodiment 10)
In the present embodiment, as in Embodiments 4 and 9, a display method and a display device that realize AR (augmented reality) using light IDs are described. The transmitter and the receiver in the present embodiment may have the same functions and configurations as the transmitters (or transmission devices) and receivers (or reception devices) in the embodiments above. The receiver in the present embodiment is configured as a display device.
FIG. 174 is a diagram illustrating an example of an image drawn on the transmitter according to the present embodiment. FIG. 175 is a diagram illustrating another example of an image drawn on the transmitter according to the present embodiment.
As in the example shown in FIG. 167, the transmitter 100 is configured so that it can transmit information as an image ID even to a receiver that cannot capture images in the visible light communication mode, that is, a receiver that does not support optical communication. Specifically, a substantially rectangular transmission image Im1 or Im2 is drawn on the transmitter 100. As described above, the transmitter 100 is configured, for example, as signage and transmits a light ID by changing its luminance. The transmitter 100 may include a light source and transmit the light ID directly to the receiver 200 by varying the luminance of that light source. Alternatively, the transmitter 100 may include a light source, irradiate the transmission image Im1 or Im2 with light from that light source, and transmit the reflected light to the receiver 200 as the light ID.
As shown in FIGS. 174 and 175, the transmission image Im1 or Im2 of such a transmitter 100 is formed in a substantially rectangular shape. The transmission image Im1 or Im2 has a substantially rectangular base image Bi1 or Bi2 and a line pattern 155a or 155b added to that base image.
In the example shown in FIG. 174, the line pattern 155a is, on each of the four sides of the base image Bi1, an array of short straight lines arranged along that side, each orthogonal to the side. That is, when a logotype is drawn on the base image of the transmitter 100, the signal is embedded around the logotype. A short straight line included in a line pattern is hereinafter called a short line.
Also in the example shown in FIG. 174, the short lines included in the line pattern 155a are formed so that they become fainter toward the center of the transmitter 100, that is, toward the center of the base image Bi1. Thus, even when the line pattern 155a is added to the base image Bi1, the line pattern 155a can be made inconspicuous.
In the example shown in FIG. 174, the line pattern 155a is not placed at the corners of the base image Bi1, but it may also be placed at those corners. When the corners of the base image Bi1 are rounded, the line pattern 155a need not be placed at the corners.
In the example shown in FIG. 175, on the other hand, the line pattern 155b is placed within the frame line w at the periphery of the base image Bi2. For example, the base image Bi2 is formed by drawing a rectangular frame line w so as to surround a logotype (specifically, the character string ABC). The line pattern 155b is an array of short lines arranged along the rectangular frame line w, each orthogonal to the frame line w. These short lines are placed within the frame line w.
In the example shown in FIG. 175, the line pattern 155b is not placed at the corners of the frame line w, but it may also be placed at those corners. When the corners of the frame line w are rounded, the line pattern 155b need not be placed at the corners.
FIG. 176 is a diagram illustrating an example of the transmitter 100 and the receiver 200 in the present embodiment.
For example, as in the example shown in FIG. 168, the transmitter 100 may include a lenticular lens 116, as shown in FIG. 176. Such a lenticular lens 116 is attached to the transmitter 100 so as to cover the region of the transmission image Im2 drawn on the transmitter 100, excluding the frame line w.
By imaging the transmitter 100, the receiver 200 acquires a normal captured image (that is, a captured display image) in which the line pattern 155b appears, and acquires an image ID from that line pattern 155b. Here, the receiver 200 prompts its user to move the receiver 200. For example, while the transmitter 100 is being imaged, the receiver 200 displays the message "Please move the receiver". As a result, the user moves the receiver 200. The receiver 200 then authenticates the acquired image ID by determining whether the transmitter 100 appearing in the normal captured image, that is, the base image Bi2 of the transmission image Im2, changes. For example, if the receiver 200 determines that the logotype of the base image Bi2 has changed from "ABC" to "DEF", it determines that the acquired image ID is a correct ID.
The transmission image Im1 or Im2 described above may be drawn on a transmitter 100 that transmits a light ID. Alternatively, the transmission image Im1 or Im2 may be illuminated by light from the transmitter 100 that carries the light ID and may transmit the light ID by reflecting that light. In this case, the receiver 200 can acquire both the image ID of the transmission image Im1 or Im2 and the light ID by imaging. The light ID and the image ID may be identical, or parts of the light ID and the image ID may be identical.
The transmitter 100 may also light up when its transmission switch is turned on and go out 10 seconds after lighting begins, transmitting the light ID during this lighting period. In such a case, the receiver 200 may acquire the image ID and determine that it is a correct ID if the brightness of the transmission image appearing in the normal captured image changes abruptly when the transmission switch is turned on. Alternatively, the receiver 200 may acquire the image ID and determine that it is a correct ID if, when the transmission switch is turned on, the transmission image appearing in the normal captured image becomes brighter and then darker after a predetermined time elapses. This can prevent the transmission image Im1 or Im2 from being copied and used illegitimately.
FIG. 177 is a diagram for explaining the fundamental frequency of a line pattern.
The encoding device that generates the transmission image Im1 or Im2 determines the fundamental frequency of the line pattern. For example, as shown in FIG. 177(a), when the base image to which the line pattern is to be added is a horizontally long rectangle, the encoding device deforms the base image into a square, as shown in FIG. 177(b). At this time, the shape of the base image is deformed so that, for example, the short sides of the rectangular base image become as long as its long sides.
Next, as shown in FIG. 177(c), the encoding device sets the length of the diagonal of the base image deformed into a square as the fundamental period, and determines the frequency that is the reciprocal of that fundamental period as the fundamental frequency. The base image deformed into a square is hereinafter called the square base image.
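Assuming the image dimensions are measured in pixels, the fundamental period and fundamental frequency could be computed as in the following sketch (the frequency is then in cycles per pixel).

```python
import math

def fundamental_frequency(width_px: int, height_px: int) -> float:
    """Deform the base image into a square (side = long side), take its
    diagonal as the fundamental period, and return the reciprocal."""
    side = max(width_px, height_px)           # short side stretched to the long side
    fundamental_period = side * math.sqrt(2)  # diagonal of the square base image
    return 1.0 / fundamental_period

print(fundamental_frequency(1600, 900))
```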
FIG. 178A is a flowchart showing the processing operation of the encoding device. FIG. 178B is a diagram for explaining the processing operation of the encoding device.
First, the encoding device adds an error detection code (also called an error correction code) to the information to be processed (step S171). For example, as shown in FIG. 178B, the encoding device adds an 8-bit error detection code to a 13-bit bit string that is the information to be processed.
Next, the encoding device divides the information with the error detection code added into (k+1) values xk, each consisting of N bits (step S172), where k is an integer of 1 or more. For example, as shown in FIG. 178B, when k = 6, the encoding device divides the information into seven values xk, each consisting of N = 3 bits. That is, the information is divided into values x0, x1, x2, ..., x6, each expressed as a 3-bit binary number. For example, x0 = 010, x1 = 010, and x2 = 100.
Next, for each of the values x0 to x6, that is, for each value xk, the encoding device calculates the frequency fk corresponding to that value (step S173). For example, the encoding device calculates (A + B × xk) times the fundamental frequency as the frequency fk corresponding to the value xk, where A and B are positive integers. In this way, as shown in FIG. 178B, the frequencies f0 to f6 are calculated for the values x0 to x6, respectively.
Next, the encoding device adds a positioning frequency fP at the head of the frequencies f0 to f6 (step S174). At this time, the encoding device sets the positioning frequency fP to a value less than A times the fundamental frequency, or greater than (A + B × (2^N − 1)) times the fundamental frequency. In this way, as shown in FIG. 178B, the positioning frequency fP, which differs from the frequencies f0 to f6, is placed at their head.
Next, the encoding device sets (k+2) designated regions on the periphery of the square base image described above. Then, for each designated region, the encoding device varies the luminance value (or color) of that designated region at the frequency fk along the direction of the side of the square base image, relative to the original color of the region (step S175). For example, as shown in (a) or (b) of FIG. 178B, (k+2) designated regions JP and J0 to J6 are set on the periphery of the square base image. When there is a frame line at the periphery of the square base image, the frame line is divided into (k+2) regions, and each of these regions is set as a designated region. More specifically, the (k+2) designated regions are set clockwise along the four sides of the square base image in the order JP, J0, J1, J2, J3, J4, J5, J6. The encoding device varies the luminance value (or color) at the frequency fk in each of the designated regions set in this way. This luminance variation adds the line pattern to the square base image.
Next, the encoding device returns the aspect ratio of the square base image with the line pattern to the aspect ratio of the original base image (step S176). For example, the square base image with the line pattern shown in FIG. 178B(a) is deformed into the base image with the line pattern shown in FIG. 178B(c). In this case, the square base image with the line pattern is shrunk in the vertical direction. Accordingly, as shown in FIG. 178B(c), in the base image with the line pattern, the line patterns above and below the base image become narrower than those on its left and right.
Therefore, when adding the line pattern to the square base image in step S175, the line patterns added above and below the square base image may be given widths different from those added on its left and right, as shown in FIG. 178B(b). To make the widths different, for example, the inverse of the aspect ratio of the original base image may be used. That is, the encoding device determines the width of the designated regions or line patterns added above and below the square base image by multiplying the width of the designated regions or line patterns added on its left and right by this inverse ratio. Thus, even when the aspect ratio of the square base image with the line pattern is restored in step S176, the line patterns above and below the base image and those on its left and right can be given the same width, as shown in FIG. 178B(d).
Furthermore, the encoding device may add, around the base image with the line pattern, that is, outside the (k+2) designated regions, a frame of a color different from those designated regions (step S177). For example, as shown in FIG. 178B, a black frame Q1 is added. This makes the (k+2) designated regions easier to detect.
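Steps S171 to S174 can be sketched as follows. The specification does not fix the error detection code or the constants A and B, so a CRC-8 and the values A = 70, B = 5 are used here purely as illustrative assumptions.

```python
def crc8(bits: str, poly: int = 0x07) -> str:
    """8-bit CRC used here as a stand-in for the unspecified
    error detection code of step S171."""
    reg = 0
    for b in bits:
        reg ^= int(b) << 7
        reg = ((reg << 1) ^ poly) & 0xFF if reg & 0x80 else (reg << 1) & 0xFF
    return format(reg, "08b")

def encode_frequencies(info_bits: str, f_base: float,
                       A: int = 70, B: int = 5, N: int = 3) -> list[float]:
    # Step S171: append the error detection code.
    coded = info_bits + crc8(info_bits)
    # Step S172: split into (k+1) values of N bits each.
    values = [int(coded[i:i + N], 2) for i in range(0, len(coded), N)]
    # Step S173: frequency fk = (A + B * xk) * fundamental frequency.
    freqs = [(A + B * x) * f_base for x in values]
    # Step S174: prepend a positioning frequency fP outside the data range,
    # here chosen below A times the fundamental frequency.
    f_p = (A - B) * f_base
    return [f_p] + freqs

# 13 information bits + 8 CRC bits = 21 bits -> seven 3-bit values x0..x6.
print(encode_frequencies("0100101001011", f_base=1.0))
```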
FIG. 179 is a flowchart showing the processing operation of the receiver 200 acting as a decoding device.
First, the receiver 200 images the transmission image (step S181). Next, the receiver 200 performs edge detection on the normal captured image obtained by that imaging (step S182) and then extracts contours (step S183).
Then, from among the extracted contours, the receiver 200 executes the processing of steps S184 to S187 below on each region having a rectangular contour of at least a predetermined size, or a rounded-rectangle contour of at least a predetermined size.
That is, the receiver 200 perspective-transforms the region into a square region (step S184). Specifically, when the region to be transformed is rectangular, the receiver 200 performs the perspective transformation with reference to the vertices of the rectangle. When the region to be transformed is a rounded rectangle, the receiver 200 extends each side of the region and performs the perspective transformation with reference to the points where pairs of sides intersect.
Next, for each of the designated regions included in the square region, the receiver 200 obtains the frequency of the luminance variation in that designated region (step S185).
Next, the receiver 200 finds the designated region with frequency fP and, with that region as the reference, lines up the frequencies fk of the designated regions arranged clockwise in order around the periphery of the square region (step S186).
Then, the receiver 200 decodes the line pattern by applying the inverse of the processing of steps S171 to S174 shown in FIG. 178A to the frequency sequence (step S187). The receiver 200 can thereby acquire the information to be processed.
In this processing by the receiver 200, the perspective transformation into a square region in step S184 allows the line pattern of the transmission image to be decoded correctly even when the transmission image is captured not head-on but from some other direction. Also, because the frequencies of the designated regions are lined up in order with reference to the positioning frequency fP in step S186, the line pattern of the transmission image can be decoded correctly even when the transmission image is captured sideways or upside down.
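Continuing the encoder sketch above, the inverse mapping of steps S186 and S187 might look like the following; the same illustrative constants A = 70 and B = 5 are assumed, and verification of the error detection code is omitted.

```python
def decode_frequencies(freqs: list[float], f_base: float,
                       A: int = 70, B: int = 5, N: int = 3) -> str:
    """Inverse of steps S171-S174: rotate the clockwise frequency sequence
    so the positioning frequency fP comes first, map each frequency back to
    its N-bit value, and strip the error detection code."""
    # Step S186: fP lies outside the data band [A*f_base, (A+B*(2**N-1))*f_base].
    p = next(i for i, f in enumerate(freqs)
             if f < A * f_base or f > (A + B * (2 ** N - 1)) * f_base)
    data = freqs[p + 1:] + freqs[:p]  # designated regions after fP, in order
    # Step S187: xk = round((fk / f_base - A) / B), then back to bits.
    values = [round((f / f_base - A) / B) for f in data]
    bits = "".join(format(v, f"0{N}b") for v in values)
    return bits[:-8]  # drop the 8 error-detection bits (check omitted)

freqs = encode_frequencies("0100101001011", f_base=1.0)  # from the sketch above
print(decode_frequencies(freqs, f_base=1.0))             # -> "0100101001011"
```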
FIG. 180 is a flowchart showing the processing operation of the receiver 200.
First, the receiver 200 determines whether its exposure time can be set to a communication exposure time shorter than the normal exposure time (step S191). That is, the receiver 200 determines whether it is a device that supports optical communication or one that does not. If the receiver 200 determines that the communication exposure time cannot be set (N in step S191), it receives the image signal (that is, the image ID) (step S193). The communication exposure time is, for example, 1/2000 second or less.
On the other hand, if the receiver 200 determines that the communication exposure time can be set (Y in step S191), it determines whether a line scan time is registered in the terminal (that is, the receiver 200) or in the server (step S192). As shown in the examples of FIGS. 101 and 102, the line scan time is the time from the start of exposure of one exposure line included in the image sensor until the start of exposure of the next exposure line. If the line scan time is registered, the receiver 200 decodes the decoding image using the registered line scan time.
If the receiver 200 determines that no line scan time is registered (N in step S192), it performs the processing of step S193. On the other hand, if the receiver 200 determines that a line scan time is registered (Y in step S192), it uses that line scan time to receive the light ID, which is a visible light signal (step S194).
When the receiver 200 receives the visible light signal and is set to an identity verification mode for visible light signals, it verifies the identity between the image signal and the visible light signal (step S195). Here, if the image signal and the visible light signal differ, the receiver 200 displays on its display a message or an image indicating that the signals differ. Alternatively, the receiver 200 notifies the server that the signals differ.
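The branching of FIG. 180 can be summarized in the following sketch; read_image_id and read_light_id are placeholder functions assumed for illustration.

```python
def read_image_id() -> str:
    """Placeholder: decode the line pattern from the normal captured image."""
    return "4F2A"

def read_light_id(line_scan_time: float) -> str:
    """Placeholder: decode the visible light signal using the line scan time."""
    return "4F2A"

def receive_id(can_set_short_exposure: bool,
               line_scan_time: float | None,
               verify_identity: bool = False) -> str:
    """Sketch of FIG. 180: choose between image ID and light ID reception."""
    if not can_set_short_exposure:             # step S191, N
        return read_image_id()                 # step S193
    if line_scan_time is None:                 # step S192, N
        return read_image_id()                 # step S193
    light_id = read_light_id(line_scan_time)   # step S194
    if verify_identity:                        # step S195
        if read_image_id() != light_id:
            print("image signal and visible light signal differ")
    return light_id

print(receive_id(True, line_scan_time=1 / 30000, verify_identity=True))
```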
FIG. 181A is a diagram illustrating an example of the configuration of a system in the present embodiment.
The system in the present embodiment includes a plurality of transmitters 100 and a receiver 200. Each transmitter 100 is configured as a self-propelled robot, for example an automatic cleaning robot or a robot that communicates with people. The receiver 200 is configured as a camera such as a surveillance camera or an environment-installed camera. Hereinafter, the transmitter 100 is called the robot 100 and the receiver 200 is called the camera 200.
The robot 100 transmits a light ID, which is a visible light signal, to the camera 200. The camera 200 receives the light ID transmitted from the robot 100.
FIG. 181B is a diagram showing the processing of the camera 200 in the present embodiment.
Each of the robots 100 travels automatically. In such a case, the camera 200 first performs imaging in the normal shooting mode and detects a moving object as a robot 100 in the normal captured image obtained by that imaging (step S221). Next, the camera 200 transmits, by radio communication, an ID transmission request signal prompting the detected robot 100 to transmit its ID (step S225). Upon receiving this ID transmission request signal, the robot 100 starts transmitting its ID (that is, a light ID) by visible light communication.
Next, the camera 200 changes its shooting mode from the normal shooting mode to a visible light recognition mode (step S226). The visible light recognition mode is a kind of visible light communication mode. Specifically, in the visible light recognition mode, of all the exposure lines included in the image sensor of the camera 200, only the particular exposure lines capturing the image of the robot 100 are used for line scanning with the communication exposure time. That is, the camera 200 performs line scanning only on those particular exposure lines and does not expose the other exposure lines. Through such line scanning, the camera 200 detects the ID (that is, the light ID) from the robot 100 (step S227).
Next, the camera 200 recognizes the current position of the robot 100 based on the position of the visible light signal in the decoding image (that is, the bright line image), in other words the position where the bright line pattern appears, and on the imaging direction of the camera 200 (step S228). The camera 200 then notifies the robot 100 and the server of the ID and the current position of the robot 100 and of the time at which the ID was detected (step S229).
Then, the camera 200 changes its shooting mode from the visible light recognition mode back to the normal shooting mode (step S230).
Here, each of the robots 100 may travel automatically while transmitting a robot detection signal. The robot detection signal is a visible light signal, but it is an optical signal at a frequency that can be recognized even by imaging in the camera 200's normal shooting mode. That is, the frequency of the robot detection signal is lower than the frequency of the light ID.
In such a case, instead of detecting a moving object as a robot 100, the camera 200 may execute the processing of steps S225 to S230 when it detects a robot detection signal in the normal captured image (step S223).
Each of the robots 100 may also travel automatically while transmitting a position recognition request signal by radio communication or the like and transmitting its ID by visible light communication.
In such a case, the camera 200 may execute the processing of steps S226 to S230 when it receives the position recognition request signal (step S224). Note that when the camera 200 receives the position recognition request signal, the robot 100 may not appear in the normal captured image. In such a case, the camera 200 may notify the robot 100 that it does not appear in the image, that is, that the position of the robot 100 cannot be recognized.
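The camera-side sequence of FIG. 181B can be summarized as in the following sketch, which simply lists the steps taken for a given trigger; the trigger flags and step descriptions are paraphrases, not an implementation of the camera itself.

```python
def camera_steps(moving_object: bool, robot_signal: bool,
                 position_request: bool) -> list[str]:
    """Return the ordered steps the camera takes for one trigger (FIG. 181B)."""
    steps = []
    if moving_object or robot_signal:          # step S221 / step S223
        steps.append("S225: send ID transmission request by radio")
    elif not position_request:                 # step S224 not received
        return steps                           # nothing to do
    steps += [
        "S226: switch to visible light recognition mode",
        "S227: line-scan only the exposure lines covering the robot, detect ID",
        "S228: locate robot from bright line pattern position and camera direction",
        "S229: notify robot and server of ID, position, detection time",
        "S230: switch back to normal shooting mode",
    ]
    return steps

for step in camera_steps(moving_object=True, robot_signal=False,
                         position_request=False):
    print(step)
```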
FIG. 182 is a diagram illustrating another example of the configuration of a system in the present embodiment.
For example, the transmitter 100 may include a plurality of light sources 171 and transmit a light ID from each of the light sources 171 by varying their luminance. This reduces the camera 200's blind spots; that is, it makes it easier for the camera 200 to receive the light ID. Furthermore, when the camera 200 captures a plurality of the light sources 171, the camera 200 can recognize the position of the robot 100 more accurately by multipoint surveying. That is, the position recognition accuracy for the robot 100 can be improved.
The robot 100 may also transmit a different light ID from each of the plurality of light sources 171. In this case, even when the camera 200 captures only some of the light sources 171 (for example, only one) rather than all of them, it can accurately recognize the position of the robot 100 from the light IDs of those light sources.
When the camera 200 notifies the robot 100 of its current position, the robot 100 may grant the camera 200 a reward such as points.
 図183は、本実施の形態における送信機に描かれる画像の他の例を示す図である。 FIG. 183 is a diagram illustrating another example of an image drawn on the transmitter in this embodiment.
 送信機100は、図174および図175に示す例と同様、可視光通信モードでの撮像が不可能な受信機、すなわち光通信非対応の受信機に対しても情報を画像IDとして送信することができるように構成されている。なお、画像IDは、フレームIDとも称される。つまり、送信機100には、略四角形の送信画像Im3が描かれている。また、送信機100は、上述と同様、例えばサイネージとして構成され、輝度変化することによって、光IDを送信する。なお、送信機100は、光源を備え、その光源の輝度変化によって、光IDを受信機200に直接送信してもよい。具体的には、送信画像Im3は、透光性を有する板の表面に描かれ、光源からの光は、その板の裏面に向けて照射される。その結果、光源の輝度変化は、送信画像Im3の輝度変化として現れ、その送信画像Im3の輝度変化によって、光IDが可視光信号として受信機200に送信される。または、送信機100は、液晶ディスプレイまたは有機ELディスプレイなどのディスプレイを備えた表示装置であってもよい。送信機100は、ディスプレイに送信画像Im3を表示させながら、そのディスプレイを輝度変化させることによって、光IDを送信する。あるいは、送信機100は、光源を備え、その光源からの光を送信画像Im3に照射し、その送信画像Im3に反射させることによって、その光を光IDとして受信機200に送信してもよい。 Similarly to the examples shown in FIGS. 174 and 175, the transmitter 100 transmits information as an image ID to a receiver that cannot capture an image in the visible light communication mode, that is, a receiver that does not support optical communication. It is configured to be able to. The image ID is also referred to as a frame ID. That is, the transmitter 100 has a substantially rectangular transmission image Im3 drawn therein. The transmitter 100 is configured as a signage, for example, as described above, and transmits an optical ID by changing the luminance. The transmitter 100 may include a light source, and may directly transmit the optical ID to the receiver 200 depending on the luminance change of the light source. Specifically, the transmission image Im3 is drawn on the surface of a light-transmitting plate, and light from the light source is emitted toward the back surface of the plate. As a result, the luminance change of the light source appears as a luminance change of the transmission image Im3, and the light ID is transmitted to the receiver 200 as a visible light signal by the luminance change of the transmission image Im3. Alternatively, the transmitter 100 may be a display device including a display such as a liquid crystal display or an organic EL display. The transmitter 100 transmits the light ID by changing the luminance of the display while displaying the transmission image Im3 on the display. Alternatively, the transmitter 100 may include a light source, irradiate the transmission image Im3 with light from the light source, and reflect the light on the transmission image Im3, thereby transmitting the light to the receiver 200 as an optical ID.
Like the transmission images Im1 and Im2 shown in FIGS. 174 and 175, the transmission image Im3 of the transmitter 100 is formed in a substantially rectangular shape. The transmission image Im3 includes a substantially rectangular base image Bi3 and a line pattern 155c added to the base image Bi3.
In the example shown in FIG. 183, the line pattern 155c is, on each of the four sides of the base image Bi3, an array of short straight lines arranged along that side, each of these short lines (also referred to simply as short lines) being orthogonal to the side. The line pattern 155c consists of 32 blocks (the designated areas described above). These blocks are also referred to as PHY symbols, as described later. The frequency index of each of the 32 blocks is -1, 0, 1, 2, or 3. An index of -1 indicates 200 times the fundamental frequency, an index of 0 indicates 210 times, an index of 1 indicates 220 times, an index of 2 indicates 230 times, and an index of 3 indicates 240 times. Here, as described above, the fundamental frequency is the reciprocal of the diagonal length of the base image Bi3 (that is, of the fundamental period). In other words, in a block corresponding to the index -1, the short lines are arranged at a frequency of 200 times the fundamental frequency; equivalently, the spacing between adjacent short lines in that block is 1/200 of the diagonal of the base image Bi3. Thus, each PHY symbol (that is, each block) in this embodiment indicates one of the values -1, 0, 1, 2, and 3 by its array pattern.
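To make the index-to-frequency mapping concrete, the following is a minimal Python sketch based on the multiples given above; the function names are illustrative and not part of the specification.

```python
def index_to_multiple(index: int) -> int:
    """Map a PHY symbol frequency index (-1..3) to its multiple of the
    fundamental frequency: -1 -> 200, 0 -> 210, 1 -> 220, 2 -> 230, 3 -> 240."""
    if index not in (-1, 0, 1, 2, 3):
        raise ValueError("frequency index must be -1, 0, 1, 2, or 3")
    return 210 + 10 * index

def short_line_spacing(index: int, diagonal: float) -> float:
    """Spacing between adjacent short lines in a block. The fundamental
    frequency is 1/diagonal, so a block at `multiple` times that frequency
    has a line spacing of diagonal / multiple."""
    return diagonal / index_to_multiple(index)
```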
Such a transmission image Im3 is captured as a subject by the image sensor of the receiver 200. In other words, the subject is rectangular as viewed from the image sensor; the light in its central region changes in luminance to transmit a visible light signal, and a barcode-like line pattern is arranged along its periphery.
FIG. 184 is a diagram showing an example of the format of the MAC frame that constitutes the frame ID.
A MAC (medium access control) frame consists of a MAC header and a MAC payload. The MAC header consists of 4 bits. The MAC payload consists of variable-length padding, a variable-length ID1, and a fixed-length ID2. ID2 consists of 5 bits when the MAC frame is 44 bits long, and of 3 bits when the MAC frame is 70 bits long. The padding consists of, for example, "0000000000001", "0001", "01", or "1", and is the portion from the leftmost bit up to and including the first 1.
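As a hedged illustration of this layout, the following Python sketch splits a payload bit string into its three fields; the helper name and the bit-string representation are assumptions for illustration, not part of the specification.

```python
def split_mac_payload(payload_bits: str, frame_length: int):
    """Split a MAC payload into (padding, id1, id2) bit strings.

    A 44-bit MAC frame uses a 5-bit ID2; a 70-bit frame uses a 3-bit ID2.
    The padding runs from the leftmost bit up to and including the first '1'.
    """
    id2_length = 5 if frame_length == 44 else 3
    first_one = payload_bits.index("1")
    padding = payload_bits[: first_one + 1]
    id1 = payload_bits[first_one + 1 : len(payload_bits) - id2_length]
    id2 = payload_bits[len(payload_bits) - id2_length :]
    return padding, id1, id2

# Toy example (payload shortened for readability):
print(split_mac_payload("01" + "1010110011" + "00101", 44))
# -> ('01', '1010110011', '00101')
```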
ID1 is the frame ID described above and is the same information as the light ID, that is, the identification information indicated by the visible light signal. In other words, the visible light signal and the signal obtained from the line pattern carry the same identification information. Thus, even if the receiver 200 cannot receive the visible light signal, it can obtain the same identification information as the visible light signal from the line pattern 155c of the transmission image Im3 by capturing that image.
FIG. 185 is a diagram illustrating an example of the configuration of the MAC header.
For example, the bit at address "0" of the MAC header indicates the header version. Specifically, when the bit at address "0" is "0", the header version is 1.
The 2 bits at address "1-2" of the MAC header indicate the protocol. Specifically, when these 2 bits are "00", the protocol of the MAC frame conforms to IEC (International Electrotechnical Commission); when they are "01", the protocol conforms to LinkRay (registered trademark) Data; and when they are "10", the protocol conforms to IEEE (The Institute of Electrical and Electronics Engineers, Inc.).
The bit at address "3" of the MAC header indicates a further protocol option. Specifically, when the MAC frame conforms to IEC and the bit at address "3" is "0", the number of bits per packet is 4; when the MAC frame conforms to IEC and the bit at address "3" is "1", the number of bits per packet is 8. On the other hand, when the MAC frame conforms to LinkRay Data and the bit at address "3" is "0", the number of bits per packet is 32. The number of bits per packet mentioned here is the length of DATAPART (that is, the datapart length).
FIG. 186 is a diagram showing an example of a table for deriving the number of packet divisions.
The receiver 200 decodes, from the line pattern 155c, the frame ID that is ID1 contained in the MAC frame, and derives the division number corresponding to that frame ID. This is because, in visible light communication based on luminance changes, the transmitted and received information is defined by the light ID together with the packet division number, so communication using a transmission image also needs the division number in order to remain compatible with that visible light communication.
The receiver 200 in this embodiment refers to the table shown in FIG. 186 and derives the division number for the frame ID from the pair of the bit length of ID1 (hereinafter, the ID length) and the datapart length. For example, the receiver 200 determines the datapart length in bits based on the bit at address "3" of the MAC header, and further determines the ID length, that is, the length of ID1 of the MAC frame. The receiver 200 then derives the division number by finding, in the table shown in FIG. 186, the division number associated with the determined pair of datapart length and ID length. Specifically, if the datapart length is 4 bits and the ID length is 10 bits, the division number "5" is derived.
If the receiver 200 cannot derive the division number from the table shown in FIG. 186, that is, if the table contains no division number associated with the determined pair of datapart length and ID length, the receiver 200 may set the division number to 0.
In the table shown in FIG. 186, the pair of a datapart length of "4 bits" and an ID length of "14 bits" is associated with both of the division numbers "6" and "7". Therefore, when the frame ID is encoded, the division number may be set to "7" by using an ID length of 15 bits. When the receiver 200 then decodes the frame ID and finds a datapart length of 4 bits and an ID length of 15 bits, it derives the division number "7". Furthermore, the receiver 200 may ignore the leading bit of the 15-bit ID1 and derive the remaining 14-bit ID1 as the final frame ID.
When the frame ID conforms to the IEEE protocol, the receiver 200 may, for example, provisionally derive the division number "0". The division number "0" indicates that no division is performed.
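As a sketch of this lookup, the following assumes a table populated only with the entries actually mentioned in the text; the real table of FIG. 186 contains more entries.

```python
# Partial reconstruction of the FIG. 186 lookup; keys are
# (datapart length in bits, ID length in bits) -> division number.
DIVISION_TABLE = {
    (4, 10): 5,
    (4, 15): 7,  # a 15-bit ID1 is encoded so that the division number is 7
}

def derive_division_number(datapart_length: int, id_length: int) -> int:
    """Return the division number for the pair, or 0 when the pair is
    absent from the table (that is, no division is performed)."""
    return DIVISION_TABLE.get((datapart_length, id_length), 0)
```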
In this way, the light ID and division number used in visible light communication based on luminance changes can also be applied appropriately, as the frame ID and division number, to communication using a transmission image. That is, compatibility can be maintained between visible light communication based on luminance changes and communication using a transmission image.
FIG. 187 is a diagram illustrating PHY encoding.
First, the encoding device that encodes the frame ID adds an ECC (error check code) to the MAC frame. Next, the encoding device divides the ECC-appended MAC frame into a plurality of blocks, each consisting of N bits (N is, for example, 2 or 3). For each of these blocks, the encoding device converts the value indicated by the N bits of the block into a Gray code. A Gray code is a code in which adjacent values differ in only one bit; in other words, the Hamming distance between consecutive codes is always 1. Errors are most likely to occur between adjacent symbols, but with a Gray code no two adjacent symbols differ in more than one bit, so the efficiency of error detection can be improved.
For each of the blocks, the encoding device then converts the Gray-coded value into the PHY symbol corresponding to that value. As a result, for example, 30 PHY symbols are generated, each assigned a symbol number (0 to 29). Each PHY symbol corresponds to a block of the line pattern 155c shown in FIG. 183, that is, a pattern in which short lines are arranged at regular intervals (a striped pattern). For example, when the Gray-coded value is 1, a block (that is, a PHY symbol) having a frequency of 220 times the fundamental frequency is generated, as shown in FIG. 183.
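The encoding flow just described can be outlined as the following Python sketch, with the ECC computation left abstract; the names and the bit-string representation are assumptions for illustration.

```python
def encode_phy_symbols(mac_frame_bits: str, ecc_bits: str, n: int) -> list[int]:
    """Append the ECC to the MAC frame, split the PHY body into N-bit
    blocks, and Gray-code each block to obtain the PHY symbol values.
    For PHY version 1 (N=2, 60-bit body) this yields 30 symbols; the
    returned values serve as the frequency indices of the blocks
    (e.g., 1 -> 220 times the fundamental frequency)."""
    phy_body = mac_frame_bits + ecc_bits
    symbols = []
    for i in range(0, len(phy_body), n):
        value = int(phy_body[i : i + n], 2)
        symbols.append(value ^ (value >> 1))  # binary-reflected Gray code
    return symbols
```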
FIG. 188 is a diagram illustrating an example of a transmission image Im3 having PHY symbols.
As shown in FIG. 188, the 30 PHY symbols described above and two header symbols are arranged around the base image Bi3. A header symbol is a PHY symbol that functions as a header. The two header symbols are a header symbol for rotational alignment and a header symbol for specifying the PHY version. The frequency index of these header symbols is -1; that is, as shown in FIG. 183, their frequency is 200 times the fundamental frequency. The header symbol for rotational alignment allows the receiver 200 to recognize the arrangement of the 30 PHY symbols: the receiver 200 recognizes the position of each PHY symbol with reference to the position of this header symbol. For example, the header symbol for rotational alignment is arranged at the upper left corner of the base image Bi3.
The header symbol for specifying the PHY version designates the PHY version. For example, the PHY version is designated by the position of this header symbol relative to the header symbol for rotational alignment. The 30 PHY symbols other than the header symbols are arranged clockwise around the base image Bi3, in ascending order of symbol number, starting to the right of the header symbol for rotational alignment.
FIG. 189 is a diagram for explaining the two PHY versions.
The PHY versions are PHY version 1 and PHY version 2. In PHY version 1, the header symbol for specifying the PHY version is placed immediately to the right of the header symbol for rotational alignment. In PHY version 2, it is not: the header symbol for specifying the PHY version is placed so that the PHY symbol with symbol number "0" lies between the header symbol for rotational alignment and the header symbol for specifying the PHY version. In this way, the header symbol for specifying the PHY version indicates the PHY version by its placement.
In PHY version 1, the number of bits N per PHY symbol is 2, the ECC is 16 bits, and the MAC frame is 44 bits. The PHY body, which consists of the MAC frame and the ECC, is 60 bits. The maximum ID length (the length of ID1) is 34 bits, and ID2 is 5 bits.
In PHY version 2, the number of bits N per PHY symbol is 3, the ECC is 20 bits, and the MAC frame is 70 bits. The PHY body, which consists of the MAC frame and the ECC, is 90 bits. The maximum ID length (the length of ID1) is 62 bits, and ID2 is 3 bits.
FIG. 190 is a diagram for explaining the Gray code.
In PHY version 1, the number of bits N is 2. In this case, in the Gray code conversion shown in FIG. 187, the binary values "00, 01, 10, 11", corresponding to the decimal values "0, 1, 2, 3", are converted into the Gray codes "00, 01, 11, 10".
In PHY version 2, the number of bits N is 3. In this case, in the Gray code conversion shown in FIG. 187, the binary values "000, 001, 010, 011, 100, 101, 110, 111", corresponding to the decimal values "0, 1, 2, 3, 4, 5, 6, 7", are converted into the Gray codes "000, 001, 011, 010, 110, 111, 101, 100".
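Both conversion tables match the standard binary-reflected Gray code, g = n XOR (n >> 1), as this short check illustrates:

```python
for n_bits in (2, 3):  # N = 2 for PHY version 1, N = 3 for PHY version 2
    print([format(n ^ (n >> 1), f"0{n_bits}b") for n in range(2 ** n_bits)])
# ['00', '01', '11', '10']
# ['000', '001', '011', '010', '110', '111', '101', '100']
```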
FIG. 191 is a diagram illustrating an example of decoding processing performed by the receiver 200.
The receiver 200 captures the transmission image Im3 of the transmitter 100 and recognizes the PHY version based on the position of the header symbols (PHY header symbols) included in the line pattern 155c of the captured transmission image Im3 (step S601). The receiver 200 may first determine whether visible light communication is possible and capture the transmission image Im3 when it determines that visible light communication is not possible. In this case, the receiver 200 acquires a captured image by imaging the subject with its image sensor and extracts at least one contour by performing edge detection on the captured image. The receiver 200 then selects, from among the at least one contour, a region having a rectangular contour of at least a predetermined size, or a region having a rounded-rectangle contour of at least a predetermined size, as the selected region. The transmission image Im3, which is the subject, is highly likely to appear in this selected region. Accordingly, in step S601, the receiver 200 recognizes the PHY version based on the position of the header symbols included in the line pattern 155c of the selected region.
When the receiver 200 determines, in the above determination of visible light communication, that visible light communication is possible, it sets the exposure time of the image sensor to a first exposure time when imaging the subject and acquires, by imaging the subject with the first exposure time, a decoding image containing the identification information, as in each of the above embodiments. Specifically, in this case the receiver 200 acquires a decoding image containing a bright line pattern composed of a plurality of bright lines corresponding to the exposure lines of the image sensor, and acquires the visible light signal by decoding the bright line pattern. On the other hand, when the receiver 200 determines that visible light communication is not possible, it sets the exposure time of the image sensor to a second exposure time when imaging the subject and acquires a normal image as the captured image by imaging the subject with the second exposure time. Here, the first exposure time is shorter than the second exposure time.
Next, the receiver 200 restores the ECC-appended MAC frame from the plurality of PHY symbols constituting the line pattern 155c and checks the ECC (step S602). The receiver 200 thereby receives the MAC frame from the transmitter 100. When the receiver 200 confirms that it has received the same MAC frame the designated number of times within the designated time (step S603), it calculates the division number (that is, the packet division number) (step S604). That is, the receiver 200 refers to the table shown in FIG. 186 and derives the division number for the MAC frame from the pair of the ID length and the datapart length in the MAC frame. The division number and the frame ID, which is ID1 of the MAC frame, are thereby decoded. In other words, the receiver 200 acquires the identification information from the line pattern of the selected region described above. Specifically, when the receiver 200 determines, in the above determination of visible light communication, that visible light communication is not possible, it acquires the signal from the line pattern of the normal image when imaging the subject. Here, the visible light signal and this signal carry the same identification information.
Incidentally, a transmitter 100 having the transmission image Im3 may be copied illegitimately. For example, a device such as a smartphone equipped with a camera and a display may impersonate the transmitter 100 having the transmission image Im3. Specifically, the smartphone captures the transmission image Im3 of the transmitter 100 with its camera and displays the captured transmission image Im3 on its display. The smartphone can then, like the transmitter 100, transmit the frame ID to the receiver 200 by displaying the transmission image Im3.
Therefore, the receiver 200 may determine whether a transmission image Im3 displayed on a device such as a smartphone is fraudulent and, when it determines that the transmission image Im3 is fraudulent, prohibit decoding the frame ID from that fraudulent transmission image Im3 or prohibit using that frame ID.
FIG. 192 is a diagram for explaining a method by which the receiver 200 detects a fraudulent transmission image Im3.
The transmission image Im3 is, for example, a quadrangle. If the transmission image Im3 is fraudulent, its quadrangular frame is highly likely to be tilted, within the same plane, relative to the frame of the display showing it. If the transmission image Im3 is legitimate, its quadrangular frame is not tilted within the same plane relative to that display frame.
Likewise, if the transmission image Im3 is fraudulent, its quadrangular frame is highly likely to be tilted in the depth direction relative to the frame of the display showing it; if it is legitimate, its frame is not tilted in the depth direction relative to that display frame.
The receiver 200 detects a fraudulent transmission image Im3 based on these differences between a legitimate transmission image Im3 and a fraudulent one.
Specifically, as shown in (a) of FIG. 192, the receiver 200 recognizes, from an image captured by its camera, the frame of the transmission image Im3 (the dashed quadrangle in (a) of FIG. 192) and the frame of the display, for example of a smartphone, showing that transmission image Im3 (the solid quadrangle in (a) of FIG. 192). Next, for each combination of one of the two diagonals of the frame of the transmission image Im3 and one of the two diagonals of the display frame, the receiver 200 calculates the angle formed by the two diagonals in the combination. The receiver 200 then determines whether the transmission image Im3 is fraudulent by determining whether the angle with the smallest absolute value among the calculated angles is greater than or equal to a first threshold (for example, 5 degrees). In other words, the receiver 200 determines whether the transmission image Im3 is fraudulent according to whether its quadrangular frame is tilted, within the same plane, relative to the display frame. The receiver 200 determines that the transmission image Im3 is fraudulent if the smallest-absolute-value angle is greater than or equal to the first threshold, and that it is legitimate if that angle is less than the first threshold.
Also, as shown in (b) of FIG. 192, the receiver 200 calculates the ratio (a/b) of the two vertically opposed sides of the frame of the transmission image Im3 and the ratio (A/B) of the two vertically opposed sides of the smartphone display frame, and compares the two ratios. Specifically, the receiver 200 divides the smaller of the ratio (a/b) and the ratio (A/B) by the larger. The receiver 200 then determines whether the transmission image Im3 is fraudulent by determining whether the value obtained by this division is greater than or equal to a second threshold (for example, 0.9). In other words, the receiver 200 determines whether the transmission image Im3 is fraudulent according to whether its quadrangular frame is tilted in the depth direction relative to the display frame. The receiver 200 determines that the transmission image Im3 is fraudulent if the value obtained by the division is less than the second threshold, and that it is legitimate if that value is greater than or equal to the second threshold.
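A minimal sketch of these two geometric checks follows, assuming each frame is available as corner coordinates; the helper names are illustrative, and the default thresholds are the example values given above.

```python
import math

def angle_between_deg(p0, p1, q0, q1):
    """Angle in degrees between segment p0-p1 and segment q0-q1."""
    v = (p1[0] - p0[0], p1[1] - p0[1])
    w = (q1[0] - q0[0], q1[1] - q0[1])
    cos = (v[0] * w[0] + v[1] * w[1]) / (math.hypot(*v) * math.hypot(*w))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def is_fraudulent(min_diagonal_angle_deg: float,
                  image_side_ratio: float, display_side_ratio: float,
                  first_threshold: float = 5.0,
                  second_threshold: float = 0.9) -> bool:
    # In-plane tilt check: the smallest diagonal-to-diagonal angle at or
    # above the first threshold indicates a forged transmission image.
    if min_diagonal_angle_deg >= first_threshold:
        return True
    # Depth tilt check: divide the smaller of a/b and A/B by the larger;
    # a result below the second threshold indicates a forged image.
    smaller = min(image_side_ratio, display_side_ratio)
    larger = max(image_side_ratio, display_side_ratio)
    return smaller / larger < second_threshold
```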
The receiver 200 decodes the frame ID from the transmission image Im3 only when the transmission image Im3 is legitimate; if the transmission image Im3 is fraudulent, the receiver 200 prohibits decoding the frame ID from it.
FIG. 193 is a flowchart illustrating an example of decoding processing, including fraud detection of the transmission image Im3, performed by the receiver 200.
First, the receiver 200 captures the transmission image Im3 and detects the frame of the transmission image Im3 (step S611). Next, the receiver 200 performs processing to detect a quadrangular frame enclosing the frame of the transmission image Im3 (step S612). The quadrangular frame is the frame formed by the outer periphery of the quadrangular display of a device such as the smartphone described above. The receiver 200 then determines whether a quadrangular frame was detected by the detection processing in step S612 (step S613). If it determines that no quadrangular frame was detected (No in step S613), the receiver 200 prohibits decoding of the frame ID (step S619).
On the other hand, if it determines that a quadrangular frame was detected (Yes in step S613), the receiver 200 calculates the angle formed by the diagonals of the frame of the transmission image Im3 and the detected quadrangular frame (step S614). The receiver 200 then determines whether this angle is less than the first threshold (step S615). If it determines that the angle is greater than or equal to the first threshold (No in step S615), the receiver 200 prohibits decoding of the frame ID (step S619).
On the other hand, if it determines that the angle is less than the first threshold (Yes in step S615), the receiver 200 performs the division using the ratio (a/b) of two sides of the frame of the transmission image Im3 and the ratio (A/B) of two sides of the quadrangular frame (step S616). The receiver 200 then determines whether the value obtained by the division is less than the second threshold (step S617). If it determines that the obtained value is greater than or equal to the second threshold (No in step S617), the receiver 200 decodes the frame ID (step S618). On the other hand, if it determines that the obtained value is less than the second threshold (Yes in step S617), the receiver 200 prohibits decoding of the frame ID (step S619).
In the above example, the receiver 200 prohibits decoding of the frame ID according to the result of the determination in step S613, S615, or S617. However, the receiver 200 may decode the frame ID first and then execute the above steps. In that case, the receiver 200 prohibits use of the decoded frame ID, or discards it, according to the result of the determination in step S613, S615, or S617.
A prism sticker may also be affixed to the transmission image Im3. In this case, as in the example shown in FIG. 176, the receiver 200 determines whether the pattern or color of the prism sticker on the transmission image Im3 changes as the receiver 200 moves. If the receiver 200 determines that it changes, it judges the transmission image Im3 to be legitimate and decodes the frame ID from it. If the receiver 200 determines that it does not change, it judges the transmission image Im3 to be fraudulent and prohibits decoding the frame ID from it. As above, the receiver 200 may decode the frame ID first and then perform the determination of the change in pattern or color; in that case, if it determines that the pattern or color does not change, the receiver 200 prohibits use of the decoded frame ID or discards it.
The receiver 200 may also determine whether the transmission image Im3 is legitimate by having the user bring the receiver 200 close to the transmission image Im3. For example, the transmitter 100 transmits a visible light signal by lighting the transmission image Im3 and changing its luminance. Accordingly, when the receiver 200 captures the transmission image Im3, it shows on its display a message prompting the user to bring the receiver 200 closer to the transmission image Im3. In response, the user brings the camera (that is, the image sensor) of the receiver 200 closer to the transmission image Im3. At this point, because the amount of light received from the transmission image Im3 increases, the camera of the receiver 200 sets the exposure time of the image sensor to, for example, the minimum. As a result, a striped pattern appears in the image shown on the display when the receiver 200 captures the transmission image Im3. If the receiver 200 supports optical communication, the stripes appear clearly as a bright line pattern. Even if the receiver 200 does not support optical communication, the stripes still appear faintly, though not clearly as a bright line pattern, so whether the transmission image Im3 is legitimate can be determined according to whether the stripes appear. That is, the receiver 200 determines that the transmission image Im3 is legitimate if the striped pattern appears, and that it is fraudulent if it does not.
As above, the receiver 200 may decode the frame ID first and then perform the striped-pattern determination; in that case, if it determines that the striped pattern does not appear, the receiver 200 prohibits use of the decoded frame ID or discards it.
(Modification)
The receiver 200 in this embodiment may be a display device having the functions of the receiver 200 in Embodiment 9. That is, the display device determines whether visible light communication is possible and, if it is, performs the processing related to visible light or the light ID in the same manner as the receiver 200 in each of the above embodiments, including Embodiment 9. If visible light communication is not possible, the display device performs the processing related to the transmission image or frame ID described above. Visible light communication here is a communication scheme in which a signal is transmitted by luminance changes of a subject and received by decoding the bright line pattern, corresponding to the exposure lines of an image sensor, that is obtained when the image sensor captures the subject.
FIG. 194A is a flowchart showing a display method according to this modification.
The display method according to one aspect of the present invention is a method for displaying images and includes steps SG1 to SG4. That is, the display device, which is the receiver 200 described above, first determines whether visible light communication is possible (step SG4). When it determines that visible light communication is possible (Yes in step SG4), the display device acquires a visible light signal as identification information (that is, a light ID) by imaging a subject with its image sensor (step SG1). Next, the display device displays a first moving image associated with the light ID (step SG2). When the display device then receives an operation of sliding the first moving image, it displays, after the first moving image, a second moving image associated with the light ID (step SG3).
FIG. 194B is a block diagram showing the configuration of a display device according to this modification.
The display device G10 according to one aspect of the present invention is a device that displays images and includes a determination unit G13, an acquisition unit G11, and a display unit G12. The display device G10 is the receiver 200 described above. The determination unit G13 determines whether visible light communication is possible. When the determination unit G13 determines that visible light communication is possible, the acquisition unit G11 acquires a visible light signal as identification information (that is, a light ID) by imaging a subject with an image sensor. The display unit G12 then displays a first moving image associated with the light ID and, upon receiving an operation of sliding the first moving image, displays a second moving image associated with the light ID after the first moving image.
For example, the first moving image and the second moving image are the first AR image P46 and the second AR image P46c shown in FIG. 162, respectively. In the display method and display device G10 shown in FIGS. 194A and 194B, when an operation of sliding the first moving image, that is, a swipe, is received, the second moving image associated with the identification information is displayed after the first moving image. Images useful to the user can therefore be displayed easily. Moreover, because it is determined in advance whether visible light communication is possible, wasteful processing that attempts to acquire a visible light signal even when acquisition is impossible can be omitted, reducing the processing load.
Here, when the display device G10 determines, in the determination of visible light communication, that visible light communication is not possible, it may acquire identification information (that is, a frame ID) from the transmission image Im3. In this case, the display device G10 acquires a captured image by imaging the subject with its image sensor and extracts at least one contour by performing edge detection on the captured image. Next, the display device G10 selects, from among the at least one contour, a region having a rectangular contour of at least a predetermined size, or a region having a rounded-rectangle contour of at least a predetermined size, as the selected region. The display device G10 then acquires the identification information from the line pattern of the selected region. A rounded rectangle here is a quadrangle whose four corners each have an arc-shaped outer periphery.
As a result, for example, the transmission images shown in FIGS. 183 and 188 are captured as subjects, the region of the transmission image is selected as the selected region, and the identification information is acquired from the line pattern of the transmission image. Identification information can therefore be acquired appropriately even when visible light communication is impossible.
When the display device G10 determines, in the determination of visible light communication, that visible light communication is possible, it sets the exposure time of the image sensor to a first exposure time when imaging the subject and acquires, by imaging the subject with the first exposure time, a decoding image containing the identification information. When the display device G10 determines that visible light communication is not possible, it sets the exposure time of the image sensor to a second exposure time when imaging the subject and acquires a normal image as the captured image by imaging the subject with the second exposure time. Here, the first exposure time is shorter than the second exposure time.
By switching the exposure time in this way, it is possible to switch appropriately between acquiring identification information by visible light communication and acquiring it by capturing the transmission image.
The subject described above is rectangular as viewed from the image sensor; its central region changes in luminance to transmit a visible light signal, and a barcode-like line pattern is arranged along its periphery. When the display device G10 determines, in the determination of visible light communication, that visible light communication is possible, it acquires, when imaging the subject, a decoding image containing a bright line pattern composed of a plurality of bright lines corresponding to the exposure lines of the image sensor, and acquires the visible light signal by decoding the bright line pattern. The visible light signal is, for example, a light ID. When the display device G10 determines that visible light communication is not possible, it acquires, when imaging the subject, the signal from the line pattern of the normal image. Here, the visible light signal and this signal carry the same identification information.
Because the identification information indicated by the visible light signal and the identification information indicated by the line pattern signal are identical, the identification information indicated by the visible light signal can be acquired appropriately even when visible light communication is impossible.
FIG. 194C is a flowchart showing a communication method according to this modification.
The communication method according to one aspect of the present invention is a communication method using a terminal equipped with an image sensor and includes steps SG11 to SG13. That is, the terminal, which is the receiver 200 described above, first determines whether it can perform visible light communication (step SG11). When the terminal determines that it can perform visible light communication (Yes in step SG11), it executes the processing of step SG12: the terminal acquires a decoding image by imaging, with the image sensor, a subject whose luminance changes, and acquires, from the striped pattern appearing in the decoding image, the first identification information transmitted by the subject (step SG12). On the other hand, when the terminal determines in step SG11 that it cannot perform visible light communication (No in step SG11), it executes the processing of step SG13: the terminal acquires a captured image by imaging the subject with the image sensor, extracts at least one contour by performing edge detection on the captured image, identifies a predetermined specific region from among the at least one contour, and acquires, from the line pattern of the specific region, the second identification information transmitted by the subject (step SG13). The first identification information is, for example, a light ID, and the second identification information is, for example, an image ID or a frame ID.
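The branching in steps SG11 to SG13 can be outlined as follows; the decoding routines are left as stubs, and every name here is a placeholder rather than part of the specification.

```python
def receive_identification(terminal) -> str:
    """Outline of steps SG11 to SG13 for a terminal with an image sensor."""
    if terminal.can_do_visible_light():                 # SG11: capability check
        decoding_image = terminal.capture(short_exposure=True)
        return decode_stripe_pattern(decoding_image)    # SG12: first ID
    captured_image = terminal.capture(short_exposure=False)
    contours = extract_contours(captured_image)         # edge detection
    region = pick_specific_region(contours)             # rectangular contour
    return decode_line_pattern(region)                  # SG13: second ID

def decode_stripe_pattern(image):
    raise NotImplementedError  # bright line decoding, see step SG12

def extract_contours(image):
    raise NotImplementedError

def pick_specific_region(contours):
    raise NotImplementedError

def decode_line_pattern(region):
    raise NotImplementedError  # PHY symbol decoding, see step SG13
```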
FIG. 194D is a block diagram showing the configuration of a communication device according to this modification.
The communication device G20 according to one aspect of the present invention is a communication device using a terminal equipped with an image sensor, and includes a determination unit G21, a first acquisition unit G22, and a second acquisition unit G23.
The determination unit G21 determines whether the terminal can perform visible light communication.
When the determination unit G21 determines that the terminal can perform visible light communication, the first acquisition unit G22 acquires a decoding image by imaging, with the image sensor, a subject whose luminance changes, and acquires, from the striped pattern appearing in the decoding image, the first identification information transmitted by the subject.
When the determination unit G21 determines that the terminal cannot perform visible light communication, the second acquisition unit G23 acquires a captured image by imaging the subject with the image sensor, extracts at least one contour by performing edge detection on the captured image, identifies a predetermined specific region from among the at least one contour, and acquires, from the line pattern of the specific region, the second identification information transmitted by the subject.
The terminal may be included in the communication device G20, may be external to it, or may itself include the communication device G20. That is, each step of the flowchart shown in FIG. 194C may be executed by the terminal or by the communication device G20.
Thus, a terminal such as the receiver 200 can acquire the first identification information or the second identification information from a subject such as a transmitter regardless of whether it can perform visible light communication. That is, when the terminal can perform visible light communication, it acquires, for example, a light ID from the subject as the first identification information. Even when the terminal cannot perform visible light communication, it can acquire, for example, an image ID or a frame ID from the subject as the second identification information. Specifically, for example, the transmission images shown in FIGS. 183 and 188 are captured as subjects, the region of the transmission image is identified as the specific region (that is, the selected region), and the second identification information is acquired from the line pattern of the transmission image. The second identification information can therefore be acquired appropriately even when visible light communication is impossible.
In identifying the specific region described above, the terminal may identify, as the specific region, a region having a rectangular contour of at least a predetermined size, or a region having a rounded-rectangle contour of at least a predetermined size.
In this way, for example, as shown in FIG. 179, a rectangular or rounded-rectangle region can be identified appropriately as the specific region.
In the determination of visible light communication described above, the terminal may determine that visible light communication is possible when it identifies itself as a terminal capable of changing the exposure time to a predetermined value or less, and determine that visible light communication is not possible when it identifies itself as a terminal incapable of changing the exposure time to the predetermined value or less.
In this way, for example, as shown in FIG. 180, it is possible to appropriately determine whether visible light communication can be performed.
When the terminal determines, in the determination of visible light communication, that it can perform visible light communication, it may set the exposure time of the image sensor to a first exposure time when imaging the subject and acquire the decoding image by imaging the subject with the first exposure time. Furthermore, when the terminal determines that it cannot perform visible light communication, it may set the exposure time of the image sensor to a second exposure time when imaging the subject and acquire the captured image by imaging the subject with the second exposure time. Here, the first exposure time is shorter than the second exposure time.
In this way, a decoding image having a bright line pattern region can be acquired by imaging with the first exposure time, and the first identification information can be acquired appropriately by decoding the bright line pattern region. Furthermore, a normally captured image can be acquired as the captured image by imaging with the second exposure time, and the second identification information can be acquired appropriately from the line pattern appearing in it. The terminal can thus acquire whichever of the first and second identification information suits it, by using the first and second exposure times selectively.
The subject is rectangular as viewed from the image sensor; its central region changes in luminance to transmit the first identification information, and a barcode-like line pattern is arranged along its periphery. When the terminal determines, in the determination of visible light communication, that it can perform visible light communication, it acquires, when imaging the subject, a decoding image containing a bright line pattern composed of a plurality of bright lines corresponding to the exposure lines of the image sensor, and acquires the first identification information by decoding the bright line pattern. Furthermore, when the terminal determines that it cannot perform visible light communication, it may acquire, when imaging the subject, the second identification information from the line pattern of the captured image.
 これにより、中心領域が輝度変化する被写体から、第1の識別情報および第2の識別情報を適切に取得することができる。 Thus, the first identification information and the second identification information can be appropriately acquired from the subject whose central region changes in luminance.
 また、復号用画像から得られる第1の識別情報と、ラインパターンから得られる第2の識別情報は、同一の情報であってもよい。 Also, the first identification information obtained from the decoding image and the second identification information obtained from the line pattern may be the same information.
 これにより、可視光通信が可能な端末でも、可視光通信が不可能な端末でも、その被写体から同じ情報を取得することができる。 Thus, the same information can be acquired from the subject both in a terminal capable of visible light communication and a terminal incapable of visible light communication.
FIG. 194E is a block diagram showing the configuration of the transmitter according to Embodiment 10 and its modifications.
The transmitter G30 corresponds to the transmitter 100 described above. The transmitter G30 includes a light source G31, a microcontroller G32, and an illumination plate G33. The light source G31 irradiates the illumination plate G33 with light from its back side. The microcontroller G32 changes the luminance of the light source G31. The illumination plate G33 is a plate that transmits the light from the light source G31, that is, a light-transmitting plate, and is, for example, rectangular in shape.
The microcontroller G32 changes the luminance of the light source G31, thereby transmitting the first identification information from the light source G31 through the illumination plate G33. A barcode-like line pattern G34 is arranged along the periphery of the front side of the illumination plate G33, and the second identification information is encoded in the line pattern G34. The first identification information and the second identification information are the same information.
Thereby, the same information can be transmitted both to terminals that can perform visible light communication and to terminals that cannot.
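As a rough illustration of how the microcontroller G32 might key the light source, the sketch below encodes an identifier as an on/off luminance pattern. The Manchester-style coding is only an assumed example, not the modulation scheme specified here; it is chosen because it keeps the average luminance constant, which matters for a light source that doubles as illumination.

    import itertools

    def luminance_pattern(identifier: int, bits: int = 16):
        """Yield an endless sequence of on/off slots encoding `identifier`."""
        payload = [(identifier >> i) & 1 for i in reversed(range(bits))]
        for bit in itertools.cycle(payload):
            # Manchester-style: 1 -> (on, off) and 0 -> (off, on), so every
            # bit contributes the same average brightness.
            yield from ((1, 0) if bit else (0, 1))

    gen = luminance_pattern(0xA5A5)
    print([next(gen) for _ in range(16)])  # the first 8 encoded bits as on/off slots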
In the above embodiment, each component may be configured by dedicated hardware, or may be realized by executing a software program suitable for the component. Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory. For example, the program causes a computer to execute the display methods shown by the flowcharts of FIGS. 191, 193, 194A, and 194C.
(Embodiment 11)
The server management method in this embodiment is a method capable of providing appropriate services to the user of a mobile terminal.
FIG. 195 is a diagram illustrating an example of the configuration of a communication system including a server in this embodiment.
This communication system includes a transmitter 100, a receiver 200, a first server 301, a second server 302, and a store system 310. The transmitter 100 and the receiver 200 in this embodiment may have the same functions as the transmitter 100 and the receiver 200 in each of the above embodiments. The transmitter 100 is configured, for example, as store signage, and transmits a light ID as a visible light signal by changing its luminance. The store system 310 has at least one computer for managing the store that has the transmitter 100. The receiver 200 is a mobile terminal configured, for example, as a smartphone with a camera and a display.
For example, the user of the receiver 200 operates the receiver 200 to perform a reservation process with the store system 310 in advance. This reservation process registers user information, such as the user's name, and the menu the user will order with the store system 310 before the user visits the store. Note that the user need not have performed such a reservation process.
When the user visits the store, the receiver 200 images the transmitter 100 that serves as the store's signage. The receiver 200 thereby receives a light ID from the transmitter 100 by visible light communication, and transmits that light ID to the second server 302 via wireless communication. On receiving the light ID from the receiver 200, the second server 302 transmits the store information associated with that light ID to the receiver 200 via wireless communication. The store information is information about the store displaying the signage.
On receiving the store information from the second server 302, the receiver 200 transmits the user information and the store information to the first server 301 via wireless communication. On receiving the user information and the store information, the first server 301 queries the store system 310 indicated by the store information to determine whether the reservation process by the user indicated by the user information has been completed.
If the first server 301 determines that the reservation process has been completed, it notifies the store system 310 via wireless communication that the user has arrived at the store. If it determines that the reservation process has not been completed, the first server 301 transmits the store's menu list to the receiver 200 via wireless communication. On receiving the menu list, the receiver 200 displays it on the display and accepts a menu selection from the user. The receiver 200 then notifies the first server 301, via wireless communication, of the menu selected by the user as the selected menu.
On receiving the notification of the selected menu from the receiver 200, the first server 301 notifies the store system 310 of that selected menu via wireless communication.
FIG. 196 is a flowchart showing a management method performed by the first server 301.
First, the first server 301 receives store information from the mobile terminal, which is the receiver 200 (step S621). Next, the first server 301 determines whether the reservation process for the store indicated by the store information has been completed (step S622). If the first server 301 determines that the reservation process has been completed (Yes in step S622), it notifies the store system 310 of the arrival of the mobile terminal's user at the store (step S623). If the first server 301 determines that the reservation process has not been completed (No in step S622), it notifies the mobile terminal of the store's menu list (step S624). Furthermore, when the mobile terminal notifies the first server 301 of the selected menu, that is, the menu selected from the menu list, the first server 301 notifies the store system 310 of that selected menu (step S625).
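The flow of steps S621 to S625 can be summarized in a few lines of Python. This is a sketch only; StoreSystem, its methods, and the example menu are hypothetical stand-ins for the real store system and the messaging between the devices.

    class StoreSystem:
        def __init__(self, reservations):
            self.reservations = set(reservations)       # users who reserved in advance
            self.menu = ["coffee", "curry", "ramen"]    # example menu list

        def reservation_completed(self, user):
            return user in self.reservations

        def notify_arrival(self, user):
            print(f"store: {user} has arrived, start preparing the order")

        def notify_selection(self, user, menu):
            print(f"store: {user} ordered {menu}")

    def first_server_flow(store, user, choose):
        # S621: store information has been received from the mobile terminal
        if store.reservation_completed(user):            # S622
            store.notify_arrival(user)                   # S623
        else:
            selection = choose(store.menu)               # S624: menu list to terminal
            store.notify_selection(user, selection)      # S625

    store = StoreSystem(reservations=["alice"])
    first_server_flow(store, "alice", choose=lambda menu: menu[0])
    first_server_flow(store, "bob", choose=lambda menu: menu[1])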
Thus, in the management method of the server (that is, the first server 301) in this embodiment, the server receives store information from a mobile terminal, determines based on the store information whether the reservation process for a store menu by the user of the mobile terminal has been completed, and, if the reservation process has been completed, notifies the store system that the user of the mobile terminal has arrived at the store. In this management method, if the reservation process has not been completed, the server notifies the mobile terminal of the store's menu list and, when it accepts a menu selection from the mobile terminal, notifies the store system of the selected menu. Also in this management method, the mobile terminal acquires a visible light signal as identification information by imaging a subject installed in the store, transmits the identification information to another server, receives from that other server the store information corresponding to the identification information, and transmits the received store information to the server.
Thereby, if the user of the mobile terminal has performed the reservation process in advance, the store can start cooking the ordered menu as soon as the user arrives, and the user can eat freshly prepared dishes. Even without the reservation process, the user can select a menu from the menu list and place an order with the store immediately upon arrival.
Note that the receiver 200 may transmit the identification information (that is, the light ID) to the first server 301 instead of the store information, and the first server 301 may check, based on that identification information, whether the reservation process has been completed. In this case, the identification information is transmitted from the mobile terminal to the first server 301 without being transmitted to the second server 302.
(Embodiment 12)
In this embodiment, as in the above embodiments, a communication method and a communication apparatus using a light ID are described. The transmitter and the receiver in this embodiment may have the same functions and configurations as the transmitter (or transmission device) and the receiver (or reception device) in each of the above embodiments.
FIG. 197 is a diagram illustrating an illumination system in this embodiment.
As shown in (a) of FIG. 197, for example, this illumination system includes a plurality of first illumination devices 100p and a plurality of second illumination devices 100q. Such an illumination system is attached, for example, to the ceiling of a large store. The first illumination devices 100p and the second illumination devices 100q are each elongated and arranged in a line along their longitudinal direction, with first illumination devices 100p and second illumination devices 100q alternating.
Each first illumination device 100p is configured as the transmitter 100 in each of the above embodiments; it emits illumination light and transmits a visible light signal as a light ID. Each second illumination device 100q emits illumination light and transmits a dummy signal; that is, by changing its luminance periodically, it emits illumination light while transmitting a dummy signal. When a receiver images the illumination system in the visible light communication mode, a bright line pattern region appears in the resulting decoding image (the visible light communication image or bright line image described above) in the area corresponding to each first illumination device 100p, but no bright line pattern region appears in the area corresponding to a second illumination device 100q.
Thus, in the illumination system shown in (a) of FIG. 197, a second illumination device 100q is arranged between each two adjacent first illumination devices 100p. A receiver receiving the visible light signals can therefore properly identify the edges of the bright line pattern regions in the decoding image, and can distinguish and receive the visible light signal transmitted from each first illumination device 100p.
The average luminance of a second illumination device 100q while it emits light (that is, while transmitting the dummy signal) is equal to the average luminance of a first illumination device 100p while it emits light (that is, while transmitting the visible light signal). Variation in brightness among the illumination devices in the illumination system is therefore suppressed. The brightness of each illumination device here is the brightness as perceived by people, so people in the store are less likely to notice brightness variation in the illumination system. Moreover, when the luminance change of the second illumination device 100q is performed by switching between on and off, its average luminance can be adjusted by the on/off duty ratio even if the second illumination device 100q has no dimming function.
As shown in (b) of FIG. 197, for example, the illumination system may instead include a plurality of first illumination devices 100p without any second illumination device 100q. In this case, the first illumination devices 100p are arranged in a line along their longitudinal direction and spaced apart from each other.
Therefore, with the illumination system shown in (b) of FIG. 197, as with the illumination system shown in (a) of FIG. 197, a receiver receiving the visible light signals can properly identify the edges of the bright line pattern regions in the decoding image. As a result, the receiver can distinguish and receive the visible light signal transmitted from each first illumination device 100p.
Alternatively, the first illumination devices 100p may be arranged adjacent to each other, with the boundary portion between each two adjacent first illumination devices 100p covered by a cover that blocks the light emitted from the boundary portion. Or each first illumination device 100p may have a structure in which no light is emitted from either end portion in its longitudinal direction.
With the illumination systems shown in (a) and (b) of FIG. 197, the receiver can calculate its distance from a first illumination device 100p using the longitudinal length of that device, and can therefore estimate its own position accurately.
FIG. 198 is a diagram showing an example of the arrangement of illumination devices and a decoding image.
For example, as shown in (a) of FIG. 198, a first illumination device 100p and a second illumination device 100q are arranged adjacent to each other. Here, the second illumination device 100q transmits the dummy signal by switching on and off at a period of 100 μs or less.
By imaging the first illumination device 100p and the second illumination device 100q, the receiver captures the decoding image shown in (b) of FIG. 198. Here, the on/off switching period of the second illumination device 100q is too short compared with the exposure time of the receiver, so the luminance of the region of the decoding image corresponding to the second illumination device 100q (hereinafter, the dummy region) is uniform. The luminance of this dummy region is higher than that of the background region outside the first and second illumination devices 100p and 100q, and lower than the high luminance in the region corresponding to the first illumination device 100p, that is, the bright line pattern region.
Therefore, the receiver can distinguish the illumination device corresponding to the dummy region from the illumination device corresponding to the bright line pattern region.
FIG. 199 is a diagram showing another example of the arrangement of illumination devices and a decoding image.
For example, as shown in (a) of FIG. 199, a first illumination device 100p and a second illumination device 100q are arranged adjacent to each other. Here, the second illumination device 100q transmits the dummy signal by switching on and off at a period exceeding 100 μs.
By imaging the first illumination device 100p and the second illumination device 100q, the receiver captures the decoding image shown in (b) of FIG. 199. Here, the on/off switching period of the second illumination device 100q is long compared with the exposure time of the receiver, so the luminance of the dummy region in the decoding image is not uniform; bright areas and dark areas appear alternately in the dummy region. For example, when a dark area wider than a predefined maximum width appears in the decoding image, the receiver can recognize that a dummy region exists in the range including that dark area.
Therefore, the receiver can distinguish the illumination device corresponding to the dummy region from the illumination device corresponding to the bright line pattern region.
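A receiver could separate the three kinds of regions with simple luminance statistics, for example as below. The thresholds and the per-line luminance inputs are invented for the sketch; a real implementation would operate on the decoding image itself.

    def classify_region(rows):
        """rows: mean luminance (0-255) of each exposure line inside one region."""
        mean = sum(rows) / len(rows)
        spread = max(rows) - min(rows)
        if spread > 100:
            return "bright line pattern region"   # alternating stripes from a light ID
        if mean > 80:
            return "dummy region"                 # uniform, brighter than background
        return "background"

    print(classify_region([220, 30, 210, 25, 215, 35]))     # striped -> light ID
    print(classify_region([120, 118, 122, 119, 121, 120]))  # uniform -> dummy
    print(classify_region([12, 10, 14, 11, 13, 12]))        # dark -> background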
FIG. 200 is a diagram for explaining position estimation using the first illumination device 100p.
As described above, the receiver 200 can estimate its own position by imaging the first illumination device 100p.
However, the receiver 200 may notify the user of an error when the height above the floor at the estimated position exceeds an allowable range. For example, the receiver 200 identifies the position and orientation of the first illumination device 100p based on the longitudinal length of the first illumination device 100p shown in the decoding image or the normal captured image, the output of the acceleration sensor, and so on. Furthermore, using the height from the floor to the ceiling where the first illumination device 100p is installed, the receiver 200 identifies the height above the floor at the position of the receiver 200. The receiver 200 then reports an error if the height at its position exceeds the allowable range. Note that the position and orientation of the first illumination device 100p described above are relative to the receiver 200; identifying the position and orientation of the first illumination device 100p can therefore be said to identify the position and orientation of the receiver 200.
FIG. 201 is a flowchart showing the processing operation of the receiver 200.
First, as shown in (a) of FIG. 201, the receiver 200 estimates its own position (step S231). Next, the receiver 200 derives the height from the floor to the ceiling (step S232). For example, the receiver 200 derives the floor-to-ceiling height by reading a height stored in memory, or by receiving information transmitted by radio waves from nearby transmitters.
Next, based on the position of the receiver 200 estimated in step S231 and the floor-to-ceiling height derived in step S232, the receiver 200 determines whether the height from the floor to the receiver 200 is within an allowable range (step S233). If the receiver 200 determines that the height is within the allowable range (Yes in step S233), it displays the position and orientation of the receiver 200 (step S234). If it determines that the height is not within the allowable range (No in step S233), it displays only the orientation of the receiver 200 (step S235).
Alternatively, as shown in (b) of FIG. 201, the receiver 200 may execute step S236 instead of step S235. That is, if the receiver 200 determines that the height is not within the allowable range (No in step S233), it notifies the user that an error has occurred in the position estimation (step S236).
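The check of steps S233 to S236 amounts to a range test, sketched below with invented numbers; in practice the floor-to-ceiling height would come from memory or radio as described above, and the allowable range would reflect plausible heights for a handheld receiver.

    CEILING_HEIGHT_M = 2.7        # floor-to-ceiling height (S232)
    ALLOWED_RANGE_M = (0.5, 2.0)  # assumed plausible handheld heights

    def height_above_floor(distance_below_ceiling_m):
        # Position estimation gives the receiver's distance below the luminaire.
        return CEILING_HEIGHT_M - distance_below_ceiling_m

    def report(distance_below_ceiling_m, orientation_deg):
        h = height_above_floor(distance_below_ceiling_m)
        lo, hi = ALLOWED_RANGE_M
        if lo <= h <= hi:                                                 # S233
            print(f"height {h:.1f} m OK: show position and orientation")  # S234
        else:
            # S235 (orientation only) or S236 (error notification)
            print(f"height {h:.1f} m out of range: show orientation only")

    report(1.4, 70)   # 1.3 m above the floor: plausible
    report(0.1, 70)   # 2.6 m above the floor: flagged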
FIG. 202 is a diagram illustrating an example of a communication system in this embodiment.
The communication system includes a receiver 200 and a server 300. The receiver 200 receives position information or a transmitter ID transmitted by GPS, radio waves, or a visible light signal. The position information is, for example, information indicating the position of the transmitter or the receiver, and the transmitter ID is identification information for identifying the transmitter. The receiver 200 then transmits the received position information or transmitter ID to the server 300, and the server 300 transmits a map or content associated with that position information or transmitter ID to the receiver 200.
FIG. 203 is a diagram for explaining self-position estimation processing by the receiver 200 in this embodiment.
The receiver 200 performs self-position estimation at every predetermined period. This self-position estimation consists of a plurality of processes. The above period is, for example, the frame period of imaging by the receiver 200.
For example, the receiver 200 acquires the result of the self-position estimation performed in the previous frame period as its immediately preceding self-position. The receiver 200 then estimates the distance and direction of movement from that immediately preceding self-position based on outputs from the acceleration sensor, the gyro sensor, and so on, and performs self-position estimation for the current frame period by updating the immediately preceding self-position according to the estimated distance and direction. This yields a first self-position estimation result. Meanwhile, the receiver 200 performs self-position estimation for the current frame period based on at least one of radio waves, a visible light signal, and outputs from the acceleration sensor and the direction sensor. This yields a second self-position estimation result. The receiver 200 then adjusts the second self-position estimation result based on the first self-position estimation result, using a Kalman filter or the like. This yields the final self-position estimation result for the current frame period.
FIG. 204 is a flowchart showing self-position estimation by the receiver 200 in this embodiment.
First, the receiver 200 estimates its position based on radio wave strength and the like (step S241). This yields an estimated position A of the receiver 200.
Next, the receiver 200 measures the distance and direction of its movement based on outputs from the acceleration sensor, the gyro sensor, the direction sensor, and so on (step S242).
Next, the receiver 200 receives a visible light signal and measures its position based on the received visible light signal and outputs from the acceleration sensor, the direction sensor, and so on (step S243).
Then, using the moving distance and moving direction measured in step S242 and the position of the receiver 200 measured in step S243, the receiver 200 updates the estimated position A obtained in step S241 (step S243). An algorithm such as a Kalman filter is used for updating the estimated position A. The processing from step S242 onward is then executed repeatedly.
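A heavily simplified one-dimensional version of this fusion is sketched below: dead reckoning from the motion sensors predicts the position, and each visible light fix corrects it with a Kalman-style blend. A real receiver would track 2-D or 3-D state with a full covariance matrix; the variances here are illustrative.

    def fuse(prev_pos, prev_var, move, move_var, vlc_pos=None, vlc_var=0.01):
        pos, var = prev_pos + move, prev_var + move_var   # predict from motion sensors
        if vlc_pos is not None:                           # correct with a light-ID fix
            gain = var / (var + vlc_var)                  # Kalman gain
            pos, var = pos + gain * (vlc_pos - pos), (1 - gain) * var
        return pos, var

    pos, var = 0.0, 1.0   # estimated position A from radio strength (S241)
    for step, fix in [(0.5, None), (0.5, None), (0.5, 1.62)]:
        pos, var = fuse(pos, var, move=step, move_var=0.05, vlc_pos=fix)
        print(round(pos, 3), round(var, 3))   # uncertainty grows, then collapses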
FIG. 205 is a flowchart showing an outline of the self-position estimation processing of the receiver 200 in this embodiment.
First, the receiver 200 estimates its approximate position based on the strength of radio waves such as Bluetooth (registered trademark) (step S251). Next, the receiver 200 estimates its detailed position using a visible light signal or the like (step S252). This allows the self-position to be estimated within an error range of, for example, ±10 cm.
Note that the number of light IDs that can be assigned to transmitters is small, and a unique light ID cannot be given to every transmitter in the world. In this embodiment, however, as in the processing of step S251 described above, the area containing the transmitter is narrowed down based on the strength of its radio waves. If no two transmitters in that area share the same light ID, the receiver 200 can identify a single transmitter in the area by the processing of step S252, that is, based on the light ID.
For each transmitter, the server stores the transmitter's light ID, position information indicating the transmitter's position, and the transmitter's radio wave ID in association with one another.
FIG. 206 is a diagram showing the relationship between radio wave IDs and light IDs in this embodiment.
For example, the radio wave ID contains the same information as the light ID. The radio wave ID is identification information used for Bluetooth (registered trademark), Wi-Fi (registered trademark), or the like. That is, the transmitter transmits the radio wave ID by radio and transmits, as the light ID, information matching at least part of the radio wave ID. For example, the lower several bits of the radio wave ID match the light ID. This allows the server to manage radio wave IDs and light IDs in a unified way.
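Under that scheme, deriving the light ID from the radio wave ID is a masking operation, as sketched below; the 16-bit width is an assumed example, not a value given in this disclosure.

    LIGHT_ID_BITS = 16   # assumed width of the light ID

    def light_id_from_radio_id(radio_id: int) -> int:
        return radio_id & ((1 << LIGHT_ID_BITS) - 1)   # keep only the lower bits

    print(hex(light_id_from_radio_id(0x00C34F2A)))  # -> 0x4f2a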
The receiver 200 can also check, via radio waves, whether a plurality of transmitters transmitting the same light ID exist near the receiver 200. When the receiver 200 confirms that such transmitters exist, it may change the light ID of one of them via radio waves.
FIG. 207 is a diagram for explaining an example of imaging by the receiver 200 in this embodiment.
For example, as shown in (a) of FIG. 207, the receiver 200 images the first illumination device 100p in the visible light communication mode at a position A, and also images it in the visible light communication mode at a position B. Here, the positions A and B are point-symmetric with respect to the first illumination device 100p. In this case, as shown in (b) of FIG. 207, the receiver 200 generates the same decoding image at position A as at position B. The receiver 200 therefore cannot tell from the decoding image of (b) of FIG. 207 alone whether it is at position A or position B. The receiver 200 may then present both position A and position B as candidates of the self-position estimation. The receiver 200 may also narrow the candidates down to one based on its past position, its direction of movement from that position, and so on. Moreover, when two or more illumination devices appear in the decoding image, the decoding image obtained at position A differs from the one obtained at position B, so in this case the candidate positions of the receiver 200 can be narrowed down to one.
Note that the receiver 200 can narrow positions A and B down to one position based on the output of the direction sensor. Even so, when the reliability of the direction sensor is low, the receiver 200 may present both positions A and B as candidates for its position.
FIG. 208 is a diagram for explaining another example of imaging by the receiver 200 in this embodiment.
For example, a mirror 901 may be arranged around the first illumination device 100p. This makes the decoding image obtained by imaging at position A differ from the decoding image obtained by imaging at position B. That is, it suppresses situations in which self-position estimation based on the decoding image cannot narrow the position of the receiver 200 down to one.
FIG. 209 is a diagram for explaining the cameras used by the receiver 200 in this embodiment.
For example, the receiver 200 includes a plurality of cameras and selects, from among them, the camera to be used for visible light communication. Specifically, the receiver 200 identifies its own orientation based on output data from the acceleration sensor and selects an upward-facing camera from the plurality of cameras. Alternatively, based on the orientation of the receiver 200 and the angles of view of the cameras, the receiver 200 may select one or more cameras that can capture images above the horizontal direction. When it has selected a plurality of cameras, the receiver 200 may further select, from among them, the one camera whose angle of view contains the widest upward-facing area. The receiver 200 also need not perform the processing for self-position estimation or light ID reception on a partial region of the image obtained by the camera. That partial region may be a region below the horizontal direction, or a region further below a direction inclined downward from the horizontal by a predetermined angle.
This reduces the computational load of the receiver 200.
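The camera choice can be reduced to comparing how much of each camera's field of view lies above the horizon, as in the sketch below. The geometry is simplified to a single elevation angle per camera, and the camera names and angles are invented for illustration.

    def pick_camera(cameras):
        """cameras: (name, elevation_deg, half_fov_deg) tuples, where elevation
        is the optical-axis angle above the horizon from the accelerometer."""
        def upward_extent(cam):
            _, elevation, half_fov = cam
            top, bottom = elevation + half_fov, elevation - half_fov
            return max(0.0, top - max(bottom, 0.0))   # degrees of FOV above horizon
        candidates = [c for c in cameras if upward_extent(c) > 0]
        return max(candidates, key=upward_extent)[0] if candidates else None

    print(pick_camera([("front", 80, 35), ("rear", -80, 35), ("wide", 10, 50)]))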
FIG. 210 is a flowchart showing an example of processing in which the receiver 200 in this embodiment causes a transmitter to change its visible light signal.
First, the receiver 200 receives a visible light signal A (step S261).
Next, the receiver 200 transmits by radio the command "if you are transmitting visible light signal A, change it to visible light signal B" (step S262).
The transmitter 100 then receives the command transmitted from the receiver in step S262. Based on that command, the transmitter, which is the first illumination device 100p, changes its visible light signal from A to B if the visible light signal it transmits is set to the visible light signal A (step S263).
FIG. 211 is a flowchart showing another example of processing in which the receiver 200 in this embodiment causes a transmitter to change its visible light signal.
First, the receiver 200 receives a visible light signal A (step S271).
Next, by receiving surrounding radio waves, the receiver 200 searches for transmitters capable of radio communication and creates a list of those transmitters (step S272).
Next, the receiver 200 sorts the transmitters in the created list into a predetermined order (step S273), for example, in descending order of radio wave strength, in random order, or in ascending order of transmitter ID.
Next, the receiver 200 commands the first transmitter in the list, by radio, to transmit a visible light signal B for a predetermined time (step S274). The receiver 200 then determines whether the visible light signal A received in step S271 has changed to the visible light signal B (step S275). If the receiver 200 determines that it has changed (Y in step S275), it commands the first transmitter in the list to continue transmitting the visible light signal B (step S276).
If the receiver 200 determines that the visible light signal A has not changed to the visible light signal B (N in step S275), it commands the first transmitter in the list to restore the visible light signal to the signal before the change (step S277). The receiver 200 then deletes the first transmitter from the list and moves each subsequent transmitter up by one (step S278), and repeats the processing from step S274.
Through such processing, the receiver 200 can properly identify the transmitter transmitting the visible light signal it is currently receiving, and cause that transmitter to change its visible light signal.
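The search of steps S272 to S278 is essentially a linear probe over the candidate list, sketched below with stand-in transmitter objects; the waiting time of step S274 and the radio layer itself are omitted.

    class Tx:
        def __init__(self, name):
            self.name, self.signal = name, "A"
        def set_signal(self, s):
            self.signal = s

    def find_my_transmitter(candidates, observe_signal):
        # candidates are already sorted in the predetermined order (S272-S273)
        for tx in candidates:
            tx.set_signal("B")                 # S274: command signal B for a while
            if observe_signal() == "B":        # S275: did the received signal change?
                return tx                      # S276: keep signal B, transmitter found
            tx.set_signal("A")                 # S277: revert and try the next one
        return None                            # list exhausted (S278 loop ends)

    visible = Tx("lamp-3")                     # the transmitter the camera actually sees
    txs = [Tx("lamp-1"), Tx("lamp-2"), visible]
    print(find_my_transmitter(txs, observe_signal=lambda: visible.signal).name)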
(Embodiment 13)
As in the example shown in FIGS. 18A to 18C of Embodiment 2, the receiver 200 performs self-position estimation and navigation using the estimation result. When performing self-position estimation, the receiver 200 uses the size and position of the bright line pattern region included in the decoding image. That is, the receiver 200 identifies its position relative to the transmitter 100 based on the attitude of the receiver 200, the size and shape of the transmitter 100, and the size, shape, and position of the bright line pattern region included in the decoding image. The receiver 200 then estimates its own position using the position of the transmitter 100 on a map, identified by the visible light signal from the transmitter 100, and the identified relative position. Note that the attitude of the receiver 200 is, for example, the orientation of its camera, identified from output data of sensors provided in the receiver 200, such as the acceleration sensor and the direction sensor.
FIG. 212 is a diagram for explaining navigation by the receiver 200.
The transmitter 100 is configured, for example, as digital signage for guiding users to a bus stop, as shown in (a) of FIG. 212, and is installed in an underground mall. The transmitter 100 transmits a visible light signal as in the example shown in FIGS. 18A to 18C of Embodiment 2. Here, the transmitter 100 displays an image prompting AR navigation. When the user of the receiver 200 sees the transmitter 100 and wants to be guided to the bus stop by AR navigation, the user launches the AR navigation application installed on the receiver 200, which is configured as a smartphone. On launch, the receiver 200 causes its built-in camera to alternate between imaging in the visible light communication mode and imaging in the normal imaging mode, and each time an image is captured in the normal imaging mode, it displays the resulting normal captured image on the display of the receiver 200. The user then points the camera of the receiver 200 at the transmitter 100. The receiver 200 thereby acquires a decoding image at the timing of imaging in the visible light communication mode and receives the visible light signal from the transmitter 100 by decoding the bright line pattern region included in that decoding image. The receiver 200 then transmits the information indicated by the visible light signal (that is, the light ID) to a server and receives from the server data indicating the position on the map of the transmitter 100 associated with that information. Furthermore, the receiver 200 performs self-position estimation using the position of the transmitter 100 on the map and transmits the estimated self-position to the server. The server searches for a route from the position of the receiver 200 to the destination bus stop and transmits a map and data indicating the route to the receiver 200. The position of the receiver 200 obtained by this self-position estimation is the starting point for guiding the user to the destination.
Next, as shown in (b) of FIG. 212, the receiver 200 starts navigation along the found route. At this time, the receiver 200 superimposes a direction indication image 431 on the normal captured image and displays it on the display. The direction indication image 431 is generated based on the found route, the current position of the receiver 200, and the orientation of the camera, and is rendered as an arrow pointing toward the destination.
While moving through the underground mall, as shown in (c) and (d) of FIG. 212, the receiver 200 estimates its current self-position based on the movement of feature points shown in the normal captured images.
Furthermore, as shown in (e) of FIG. 212, when the receiver 200 receives a visible light signal from a transmitter 100 different from the transmitter 100 shown in (a) of FIG. 212, it corrects the self-position estimated up to that point. That is, the receiver 200 updates its self-position by again performing self-position estimation using a visible light signal.
Then, as shown in (f) of FIG. 212, the receiver 200 guides the user to the destination bus stop.
In this way, the receiver 200 may first perform self-position estimation based on a visible light signal at the start point and then periodically update the self-position obtained by that estimation. For example, as shown in (c) and (d) of FIG. 212, when the receiver 200 acquires normal captured images at a fixed frame rate by imaging in the normal imaging mode, it may update the self-position from the amount of movement of the feature points shown in those images. The receiver 200 also periodically performs imaging in the visible light communication mode while imaging in the normal imaging mode. As shown in (e) of FIG. 212, if a bright line pattern region appears in the decoding image obtained by imaging in the visible light communication mode, the receiver 200 may correct the most recently updated self-position based on that bright line pattern region.
Here, the receiver 200 can estimate its self-position even when it cannot receive a visible light signal by decoding the bright line pattern region. That is, even if the receiver 200 cannot completely decode the bright line pattern region shown in the decoding image, it may perform self-position estimation based on that bright line pattern region, or on a striped region resembling a bright line pattern region.
FIG. 213 is a flowchart showing an example of self-position estimation by the receiver 200.
The receiver 200 acquires a map and transmitter data for each of the transmitters 100 from a server or from a recording medium of the receiver 200 (step S341). The transmitter data indicates the position on the map where the transmitter 100 is arranged and the shape and size of the transmitter 100.
Next, the receiver 200 captures an image in the visible light communication mode (that is, with a short exposure) and detects a striped region (region A) from the resulting decoding image (step S342).
The receiver 200 then determines whether the striped region may be a visible light signal (step S343), that is, whether the striped region is a bright line pattern region produced by a visible light signal. If the receiver 200 determines that it cannot be a visible light signal (N in step S343), the processing ends. If the receiver 200 determines that it may be a visible light signal (Y in step S343), it further determines whether the visible light signal could be received (step S344). That is, the receiver 200 decodes the bright line pattern region of the decoding image and determines whether a light ID could be acquired as a visible light signal by that decoding.
If the receiver 200 determines that the visible light signal could be received (Y in step S344), it acquires the shape, size, and position of the region A in the decoding image (step S347). That is, the receiver 200 acquires the shape, size, and position of the transmitter 100 shown as a striped image in the decoding image captured in the visible light communication mode.
The receiver 200 then calculates the relative position of the transmitter 100 and the receiver 200 based on the transmitter data of the transmitter 100 and the acquired shape, size, and position of the region A, and updates the current position of the receiver 200 (that is, its current self-position) (step S348). For example, from the transmitter data of each transmitter 100 acquired in step S341, the receiver 200 selects the transmitter data of the transmitter 100 corresponding to the received visible light signal. That is, from among the transmitters 100 shown on the map, the receiver 200 selects the transmitter 100 corresponding to the visible light signal as the imaged transmitter 100 shown as the image of the region A. Based on the shape, size, and position of the transmitter 100 acquired in step S347 and the shape and size indicated in the transmitter data of the imaged transmitter 100, the receiver 200 calculates its position relative to the transmitter 100. The receiver 200 then updates its self-position based on that relative position, the map acquired in step S341, and the position on the map indicated by the transmitter data of the imaged transmitter 100.
If the receiver 200 determines in step S344 that the visible light signal cannot be received (N in step S344), it estimates which position or range on the map is being imaged by the camera of the receiver 200 (step S345). That is, the receiver 200 estimates the imaged position or range on the map based on the current self-position estimated at that time and the orientation or direction of the camera serving as its imaging unit. Then, from among the transmitters 100 shown on the map, the receiver 200 regards the transmitter 100 most likely being imaged as the transmitter 100 shown as the image of the region A (step S346). That is, the receiver 200 selects the transmitter 100 most likely being imaged as the imaged transmitter 100. The transmitter 100 most likely being imaged is, for example, the transmitter 100 closest to the imaged position or range estimated in step S345.
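One ingredient of the relative-position calculation in step S348 is the distance to the transmitter, which a pinhole camera model gives from the transmitter's known physical length and its length in the decoding image. The focal length and sizes below are example values, not parameters from this disclosure.

    FOCAL_PX = 1500.0   # assumed focal length in pixels

    def distance_to_transmitter(real_length_m, image_length_px):
        # pinhole model: image_length / focal = real_length / distance
        return real_length_m * FOCAL_PX / image_length_px

    # A 1.2 m luminaire (from the transmitter data) spanning 300 px in the
    # decoding image is about 6 m from the camera.
    print(distance_to_transmitter(1.2, 300.0))  # -> 6.0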
FIG. 214 is a diagram for explaining the visible light signal received by the receiver 200.
A bright line pattern region appears in the decoding image in two cases. In the first case, the bright line pattern region appears because the receiver 200 directly images the transmitter 100, for example a lighting device installed on the ceiling. In other words, in the first case, the light that causes the bright line pattern region to appear is direct light. In the second case, the bright line pattern region appears because the receiver 200 images the transmitter 100 indirectly. That is, without imaging the transmitter 100 itself, such as a lighting device, the receiver 200 images a reflective area such as a wall or floor that reflects light from the transmitter 100. As a result, a bright line pattern region appears in the decoding image. In other words, in the second case, the light that causes the bright line pattern region to appear is reflected light.
Therefore, when the decoding image contains a bright line pattern region, the receiver 200 in this embodiment determines whether the bright line pattern region has appeared under the first case or the second case. That is, the receiver 200 determines whether the bright line pattern region is produced by direct light from the transmitter 100 or by reflected light from the transmitter 100.
When the receiver 200 determines that the first case applies, it treats the bright line pattern region in the decoding image as the transmitter 100 shown in the decoding image and identifies the position of the receiver 200 relative to that transmitter 100. That is, the receiver 200 identifies its relative position by triangulation or another geometric surveying method, using the orientation and angle of view of the camera used for imaging, the shape, size, and position of the bright line pattern region, and the shape and size of the transmitter 100.
On the other hand, when the receiver 200 determines that the second case applies, it treats the bright line pattern region in the decoding image as the reflective area shown in the decoding image and identifies the position of the receiver 200 relative to the transmitter 100. That is, the receiver 200 identifies its relative position by triangulation or another geometric surveying method, using the orientation and angle of view of the camera used for imaging, the shape, size, and position of the bright line pattern region, the position and orientation of the floor or wall indicated by the map, and the shape and size of the transmitter 100. In this case, the receiver 200 may use the center of the bright line pattern region as the position of that region.
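To make the geometric relationship concrete, here is a minimal sketch, assuming a pinhole camera model and a luminaire of known physical size, of how the distance to the transmitter could be recovered from the apparent size of the bright line pattern region. The function name and the example numbers are illustrative, not values from the embodiment.

```python
import math

def estimate_distance(real_diameter_m, apparent_diameter_px,
                      image_width_px, horizontal_fov_rad):
    """Pinhole-camera distance estimate: the focal length in pixels
    follows from the horizontal field of view, and the distance scales
    the known physical size against the apparent size in the image."""
    focal_px = (image_width_px / 2) / math.tan(horizontal_fov_rad / 2)
    return real_diameter_m * focal_px / apparent_diameter_px

# Example: a 0.6 m luminaire spanning 120 px in a 1920 px wide image
# taken with a 60-degree horizontal field of view.
d = estimate_distance(0.6, 120, 1920, math.radians(60))
print(f"estimated distance: {d:.2f} m")  # about 8.3 m
```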
FIG. 215 is a flowchart showing another example of self-position estimation by the receiver 200.
First, the receiver 200 receives a visible light signal by imaging in the visible light communication mode (step S351). The receiver 200 then acquires the map and the transmitter data of each of the plurality of transmitters 100 from a server or from a recording medium (that is, a database) held by the receiver 200 (step S352).
Next, the receiver 200 determines whether the visible light signal received in step S351 was received via reflected light (step S353).
When the receiver 200 determines in step S353 that the signal was received via reflected light (Y in step S353), it regards the center of the striped region in the decoding image obtained by the imaging in step S351 as the position of the transmitter 100 projected on the floor or wall (step S354).
Next, as in step S348 of FIG. 213, the receiver 200 calculates the relative position between the transmitter 100 and the receiver 200 and updates the current position of the receiver 200 (step S355). On the other hand, when the receiver 200 determines in step S353 that the signal was not received via reflected light (N in step S353), it updates its current position without taking floor or wall reflection into account.
FIG. 216 is a flowchart showing an example of how the receiver 200 determines whether light is reflected.
The receiver 200 detects a striped region or bright line pattern region in the decoding image as the region A (step S641). Next, the receiver 200 identifies, using its acceleration sensor, the orientation the camera had when the decoding image was captured (step S642). The receiver 200 then determines from the map data whether a transmitter 100 exists in the camera direction identified in step S642, as seen from the receiver's already-estimated position on the map (step S643). That is, the receiver 200 judges whether it is imaging a transmitter 100 directly, based on its currently estimated position on the map, its imaging orientation or direction, and the position of each transmitter 100 on the map.
When the receiver 200 determines that a transmitter 100 exists there (Yes in step S644), it determines that the light in the region A, that is, the light used to receive the visible light signal, is direct light (step S645). On the other hand, when the receiver 200 determines that no transmitter 100 exists there (No in step S644), it determines that the light in the region A, that is, the light used to receive the visible light signal, is reflected light (step S646).
In this way, the receiver 200 uses the acceleration sensor to determine whether the light causing the bright line pattern region is direct light or reflected light. The receiver 200 may also simply determine that the light is direct light when the camera faces upward, and that it is reflected light when the camera faces downward.
Instead of using the output of the acceleration sensor, the receiver 200 may determine whether the light is direct or reflected based on the intensity, position, or size of the light in the bright line pattern region contained in the decoding image. For example, when the light intensity is below a predetermined level, the receiver 200 determines that the light causing the bright line pattern region is reflected light. Alternatively, the receiver 200 determines that the light is reflected light when the bright line pattern region lies in the lower part of the decoding image, or when the bright line pattern region is larger than a predetermined size.
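A hedged sketch of these image-based heuristics is shown below. The thresholds and the region descriptor are placeholders invented for illustration, since the embodiment states only that "predetermined" intensity and size values are used.

```python
from dataclasses import dataclass

@dataclass
class BrightLineRegion:
    mean_intensity: float  # 0.0 .. 1.0
    center_y_ratio: float  # 0.0 = top of image, 1.0 = bottom
    area_ratio: float      # region area / image area

def is_reflected(region,
                 min_intensity=0.4,  # placeholder "predetermined intensity"
                 lower_band=0.7,     # "lower part" of the image
                 max_area=0.25):     # placeholder "predetermined size"
    """Classify the bright line pattern region as reflected light when
    it is dim, sits low in the frame, or is unusually large."""
    if region.mean_intensity < min_intensity:
        return True
    if region.center_y_ratio > lower_band:
        return True
    if region.area_ratio > max_area:
        return True
    return False
```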
FIG. 217 is a flowchart showing an example of navigation by the receiver 200.
The receiver 200 is, for example, a smartphone equipped with an out-camera, an in-camera, and a display, and performs navigation by displaying on the display an image that guides the user to the destination. That is, the receiver 200 executes AR navigation as in the examples shown in FIGS. 18A to 18C of Embodiment 2. During this navigation, the receiver 200 captures images with the out-camera and detects nearby hazards based on the captured images. The receiver 200 then determines whether the user is in a dangerous situation (step S361).
Here, when the receiver 200 determines that the user is in a dangerous situation (Y in step S361), it displays a warning message on its display or stops the navigation (step S364).
On the other hand, when the receiver 200 determines that the user is not in a dangerous situation (N in step S361), it determines whether using a smartphone while walking is prohibited in the area where the receiver 200 is located (step S362). For example, the receiver 200 refers to the map data and determines whether its current position falls within a walking-smartphone-prohibited range indicated by the map data. When the receiver 200 determines that smartphone use while walking is not prohibited (N in step S362), it continues the navigation (step S366). When the receiver 200 determines that such use is prohibited (Y in step S362), it determines whether the user is looking at the receiver 200 by recognizing the user's line of sight with the in-camera (step S363). When the receiver 200 determines that the user is not looking at it (N in step S363), it continues the navigation (step S366). When the receiver 200 determines that the user is looking at it (Y in step S363), it displays a warning message on its display or stops the navigation (step S364).
The receiver 200 then determines whether the user has left the dangerous situation or has stopped gazing at the receiver 200 (step S365). When the receiver 200 determines that the situation or state has ended (Y in step S365), it continues the navigation (step S366). When the receiver 200 determines that it has not (N in step S365), it repeats the processing of step S364.
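The decision flow of steps S361 through S366 could be summarized roughly as follows. The three callables stand in for the hazard detector, the map lookup for the walking-smartphone prohibition, and the in-camera gaze recognizer, all of which are abstracted away here.

```python
def navigation_guard(is_dangerous, in_prohibited_area, user_gazing):
    """One pass through the safety checks of steps S361-S366.
    Each argument is a zero-argument callable: the hazard detector
    (step S361), the prohibited-area lookup (step S362), and the
    gaze recognizer (step S363)."""
    if is_dangerous():
        return "warn_or_stop_navigation"   # step S364
    if in_prohibited_area() and user_gazing():
        return "warn_or_stop_navigation"   # step S364
    return "continue_navigation"           # step S366
```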
The receiver 200 may also detect its moving speed based on outputs from an acceleration sensor, a gyro sensor, and the like. In this case, the receiver 200 may determine whether the moving speed is at or above a threshold and stop the navigation when it is. At that time, the receiver 200 may display a message notifying the user that walking at that speed is dangerous. This makes it possible to avoid the hazards of using a smartphone while walking.
Here, the transmitter 100 may be configured as a projector.
FIG. 218 is a diagram showing an example of the transmitter 100 configured as a projector.
For example, the transmitter 100 projects an image 441 onto the floor or a wall. While projecting the image 441, the transmitter 100 also transmits a visible light signal by changing the luminance of the light used for the projection. The projected image 441 may include text or the like encouraging AR navigation. The receiver 200 receives the visible light signal by imaging the image 441 projected on the floor or wall. The receiver 200 may then perform self-position estimation using the projected image 441. For example, the receiver 200 acquires from the server the position on the map of the image 441 associated with the visible light signal and performs self-position estimation using the position of the image 441. Alternatively, the receiver 200 may acquire from the server the position on the map of the transmitter 100 associated with the visible light signal and perform self-position estimation by treating the image 441 projected on the floor or wall as reflected light, as in the second case described above.
FIG. 219 is a flowchart showing another example of self-position estimation by the receiver 200.
The receiver 200 first images the transmitter 100, a predetermined image, or a predetermined code (such as a two-dimensional code) (step S371). When imaging the transmitter 100, the receiver 200 receives a visible light signal from the transmitter 100.
Next, the receiver 200 acquires the position of the subject imaged in step S371 (that is, its position on the map). The receiver 200 then estimates its own position, that is, its self-position, based on the subject's position, shape, and size and on the position, shape, and size of the subject within the image obtained by the imaging in step S371 (step S372).
Next, the receiver 200 starts navigation guiding the user to a predetermined position indicated by the image obtained by the imaging in step S371 (step S373). When the subject is the transmitter 100, the predetermined position is a position specified by the visible light signal. When the subject is a predetermined image, the predetermined position is a position obtained by analyzing that image. When the subject is a code, the predetermined position is a position obtained by decoding the code. During navigation, the receiver 200 repeatedly captures images with the camera, sequentially displays the resulting normal captured images in real time, and superimposes on them a direction indication image, such as an arrow showing the user's destination. While carrying the receiver 200, the user starts moving according to the displayed direction indication image.
Next, the receiver 200 determines whether position information such as GPS (that is, GPS data) can be received (step S374). When the receiver 200 determines that it can be received (Y in step S374), it estimates its current self-position using that position information (step S375). On the other hand, when the receiver 200 determines that position information such as GPS cannot be received (N in step S374), it estimates its self-position based on the motion of objects or feature points shown in the successive normal captured images (step S376). For example, the receiver 200 detects the motion of objects or feature points across the normal captured images and, based on that motion, estimates its moving direction and moving distance. The receiver 200 then estimates its current self-position based on the estimated moving direction and distance and the position estimated in step S372.
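One possible reading of the position-update loop in steps S374 to S376 is sketched below, assuming hypothetical gps_reader and visual_odometry helpers that wrap the GPS receiver and the feature-point tracker.

```python
import numpy as np

def update_self_position(last_position, gps_reader, visual_odometry):
    """One iteration of steps S374-S376: prefer GPS when available,
    otherwise dead-reckon from feature-point motion between frames.
    gps_reader() returns an (x, y) fix or None; visual_odometry()
    returns the (dx, dy) displacement estimated from recent frames."""
    fix = gps_reader()
    if fix is not None:                        # step S375
        return np.asarray(fix, dtype=float)
    dx, dy = visual_odometry()                 # step S376
    return np.asarray(last_position, dtype=float) + np.array([dx, dy])
```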
Next, the receiver 200 determines whether the most recently estimated self-position is within a predetermined distance of the predetermined position serving as the destination (step S377). When the receiver 200 determines that the self-position is within that distance (Y in step S377), it judges that the user has arrived at the destination and ends the navigation process. On the other hand, when the receiver 200 determines that the self-position is not within that distance (N in step S377), it judges that the user has not yet arrived and repeats the processing from step S374.
If the receiver 200 loses track of its current self-position during navigation, that is, if it becomes unable to estimate its self-position, it may stop superimposing the direction indication image on the normal captured image and display the last estimated self-position on a map. Alternatively, the receiver 200 may display a map of the surroundings that includes the last estimated self-position.
FIG. 220 is a flowchart showing an example of processing by the transmitter 100. In the example shown in FIG. 220, the transmitter 100 is a lighting device installed in an elevator.
The transmitter 100 determines whether elevator operation information indicating the operating status of the elevator can be obtained from the elevator (step S381). The elevator operation information may indicate, for example, whether the elevator is rising, descending, or stopped, the floor on which the elevator is currently located, and the floors at which it is scheduled to stop.
When the transmitter 100 determines that the elevator operation information has been obtained (Y in step S381), it transmits all or part of the elevator operation information as a visible light signal (step S386). Alternatively, the transmitter 100 may have the server hold the elevator operation information in association with the visible light signal (that is, the light ID) transmitted from the transmitter 100.
On the other hand, when the transmitter 100 determines that the elevator operation information cannot be obtained (N in step S381), it uses an acceleration sensor to recognize whether the elevator is stopped, rising, or descending (step S382). The transmitter 100 further determines whether the floor on which the elevator is currently located can be identified from the elevator's floor display (step S383). The floor display corresponds to the floor number display shown in FIG. 18C. When the transmitter 100 determines that the floor can be identified (Y in step S383), it executes the processing of step S386 described above. When it determines that the floor cannot be identified (N in step S383), the transmitter 100 further images the floor display with a camera and determines whether the floor on which the elevator is currently located can be recognized from the captured image (step S384).
When the transmitter 100 determines that the floor has been recognized (Y in step S384), it executes the processing of step S386 described above. On the other hand, when the transmitter 100 determines that the floor cannot be recognized (N in step S384), it transmits a predetermined visible light signal (step S385).
FIG. 221 is a flowchart showing another example of navigation by the receiver 200. In the example shown in FIG. 221, the transmitter 100 is a lighting device installed in an elevator.
The receiver 200 first determines whether its current position is on an escalator (step S391). The escalator may be a sloped escalator or a horizontal escalator.
When the receiver 200 determines that it is on an escalator (Y in step S391), it estimates the movement of the receiver 200 (step S392). This movement is the movement of the receiver 200 relative to a fixed reference other than the escalator, such as a fixed floor or wall. That is, the receiver 200 first acquires the direction and speed of the escalator's movement from the server. The receiver 200 then estimates its own movement by adding the escalator's movement to its movement on the escalator as recognized through inter-frame image processing such as SLAM (Simultaneous Localization and Mapping).
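The motion composition in step S392 amounts to a vector addition, as the following sketch illustrates; the velocity representation is an assumption made for the example.

```python
import numpy as np

def receiver_motion_on_escalator(slam_velocity, escalator_velocity):
    """Step S392: the receiver's motion relative to the fixed floor is
    its SLAM-estimated motion on the escalator plus the escalator's own
    motion (direction and speed obtained from the server). Both inputs
    are 3-D velocity vectors in metres per second."""
    return np.asarray(slam_velocity) + np.asarray(escalator_velocity)

# Example: walking forward at 0.5 m/s on an escalator moving
# 0.6 m/s forward and 0.3 m/s upward.
v = receiver_motion_on_escalator([0.5, 0.0, 0.0], [0.6, 0.0, 0.3])
print(v)  # [1.1 0.  0.3]
```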
On the other hand, when the receiver 200 determines that it is not on an escalator (N in step S391), it determines whether its current position is inside an elevator (step S393). When the receiver 200 determines that it is not inside an elevator (N in step S393), it ends the processing. When the receiver 200 determines that it is inside an elevator (Y in step S393), it determines whether the floor on which the elevator (specifically, the elevator car) is currently located can be identified by a visible light signal, a radio signal, or other means (step S394).
When the receiver 200 determines that the floor cannot be identified (N in step S394), it displays the floor at which the user plans to exit the elevator (step S395). Furthermore, the receiver 200 recognizes whether it has left the elevator as the user exits, and recognizes the floor on which it is currently located by means of a visible light signal, a radio signal, or other means. If the recognized floor differs from the planned floor, the receiver 200 notifies the user that the user has gotten off at the wrong floor (step S396).
When the receiver 200 determines in step S394 that the floor on which the elevator is currently located can be identified (Y in step S394), it determines whether the receiver 200 is on the floor at which the user plans to exit the elevator, that is, the target floor (step S397). When the receiver 200 determines that it is on the target floor (Y in step S397), it displays, for example, a message prompting the user to get off (step S399). Alternatively, the receiver 200 displays an advertisement related to the target floor. The receiver 200 may also display a warning message if the user does not attempt to get off.
On the other hand, when the receiver 200 determines that it is not on the target floor (N in step S397), it displays, for example, a message cautioning the user not to get off (step S398). Alternatively, the receiver 200 may display an advertisement. The receiver 200 may also display a warning message if the user attempts to get off.
FIG. 222 is a flowchart showing an example of processing by the receiver 200.
In the flowchart shown in FIG. 222, the receiver 200 uses a visible light signal and normal-exposure image information (that is, normal captured images) in combination.
The receiver 200, configured for example as a smartphone or as a wearable device such as smart glasses, acquires an image A (that is, the decoding image described above) by imaging with an exposure time shorter than the normal exposure time (step S631). The receiver 200 then receives a visible light signal by decoding the image A (step S632). Based on the received visible light signal, the receiver 200, as one example, identifies its current position and starts navigation to a predetermined position.
Next, the receiver 200 acquires an image B (that is, the normal captured image described above) by imaging with an exposure time longer than the short exposure time mentioned above (for example, an exposure time set by automatic exposure) (step S633). Here, the image A is unsuitable for object detection or feature extraction. The receiver 200 therefore alternates, a predetermined number of times each, between acquiring the image A with the short exposure time and acquiring the image B with the long exposure time. The receiver 200 then performs image processing such as the object detection or feature extraction described above using the obtained images B (step S634). For example, the receiver 200 corrects its position by detecting a specific object in an image B. Also, for example, the receiver 200 extracts feature points from each of two or more images B and identifies how the same feature points have moved between the images. As a result, the receiver 200 can recognize the distance and direction of its movement between the capture times of the two or more images B and correct its current position.
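A rough sketch of the alternating capture schedule of steps S631 to S634 follows; camera.capture and the exposure values are hypothetical stand-ins for the device's actual camera API.

```python
def capture_cycle(camera, n_pairs=5,
                  short_exposure_us=100, long_exposure_us=10000):
    """Steps S631-S634: alternate short-exposure frames (image A, for
    decoding the visible light signal) with long-exposure frames
    (image B, for object detection and feature tracking).
    camera.capture(exposure_us) is an assumed camera interface."""
    decode_frames, feature_frames = [], []
    for _ in range(n_pairs):
        decode_frames.append(camera.capture(short_exposure_us))   # image A
        feature_frames.append(camera.capture(long_exposure_us))   # image B
    return decode_frames, feature_frames
```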
FIG. 223 is a diagram showing an example of a screen displayed on the display of the receiver 200.
When the navigation application is started, the receiver 200 displays the logo mark of the transmitter 100, for example as shown in FIG. 223. The logo mark is, for example, one reading "AR Navi". The receiver 200 may then guide the user to image that logo mark. The transmitter 100 is configured, for example, as digital signage and displays the logo mark while changing its luminance in order to transmit a visible light signal. Alternatively, the transmitter 100 is configured, for example, as a projector and projects the logo mark onto the floor or a wall while changing its luminance in order to transmit visible light. The receiver 200 receives the visible light signal from the transmitter 100 by imaging the logo mark in the visible light communication mode. Instead of the logo mark, the receiver 200 may display a picture of a nearby lighting device or landmark configured as the transmitter 100.
The receiver 200 may also display the telephone number of a call center for when the user runs into trouble. In that case, the receiver 200 may notify the call center's server of the user's language and the estimated self-position. The user's language may, for example, be registered in the receiver 200 in advance or set by a user operation. This allows the call center to respond quickly when a call comes in from the user of the receiver 200. For example, the call center can guide the user to the destination by telephone.
The receiver 200 may also correct its self-position based on the form of a landmark registered in advance, the size of the landmark, and the position of the landmark on the map. That is, when a normal captured image is acquired, the receiver 200 detects from it the image region in which the landmark appears. The receiver 200 then performs self-position estimation based on the shape, size, and position of that image region in the normal captured image and on the landmark's size and position on the map.
The receiver 200 may also recognize or detect a landmark on the ceiling or behind the user using the in-camera. In addition, the receiver 200 may use only part of the image obtained by the camera, for example only the region at or above (or at or below) a predetermined angle relative to the horizontal. For example, if transmitters 100 or landmarks are usually placed near the ceiling, the receiver 200 uses only the region of the camera image showing subjects above the horizontal, and detects bright line pattern regions or landmark image regions only within that region. This reduces the processing load of the receiver 200.
As shown in the example of FIG. 212, the receiver 200 superimposes the direction indication image 431 on the normal captured image when performing AR navigation, but it may also superimpose a character.
FIG. 224 is a diagram showing an example of a character displayed by the receiver 200.
When the receiver 200 receives a visible light signal from the transmitter 100 configured, for example, as digital signage, it acquires a character 432 corresponding to the visible light signal, for example from a server, as the AR image described above. Then, as shown in FIG. 224, the receiver 200 superimposes not only the direction indication image 431 but also the character 432 on the normal captured image. For example, the character 432 is an advertising character of a drinking water manufacturer and seller and is displayed as the image of a can containing the drinking water. The character 432 is also an advertising character for drinking water sold along the route to the user's destination. Such a character 432 may be displayed at the direction or position indicated by the direction indication image 431 so as to lead the user. Advertising with such characters may be realized through affiliate arrangements.
The character may also have the form of an animal or a person. In that case, the receiver 200 may superimpose on the normal captured image a character that appears to walk along the direction indication image. Multiple characters may also be superimposed on the normal captured image. Furthermore, instead of or together with the character, the receiver 200 may superimpose an advertising video on the normal captured image as a commercial.
The receiver 200 may also vary the size, display time, and the like of an advertising character according to the advertising fee paid for the company's advertisement. When multiple advertising characters are displayed, the receiver 200 may determine their display order in the depth direction according to the advertising fee paid for each character. In addition, when the user enters a store selling the product of a displayed character, the receiver 200 may charge that store by electronic payment.
Also, when the receiver 200 receives another visible light signal from other digital signage while displaying the character 432, it may change the displayed character 432 to a character corresponding to the other visible light signal.
The receiver 200 may superimpose a video serving as a company commercial on the normal captured image. The advertiser may be charged according to the display time, number of showings, and the like of the commercial video or advertisement. The receiver 200 may display the commercial in the user's language, and may display, in the language of the store staff, a sentence or audio link with which the user can tell the staff of an intent to purchase the advertised product. The receiver 200 may also display the price of the product in the user's currency.
FIG. 225 is a diagram showing another example of a screen displayed on the display of the receiver 200.
For example, as shown in (a) of FIG. 225, the receiver 200 displays, in English, the user's language, a message notifying the user that a bookstore named "XYZ" is ahead. Such a message may be displayed as a video commercial, as described above. Then, when the user enters the bookstore, the receiver 200 may display a sentence to convey to the bookstore clerk in both the clerk's language (for example, Japanese) and the user's language, English, as shown in (b) of FIG. 225.
The receiver 200 may also encourage the user to make a detour during navigation. In that case, the receiver 200 may propose a detour according to the available spare time. In the example of FIG. 212, the spare time is the difference between the departure time of the bus from the bus stop and the time at which the user will arrive at the bus stop.
The receiver 200 may also display advertisements for nearby stores. In that case, the receiver 200 may display advertisements for a store immediately beside the user or for stores along the upcoming route. The receiver 200 may also calculate the playback start timing of a commercial video so that the receiver 200 is next to the corresponding store when the video finishes playing. In addition, the receiver 200 may stop displaying advertisements for stores that have been passed.
Furthermore, so that the receiver 200 can return to guiding the user to the original destination after a detour to a store or the like, the store may be equipped with a lighting device or other transmitter 100 that provides a reference point. Alternatively, the receiver 200 may display a button labeled "Resume from in front of the XYZ store". The receiver 200 may also apply a discount price only to users who visit the store after seeing the commercial, or may display a coupon. In addition, to pay for a product purchase, the receiver 200 may display a barcode through an application and complete an electronic payment.
The server may also perform flow line analysis based on the navigation results of each user's receiver 200.
When the camera is not used for navigation, the receiver 200 may switch its self-position estimation method to PDR (Pedestrian Dead Reckoning) using an acceleration sensor or the like. For example, the self-position estimation method is switched to PDR when the navigation application is turned off, or when the receiver 200 is put in the user's pocket or the like and the image obtained by the camera goes dark. The receiver 200 may also use radio waves (Bluetooth or Wi-Fi) or sound waves for self-position estimation.
The receiver 200 may also notify the user by vibration or sound when the user is about to go in the wrong direction. For example, the receiver 200 may use different vibrations or sounds at an intersection depending on whether the user is about to go in the correct direction or the wrong direction. The receiver 200 may also give the above vibration or sound notification when, without moving, it is pointed in the wrong direction or in the correct direction. This improves usability for visually impaired users as well. The correct direction is the direction toward the destination along the searched route, and the wrong direction is any direction other than the correct one.
Although the receiver 200 is configured as a smartphone in the examples above, it may also be configured as a smartwatch or smart glasses. When the receiver 200 is configured as smart glasses, camera-based navigation by the receiver 200 can be kept from being interrupted by applications unrelated to the navigation.
The receiver 200 may also end the navigation after a fixed time has elapsed since the navigation started. That fixed time may be changed according to the distance to the destination. Alternatively, the receiver 200 may end the navigation when it enters a place where GPS data is available, or when the GPS location is a fixed distance away. The receiver 200 may also display the estimated arrival time or the remaining distance to arrival. Furthermore, in the example of FIG. 212, the receiver 200 may display the time at which the bus departs from the destination bus stop.
The receiver 200 may also alert the user at locations such as stairs and intersections, and may guide the user to an elevator or the like instead of stairs, according to the user's wishes or state of health. For example, if the user is elderly (for example, in their 80s), the receiver 200 may avoid stairs and guide the user to an elevator. Also, when the receiver 200 determines that the user is carrying large luggage, it may avoid stairs and guide the user to an elevator. For example, based on the output from the acceleration sensor, the receiver 200 may determine whether the user's walking speed is slower than usual and, if so, determine that the user is carrying large luggage. Alternatively, based on the output from the acceleration sensor, the receiver 200 may determine whether the user's stride is shorter than usual and, if so, determine that the user is carrying large luggage. Furthermore, if the user is a woman, the receiver 200 may guide her along a safe course. Safe courses are indicated in the map data.
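Purely as an illustration of the route-preference heuristic described above, the following sketch flags when stairs should be avoided. The age cutoff and the 0.8 ratios are invented thresholds, since the embodiment leaves the exact criteria open.

```python
def prefer_elevator(age, walking_speed_mps, usual_speed_mps,
                    stride_m, usual_stride_m):
    """Avoid stairs for elderly users or users who appear to carry
    large luggage (walking more slowly or with a shorter stride than
    usual, as judged from acceleration-sensor output)."""
    if age >= 80:
        return True
    if walking_speed_mps < 0.8 * usual_speed_mps:
        return True
    if stride_m < 0.8 * usual_stride_m:
        return True
    return False
```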
The receiver 200 may also recognize obstacles such as people or cars around it based on images obtained by the camera. When the user is about to collide with such an obstacle, the receiver 200 may prompt the user to avoid it. For example, the receiver 200 may play a sound prompting the user to stop or to avoid the obstacle.
When performing navigation, the receiver 200 may also correct the estimated arrival time based on the travel times of other users in the past. In doing so, the receiver 200 may adjust the correction based on the user's age and sex. For example, the receiver 200 moves the estimated arrival time earlier if the user is in their 20s and later if the user is in their 80s.
Even when digital signage belonging to the same transmitter 100 is imaged at the start of navigation, the receiver 200 may set a different destination for each user. For example, the receiver 200 may select a different restroom location as the destination according to the user's sex, or a different immigration or returning-resident counter according to the user's nationality. Alternatively, the receiver 200 may select, as the destination, a different train or airplane boarding point according to the user's ticket, a different show seat according to the user's ticket, or a different prayer space according to the user's religious denomination.
When starting navigation, the receiver 200 may display a dialog such as "Start guidance to XYZ? Yes/No" rather than starting navigation abruptly. The receiver 200 may also ask the user where the guidance destination is (a boarding gate, a lounge, a store, or the like).
While performing navigation, the receiver 200 may also suppress notifications and incoming calls from other applications. This prevents the navigation from being interrupted.
The receiver 200 may also guide the user to a meeting place as the destination.
FIG. 226 is a diagram showing a system configuration for navigation to a meeting place.
For example, a user carrying a receiver 200a and a user carrying a receiver 200b gather at a meeting place. Each of the receiver 200a and the receiver 200b has the functions of the receiver 200 described above.
When such a meeting is arranged, the receiver 200a transmits to the server 300 the position obtained by self-position estimation, the number of the receiver 200a, and the number of the meeting partner (that is, the receiver 200b), as shown in (a) of FIG. 226. The number may be a telephone number or any identification number that can identify a receiver, and information other than a number may also be used.
Upon receiving this information from the receiver 200a, the server 300 transmits the position and number of the receiver 200a to the receiver 200b, as shown in (b) of FIG. 226. The server 300 then asks the receiver 200b whether to accept the meeting invitation from the receiver 200a. Here, the user of the receiver 200b accepts the invitation by operating the receiver 200b. That is, the receiver 200b informs the server 300 that it accepts the meeting, as shown in (c) of FIG. 226. The receiver 200b then notifies the server 300 of the position of the receiver 200b obtained by self-position estimation, as shown in (d) of FIG. 226.
As a result, the server 300 identifies the positions of both the receiver 200a and the receiver 200b. The server 300 then sets the midpoint between the two positions as the meeting place (that is, the destination) and notifies the receiver 200a and the receiver 200b of the route to that meeting place. AR navigation to the meeting place is then executed on each of the receiver 200a and the receiver 200b. In the example above, the midpoint between the positions of the receiver 200a and the receiver 200b is set as the destination, but another place may be set instead. For example, among multiple locations where landmarks are installed, the location with the shortest travel time may be set as the destination. The travel time is the expected time until the receiver 200a and the receiver 200b reach that location.
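A small sketch of the meeting-place selection follows, assuming straight-line distances and a nominal walking speed; minimizing the larger of the two expected travel times is one possible reading of "the location with the shortest travel time".

```python
import math

def midpoint(a, b):
    """Default meeting place: the midpoint of the two self-positions."""
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def best_landmark(a, b, landmarks, walking_speed_mps=1.2):
    """Alternative rule: among landmark locations, pick the one that
    minimizes the longer of the two expected travel times."""
    def eta(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1]) / walking_speed_mps
    return min(landmarks, key=lambda m: max(eta(a, m), eta(b, m)))
```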
 これにより、待ち合わせをスムーズに行うことができる。 This makes it possible to wait smoothly.
 ここで、受信機200aは、目的地付近に到着すると、受信機200bを有するユーザを識別するための画像を通常撮影画像に重畳してもよい。 Here, when the receiver 200a arrives near the destination, an image for identifying the user having the receiver 200b may be superimposed on the normal captured image.
 図227は、受信機200aのディスプレイに表示される画面の一例を示す図である。 FIG. 227 is a diagram illustrating an example of a screen displayed on the display of the receiver 200a.
 例えば、サーバ300は、受信機200bの位置を受信機200aに定期的に送信している。この受信機200bの位置は、受信機200bによる自己位置推定によって得られる位置である。したがって、受信機200aは、受信機200bの地図上での位置を把握している。そこで、受信機200aは、通常撮影画像において、受信機200bの位置が映し出されている場合には、図227に示すように、その位置を指し示す矢印433を通常撮影画像に重畳してもよい。なお、受信機200bも、受信機200aと同様、受信機200aの位置を指し示す矢印を通常撮影画像に重畳してもよい。 For example, the server 300 periodically transmits the position of the receiver 200b to the receiver 200a. The position of the receiver 200b is a position obtained by self-position estimation by the receiver 200b. Therefore, the receiver 200a knows the position of the receiver 200b on the map. Therefore, when the position of the receiver 200b is displayed in the normal captured image, the receiver 200a may superimpose an arrow 433 indicating the position on the normal captured image as illustrated in FIG. Note that, similarly to the receiver 200a, the receiver 200b may superimpose an arrow indicating the position of the receiver 200a on the normal captured image.
 これにより、待ち合わせ場所に人が多くいる場合であっても、待ち合わせ相手を容易に見つけ出すことができる。 This makes it easy to find a meeting partner even if there are many people at the meeting place.
 なお、上述の例では、矢印433などのインジケータは、待ち合わせに用いられるが、待ち合わせ以外にも用いられてもよい。受信機200bのユーザは、待ち合わせに関わらず、行き先など何かに困っている場合には、受信機200bを操作することによって、そのことをサーバ300に通知してもよい。この場合、サーバ300は、コールセンターの係員が有する受信機200aのディスプレイに、図227の例に示す画像を表示させてもよい。このとき、サーバ300は、矢印433の代わりにクエスチョンマークを表示させてもよい。これにより、そのコールセンターの係員は、受信機200bのユーザが困っていることを容易に確認することができる。 In the above example, the indicator such as the arrow 433 is used for waiting, but may be used for other than waiting. The user of the receiver 200b may notify the server 300 of this by operating the receiver 200b when there is something wrong with the destination, regardless of the waiting time. In this case, the server 300 may display the image shown in the example of FIG. 227 on the display of the receiver 200a owned by the call center staff. At this time, the server 300 may display a question mark instead of the arrow 433. Thereby, the staff of the call center can easily confirm that the user of the receiver 200b is in trouble.
 また、受信機200は、コンサートホールの内部を案内してもよい。 Further, the receiver 200 may guide the inside of the concert hall.
 図228は、コンサートホールの内部を示す図である。 FIG. 228 shows the inside of the concert hall.
 受信機200は、例えば図228に示すコンサートホールの内部の地図と、そのコンサートホールの入口から座席までの経路434とを、サーバから取得してもよい。例えば、受信機200は、その入口に配置された送信機100から可視光信号を受信することによって自己位置を推定し、その経路434に沿ってユーザを、そのユーザの座席まで案内する。ここで、コンサートホールの内部に階段があれば、受信機200は、加速度センサなどの出力に基づいて、ユーザが階段を何段登ったか、または、何段降りたかを特定し、その特定された段数に基づいて自己位置を更新してもよい。 The receiver 200 may acquire, for example, a map inside the concert hall shown in FIG. 228 and a route 434 from the entrance of the concert hall to the seat from the server. For example, the receiver 200 estimates a self-position by receiving a visible light signal from the transmitter 100 arranged at the entrance, and guides the user along the route 434 to the user's seat. Here, if there is a staircase inside the concert hall, the receiver 200 identifies how many steps the user has climbed or descended on the basis of the output of the acceleration sensor or the like, and is identified. The self position may be updated based on the number of steps.
 In the above example, the receiver 200 performs self-position estimation based on the movement of feature points when no visible light signal is being received; however, when feature points can no longer be detected from the normal captured image, the output of the acceleration sensor may be used instead. Specifically, while the receiver 200 can detect feature points from the normal captured image, it estimates the movement distance as described above and learns the relationship between that movement distance and the output data of the acceleration sensor during movement. Machine learning such as a DNN (Deep Neural Network) may be used for this learning. Then, when feature point detection becomes impossible, the receiver 200 derives the movement distance using the learning result and the output data of the acceleration sensor during movement. Alternatively, when feature point detection becomes impossible, the receiver 200 may assume that it is moving at a constant speed equal to the immediately preceding movement speed, and derive the movement distance based on that assumption.
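 A rough sketch of this idea follows, using scikit-learn's MLPRegressor purely as a stand-in for the DNN mentioned above. The pairing of accelerometer windows with visually estimated distances and the constant-velocity fallback mirror this paragraph, while every function and parameter name here is illustrative, not from the disclosure.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # stand-in for the DNN in the text

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)

def train(accel_windows, visual_distances):
    """Fit flattened accelerometer windows (n, d) against distances from feature tracking."""
    model.fit(np.asarray(accel_windows), np.asarray(visual_distances))

def estimate_distance(accel_window, last_speed, dt):
    """Movement distance while feature points are undetectable."""
    try:
        return float(model.predict(np.asarray(accel_window).reshape(1, -1))[0])
    except Exception:
        return last_speed * dt  # constant-velocity fallback described above
```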
 [First Aspect]
 In the communication method, it is determined whether or not the inclination of the terminal relative to a plane parallel to the ground is greater than a predetermined angle. When the inclination is determined to be greater than the predetermined angle and a subject whose luminance changes is imaged with the out-camera, the exposure time of the image sensor of the out-camera is set to a first exposure time, and a decoding image is obtained by imaging the subject at the first exposure time using the image sensor. When a first signal transmitted by the subject can be decoded from the decoding image, the first signal is decoded from the decoding image and a position specified by the first signal is obtained. When the signal transmitted by the subject cannot be decoded from the decoding image, a position related to a transmitter within a predetermined range from the position of the terminal is identified using map information that is stored in the terminal and includes the positions of a plurality of transmitters, and the position of the terminal.
 FIG. 229 is a flowchart showing an example of the communication method according to the first aspect of the present invention.
 First, the terminal, which is the receiver 200, determines whether or not its inclination relative to a plane parallel to the ground is greater than a predetermined angle (step SG21). The plane parallel to the ground may be, for example, a horizontal plane. Specifically, the terminal detects its inclination using output data from the acceleration sensor and thereby determines whether the inclination is greater than the predetermined angle. This inclination is the inclination of the front face or the back face of the terminal.
 When the terminal determines that the inclination is greater than the predetermined angle and images a subject whose luminance changes with the out-camera (Yes in step SG21), it sets the exposure time of the image sensor of the out-camera to a first exposure time (step SG22). The terminal then obtains a decoding image by imaging the subject at the first exposure time using the image sensor (step SG23).
 The decoding image obtained here is an image captured while the inclination of the terminal relative to a plane parallel to the ground is greater than the predetermined angle, and is therefore not an image obtained by imaging a subject on the ground side. Consequently, when the decoding image is captured, a transmitter 100 capable of transmitting a visible light signal, such as a lighting device arranged on the ceiling side or digital signage installed on a wall, is highly likely to have been imaged as the subject. In other words, it is unlikely that reflected light originating from the transmitter 100 was imaged as the subject. A decoding image in which the transmitter 100 is likely to appear as the subject can therefore be obtained appropriately. That is, as shown in step S353 of FIGS. 214 and 215, it is possible to appropriately determine whether reflected light from the floor or a wall is being imaged, or direct light from the transmitter 100 is being imaged.
 Next, the terminal determines whether or not the first signal transmitted by the subject can be decoded from the decoding image (step SG24). When the first signal can be decoded (Yes in step SG24), the terminal decodes the first signal from the decoding image (step SG25) and obtains the position specified by the first signal (step SG26). On the other hand, when the signal transmitted by the subject cannot be decoded from the decoding image (No in step SG24), the terminal identifies a position related to a transmitter within a predetermined range from the position of the terminal, using map information that is stored in the terminal and includes the positions of a plurality of transmitters, and the position of the terminal (step SG27).
 As a result, as shown in steps S344 to S348 of FIG. 213, the position of the transmitter serving as the subject can be identified regardless of whether the first signal, which is for example a visible light signal, can be received. Consequently, the current self-position of the terminal can be estimated appropriately.
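 Steps SG21 to SG27 can be condensed into the following sketch. The threshold angle, exposure value, search radius, and data structures are illustrative stand-ins, and try_decode is a stub for the visible light decoding described elsewhere in the disclosure; none of these names or values are taken from the patent itself.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

TILT_THRESHOLD_DEG = 45      # the "predetermined angle" (assumed value)
SEARCH_RADIUS_M = 30.0       # the "predetermined range" (assumed value)

@dataclass
class Terminal:
    tilt_deg: float                                   # from the acceleration sensor
    position: Tuple[float, float]                     # last known terminal position
    map_info: List[Tuple[str, Tuple[float, float]]]   # (transmitter id, position) pairs

def try_decode(decoding_image) -> Optional[Tuple[float, float]]:
    """Stub for decoding the first signal; returns the position it specifies, or None."""
    return None

def locate(terminal: Terminal, decoding_image) -> List[Tuple[str, Tuple[float, float]]]:
    if terminal.tilt_deg > TILT_THRESHOLD_DEG:        # step SG21
        pos = try_decode(decoding_image)              # steps SG22-SG25 (short exposure, decode)
        if pos is not None:
            return [("from_signal", pos)]             # step SG26: position from the signal
    # Step SG27: positions related to transmitters within the predetermined range.
    x, y = terminal.position
    return [(tid, p) for tid, p in terminal.map_info
            if (p[0] - x) ** 2 + (p[1] - y) ** 2 <= SEARCH_RADIUS_M ** 2]
```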
 [Second Aspect]
 In the communication method of the second aspect, in the communication method of the first aspect, the first exposure time is set such that bright lines corresponding to a plurality of exposure lines included in the image sensor appear in the decoding image.
 This allows the first signal to be decoded appropriately from the decoding image.
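 As a back-of-the-envelope illustration of this constraint: for bright and dark stripes to survive in the decoding image, each exposure line must integrate light for less than roughly half the luminance modulation period, or the stripes blur away. The 9.6 kHz carrier below is an assumed example frequency, not a value taken from this disclosure.

```python
def max_exposure_for_bright_lines(modulation_hz: float) -> float:
    """Longest per-line exposure that still leaves visible bright/dark stripes."""
    return 1.0 / (2.0 * modulation_hz)  # exposure shorter than half a modulation period

print(max_exposure_for_bright_lines(9600))  # ~5.2e-05 s, i.e. roughly 1/19200 s
```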
 [Third Aspect]
 In the communication method of the third aspect, in the communication method of the first aspect, the subject is reflected light produced when light from a first transmitter, which transmits a signal through luminance changes, is reflected by the floor surface.
 As a result, even when the decoding image is obtained by imaging reflected light and the first signal cannot be decoded from that decoding image, the position of the first transmitter can be identified.
 [Fourth Aspect]
 In the communication method of the fourth aspect, in the communication method of the first aspect, the exposure time of the image sensor of the out-camera is set to a second exposure time longer than the first exposure time, a plurality of normal images are obtained by imaging at the second exposure time, a plurality of spatial feature amounts are calculated from the plurality of normal images, and the position of the terminal is calculated using the plurality of spatial feature amounts.
 Note that the normal images are the normal captured images described above.
 As a result, as shown in (c) and (d) of FIG. 212, even when the terminal is in an underground mall that GPS data cannot reach, the current self-position of the terminal can be estimated appropriately from the plurality of normal images. Note that the spatial feature amounts may be feature points.
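 A minimal sketch of this step is shown below, using OpenCV purely as an example toolkit (the disclosure does not name a library): feature points are extracted from one normal image and tracked into the next, giving a frame-to-frame pixel shift from which a movement amount can be derived. All parameter values are illustrative.

```python
import cv2
import numpy as np

def mean_feature_shift(prev_gray: np.ndarray, cur_gray: np.ndarray):
    """Average pixel displacement of tracked feature points between two normal images."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return None  # no spatial features found; caller falls back as described above
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return None
    return (nxt[good] - pts[good]).reshape(-1, 2).mean(axis=0)  # (dx, dy) in pixels
```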
 [Fifth Aspect]
 In the communication method of the fifth aspect, in the communication method of the fourth aspect, a decoding image is obtained by imaging a second transmitter at the first exposure time, a second signal transmitted by the second transmitter is decoded from the decoding image, a position specified by the second signal is obtained, the position specified by the second signal is set as a movement start position in the map information, and the position of the terminal is identified by calculating the movement amount of the terminal using the plurality of spatial feature amounts.
 As a result, as shown in (a) of FIG. 212, the position of the terminal is identified based on the movement amount from the starting point, which is the movement start position, so that more accurate self-position estimation can be performed.
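 The bookkeeping of this aspect might look like the following sketch: the position decoded from the second signal fixes the movement start position, and per-frame movement amounts (for example, derived from the feature shifts sketched above) are accumulated onto it. The pixels-to-metres scale is an assumed calibration constant, not a value from the disclosure.

```python
def track_position(start_pos, frame_shifts, metres_per_pixel=0.01):
    """start_pos: position decoded from the second signal (movement start position)."""
    x, y = start_pos
    trail = [(x, y)]
    for dx, dy in frame_shifts:          # per-frame movement amounts between normal images
        x += dx * metres_per_pixel
        y += dy * metres_per_pixel
        trail.append((x, y))
    return trail                         # the last element is the current terminal position

print(track_position((10.0, 5.0), [(12, 0), (11, 2), (0, 40)]))
```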
 The communication method of the present invention has the effect of enabling communication between various devices, and can be used, for example, in display devices such as smartphones, smart glasses, and tablets.
 G20  communication device
 G21  determination unit
 G22  first acquisition unit
 G23  second acquisition unit

Claims (20)

  1.  A communication method using a terminal equipped with an image sensor, the communication method comprising:
     determining whether or not the terminal is capable of visible light communication;
     when it is determined that the terminal is capable of visible light communication, obtaining a decoding image by imaging, with the image sensor, a subject whose luminance changes, and obtaining, from a striped pattern appearing in the decoding image, first identification information transmitted by the subject; and
     when it is determined, in the determining of visible light communication, that the terminal is not capable of visible light communication, obtaining a captured image by imaging the subject with the image sensor, extracting at least one contour by performing edge detection on the captured image, identifying a predetermined specific region from among the at least one contour, and obtaining, from a line pattern of the specific region, second identification information transmitted by the subject.
  2.  The communication method according to claim 1, wherein, in the identifying of the specific region, a region having a quadrilateral contour of at least a predetermined size, or a region having a rounded-quadrilateral contour of at least a predetermined size, is identified as the specific region.
  3.  The communication method according to claim 1, wherein, in the determining of visible light communication, the terminal is determined to be capable of visible light communication when the terminal is identified as a terminal capable of changing its exposure time to a predetermined value or less, and the terminal is determined not to be capable of visible light communication when the terminal is identified as a terminal incapable of changing its exposure time to the predetermined value or less.
  4.  The communication method according to any one of claims 1 to 3, wherein:
     when it is determined, in the determining of visible light communication, that the terminal is capable of visible light communication, the exposure time of the image sensor is set to a first exposure time when the subject is imaged, and the decoding image is obtained by imaging the subject at the first exposure time;
     when it is determined, in the determining of visible light communication, that the terminal is not capable of visible light communication, the exposure time of the image sensor is set to a second exposure time when the subject is imaged, and the captured image is obtained by imaging the subject at the second exposure time; and
     the first exposure time is shorter than the second exposure time.
  5.  The communication method according to claim 4, wherein:
     the subject is rectangular as viewed from the image sensor, transmits the first identification information through luminance changes of its central region, and has a barcode-like line pattern arranged on its periphery;
     when it is determined, in the determining of visible light communication, that the terminal is capable of visible light communication, the decoding image including a bright line pattern composed of a plurality of bright lines corresponding to a plurality of exposure lines of the image sensor is obtained when the subject is imaged, and the first identification information is obtained by decoding the bright line pattern; and
     when it is determined, in the determining of visible light communication, that the terminal is not capable of visible light communication, the second identification information is obtained from the line pattern in the captured image when the subject is imaged.
  6.  The communication method according to claim 5, wherein the first identification information obtained from the decoding image and the second identification information obtained from the line pattern are the same information.
  7.  The communication method according to claim 1, wherein, when it is determined, in the determining of visible light communication, that the terminal is capable of visible light communication, a first moving image associated with the first identification information is displayed, and when an operation of sliding the first moving image is received, a second moving image associated with the first identification information next after the first moving image is displayed.
  8.  The communication method according to claim 7, wherein, in the displaying of the second moving image, the second moving image is displayed when an operation of sliding the first moving image in the horizontal direction is received, and a still image associated with the first identification information is displayed when an operation of sliding the first moving image in the vertical direction is received.
  9.  The communication method according to claim 8, wherein, in each of the first moving image and the second moving image, an object in the picture displayed first is located at the same position.
  10.  The communication method according to claim 7 or 8, wherein, when the first identification information is obtained again through imaging by the image sensor, a next moving image that is associated with the first identification information and follows the moving image being displayed is displayed.
  11.  The communication method according to claim 10, wherein, in each of the moving image being displayed and the next moving image, an object in the picture displayed first is located at the same position.
  12.  The communication method according to claim 11, wherein at least one of the first moving image and the second moving image is formed such that the closer a position within the moving image is to an edge of the moving image, the higher the transparency at that position.
  13.  The communication method according to claim 12, wherein an image is displayed outside a region in which at least one of the first moving image and the second moving image is displayed.
  14.  The communication method according to claim 7, wherein:
     a normal captured image is obtained through imaging by the image sensor at a first exposure time;
     the decoding image, which includes a bright line pattern region that is a region composed of a pattern of a plurality of bright lines, is obtained through imaging at a second exposure time shorter than the first exposure time, and the first identification information is obtained by decoding the decoding image; and
     in the displaying of at least one of the first moving image and the second moving image, a reference region located at the same position as the bright line pattern region in the decoding image is identified from the normal captured image, a region of the normal captured image on which the moving image is to be superimposed is recognized as a target region based on the reference region, and the moving image is superimposed on the target region.
  15.  The communication method according to claim 14, wherein, in the displaying of at least one of the first moving image and the second moving image, a region above, below, to the left of, or to the right of the reference region in the normal captured image is recognized as the target region.
  16.  The communication method according to claim 14, wherein, in the displaying of at least one of the first moving image and the second moving image, the size of the moving image is increased as the size of the bright line pattern region increases.
  17.  A communication device using a terminal equipped with an image sensor, the communication device comprising:
     a determination unit that determines whether or not the terminal is capable of visible light communication;
     a first acquisition unit that, when the determination unit determines that the terminal is capable of visible light communication, obtains a decoding image by imaging, with the image sensor, a subject whose luminance changes, and obtains, from a striped pattern appearing in the decoding image, first identification information transmitted by the subject; and
     a second acquisition unit that, when the determination unit determines that the terminal is not capable of visible light communication, obtains a captured image by imaging the subject with the image sensor, extracts at least one contour by performing edge detection on the captured image, identifies a predetermined specific region from among the at least one contour, and obtains, from a line pattern of the specific region, second identification information transmitted by the subject.
  18.  A transmitter comprising:
     an illumination plate;
     a light source that emits light from the back side of the illumination plate; and
     a microcontroller that changes the luminance of the light source,
     wherein the microcontroller transmits first identification information from the light source through the illumination plate by changing the luminance of the light source,
     a barcode-like line pattern in which second identification information is encoded is arranged on the periphery of the front side of the illumination plate, and
     the first identification information and the second identification information are the same information.
  19.  The transmitter according to claim 18, wherein the illumination plate has a rectangular shape.
  20.  A program for executing a communication method using a terminal equipped with an image sensor, the program causing a computer to execute:
     determining whether or not the terminal is capable of visible light communication;
     when it is determined that the terminal is capable of visible light communication, obtaining a decoding image by imaging, with the image sensor, a subject whose luminance changes, and obtaining, from a striped pattern appearing in the decoding image, first identification information transmitted by the subject; and
     when it is determined, in the determining of visible light communication, that the terminal is not capable of visible light communication, obtaining a captured image by imaging the subject with the image sensor, extracting at least one contour by performing edge detection on the captured image, identifying a predetermined specific region from among the at least one contour, and obtaining, from a line pattern of the specific region, second identification information transmitted by the subject.
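As an illustrative aid to the fallback path of claim 1 (edge detection, contour extraction, and identification of a quadrilateral specific region of sufficient size), the sketch below uses OpenCV as an example toolkit. The disclosure prescribes neither the library nor these parameter values, and decoding of the line pattern itself is omitted.

```python
import cv2

def find_specific_region(captured_bgr, min_area=10000):
    """Return the bounding rectangle of a quadrilateral region of sufficient size, or None."""
    gray = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                    # edge detection on the captured image
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:                                  # the extracted contours
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.contourArea(approx) >= min_area:
            return cv2.boundingRect(approx)             # the "specific region" candidate
    return None  # no specific region: the second identification information is unavailable
```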
PCT/JP2019/014013 2018-03-30 2019-03-29 Communication method, communication device, transmitter, and program WO2019189768A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020511094A JP7287950B2 (en) 2018-03-30 2019-03-29 COMMUNICATION METHOD, COMMUNICATION DEVICE, AND PROGRAM

Applications Claiming Priority (12)

Application Number Priority Date Filing Date Title
JP2018066406 2018-03-30
JP2018-066406 2018-03-30
JP2018083454 2018-04-24
JP2018-083454 2018-04-24
JP2018-206923 2018-11-01
JP2018206923 2018-11-01
US201962806977P 2019-02-18 2019-02-18
US62/806,977 2019-02-18
US201962808560P 2019-02-21 2019-02-21
US62/808,560 2019-02-21
JP2019042442 2019-03-08
JP2019-042442 2019-03-08

Publications (1)

Publication Number Publication Date
WO2019189768A1 true WO2019189768A1 (en) 2019-10-03

Family

ID=68061897

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/014013 WO2019189768A1 (en) 2018-03-30 2019-03-29 Communication method, communication device, transmitter, and program

Country Status (2)

Country Link
JP (1) JP7287950B2 (en)
WO (1) WO2019189768A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113938194A (en) * 2021-09-24 2022-01-14 华中科技大学 Method and system for identifying radio ID of target equipment based on physical event perception
CN114666455A (en) * 2020-12-23 2022-06-24 Oppo广东移动通信有限公司 Shooting control method and device, storage medium and electronic device
WO2023272648A1 (en) * 2021-06-30 2023-01-05 Oppo广东移动通信有限公司 Visible-light communication method, apparatus and system, and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014103155A1 (en) * 2012-12-27 2014-07-03 パナソニック株式会社 Information communication method
JP2015179392A (en) * 2014-03-19 2015-10-08 カシオ計算機株式会社 Code symbol display device, information processor, and program


Also Published As

Publication number Publication date
JP7287950B2 (en) 2023-06-06
JPWO2019189768A1 (en) 2021-05-13

Similar Documents

Publication Publication Date Title
US10951310B2 (en) Communication method, communication device, and transmitter
JP6876617B2 (en) Display method and display device
JP6524132B2 (en) INFORMATION COMMUNICATION METHOD, INFORMATION COMMUNICATION DEVICE, AND PROGRAM
TWI736702B (en) Information communication method, information communication device and program
JP6378511B2 (en) Information communication method, information communication apparatus, and program
JP7134094B2 (en) Transmission method, transmission device and program
JP7287950B2 (en) COMMUNICATION METHOD, COMMUNICATION DEVICE, AND PROGRAM
WO2014103156A1 (en) Information communication method
JP5608307B1 (en) Information communication method
WO2018110373A1 (en) Transmission method, transmission device, and program
JP2020167521A (en) Communication method, communication apparatus, transmitter, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19774903

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020511094

Country of ref document: JP

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 19774903

Country of ref document: EP

Kind code of ref document: A1